
FAN control OMV Auyfan 0.10.12: gitlab-ci-linux-build-184, Kernel 5.6

  • Hey,

    I will do it in English, I hope that is okay. Thanks for the very informative forum.

    My question is: is it possible to control the fan on the newest OMV Auyfan 0.10.12: gitlab-ci-linux-build-184, Kernel 5.6?
    The install was very easy, no problems at all. The only problem is that I tried some different commands to reach the fan but was not able to get it going. Does somebody know how? Thanks.

    Best Regards.
    Soeren

  • Hi Soeren,

    I have this successfully running.

    Kind Regards
    Martin

  • @mabs said in FAN control OMV Auyfan 0.10.12: gitlab-ci-linux-build-184, Kernel 5.6:

    Hi Soeren,

    I have this successfully running.

    Kind Regards
    Martin

    Thanks, I will try the fan tool again. I'm almost sure I tried it without luck. Have you edited something in the fan tool conf?

  • Hi,

    I only played with two parameters.

    I have two rockpro64 boards; currently one has a small fan and the other one has a large fan.

    Therefore I changed PROFILE_NR accordingly.

    I also set ALWAYS_ON to true on the one with the large fan, but I don't see a difference; the fan still goes on and off, so maybe I misinterpreted the parameter.

    I also just noticed that the CTL settings in the version on GitHub changed slightly, but I think this only improved the detection depending on the kernel settings.
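
    For reference, the relevant entries in my ATS config look roughly like this. Only a sketch: the parameter names are what the tool uses, but the config location /etc/ats.conf and the exact values here are assumptions.

    -- /etc/ats.conf (assumed location)
    PROFILE_NR	= 1,	-- fan profile; pick the one matching your fan
    ALWAYS_ON	= true,	-- supposed to keep the fan spinning permanently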

    M

  • @mabs

    With the new OMV kernel 5.6 image from Auyfan, I can't get the fan to spin. If the fan spins at all, it runs really slowly, with no sound from it.

    With the fan tool, the master installation "failed", and with the release installation it reported "active".
    I have installed a 92 mm fan inside the NAS case; it runs on Armbian with kernel 5.4.32.

    I will open up the case later today to be sure. Thanks.

  • Hi,

    Now I did find the fan. Here it goes: "nano /sys/devices/platform/pwm-fan/hwmon/hwmon3/pwm1". It is at "0" and can be set up to "255".
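
    Instead of editing the file with nano, you can also read and set the value directly from the shell. A minimal sketch; the hwmon3 index is just what this kernel happened to assign, so adjust the path for your system:

    # read the current PWM duty cycle (0 = off, 255 = full speed)
    cat /sys/devices/platform/pwm-fan/hwmon/hwmon3/pwm1
    # set the fan to full speed (tee, because the file is root-owned)
    echo 255 | sudo tee /sys/devices/platform/pwm-fan/hwmon/hwmon3/pwm1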

    Best Regards.

  • Yes, that is basically the way ATS does the changing as well, I think, since those /sys entries are in the _CTL variables.

    Good that you got it working, or are you only halfway there?

    M

  • @mabs

    The tool works on all images under 5.4 or so for me. But yes, I am at the finish line; I just wanted the fan to be always on. Just wanted to share in case someone runs into the same problems. Thanks.

  • Helpful Thread!

    But ATS doesn't work for me on kernel 5.6 with the ayufan release. Only this command works.

    nano /sys/devices/platform/pwm-fan/hwmon/hwmon3/pwm1
    

    Thanks @soerenderfor for the hint.

  • OK, is this the problem in ATS?

    -- FAN Control[ String ]
    	PWM_CTL		= {
    			"/sys/class/hwmon/hwmon0/pwm1",
    			"/sys/devices/platform/pwm-fan/hwmon/hwmon0/pwm1",
    			"/sys/devices/platform/pwm-fan/hwmon/hwmon1/pwm1"
    	},
    
  • @FrankM, did you try to fix the ATS tool? If yes, does it work?

    Best Regards.

  • Hi,

    since I'm currently changing my rockpro64 setup, I came across this.

    With the kernel from ayufan you need to set PWM_CTL to

    /sys/devices/platform/pwm-fan/hwmon/hwmon3/pwm1
    

    for my self-compiled one I need

    /sys/devices/platform/pwm-fan/hwmon/hwmon0/pwm1
    

    But I only got it working with a single entry for PWM_CTL, e.g.

    PWM_CTL		= "/sys/devices/platform/pwm-fan/hwmon/hwmon0/pwm1",
    

    After that you need to restart ats:

    sudo systemctl stop ats
    sudo systemctl start ats
    

    Initially, the fan should start immediately for a short period of time.
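
    To check whether ats actually picked up the change, the standard systemd commands are enough (nothing here is specific to ATS):

    sudo systemctl status ats
    sudo journalctl -u ats -n 20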

    In case it is yet another one on your kernel, you can find the right path using this command:

    sudo find /sys -name pwm1 | grep hwmon
    

    So far I'm not sure which kernel parameter or module changes this.
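
    A small sketch that combines the two steps and prints a ready-made config line; pasting it into the ATS config is still up to you (I simply take the first match, which was the right one on both of my kernels):

    PWM_PATH=$(sudo find /sys -name pwm1 | grep hwmon | head -n 1)
    echo "PWM_CTL = \"${PWM_PATH}\","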

    Martin
