ZFS - Important Commands
-
zpool - configure ZFS storage pools
A few of the most important commands I have gotten to know and used over the last few days. This will be extended as needed.
zpool create
This is how we build a ZFS pool from two disks.
zpool create ZFS-Pool ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K6XD2C26 ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K5PPSH52
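Note: taken literally, the command above creates a striped pool out of the two disks. The zpool status output below shows a mirror; to get a mirror right away at creation time, the mirror keyword has to be added. A sketch with the same disk IDs:

zpool create ZFS-Pool mirror ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K6XD2C26 ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K5PPSH52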
zpool status
Shows us the status of the pool: which disks it contains, their state, and so on.
root@pve:~# zpool status
  pool: ZFS-Pool
 state: ONLINE
  scan: resilvered 1.20T in 02:56:31 with 0 errors on Mon Oct 18 18:28:36 2021
config:

        NAME                                          STATE     READ WRITE CKSUM
        ZFS-Pool                                      ONLINE       0     0     0
          mirror-0                                    ONLINE       0     0     0
            ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K6XD2C26  ONLINE       0     0     0
            ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K5PPSH52  ONLINE       0     0     0

errors: No known data errors
zpool attach
This attaches an additional disk to an existing device in the pool, turning it into a mirror (or extending one).
zpool attach ZFS-Pool ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K6XD2C26 ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K5PPSH52
zpool detach
This detaches a disk from a mirror in the pool.
zpool detach ZFS-Pool 8992518921607088473
zpool replace
This replaces a disk, for example one that has failed.
zpool replace ZFS-Pool ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K5PPSH52
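zpool replace also has a two-argument form, in which the old disk is replaced by a completely different one. A sketch; the new disk ID here is only a placeholder:

zpool replace ZFS-Pool ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K5PPSH52 ata-NEW_DISK_ID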
zfs - configures ZFS file systems
zfs list
Lists the ZFS file systems (datasets).
root@pve:~# zfs list -t filesystem -o name,used
NAME                 USED
ZFS-Pool             418G
ZFS-Pool/nas_backup  384G
or
root@pve:~# zfs list
NAME                     USED  AVAIL     REFER  MOUNTPOINT
ZFS-Pool                 418G  3.10T     1.50G  /ZFS-Pool
ZFS-Pool/nas_backup      384G  3.10T      384G  /ZFS-Pool/nas_backup
ZFS-Pool/vm-100-disk-0  33.0G  3.13T     3.17G  -
or
output all properties, something for experts
root@pve:~# zfs get all ZFS-Pool
NAME      PROPERTY              VALUE                  SOURCE
ZFS-Pool  type                  filesystem             -
ZFS-Pool  creation              Sat Oct 16 10:50 2021  -
ZFS-Pool  used                  418G                   -
ZFS-Pool  available             3.10T                  -
ZFS-Pool  referenced            1.50G                  -
ZFS-Pool  compressratio         1.01x                  -
ZFS-Pool  mounted               yes                    -
ZFS-Pool  quota                 none                   default
ZFS-Pool  reservation           none                   default
ZFS-Pool  recordsize            128K                   default
ZFS-Pool  mountpoint            /ZFS-Pool              default
ZFS-Pool  sharenfs              off                    default
ZFS-Pool  checksum              on                     default
ZFS-Pool  compression           on                     local
ZFS-Pool  atime                 on                     default
ZFS-Pool  devices               on                     default
ZFS-Pool  exec                  on                     default
ZFS-Pool  setuid                on                     default
ZFS-Pool  readonly              off                    default
ZFS-Pool  zoned                 off                    default
ZFS-Pool  snapdir               hidden                 default
ZFS-Pool  aclmode               discard                default
ZFS-Pool  aclinherit            restricted             default
ZFS-Pool  createtxg             1                      -
ZFS-Pool  canmount              on                     default
ZFS-Pool  xattr                 on                     default
ZFS-Pool  copies                1                      default
ZFS-Pool  version               5                      -
ZFS-Pool  utf8only              off                    -
ZFS-Pool  normalization         none                   -
ZFS-Pool  casesensitivity       sensitive              -
ZFS-Pool  vscan                 off                    default
ZFS-Pool  nbmand                off                    default
ZFS-Pool  sharesmb              off                    default
ZFS-Pool  refquota              none                   default
ZFS-Pool  refreservation        none                   default
ZFS-Pool  guid                  16136524096267552939   -
ZFS-Pool  primarycache          all                    default
ZFS-Pool  secondarycache        all                    default
ZFS-Pool  usedbysnapshots       0B                     -
ZFS-Pool  usedbydataset         1.50G                  -
ZFS-Pool  usedbychildren        417G                   -
ZFS-Pool  usedbyrefreservation  0B                     -
ZFS-Pool  logbias               latency                default
ZFS-Pool  objsetid              54                     -
ZFS-Pool  dedup                 off                    default
ZFS-Pool  mlslabel              none                   default
ZFS-Pool  sync                  standard               default
ZFS-Pool  dnodesize             legacy                 default
ZFS-Pool  refcompressratio      1.00x                  -
ZFS-Pool  written               1.50G                  -
ZFS-Pool  logicalused           394G                   -
ZFS-Pool  logicalreferenced     1.50G                  -
ZFS-Pool  volmode               default                default
ZFS-Pool  filesystem_limit      none                   default
ZFS-Pool  snapshot_limit        none                   default
ZFS-Pool  filesystem_count      none                   default
ZFS-Pool  snapshot_count        none                   default
ZFS-Pool  snapdev               hidden                 default
ZFS-Pool  acltype               off                    default
ZFS-Pool  context               none                   default
ZFS-Pool  fscontext             none                   default
ZFS-Pool  defcontext            none                   default
ZFS-Pool  rootcontext           none                   default
ZFS-Pool  relatime              off                    default
ZFS-Pool  redundant_metadata    all                    default
ZFS-Pool  overlay               on                     default
ZFS-Pool  encryption            off                    default
ZFS-Pool  keylocation           none                   default
ZFS-Pool  keyformat             none                   default
ZFS-Pool  pbkdf2iters           0                      default
ZFS-Pool  special_small_blocks  0                      default
zfs create
Creates a dataset. You can think of it like a folder.
A dataset is a space where you put your data. Datasets are flexible in size and are located inside your pool.
zfs create <Pool_Name>/<dataset>
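A concrete example, using the pool from above (the dataset name is just made up for illustration):

zfs create ZFS-Pool/photos

The new dataset then shows up in zfs list and, by default, is mounted below the parent, here /ZFS-Pool/photos.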
zfs snapshot
zfs snapshot datapool/test@today
zfs list
root@pbs:/mnt/datastore/datapool/test# zfs list -t snapshot
NAME                  USED  AVAIL     REFER  MOUNTPOINT
datapool/test@today     0B      -       96K  -
The documentation on the topic -> https://openzfs.github.io/openzfs-docs/index.html
Interesting links on the topic
-
Below this post I am collecting a few examples, for me to look up later.
First up is
ZFS-Replication
At first I had a bit of trouble understanding this, until it became clear that this replication works from pool to pool. So we need two existing ZFS pools.
root@pbs:/mnt/datastore/datapool/test# zfs list
NAME          USED  AVAIL     REFER  MOUNTPOINT
Backup_Home   222G   677G      222G  /mnt/datastore/Backup_Home
datapool     2.36G  1.75T     2.36G  /mnt/datastore/datapool
We create a dataset in datapool
zfs create datapool/docs -o mountpoint=/docs
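Small note: the zfs(8) man page puts the -o option before the dataset name, so the documented form of the same command would be:

zfs create -o mountpoint=/docs datapool/docs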
We create a file with some content
echo "version 1" > /docs/data.txt
We create a snapshot
zfs snapshot datapool/docs@today
Check
root@pbs:/mnt/datastore/datapool/test# zfs list -t snapshot
NAME                  USED  AVAIL     REFER  MOUNTPOINT
datapool/docs@today     0B      -       96K  -
We replicate the existing snapshot to the ZFS pool Backup_Home and store it there in the dataset test.
zfs send datapool/docs@today | zfs receive Backup_Home/test
Now the data is in the other ZFS pool
root@pbs:/mnt/datastore/datapool/test# ls /mnt/datastore/Backup_Home/test/
data.txt
And what interests me most: how do you send this to another server?
zfs send datapool/docs@today | ssh otherserver zfs receive backuppool/backup
I will follow up with that test later.
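A sketch of how that test could look, assuming a host named otherserver that is reachable via SSH as root and already has a pool called backuppool (both names are simply taken from the command above, not tested here):

# initial full send of the snapshot to the remote pool
zfs send datapool/docs@today | ssh otherserver zfs receive backuppool/backup
# later, send only the changes between two snapshots (incremental, see the next section)
zfs send -i datapool/docs@today datapool/docs@17:02 | ssh otherserver zfs receive backuppool/backup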
Source: https://www.howtoforge.com/tutorial/how-to-use-snapshots-clones-and-replication-in-zfs-on-linux/
ZFS incremental replication
That is, send only the changed data!
We create a few files
root@pbs:/mnt/datastore/datapool/test# echo "data" > /docs/data1.txt
root@pbs:/mnt/datastore/datapool/test# echo "data" > /docs/data2.txt
root@pbs:/mnt/datastore/datapool/test# echo "data" > /docs/data3.txt
root@pbs:/mnt/datastore/datapool/test# echo "data" > /docs/data4.txt
New snapshot
zfs snapshot datapool/docs@17:02
List of the snapshots
root@pbs:/mnt/datastore/datapool/test# zfs list -t snapshot
NAME                  USED  AVAIL     REFER  MOUNTPOINT
datapool/docs@today    56K      -       96K  -
datapool/docs@17:02     0B      -      112K  -
We send the incremental replication
zfs send -vi datapool/docs@today datapool/docs@17:02 | zfs receive Backup_Home/test
send from datapool/docs@today to datapool/docs@17:02 estimated size is 38.6K
total estimated size is 38.6K
cannot receive incremental stream: destination Backup_Home/test has been modified
since most recent snapshot
The guide linked below says this happens because the data has been modified. Why that is, I do not understand yet. With -F on the receive command you force a rollback to the most recent snapshot.
zfs send -vi datapool/docs@today datapool/docs@17:02 | zfs receive -F Backup_Home/test
send from datapool/docs@today to datapool/docs@17:02 estimated size is 38.6K
total estimated size is 38.6K
And a check
ls /mnt/datastore/Backup_Home/test/
data1.txt  data2.txt  data3.txt  data4.txt  data.txt
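The "destination ... has been modified" error above most likely comes from the fact that the received dataset is mounted on Backup_Home, so even just browsing it (atime updates) already counts as a change. A common way around this, instead of forcing a rollback with -F every time, is to keep the target dataset read-only; a sketch, not tested here:

zfs set readonly=on Backup_Home/test

zfs receive still works on a read-only dataset, only normal writes and access-time updates are blocked.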
Source: https://klarasystems.com/articles/introduction-to-zfs-replication/
-
Stumbled across the fact today that something like this can happen, too.
root@pve2:~# zpool status
  pool: pool_NAS
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:20:50 with 0 errors on Sun Apr 13 00:44:51 2025
config:

        NAME                                          STATE     READ WRITE CKSUM
        pool_NAS                                      ONLINE       0     0     0
          mirror-0                                    ONLINE       0     0     0
            ata-WDC_WDS100T1R0A-68A4W0_230520800733  ONLINE       0     0     0
            ata-WDC_WDS100T1R0A-68A4W0_230520801376  ONLINE       0     0     0

errors: No known data errors
What to do? First of all, I kicked off a backup. After that:
root@pve2:~# zpool get all pool_NAS | grep feature
pool_NAS  feature@async_destroy          enabled   local
pool_NAS  feature@empty_bpobj            active    local
pool_NAS  feature@lz4_compress           active    local
pool_NAS  feature@multi_vdev_crash_dump  enabled   local
pool_NAS  feature@spacemap_histogram     active    local
pool_NAS  feature@enabled_txg            active    local
pool_NAS  feature@hole_birth             active    local
pool_NAS  feature@extensible_dataset     active    local
pool_NAS  feature@embedded_data          active    local
pool_NAS  feature@bookmarks              enabled   local
pool_NAS  feature@filesystem_limits      enabled   local
pool_NAS  feature@large_blocks           enabled   local
pool_NAS  feature@large_dnode            enabled   local
pool_NAS  feature@sha512                 enabled   local
pool_NAS  feature@skein                  enabled   local
pool_NAS  feature@edonr                  enabled   local
pool_NAS  feature@userobj_accounting     active    local
pool_NAS  feature@encryption             enabled   local
pool_NAS  feature@project_quota          active    local
pool_NAS  feature@device_removal         enabled   local
pool_NAS  feature@obsolete_counts        enabled   local
pool_NAS  feature@zpool_checkpoint       enabled   local
pool_NAS  feature@spacemap_v2            active    local
pool_NAS  feature@allocation_classes     enabled   local
pool_NAS  feature@resilver_defer         enabled   local
pool_NAS  feature@bookmark_v2            enabled   local
pool_NAS  feature@redaction_bookmarks    enabled   local
pool_NAS  feature@redacted_datasets      enabled   local
pool_NAS  feature@bookmark_written       enabled   local
pool_NAS  feature@log_spacemap           active    local
pool_NAS  feature@livelist               enabled   local
pool_NAS  feature@device_rebuild         enabled   local
pool_NAS  feature@zstd_compress          enabled   local
pool_NAS  feature@draid                  enabled   local
pool_NAS  feature@zilsaxattr             disabled  local
pool_NAS  feature@head_errlog            disabled  local
pool_NAS  feature@blake3                 disabled  local
pool_NAS  feature@block_cloning          disabled  local
pool_NAS  feature@vdev_zaps_v2           disabled  local
This comes from new features that were added to ZFS and did not exist when the pool was created. So let's upgrade.
root@pve2:~# zpool upgrade pool_NAS
This system supports ZFS pool feature flags.

Enabled the following features on 'pool_NAS':
  zilsaxattr
  head_errlog
  blake3
  block_cloning
  vdev_zaps_v2
Check
root@pve2:~# zpool status
  pool: pool_NAS
 state: ONLINE
  scan: scrub repaired 0B in 00:20:50 with 0 errors on Sun Apr 13 00:44:51 2025
config:

        NAME                                          STATE     READ WRITE CKSUM
        pool_NAS                                      ONLINE       0     0     0
          mirror-0                                    ONLINE       0     0     0
            ata-WDC_WDS100T1R0A-68A4W0_230520800733  ONLINE       0     0     0
            ata-WDC_WDS100T1R0A-68A4W0_230520801376  ONLINE       0     0     0

errors: No known data errors
Check the features
root@pve2:~# zpool get all pool_NAS | grep feature
pool_NAS  feature@async_destroy          enabled   local
pool_NAS  feature@empty_bpobj            active    local
pool_NAS  feature@lz4_compress           active    local
pool_NAS  feature@multi_vdev_crash_dump  enabled   local
pool_NAS  feature@spacemap_histogram     active    local
pool_NAS  feature@enabled_txg            active    local
pool_NAS  feature@hole_birth             active    local
pool_NAS  feature@extensible_dataset     active    local
pool_NAS  feature@embedded_data          active    local
pool_NAS  feature@bookmarks              enabled   local
pool_NAS  feature@filesystem_limits      enabled   local
pool_NAS  feature@large_blocks           enabled   local
pool_NAS  feature@large_dnode            enabled   local
pool_NAS  feature@sha512                 enabled   local
pool_NAS  feature@skein                  enabled   local
pool_NAS  feature@edonr                  enabled   local
pool_NAS  feature@userobj_accounting     active    local
pool_NAS  feature@encryption             enabled   local
pool_NAS  feature@project_quota          active    local
pool_NAS  feature@device_removal         enabled   local
pool_NAS  feature@obsolete_counts        enabled   local
pool_NAS  feature@zpool_checkpoint       enabled   local
pool_NAS  feature@spacemap_v2            active    local
pool_NAS  feature@allocation_classes     enabled   local
pool_NAS  feature@resilver_defer         enabled   local
pool_NAS  feature@bookmark_v2            enabled   local
pool_NAS  feature@redaction_bookmarks    enabled   local
pool_NAS  feature@redacted_datasets      enabled   local
pool_NAS  feature@bookmark_written       enabled   local
pool_NAS  feature@log_spacemap           active    local
pool_NAS  feature@livelist               enabled   local
pool_NAS  feature@device_rebuild         enabled   local
pool_NAS  feature@zstd_compress          enabled   local
pool_NAS  feature@draid                  enabled   local
pool_NAS  feature@zilsaxattr             enabled   local
pool_NAS  feature@head_errlog            active    local
pool_NAS  feature@blake3                 enabled   local
pool_NAS  feature@block_cloning          enabled   local
pool_NAS  feature@vdev_zaps_v2           enabled   local
There we go, all new features are enabled. Now the pool can carry on doing its job.