Notes on ZFS
Home setup
On macOS I'm running a bunch of 12 TB disks in a raidz2 config. My intent is to migrate to a zpool with special devices in it.
The plan is 20 12 TB disks in 2 raidz2 vdevs, with 3.2 TB SSDs in a mirror as the special devices. I'll use the M.2 SSD on the server for the ZIL and L2ARC.
This should give about 174.56 TiB of space.
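Rough math behind that number: each 10-disk raidz2 vdev keeps 8 data disks, so 2 × 8 × 12 TB = 192 TB of raw data capacity, which is roughly 174.6 TiB before ZFS metadata and padding overhead. As a sketch only (the disk and SSD names below are placeholders rather than real by-path entries, and ZfsBigPool is a made-up pool name), the planned layout could be created along these lines:

# Sketch of the planned pool: two 10-wide raidz2 vdevs plus a mirrored special vdev of SSDs.
zpool create -f -o ashift=12 -O compression=lz4 -O atime=off -O recordsize=256k ZfsBigPool \
    raidz2 disk0 disk1 disk2 disk3 disk4 disk5 disk6 disk7 disk8 disk9 \
    raidz2 disk10 disk11 disk12 disk13 disk14 disk15 disk16 disk17 disk18 disk19 \
    special mirror ssd0 ssd1
# Blocks of 128K and smaller go to the SSD mirror (metadata lands there regardless);
# the 128K value matches the special_small_blocks note in the Optimization section below.
zfs set special_small_blocks=128K ZfsBigPool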
Optimization
All disks should be updated to the latest firmware:
./SeaChest_Firmware_x86_64-redhat-linux --downloadFW /root/MobulaExosX12SAS-STD-5xxE-E004.LOD -d /dev/sg7
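To confirm a flash took, the drive's reported firmware revision can be checked before and after; a minimal check, assuming smartmontools is installed and /dev/sg7 is the same target as above:

# SAS drives report the firmware in the "Revision" field (should read E004 after the update);
# SATA drives show it as "Firmware Version".
smartctl -i /dev/sg7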
All disks should use 4K sectors. The spinning disks should be long-formatted to detect bad blocks.
./SeaChest_Lite_x86_64-redhat-linux --setSectorSize 4096 --confirm this-will-erase-data -d /dev/sg8
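Changing the sector size kicks off a (long-running) format; once it finishes, the new logical block size can be verified. One way, assuming sg3_utils is installed and /dev/sg8 is still the right device:

# The reported logical block length should now be 4096 bytes.
sg_readcap --long /dev/sg8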
Write cache should be enabled:
# sdparm --get=WCE /dev/sg5
    /dev/sg5: SEAGATE   ST12000NM0027   E004
    WCE         0  [cha: y, def:  1, sav:  0]
# sdparm --set=WCE --save /dev/sg5
    /dev/sg5: SEAGATE   ST12000NM0027   E004
# sdparm --get=WCE --save /dev/sg5
    /dev/sg5: SEAGATE   ST12000NM0027   E004
    WCE         1  [cha: y, def:  1, sav:  1]
ashift=13 = 8192 bytes per IO
recordsize 256K
compression lz4
casesensitivity insensitive
special_small_blocks 128K

zdb -Lbbb PoolName

zpool create -f -o ashift=12 -O casesensitivity=insensitive -O normalization=formD -O compression=lz4 -O atime=off -O recordsize=256k ZfsMediaPool \
    raidz2 /var/run/disk/by-path/PCI0@0-SAT0@17-PRT5@5-PMP@0-@0:0 /var/run/disk/by-path/PCI0@0-SAT0@17-PRT4@4-PMP@0-@0:0 \
    /var/run/disk/by-path/PCI0@0-RP21@1B,4-PXSX@0-PRT31@1f-PMP@0-@0:0 /var/run/disk/by-path/PCI0@0-SAT0@17-PRT3@3-PMP@0-@0:0 \
    /var/run/disk/by-path/PCI0@0-SAT0@17-PRT2@2-PMP@0-@0:0 /var/run/disk/by-path/PCI0@0-SAT0@17-PRT1@1-PMP@0-@0:0 \
    /var/run/disk/by-path/PCI0@0-SAT0@17-PRT0@0-PMP@0-@0:0 /var/run/disk/by-path/PCI0@0-RP21@1B,4-PXSX@0-PRT2@2-PMP@0-@0:0 \
    /var/run/disk/by-path/PCI0@0-RP21@1B,4-PXSX@0-PRT3@3-PMP@0-@0:0 /var/run/disk/by-path/PCI0@0-RP21@1B,4-PXSX@0-PRT28@1c-PMP@0-@0:0 \
    /var/run/disk/by-path/PCI0@0-RP21@1B,4-PXSX@0-PRT4@4-PMP@0-@0:0 /var/run/disk/by-path/PCI0@0-RP21@1B,4-PXSX@0-PRT29@1d-PMP@0-@0:0

zpool add ZfsMediaPool log /dev/disk5s3
zpool add ZfsMediaPool cache /dev/disk5s4
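After creating the pool and attaching the log and cache devices, a few standard checks confirm the layout and the properties set at creation time (the zdb -Lbbb run above is commonly used to inspect the block-size distribution when picking a special_small_blocks value); nothing here is specific to this pool beyond the name:

# Overall vdev layout, including the log and cache devices added above.
zpool status -v ZfsMediaPool
# Verify the properties passed at creation time.
zfs get compression,recordsize,atime,casesensitivity,normalization ZfsMediaPool
# Per-vdev usage and I/O, useful for watching the SLOG and L2ARC.
zpool iostat -v ZfsMediaPool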