Hypervisor 1RU 2024
These are the build notes for the 2024 hypervisor.
It is a 1 RU server with fast local SSD storage, approximately 18 TB.
BIOS settings
A patched BIOS is needed for this board to boot via UEFI and to enable the PCI hotplug menu in the BIOS setup. I was able to edit this and have posted the latest version here.
Disk Layout
boot disk
Boot disk = 128G mirror ZFS

Number  Start (sector)  End (sector)  Size        Code
   1    34              2047          1007.0 KiB  EF02
   2    2048            2099199      1024.0 MiB  EF00
   3    2099200         838860800     128.0 GiB  BF01
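For reference, a sketch of the sgdisk calls that would lay out those partitions. The device path is an assumption (the Samsung 980 boot NVMe shows up as /dev/nvme4n1 in the nvme list output further down); repeat the same commands on the second mirror member.

sgdisk -n1:34:2047 -t1:EF02 /dev/nvme4n1            # BIOS boot partition
sgdisk -n2:2048:2099199 -t2:EF00 /dev/nvme4n1       # EFI system partition
sgdisk -n3:2099200:838860800 -t3:BF01 /dev/nvme4n1  # ZFS boot partition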
ZFS storage
/data
5 mirrored vdevs of the SAS disks, plus a special mirror of two 256 G NVMe partitions; each NVMe is otherwise dedicated to this role, so the partition can grow later. Optional log and L2ARC can go on the boot NVMes (a sketch of adding those follows the pool-creation command below).
Note that because slots 8 and 9 are on their own 2 ports of the 8, they will run faster than the rest and will fill unequally if they are paired with each other; the layout below therefore mirrors each of them against one of the slower slots.
zpool create data \
  mirror /dev/disk/by-enclosure-slot/front-slot000 /dev/disk/by-enclosure-slot/front-slot001 \
  mirror /dev/disk/by-enclosure-slot/front-slot002 /dev/disk/by-enclosure-slot/front-slot003 \
  mirror /dev/disk/by-enclosure-slot/front-slot004 /dev/disk/by-enclosure-slot/front-slot005 \
  mirror /dev/disk/by-enclosure-slot/front-slot006 /dev/disk/by-enclosure-slot/front-slot008 \
  mirror /dev/disk/by-enclosure-slot/front-slot007 /dev/disk/by-enclosure-slot/front-slot009 \
  special mirror /dev/disk/by-enclosure-slot/nvme-upper-3-part1 /dev/disk/by-enclosure-slot/nvme-lower-1-part1
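If the optional log and L2ARC are added later, it would look roughly like this; the partition names on the boot NVMes are placeholders, not real device paths from this build.

zpool add data log mirror /dev/disk/by-enclosure-slot/nvme-boot-0-part4 /dev/disk/by-enclosure-slot/nvme-boot-1-part4   # mirrored SLOG
zpool add data cache /dev/disk/by-enclosure-slot/nvme-boot-0-part5 /dev/disk/by-enclosure-slot/nvme-boot-1-part5        # L2ARC (cache devices are striped, never mirrored)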
Backplane
The onboard backplane is a BPN-SAS3-116A-N2, which takes 8 SAS disks plus 2 bays that can be NVMe or SAS. However, that last part is not true if you want to run 10 SAS disks: the right-most NVMe/SAS ports are labelled "SAS2" on the backplane, but they are really SATA ports connected to the onboard SATA controller. As this is a plain backplane, not an expander, each physical SAS port from the controller connects to exactly one drive, and since the included controller only had 8 ports, a 16-port controller is used.
NVMe namespaces
https://narasimhan-v.github.io/2020/06/12/Managing-NVMe-Namespaces.html
The NVMe drives come set up as 1.88 TB namespaces:
tnvmcap : 1,880,375,648,256
unvmcap : 375,648,256
I suspect this is a 2.0 TiB (2,199,023,255,552 byte) drive provisioned down in the controller to about 85%. Shrinking it to a 1.6 TB namespace will under-provision it further and should make it perform better if we use it as a log device or for anything write-intensive.
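The arithmetic behind those figures: 1,880,375,648,256 / 2,199,023,255,552 ≈ 0.855, so about 85% of the raw 2.0 TiB is exposed as shipped, and 1.6 TB (1,600,000,000,000 bytes) / 2,199,023,255,552 ≈ 0.73, so the smaller namespace leaves roughly 27% of the flash as spare area.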
Ensure we're on 4096-byte LBAs:
nvme id-ns -H /dev/nvme0n1 | grep "LBA Format"
nvme id-ns -H /dev/nvme1n1 | grep "LBA Format"
nvme id-ns -H /dev/nvme2n1 | grep "LBA Format"
nvme id-ns -H /dev/nvme3n1 | grep "LBA Format"
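If any drive shows a 512-byte LBA format in use, it can be switched with nvme format, which wipes the namespace. A sketch only: the --lbaf index for the 4 KiB format is an assumption and should be read off the drive's LBA Format list first.

nvme format /dev/nvme0n1 --lbaf=1   # assumes index 1 is the 4096-byte format on this drive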
1 & 2 Detach the namespace
nvme detach-ns /dev/nvme0 --namespace-id=1 --controllers=4
nvme detach-ns /dev/nvme3 --namespace-id=1 --controllers=4
Delete the namespace
nvme delete-ns /dev/nvme0 --namespace-id=1
nvme delete-ns /dev/nvme3 --namespace-id=1
Make the new namespace
nvme create-ns /dev/nvme0 --nsze-si=1.6T --ncap-si=1.6T --flbas=0 --dps=0 --nmic=0
nvme create-ns /dev/nvme3 --nsze-si=1.6T --ncap-si=1.6T --flbas=0 --dps=0 --nmic=0
Attach the namespace to the controller
nvme attach-ns /dev/nvme0 --namespace-id=1 --controllers=4
nvme attach-ns /dev/nvme3 --namespace-id=1 --controllers=4
Reset the controller to make the new namespace visible to the OS
nvme reset /dev/nvme0
nvme reset /dev/nvme3
Confirm it
nvme list
Node          Generic     SN               Model                Namespace  Usage                  Format         FW Rev
------------- ----------- ---------------- -------------------- ---------- ---------------------- -------------- --------
/dev/nvme4n1  /dev/ng4n1  S64ANS0T515282K  Samsung SSD 980 1TB  1          114.21 GB /   1.00 TB  512   B + 0 B  2B4QFXO7
/dev/nvme3n1  /dev/ng3n1  S5XANA0R537286   MZ1LB1T9HBLS-000FB   1            0.00  B /   1.60 TB    4 KiB + 0 B  EDW73F2Q
/dev/nvme2n1  /dev/ng2n1  S5XANA0R694994   MZ1LB1T9HBLS-000FB   1          157.62 GB /   1.88 TB    4 KiB + 0 B  EDW73F2Q
/dev/nvme1n1  /dev/ng1n1  S5XANA0R682634   MZ1LB1T9HBLS-000FB   1          157.47 GB /   1.88 TB    4 KiB + 0 B  EDW73F2Q
/dev/nvme0n1  /dev/ng0n1  S5XANA0R682645   MZ1LB1T9HBLS-000FB   1            0.00  B /   1.60 TB    4 KiB + 0 B  EDW73F2Q