Difference between revisions of "HP DL360 G7 Hypervisor"

From W9CR
Revision as of 17:58, 3 September 2024


This describes the build process for Proxmox 8.2 on an existing Compaq/HP DL360 G7 server.

These have served me well for several years and, while they could stand to be upgraded, they are maxed out on CPU, RAM, and disk, and have plenty of life left in them. As I'd installed Proxmox years ago, before I knew what I know now about naming local ZFS storage pools, I decided to rebuild them during the 8.2 upgrade.

Config

The Servers consist of the following:

  • 2x Xeon X5675 @ 3.07 GHz (12 real/24 virtual cores)
  • 192 GB (12x 16 GB DDR3 1600 MHz) RAM
  • 8x SAS 1.6 TB SSD HUSMM141CLAR1600
  • LSI SAS2308 storage controller - Firmware link
  • Mellanox MT27520 ConnectX-3 Pro dual 10G NIC

Disk Layout

I use local ZFS storage on most of my HVs, and Proxmox cannot migrate or replicate VMs or CTs between nodes whose pools have different names. By default everything is rpool, and rpool is on the root disks, which on my newer servers are used for booting only. This means that one of the core features of Proxmox will not function unless the naming is fixed.
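For example, a zfspool storage definition in /etc/pve/storage.cfg looks roughly like this (the storage ID and options below are illustrative, not copied from my config); migration and replication only work when the same storage ID points at the same pool name on every participating node:

```shell
# Illustrative /etc/pve/storage.cfg entry (names/options are examples):
# ZFS migration/replication requires the same storage ID and pool name
# on every participating node.
cat <<'EOF'
zfspool: localDataStore
        pool localDataStore
        content images,rootdir
        sparse 1
EOF
```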

As I don't want to invest more in these servers or figure out how to add a dedicated boot disk, we must boot off the existing SSDs. This limits us to 512-byte-sector disks and GRUB, as the servers are too old for UEFI boot and native 4K disks. Proxmox also writes to its root disk, so a VMware-style boot device (i.e. USB boot or SATA DOM) will not work, but the IO requirements are not that intense. The decision was made to use part of each disk for a raidz3 root pool, 16 GB per disk, giving 80 GB usable for the root device.
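The raidz3 sizing arithmetic is simple: with 8 disks and 3 disks' worth of parity overhead, a 16 GB slice per disk leaves 5 slices of usable space:

```shell
# raidz3 usable space = (disks - parity) * per-disk slice
disks=8; parity=3; per_disk_gb=16
echo $(( (disks - parity) * per_disk_gb ))   # 80 GB usable for root
```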

The localDataStore pool will be raidz1 using the remaining space on each device. raidz1 was chosen because every VM is fully backed up, and even if more than one data disk fails, the server will still boot: the raidz3 root pool can lose three disks before it's unbootable. I've never lost a disk on these servers, and cold spares are on site.

We'll also pad the space between the root pool partition and the localDataStore partition with a 16 GiB partition, allowing easy expansion of the root pool if needed in the future. Finally, we'll leave 16 MiB of free space at the end of each disk.

Thus we'll have the following:

Disk /dev/sda: 3125627568 sectors, 1.5 TiB
Model: HUSMM141CLAR1600
Sector size (logical/physical): 512/4096 bytes
Disk identifier (GUID): 7E89E66A-F81F-431A-B71A-F8BA7583CDC1
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 3125627534
Partitions will be aligned on 8-sector boundaries
Total free space is 32774 sectors (16.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1              34            2047   1007.0 KiB  EF02
   2            2048         2099199   1024.0 MiB  EF00
   3         2099200        33554432   15.0 GiB    BF01
   4        33554440        67108871   16.0 GiB    BF01  Solaris /usr & Mac ZFS
   5        67108872      3125594767   1.4 TiB     BF01  Solaris /usr & Mac ZFS
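That table could be reproduced on each disk with sgdisk along these lines. This is a dry-run sketch using the sector numbers above: `run` echoes instead of executing, and the type codes and alignment should be double-checked before writing to a real disk.

```shell
# Dry-run sketch: print the sgdisk commands that would recreate the
# partition table above. Remove the echo in run() to actually apply.
DISK=/dev/sda                 # target disk (example)
run() { echo "$@"; }          # dry-run guard
run sgdisk --zap-all "$DISK"
run sgdisk -a 8 \
  -n 1:34:2047              -t 1:EF02 \
  -n 2:2048:2099199         -t 2:EF00 \
  -n 3:2099200:33554432     -t 3:BF01 \
  -n 4:33554440:67108871    -t 4:BF01 \
  -n 5:67108872:3125594767  -t 5:BF01 \
  "$DISK"
```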
# zpool list -v
NAME                               SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
localDataStore                    11.4T  2.27T  9.12T        -         -     0%    19%  1.00x    ONLINE  -
  raidz1-0                        11.4T  2.27T  9.12T        -         -     0%  19.9%      -    ONLINE
    scsi-35000cca0504cf90c-part5  1.42T      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca05053df48-part5  1.42T      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca0504d201c-part5  1.42T      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca0506d06e0-part5  1.42T      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca0504d3378-part5  1.42T      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca0506a76bc-part5  1.42T      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca0504c33c8-part5  1.42T      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca0504d1e3c-part5  1.42T      -      -        -         -      -      -      -    ONLINE
rpool                              119G  66.4G  52.6G        -         -     1%    55%  1.00x    ONLINE  -
  raidz3-0                         119G  66.4G  52.6G        -         -     1%  55.8%      -    ONLINE
    scsi-35000cca0504cf90c-part3  15.0G      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca05053df48-part3  15.0G      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca0504d201c-part3  15.0G      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca0506d06e0-part3  15.0G      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca0504d3378-part3  15.0G      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca0506a76bc-part3  15.0G      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca0504c33c8-part3  15.0G      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca0504d1e3c-part3  15.0G      -      -        -         -      -      -      -    ONLINE
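For reference, the data pool could be created from the -part5 slices roughly like this. Another dry-run sketch: rpool itself is created by the installer, and the ashift value and by-id device paths here are assumptions.

```shell
# Dry-run sketch: build localDataStore as raidz1 from the -part5
# partitions. zpool is shadowed with echo here; remove the function
# definition to actually create the pool.
zpool() { echo zpool "$@"; }
parts=""
for id in 35000cca0504cf90c 35000cca05053df48 35000cca0504d201c \
          35000cca0506d06e0 35000cca0504d3378 35000cca0506a76bc \
          35000cca0504c33c8 35000cca0504d1e3c; do
  parts="$parts /dev/disk/by-id/scsi-$id-part5"
done
zpool create -o ashift=12 localDataStore raidz1 $parts
```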

Preparation

As these are currently running in the pool, we need to:

  • Migrate VMs off the HV
  • Ensure shut-down VMs are backed up in PBS
  • Back up the remaining rpool
  • Check that the iLO works

ZFS backup to FatTony

Back up /etc/pve to /root/pve-etc

cp -av /etc/pve/ /root/pve-etc 

Make a dataset to store this on fattony

ssh root@192.168.8.184 zfs create testpool/`hostname`-old

Make a snapshot of the rootfs

zfs snapshot rpool/ROOT/pve-1@20240903-01

Send it to the host

zfs send rpool/ROOT/pve-1@20240903-01 |pv| ssh -c aes128-gcm@openssh.com 192.168.8.184 zfs recv -Fdu -o canmount=noauto testpool/`hostname`-old
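Afterwards it's worth confirming the snapshot actually landed on the backup host, along these lines (a dry-run sketch; `check` echoes the command it would run over ssh, so remove the echo to actually run it):

```shell
# Dry-run sketch: list the received snapshot on the backup host.
# Remove the echo to actually run the command over ssh.
check() { echo ssh root@192.168.8.184 "$@"; }
check zfs list -t snapshot -r testpool/$(hostname)-old
```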


Do the Install

Put the 8.2 ISO on the server and connect it to the iLO virtual CD drive http://keekles.org/~bryan/zfs/proxmox-ve_8.2-2.iso

This may take 5-10 minutes to load; watch the HTTP server logs.

At the GRUB prompt you will need to add

video=1024x768@60 nomodeset 

after the linux line of the GUI installer's boot entry

Press 'e', edit as below, then hit Ctrl-X. <image>
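For reference, the edited linux line ends up looking something like the following. The kernel path and existing arguments come from the 8.2 installer image and may differ; only the two appended parameters are the point.

```
linux /boot/linux26 ro ramdisk_size=16777216 rw splash=silent video=1024x768@60 nomodeset
```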