I am building a Proxmox server running on an SFF PC. Right now I have:
- 1 x 250 GB Kingston A400 SATA SSD
- 1 x 512 GB Samsung 970 EVO Plus NVMe SSD
- 1 x 512 GB Kingston KC3000 NVMe SSD
- 1 x 12 TB Seagate IronWolf re-certified disk
I plan to install Proxmox on the 250 GB Kingston disk using ext4 and use it only for Proxmox itself, nothing else.
I am thinking of configuring a ZFS mirror on the two NVMe disks. One disk sits in the motherboard's M.2 slot and the other is connected to a PCIe slot with an adapter, as the board has only one M.2 slot. I plan to use this zpool for VMs and containers.
Finally, the re-certified 12 TB disk is currently going through a long smartctl test to confirm that it is usable. It will be used primarily for media, non-critical data and VM snapshots, none of which I care much about. In parallel I will most likely also back up the critical data to a cloud location as an additional layer of protection for my most important files.
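For reference, the long test I'm running is essentially the following (the device path is just a placeholder for the IronWolf):

```
smartctl -t long /dev/sdX   # start the extended self-test; this takes many hours on a 12 TB disk
smartctl -a /dev/sdX        # check progress and, once finished, the self-test log and SMART attributes
```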
My main question: should I really be concerned about the lack of DRAM in the Kingston A400 and its relatively low TBW endurance (85 TB) if I only use it to boot Proxmox? My assumption is that the wear in that role would be negligible.
- I have the option to swap the Proxmox boot drive for a better SSD, such as a Samsung 870 EVO (SATA SSD with 3-bit MLC NAND and a DRAM cache). I would of course need to pay around 60% more, and I suspect that might be overkill.
- Do you think that using a ZFS pool on the two NVMe drives will wear them out very quickly? I will run 3-4 VMs and a bunch of containers.
- Will a slow Proxmox boot drive (SATA SSD) slow down the VMs and containers, given that they will run on the much quicker NVMe SSDs, or won't it matter?
- Should I format the Seagate HDD with XFS to speed up transfers of large files, or should I stick to ext4?
- What other tests shall I run to confirm that the HDD is indeed fine and I can use it?
A Proxmox root device uses barely any space. Mine usually sit around 12-14 GB used, and writes are also negligible, so DRAM-less SSDs are not a problem. I would suggest installing Debian first, so you can properly partition your root device before installing Proxmox (https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm). You'll have much more control over your disks and networking than with Proxmox's own installer. Start with a 64 GB root partition and leave the rest of the drive empty for future use/SLC cache.
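If you'd rather lay the disk out with parted before running the installer, a minimal sketch of that layout could look like this (device name and sizes are examples, adjust to your setup):

```
# GPT label, an EFI system partition, and a 64 GB root; the remaining ~185 GB stays unallocated
parted /dev/sda -- mklabel gpt
parted /dev/sda -- mkpart ESP fat32 1MiB 1GiB
parted /dev/sda -- set 1 esp on
parted /dev/sda -- mkpart root ext4 1GiB 65GiB
mkfs.fat -F32 /dev/sda1
mkfs.ext4 /dev/sda2
```

The Debian installer's manual partitioning can do the same thing interactively; the point is just to keep root small and leave free space behind.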
Unless your VMs are somehow high-volume data writers, like a proof-of-space coin, I wouldn't worry about it. Homelab setups rarely come anywhere near the write endurance of modern SSDs.
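If you want to keep an eye on wear anyway, smartctl exposes the relevant counters; a quick check could look like this (device names are examples, and the SATA attribute names vary by model):

```
# NVMe drives report a wear estimate and lifetime writes in their health log
smartctl -a /dev/nvme0n1 | grep -iE 'percentage used|data units written'
# SATA SSDs like the A400 expose similar vendor attributes
smartctl -A /dev/sda | grep -iE 'lbas_written|wear'
```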
Your VMs are not going to write to the root device, so it won't matter.
You won't notice a performance difference between filesystems on a rotating hard disk. Look for other useful features instead, like at-rest encryption and checksumming for bitrot protection.
I would use a filesystem with checksumming rather than relying on any point-in-time check to monitor HDDs. Assume they will all fail eventually, because they will.
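Concretely, if the HDD ends up in a ZFS pool, a periodic scrub is what turns the checksumming into ongoing monitoring; a minimal sketch, assuming a pool named tank:

```
zpool scrub tank       # re-reads every block and verifies checksums
zpool status -v tank   # reports read/write/checksum errors and any affected files
```

Pair it with a monthly cron job or systemd timer if your distribution doesn't already schedule one.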
I am probably going to run an arr stack in Docker containers, but they will write to the HDD. What filesystems should I use for the drives? This topic seems to be quite the rabbit hole, and I simply want to build this system properly: I plan to leave it running in a remote location, so reliability is a very important factor.
If you're already going to the trouble of setting up ZFS for the two NVMe disks, I would suggest setting up a separate pool on the HDD as well. It will save you from monitoring two different filesystem types and give you all the ZFS features: checksumming, compression, snapshots, etc. Do make sure your server has a decent chunk of memory though, as your VMs will be fighting the ARC for RAM...
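A rough sketch of what that could look like, with made-up pool names (nvpool, tank) and the disk IDs shortened to placeholders:

```
# mirrored pool on the two NVMe drives for VMs and containers
zpool create -o ashift=12 nvpool mirror \
    /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_xxx \
    /dev/disk/by-id/nvme-KINGSTON_SKC3000_xxx
# single-disk pool on the 12 TB IronWolf for media and non-critical data
zpool create -o ashift=12 tank /dev/disk/by-id/ata-ST12000_xxx
zfs set compression=lz4 tank
# optionally cap the ARC (here 4 GiB) so it doesn't compete with the VMs for RAM
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
update-initramfs -u
```

Then add both pools as ZFS storage in the Proxmox GUI (Datacenter -> Storage -> Add -> ZFS) so VM disks and snapshots land where you intend.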