After two energy-efficient servers were handling the basic services, I wanted a way to do serious virtualization of desktop operating systems, especially Windows. The idea of running multiple Windows VMs with working Nvidia vGPUs, so that guests could be offered a usable Windows environment even without a compatible Windows machine of their own, was very tempting.
To get the most value for the money, a system with as many PCI Express lanes as possible at a low price makes the most sense. The reason is obvious: everything required runs over PCI Express:
- NVMe SSDs
- Expansion cards for SATA / HBA
- 10 Gbps network card(s)
- Graphics card(s)
That leaves only Intel Xeon from the workstation class upwards, AMD Threadripper, or AMD Epyc.
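To illustrate why lane count dominates the platform choice, here is a rough back-of-the-envelope lane budget. The per-device lane counts are typical assumptions (x4 per NVMe drive, x8 for an HBA or 10 GbE NIC, x16 per GPU), not specs from this exact build:

```python
# Rough PCIe lane budget for a build like the one described above.
# Lane counts per device are common assumptions, not measured values.
devices = {
    "2x NVMe SSD (x4 each)": 2 * 4,
    "SATA/HBA expansion card": 8,
    "10 Gbps network card": 8,
    "Graphics card": 16,
}

total = sum(devices.values())
for name, lanes in devices.items():
    print(f"{name}: x{lanes}")
print(f"Total lanes needed: x{total}")  # -> x40
```

Even this modest configuration needs about 40 lanes, which is already beyond what a mainstream desktop CPU (typically 20 to 24 lanes) provides without sharing or switching, while HEDT and server platforms offer far more.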
Workstation?
I wanted to avoid a used OEM workstation and go the DIY route for several reasons. For one thing, these OEM motherboards are always built "just good enough"; for another, many of the components used in them are proprietary rather than standards-compliant, which makes them difficult to maintain, replace, or upgrade. Overclocking would also not be possible, though that would not matter in this case anyway.
Let’s continue with the base.