I have quite a bit of hardware across multiple machines and I would like to combine it into one workstation.
My concern is whether this configuration will work given the PCIe lanes available to the CPU and chipset. I will start with a rundown of the planned configuration and then look at it in relation to PCIe lanes.
* CPU: i9-9900K (16 PCIe lanes)
* Motherboard: ASUS ROG Maximus IX Code (24 PCIe lanes from chipset)
* RAM: 64GB GSkill Trident Z
* GPU1: ASUS Strix 2080Ti
* GPU2: ASUS Strix 980
* SSD1: 1TB Samsung NVME
* SSD2: 500GB Samsung NVME
* HDD1: 8TB IronWolf NAS Drive
* HDD2: 8TB IronWolf NAS Drive
* NIC: ASUS 10GbE (Aquantia) Card
* USB Expansion: Generic 4 Port USB 3.0 Card
* PSU: 1000W Corsair
I have all the hardware already, I just do not want to start tearing apart my existing machines until I am sure it will function.
From the CPU and chipset combined I have 40 lanes available. But from what I have read, the CPU will supply either x16 to one GPU, x8+x8 to the two GPUs, or x8+x4+x4 to GPU1, GPU2 (or an SSD?) and NVMe storage respectively, with the chipset covering "other devices", whatever that means.
If the GPUs and SSDs must get their lanes directly from the CPU rather than from the chipset, then this configuration will not work (should have gone Threadripper!).
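Once the machine is assembled, I plan to confirm what width each device actually negotiated by parsing the `LnkSta` lines from `lspci -vv`. A minimal sketch (the sample output below is made up for illustration, not captured from this build):

```python
import re

def link_widths(lspci_vv: str) -> dict:
    """Map PCI address -> negotiated link width (e.g. 'x16') from LnkSta lines."""
    widths, current = {}, None
    for line in lspci_vv.splitlines():
        m = re.match(r"^(\S+) ", line)
        if m and ":" in m.group(1):
            current = m.group(1)  # a device header line like '01:00.0 VGA ...'
        m = re.search(r"LnkSta:.*Width (x\d+)", line)
        if m and current:
            widths[current] = m.group(1)
    return widths

# Illustrative sample, not real output from this build:
sample = """\
01:00.0 VGA compatible controller: NVIDIA Corporation Device
\t\tLnkSta:\tSpeed 8GT/s, Width x16
02:00.0 VGA compatible controller: NVIDIA Corporation Device
\t\tLnkSta:\tSpeed 8GT/s, Width x8
"""
print(link_widths(sample))  # {'01:00.0': 'x16', '02:00.0': 'x8'}
```

In practice you would feed it `subprocess.check_output(["lspci", "-vv"], text=True)` (run as root so the capability blocks are visible).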
The idea with this configuration is that I can run my Linux flavour of choice as the main OS, with Windows installed to the 500GB NVMe SSD, which I can boot through a VM with the second GPU (the 980), the USB expansion card and the motherboard NIC passed through (I have tested this on my spare machine and it works a treat!). The IronWolves I am thinking of configuring with ZFS for redundant bulk storage, but I need to look into that more (and into how to pass that through to the Windows VM!).
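Before tearing anything apart, the passthrough plan also depends on the 980, the USB card and the NIC each landing in sensible IOMMU groups on this board. A quick sketch of listing them (assumes the standard Linux sysfs layout under `/sys/kernel/iommu_groups`):

```python
from pathlib import Path
from collections import defaultdict

def iommu_groups(root: str = "/sys/kernel/iommu_groups") -> dict:
    """Map IOMMU group number -> sorted list of PCI addresses in that group."""
    groups = defaultdict(list)
    for dev in Path(root).glob("*/devices/*"):
        # path layout: <root>/<group>/devices/<pci-address>
        groups[dev.parent.parent.name].append(dev.name)
    return {grp: sorted(devs) for grp, devs in groups.items()}

if __name__ == "__main__":
    for grp, devs in sorted(iommu_groups().items(), key=lambda kv: int(kv[0])):
        print(f"group {grp}: {', '.join(devs)}")
```

Devices that share a group with something you want to keep on the host generally cannot be passed through cleanly without an ACS workaround.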
I would also like the 2080 Ti at least to operate on a x16 link, as it will be the main workhorse in Linux. The 980 can limp along at x8, if possible off the chipset, for work in Windows on the TWO program suites I need it for (Autodesk and Creative Cloud!), and then x4 for each SSD, x4 for the 10GbE NIC and x4 for the USB expansion, for a total of:
x16 for GPU1 (all 16 lanes from the CPU); x8 for GPU2, x4 for SSD1, x4 for SSD2, x4 for the NIC and x4 for the USB expansion (24 lanes from the chipset).
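To sanity-check the arithmetic, here is how I tally it (assuming the slot widths above are actually achievable on this board; note that everything hung off the chipset also shares the single DMI 3.0 uplink back to the CPU, which is roughly PCIe 3.0 x4 bandwidth):

```python
cpu_lanes = 16      # i9-9900K CPU lanes
chipset_lanes = 24  # Z-series chipset lanes

cpu_devices = {"GPU1 (2080 Ti)": 16}
chipset_devices = {
    "GPU2 (980)": 8,
    "SSD1 (1TB NVMe)": 4,
    "SSD2 (500GB NVMe)": 4,
    "10GbE NIC": 4,
    "USB 3.0 card": 4,
}

assert sum(cpu_devices.values()) <= cpu_lanes
used = sum(chipset_devices.values())
print(f"chipset lanes used: {used} of {chipset_lanes}")  # chipset lanes used: 24 of 24
```

So on paper the budget fits exactly, but all chipset devices active at once would contend for DMI bandwidth.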
To summarise my questions:
* Is this configuration possible?
* Is x16 required for GPU1 (workloads include rendering in software like Blender and simulation software)? I can only find gaming comparisons, which are not useful since gaming does not saturate an x8 link, but do intensive workloads?
* If GPU1 runs at x8 with no expected loss of performance, is the configuration possible (CPU x8+x8)?
Ideally I do not want to buy *more* hardware, so if this is not possible I will keep my machines separate; my desk is 50% covered in computer chassis and the other 50% in monitors!