Is my idea possible regarding PCI lanes?

I have quite a bit of hardware across multiple machines and I would like to combine the hardware into one workstation.

 

My concern is whether this configuration will work with the PCI lanes available from the CPU and chipset. I will start with a rundown of the configuration I have in mind and then look at it in relation to the available lanes.

 

* CPU: i9 9900k (+16 PCI Lanes)
* Motherboard: ASUS ROG Maximus IX Code (+24 PCI Lanes from Chipset)
* RAM: 64GB GSkill Trident Z
* GPU1: ASUS Strix 2080Ti
* GPU2: ASUS Strix 980
* SSD1: 1TB Samsung NVME
* SSD2: 500GB Samsung NVME
* HDD1: 8TB IronWolf NAS Drive
* HDD2: 8TB IronWolf NAS Drive
* NIC: ASUS 10Gbe (Aquantia) Card
* USB Expansion: Generic 4 Port USB 3.0 Card
* PSU: 1000W Corsair

 

I have all the hardware already; I just do not want to start tearing apart my existing machines until I am sure it will function.

 

From the CPU and chipset I have 40 lanes available. But from what I have read, the CPU will supply either x16 to one GPU, x8+x8 to the two GPUs, or x8+x4+x4 to GPU1, GPU2 (or one of the SSDs?) and NVMe storage respectively, with the chipset covering "other devices", whatever that means.
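For what it is worth, once everything is in one box the negotiated link widths can be read straight from sysfs under Linux, so the split does not have to stay a guess. A minimal sketch, assuming the standard `current_link_width`/`current_link_speed` attributes (stock sysfs names, though not every device exposes them):

```python
#!/usr/bin/env python3
# Print the negotiated PCIe link width and speed for every PCI device.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        width = (dev / "current_link_width").read_text().strip()
        speed = (dev / "current_link_speed").read_text().strip()
    except OSError:
        continue  # bridges and some functions do not expose link attributes
    print(f"{dev.name}: x{width} @ {speed}")
```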

 

If the GPUs and SSDs must get their lanes from the CPU directly rather than the chipset, then this configuration will not work (Should have gone Threadripper!). 

 

The idea with this configuration is that I can run my Linux flavor of choice as the main OS, with Windows installed to the 500GB NVMe SSD and booted through a VM with the GPU, the USB expansion card and the motherboard NIC passed through (I have tested this on my spare machine and it works a treat!). The IronWolfs I am thinking of configuring with ZFS for redundant bulk storage, but I need to look into that more (and into how to pass that storage to the Windows VM!).
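For the ZFS part, the rough shape I have in mind is a two-disk mirror with a zvol carved out for the VM. Just a sketch; the pool name, zvol name and disk IDs below are placeholders, not anything on my actual system:

```python
#!/usr/bin/env python3
# Sketch: mirror the two IronWolfs and carve out a zvol for the Windows VM.
# Pool/zvol names and disk IDs are placeholders - use your own by-id paths.
import subprocess

disks = [
    "/dev/disk/by-id/ata-IRONWOLF_SERIAL_A",   # placeholder
    "/dev/disk/by-id/ata-IRONWOLF_SERIAL_B",   # placeholder
]

# Two-way mirror for redundancy.
subprocess.run(["zpool", "create", "tank", "mirror", *disks], check=True)

# A 2 TB zvol the VM can be given as a raw block device instead of a disk image.
subprocess.run(["zfs", "create", "-V", "2T", "tank/win-bulk"], check=True)

# The zvol then appears as /dev/zvol/tank/win-bulk on the host.
```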

 

I would also like the 2080 Ti, at least, to operate on an x16 link, as that will be the main workhorse in Linux. The 980 can limp along at x8 off the chipset, if possible, for work in Windows on the TWO program suites I need it for (Autodesk and Creative Cloud!), and then x4 for each SSD, x4 for the 10GbE NIC and x4 for the USB expansion, using a total of:

 

x16 for GPU1 (all 16 lanes from the CPU), plus x8 for GPU2, x4 for SSD1, x4 for SSD2, x4 for the NIC and x4 for the USB expansion (24 lanes from the chipset).
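As a sanity check on my own arithmetic, tallying that split against the two budgets (the per-device lane counts are just my assumptions from above):

```python
# Rough lane-budget tally for the proposed split.
cpu_budget, chipset_budget = 16, 24

cpu_devices = {"GPU1 (2080 Ti)": 16}
chipset_devices = {"GPU2 (980)": 8, "SSD1": 4, "SSD2": 4,
                   "10GbE NIC": 4, "USB 3.0 card": 4}

print(f"CPU lanes used:     {sum(cpu_devices.values())} / {cpu_budget}")
print(f"Chipset lanes used: {sum(chipset_devices.values())} / {chipset_budget}")
# Note: everything hanging off the chipset still shares the single DMI 3.0
# uplink to the CPU (roughly PCIe 3.0 x4 worth of bandwidth), so those 24
# lanes cannot all run flat out at once.
```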

 

To summarise my questions:

 

Is this configuration possible?

 

Is x16 required for GPU1 (workloads include rendering in software like Blender and simulation software)? I can only find gaming comparisons, which are not useful since gaming does not saturate an x8 link, but do intensive workloads?

 

If GPU1 can run at x8 with no expected loss of performance, is the configuration possible with the CPU split x8+x8?

 

I ideally do not want to buy *more* hardware, so if it is not possible I will keep my machines separate. My desk is already 50% covered in computer chassis and the other 50% in monitors!

 

A problem you're going to run into here is IOMMU groups. Anything that goes through the chipset is going to share the same IOMMU group, and everything within a given IOMMU group has to be passed through to the VM. In theory you can split them, but I've been told it's a big hassle.

 

You don't want devices going off the chipset. Ideally you want all of your hardware that is to be passed through to have a direct path off the CPU.
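A quick way to see what you are actually dealing with is to list the groups out of sysfs; something like this sketch works once the IOMMU is enabled in the BIOS and on the kernel command line (e.g. `intel_iommu=on`), otherwise the directory is simply empty:

```python
#!/usr/bin/env python3
# List every IOMMU group and the PCI devices that share it.
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    devices = sorted(d.name for d in (group / "devices").iterdir())
    print(f"Group {group.name}: {', '.join(devices)}")
```

If the 980, an NVMe drive and the USB card all land in one big chipset group, that is the hassle described above.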

19 minutes ago, Windows7ge said:

A problem you're going to run into here is IOMMU groups. Anything that goes through the chipset is going to share the same IOMMU group, and everything within a given IOMMU group has to be passed through to the VM. In theory you can split them, but I've been told it's a big hassle.

 

You don't want devices going off the chipset. Ideally you want all of your hardware that is to be passed through to have a direct path off the CPU.

With the method I am using in testing (found on the levelonetechs forum), I pass individual devices. For example, passing the 980 requires passing both the GPU and its HD Audio controller.

 

Using the IOMMU group enumeration and the device hardware IDs, I can force the kernel to load the VFIO driver for any specific device for passthrough.
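For reference, that step boils down to handing vfio-pci a list of vendor:device IDs. A small sketch of how the IDs can be collected; the PCI addresses are placeholders for wherever the 980 and its HD Audio function actually sit:

```python
#!/usr/bin/env python3
# Build the vfio-pci ids string from the sysfs vendor/device files.
from pathlib import Path

# Placeholder addresses - substitute the ones lspci reports for the GPU + audio pair.
passthrough = ["0000:02:00.0", "0000:02:00.1"]

ids = []
for addr in passthrough:
    dev = Path("/sys/bus/pci/devices") / addr
    vendor = dev.joinpath("vendor").read_text().strip()[2:]  # drop the 0x prefix
    device = dev.joinpath("device").read_text().strip()[2:]
    ids.append(f"{vendor}:{device}")

print("vfio-pci.ids=" + ",".join(ids))
```

The printed string goes on the kernel command line if vfio-pci is built in, or into a modprobe options file if it is loaded as a module.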

 

The GPU in my testing would definitely be using CPU lanes, as it would in the proposed config. The SSD and USB expansion would be from the chipset, I believe. I am unsure how this would affect performance. Noticeably? If performance is better than using a virtual disk then I would still consider it a win.

16 minutes ago, NeonBlizzard said:

With the method I am using in testing (found on the levelonetechs forum), I pass individual devices. For example, passing the 980 requires passing both the GPU and its HD Audio controller.

 

Using the IOMMU group enumeration and the device hardware IDs, I can force the kernel to load the VFIO driver for any specific device for passthrough.

 

The GPU in my testing would definitely be using CPU lanes, as it would in the proposed config. The SSD and USB expansion would be from the chipset, I believe. I am unsure how this would affect performance. Noticeably? If performance is better than using a virtual disk then I would still consider it a win.

Passing through a drive will give you better performance than a virtual SATA or iSCSI drive, yes.
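With a libvirt-managed VM, for example, the whole block device (or a zvol) can be handed over as a virtio disk; a sketch with a hypothetical domain name and device path:

```python
#!/usr/bin/env python3
# Attach a raw block device to an existing libvirt guest as a virtio disk.
import subprocess

vm = "win10"                          # hypothetical libvirt domain name
blockdev = "/dev/zvol/tank/win-bulk"  # hypothetical zvol/block device path

subprocess.run(["virsh", "attach-disk", vm, blockdev, "vdb",
                "--targetbus", "virtio", "--persistent"], check=True)
```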

 

If you pass through the GPU in the top slot you should be fine. The Linux GPU will have to deal with whatever additional latency comes with going through the chipset.

 

To answer the question about PCIe 3.0 x8 bandwidth: the GPU shouldn't experience any bottleneck. It's more than enough for gaming and workstation use.
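The back-of-envelope numbers, per direction, with the 128b/130b encoding accounted for:

```python
# Approximate usable PCIe 3.0 bandwidth per direction.
gt_per_s = 8.0          # PCIe 3.0 signalling rate per lane (GT/s)
encoding = 128 / 130    # 128b/130b line-code efficiency

for lanes in (8, 16):
    gb_per_s = gt_per_s * encoding * lanes / 8  # 8 bits per byte
    print(f"x{lanes}: ~{gb_per_s:.2f} GB/s")
# x8 is roughly 7.88 GB/s, x16 roughly 15.75 GB/s
```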

 

I have my own VFIO tutorial that I originally based on the L1T PopOS VFIO guide. I offered it to him, as he had requested of anyone basing a tutorial on his, because his was quite incomplete, but he never responded. Oh well, I fulfilled my half of the bargain. You're welcome to my version of the VFIO tutorial if you hit any roadblocks:

 

 

Not trying to be mean or anything, but it's PCIe, not PCI. PCI is a very old standard, no longer found on any motherboard compatible with a 9900K.

ROG Maximus IX Code does not support the i9-9900K. It is a Z270 motherboard.


 

