
Additional PCIe cards


Recommended Posts

Posted · Original Poster (OP)

I want to ask whether just plugging in additional PCIe cards (be it a secondary GPU, Wi-Fi, RAID, whatever, it does not really matter) takes up PCIe lanes by default, or whether these cards actually need to be in use by some task to change the lane setup?

 

I am just curious because there are a few additional cards I could probably put to use in some cases, but I would not like to hinder the rest of the cards while I am not using the others. Right now it most likely would not even matter, but the question stands either way.


They will consume lanes regardless of whether they are in use or not, because they need to be initialized at boot.

Depending on the device and which slot, though, it's likely they will pull from the chipset lanes and not the lanes connected directly to the CPU.



Posted · Original Poster (OP)
22 minutes ago, Lurick said:

They will consume lanes regardless of whether they are in use or not, because they need to be initialized at boot.

Depending on the device and which slot, though, it's likely they will pull from the chipset lanes and not the lanes connected directly to the CPU.

Ahh, so that's how it is, the lanes are distributed during boot, okay, got it.

 

Thank you :)

Posted · Best Answer

Notice that, depending on your motherboard, there may or may not be such "competition" for lanes. On typical consumer boards, you may have 2, sometimes 3, physical x16 slots that share lanes and switch between x16 or x8+x8 (or x8+x4+x4 in the 3-slot case) depending on what you plug in. As @Lurick said, such a lane split will occur the moment you have something plugged into the additional slot(s).

However, there are additional slots that are wired independently and don't trigger any lane reconfiguration (typically x1 slots on consumer boards, but sometimes some bigger slots as well, or x16 slots on "HEDT" motherboards). In those cases there is no change in the lane setup. Furthermore, there are PCIe slots that are routed through the chipset and share a common connection to the CPU. In those cases the lanes don't change either, and plugging something in generates no conflict by itself, but the devices will "compete" for bandwidth when in use.

 

TL;DR:

- for independently wired slots, it doesn't matter

- for shared CPU lanes with switch/bridge, what matters is plugging

- for shared CPU lanes through the chipset, what matters is activity/usage
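The three TL;DR cases can be sketched as a toy Python function. The slot categories and return logic are a simplification of the explanation above, not any real board's firmware behavior:

```python
# Toy model of the three slot-wiring cases: does this slot affect
# other slots' lanes or bandwidth, and when?

def lane_config_changes(slot_wiring: str, occupied: bool, active: bool) -> bool:
    """Return True if this slot impacts other slots' lanes/bandwidth."""
    if slot_wiring == "independent":      # own dedicated lanes: never matters
        return False
    if slot_wiring == "shared_switched":  # shares CPU lanes via a switch/bridge:
        return occupied                   # the split happens as soon as a card is present
    if slot_wiring == "chipset":          # routed through the chipset uplink:
        return active                     # contention only while the card moves data
    raise ValueError(f"unknown wiring type: {slot_wiring}")

assert lane_config_changes("independent", True, True) is False
assert lane_config_changes("shared_switched", True, False) is True
assert lane_config_changes("chipset", True, False) is False
```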

 

Posted · Original Poster (OP)
1 hour ago, SpaceGhostC2C said:

[...]

TL;DR:

- for independently wired slots, it doesn't matter

- for shared CPU lanes with switch/bridge, what matters is plugging

- for shared CPU lanes through the chipset, what matters is activity/usage

 

Thank you very much for the TL;DR, not that I did not read through the reply, but this simplification made it way easier to understand what is going on with the PCIe lanes.

 

It probably won't ever be my own concern, but there is one more technicality I want to ask about. I know that different CPUs provide different PCIe lane counts (the most obvious difference is between HEDT and normal consumer stuff), and I guess that different motherboards also offer different counts with varying chipsets, not to mention different setups for their direct CPU lanes?

 

So if I were planning some relatively heavy PCIe usage, would I need to balance the CPU against the motherboard, or are the direct CPU lanes completely separate from the chipset ones?

 

So there is no case in which I could actually bottleneck the chipset by using too many direct lanes from the PCIe slots?


There's a number of PCIe lanes offered by the CPU, and there's a number of lanes offered by the chipset.

The chipset is connected to the CPU using a link with limited bandwidth, typically around 4 GB/s.

 

I'll give you an example with the AM4 socket from AMD, as it's simpler.

The processors have 24 PCIe lanes:

1. 4 PCIe lanes are always used to connect to the chipset (4 GB/s)

2. 4 PCIe lanes are typically used for the M.2 connector (the motherboard maker can choose to use them for something else, but most stick with M.2)

3. 16 PCIe lanes are used for the PCIe x16 slot (for the video card)
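The lane budget above can be double-checked with a quick sum. The numbers come from the list; the dictionary keys are just illustrative labels:

```python
# Rough AM4 CPU lane budget, as described in the post above.
am4_cpu_lanes = {
    "chipset_uplink": 4,  # fixed link to the chipset
    "m2_slot": 4,         # usually wired to an M.2 connector
    "x16_slot": 16,       # the graphics slot (splittable on some chipsets)
}

# The three allocations account for all 24 lanes the CPU provides.
assert sum(am4_cpu_lanes.values()) == 24
```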

 

The 16 PCIe lanes for the video card can be split automatically into 2 x8 slots if the proper chipset is used (X370 or X470). With the other chipsets, splitting the x16 slot into 2 x8 slots is not possible.

 

The chipset provides additional PCIe lanes of its own. The number of lanes varies with the chipset.

From memory - and I may be wrong - the A320 chipset offers 4 PCIe 2.0 lanes, B450 offers 6 lanes, and the X470 chipset offers 8 lanes.

Because of this low number of lanes, B450 boards will typically have a second PCIe x16 slot that actually has only 4 PCIe lanes, plus 1 or 2 PCIe x1 slots.

 

Usually, the 4 GB/s of bandwidth between the CPU and chipset is hard to saturate.

For example, even if you'd transfer data from an M.2 SSD to a SATA RAID at, let's say, 1.5-2 GB/s, and at the same time capture 4K raw footage from a capture card in the PCIe x4 slot at 1-2 GB/s, you still wouldn't saturate the chipset's link to the CPU.
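A rough back-of-the-envelope check of that example (a PCIe 3.0 x4 link runs at 8 GT/s per lane with 128b/130b encoding; the transfer rates below are taken from the figures quoted above):

```python
# Usable bandwidth of a PCIe 3.0 x4 chipset uplink, one direction.
lanes = 4
gt_per_s = 8                                   # PCIe 3.0: 8 GT/s per lane
link_gb_s = lanes * gt_per_s * 128 / 130 / 8   # ~3.94 GB/s after encoding overhead

ssd_to_raid = 2.0   # GB/s, upper end of the SSD-to-RAID transfer in the example
capture = 1.5       # GB/s, 4K raw capture (within the 1-2 GB/s range given)

# The combined load stays under the uplink's usable bandwidth.
assert ssd_to_raid + capture < link_gb_s
```

Note that the worst case of both ranges combined (2 + 2 GB/s) would in fact exceed the uplink, so "hard to saturate" rather than "impossible" is the right reading.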

 

 

4 hours ago, Ergroilnin said:

So there is no case in which I could actually bottleneck the chipset by using too many direct lanes from the PCIe slots?

I'm not sure I understood your full question, but to this last part the answer is "No": the connection between the CPU and the chipset uses lanes exclusively assigned to that connection.

 

More generally, the PCIe lane layout is mostly fixed within a motherboard, with only limited, specific variations allowed between certain slots. For example, a Ryzen CPU has 24 PCIe lanes. 4 of those go to the chipset, no matter what. 4 go to storage, typically an M.2 slot for SSDs, whether you occupy it or not. You are left with 16 lanes that, on some boards, will have a bridge along the way to allow for an x8+x8 or x8+x4+x4 split, depending on what you plug in (and sometimes need to configure in the BIOS). Hence, the flexibility is limited to whether you split those 16 lanes between 2-3 pre-specified slots, or leave them all to the (pre-specified) top slot. You cannot take lanes away from the chipset, nor give more to it.

The chipset works a bit differently, as it centralizes all the slots connected to it and routes their traffic to the CPU. However, the layout coming out of the chipset is typically also fixed, meaning that every slot connected to the chipset uses a fixed number of lanes to reach it, and the chipset then passes that traffic on to the CPU over its 4 dedicated lanes. Therefore, while you are not reallocating lanes "downstream", between the chipset and the slots, how many slots are active, and how intensively they are being used, limits how much of the chipset's own 4-lane uplink each device can get. So, if you have two M.2 PCIe x4 SSDs connected through the chipset and use one at a time, it will get the full 4 lanes' worth of bandwidth; but if you use both simultaneously, they will share the uplink and effectively behave as if they were x2 each. That makes the lanes coming out of the chipset seem more "fluid", even though the wiring itself is fixed.
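The two-SSD sharing scenario can be put into numbers. The 3.5 GB/s drive speed below is a hypothetical figure for an x4 NVMe SSD, chosen only to illustrate the split:

```python
# Two PCIe 3.0 x4 SSDs behind the chipset share one x4 uplink to the CPU.
uplink_gb_s = 4 * 8 * 128 / 130 / 8   # ~3.94 GB/s, PCIe 3.0 x4 uplink
ssd_peak = 3.5                        # GB/s, hypothetical x4 NVMe drive

one_active = min(ssd_peak, uplink_gb_s)       # alone, a drive gets the whole uplink
both_active = min(ssd_peak, uplink_gb_s / 2)  # together, each gets roughly half (~x2)

# Using both drives at once roughly halves each one's throughput.
assert both_active < one_active
```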

