PCIe combiner

Solved by mariushm:
57 minutes ago, fasauceome said:

A PCIe splitter is a simple enough concept: take one bus and make it accept multiple signals. However, is the opposite possible? Could you wire a single PCIe x16 bus into 2 x16 slots on a board? The reason I ask is that PCIe 2.0 has half the bandwidth of 3.0, and 4.0 will double it again, so if you needed a bunch of bandwidth but only had legacy slots, could they be addressed as one connection?

 

I don't see how what you're thinking of doing would actually work.

The PCIe lanes can't be combined just like that on your motherboard; it's not like you have a 48-port Ethernet switch (where each port is an analogy for an x1 lane). It's more like having multiple PCIe controllers inside the chipset and in the CPU. Each controller can do 16 lanes and allows those lanes to be arranged in various combinations (like 1x16, 2x8, 1x8+2x4, 1x4+4x1, etc.), and you have a limited number of devices allowed per set of 16 lanes.

These controllers, each with their own set of lanes, can't work together to mix their lanes with another controller's lanes; each controller has its own region of memory where data packets are moved, and so on.

Also keep in mind that on most motherboards some slots are more special than others: they're connected to the CPU and therefore have higher read/write speeds to system RAM. Other PCIe slots come from the chipset, which talks to the CPU over a channel that runs at a lower speed.

So, for example, a video card running at PCIe x16 v3.0 from the CPU can read or write at 16 x ~985 MB/s = ~15.8 GB/s, but if you have the video card in a PCIe x16 v2.0 slot created by the chipset you get 16 x 500 MB/s = 8 GB/s between the video card and the chipset, while the chipset has only about 4 GB/s between itself and the CPU (and therefore RAM)... so really your video card would only read and write data from RAM at around 4 GB/s.
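As a back-of-the-envelope sketch of that bottleneck (the per-lane figures are the rounded numbers above, and the 4 GB/s chipset uplink is an assumed DMI-style link):

```python
# Effective GPU <-> RAM throughput is limited by the slowest hop on the path.
def path_throughput(*hops_gb_s):
    """The usable bandwidth of a chain of links is the minimum of its hops."""
    return min(hops_gb_s)

gen3_x16_cpu = 16 * 0.985        # ~15.8 GB/s, x16 v3.0 slot wired straight to the CPU
gen2_x16_chipset = 16 * 0.5      # 8 GB/s, x16 v2.0 slot hanging off the chipset
chipset_uplink = 4.0             # assumed chipset <-> CPU link (DMI-style)

print(path_throughput(gen3_x16_cpu))                      # ~15.8 GB/s
print(path_throughput(gen2_x16_chipset, chipset_uplink))  # 4.0 GB/s
```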

Most CPUs only give you 16 lanes, which motherboard makers can arrange for video cards as x16 or 2x8, but that's about it. Exceptions are processors like Threadripper, with 60 lanes that can be arranged into x16 slots or subsets of an x16.

 

So you're making the assumption that if you have 2 x16 v2.0 slots on the motherboard, you may have 2 x 16 x 500 MB/s = 16 GB/s, which is equivalent to PCIe x16 v3.0, so in theory you could make a chip that would behave like a 1x16 v3.0 <---> 2x16 v2.0 converter. But you would either have those x16 v2.0 slots on separate controllers (so not possible), or you wouldn't actually have 2x16 electrically, you would have 2x8 electrically on the motherboard.
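Written out, the aggregate math behind that assumption looks like this (a sketch of the hypothetical converter scenario, not a real product):

```python
# On paper, two x16 v2.0 slots match one x16 v3.0 link in payload bandwidth.
# In practice those two slots sit on separate controllers (or are only x8
# electrically), so the hypothetical 1x16 v3.0 <-> 2x16 v2.0 chip has nothing
# it could actually aggregate.
two_x16_gen2 = 2 * 16 * 0.5     # 16.0 GB/s, the "combined" legacy slots
one_x16_gen3 = 16 * 0.985       # ~15.8 GB/s, a single modern slot
print(two_x16_gen2, round(one_x16_gen3, 1))
```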

 

The other way around is easy, because chips like the PLX switches create 32 lanes and arrange them into 2x16 or whatever combination is desired, then combine and buffer the data from all devices to push it into a single x16 (or narrower) link. The chip is dealing with a single "controller" on the motherboard, so it works.

 

PS. Another reason why it would be super hard is that PCIe 2.0 uses a particular encoding (8b/10b) for data packets, while PCIe 3.0 uses another encoding (128b/130b), and PCIe 4.0 keeps that encoding. So any chip that converts between PCIe 3.0 and 2.0 would have to be fast enough, and have enough cache memory, to process ~1 GB/s per lane of data packets encoded in 128b/130b and convert them into 8b/10b-encoded data packets (128b/130b loses only about 1.5% of bandwidth to encoding overhead, while 8b/10b loses 20%), and do a whole load of other things, all on the fly. It's not easy, it's a lot of transistors, and it's a lot of silicon die space for something that would see very low demand, so it's probably not worth investing hundreds of thousands of dollars in designing such a chip.
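A quick sketch of those encoding numbers and the workload such a hypothetical converter would face (raw rates are the spec transfer rates; the rest is rough arithmetic):

```python
# Encoding overhead: 8b/10b (v1.x/v2.0) spends 20% of the raw bits on encoding,
# 128b/130b (v3.0/v4.0) only ~1.5%. A hypothetical x16 v3.0 <-> 2x16 v2.0
# converter would have to re-encode the whole payload stream between the two
# schemes on the fly.
overhead_8b10b = 1 - 8 / 10                 # 0.20
overhead_128b130b = 1 - 128 / 130           # ~0.015

payload_gbit = 16 * 8 * (128 / 130)         # x16 v3.0 payload, ~126 Gbit/s
raw_gen2_gbit = payload_gbit / (8 / 10)     # raw v2.0 bits needed to carry it
lanes_gen2 = raw_gen2_gbit / 5              # v2.0 lanes at 5 GT/s each

print(f"8b/10b overhead:    {overhead_8b10b:.0%}")
print(f"128b/130b overhead: {overhead_128b130b:.2%}")
print(f"payload: ~{payload_gbit / 8:.1f} GB/s -> needs ~{lanes_gen2:.0f} v2.0 lanes")
```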

A PCIe splitter is a simple enough concept: take one bus and make it accept multiple signals. However, is the opposite possible? Could you wire a single PCIe x16 bus into 2 x16 slots on a board? The reason I ask is that PCIe 2.0 has half the bandwidth of 3.0, and 4.0 will double it again, so if you needed a bunch of bandwidth but only had legacy slots, could they be addressed as one connection?


I guess in theory it could be possible, but outside of "freak machine" builds I don't think there's any use for it, and as a result I don't think anyone would be interested in designing the silicon for it.


Nothing even uses full 16x bandwidth...


2 minutes ago, manikyath said:

I guess in theory it could be possible, but outside of "freak machine" builds I don't think there's any use for it, and as a result I don't think anyone would be interested in designing the silicon for it.

If I hacked one together myself, would the motherboard know what to do with it?


14 minutes ago, fasauceome said:

needed a bunch of bandwidth but had legacy slots

I guess then it's time for a new system...

That's like wanting to combine multiple IDE connectors for your SATA SSD, and that's just silly.


Just now, Enderman said:

Nothing even uses full 16x bandwidth...

Then say it's scaled down: double PCIe x4, for whatever reason (yes, I know there's such a thing as x8). If you want to combine smaller inputs, would it be possible?


1 minute ago, Olaf6541 said:

I guess then it's time for a new system...

That's like wanting to combine multiple IDE connectors for your SATA SSD, and that's just silly.

Isn't that what RAID does? At least sort of, anyway.


Just now, fasauceome said:

Isn't that what RAID does? At least sort of, anyway.

Yeah, on a software level, to create a single array; it's not actually fusing hardware together to form one giant disk.

Each disk needs to be attached individually to a controller.


Just now, Olaf6541 said:

Yeah, on a software level, to create a single array; it's not actually fusing hardware together to form one giant disk.

Each disk needs to be attached individually to a controller.

Well something like a graphics card isn't a disk, but does that then mean there's no actual way for the motherboard to interpret the bandwidth associated with the device?


8 minutes ago, fasauceome said:

Then say it's scaled down: double PCIe x4, for whatever reason (yes, I know there's such a thing as x8). If you want to combine smaller inputs, would it be possible?

Probably not, because a device can only use a single slot for communication.

It's not like straws where you can just put two side by side and have double.

The PCIe device needs to communicate with the computer just like your PC communicates with a router, and you can't split an Ethernet cable into two and have two computers connected at once. Each computer needs its own Ethernet cable connected to its own Ethernet port on the router.


1 minute ago, fasauceome said:

Well something like a graphics card isn't a disk, but does that then mean there's no actual way for the motherboard to interpret the bandwidth associated with the device?

Well, in theory you could do something like taking a board with two PCIe 2.0 x16 slots and using specialised riser wires to combine them into a 32x interface to get more bandwidth on PCIe 2.0, but that's the same as PCIe 3.0 x16, so I don't really see why you'd try to come up with something that is already there...

So on the motherboard side: just get a new motherboard with newer PCIe specs. And for the card side: it was designed for that specific PCIe generation and lane count, so you don't need more.


15 minutes ago, fasauceome said:

If I hacked one together myself, would the motherboard know what to do with it?

That's the thing, this would require custom silicon, and I highly doubt any silicon manufacturer with a fab that can make this would bother spending fab time on it.


2 minutes ago, Olaf6541 said:

Well, in theory you could do something like taking a board with two PCIe 2.0 x16 slots and using specialised riser wires to combine them into a 32x interface to get more bandwidth on PCIe 2.0, but that's the same as PCIe 3.0 x16, so I don't really see why you'd try to come up with something that is already there...

So on the motherboard side: just get a new motherboard with newer PCIe specs. And for the card side: it was designed for that specific PCIe generation and lane count, so you don't need more.

Well, I'm not going to be doing this myself, it's just some food for thought; more of a question about how motherboards interpret that incoming data and whether you could create a pseudo PCIe 3.0 bus. It might be "practical" if you needed more bandwidth than is available on any current bus, even though that's unlikely.


3 minutes ago, manikyath said:

That's the thing, this would require custom silicon, and I highly doubt any silicon manufacturer with a fab that can make this would bother spending fab time on it.

Well, the real question was whether it needed silicon (i.e. a logic chip) or just a wired connection for each pin, because if the motherboard understood the signal then no logic would be needed.


Just now, fasauceome said:

Well, the real question was whether it needed silicon (i.e. a logic chip) or just a wired connection for each pin, because if the motherboard understood the signal then no logic would be needed.

You need the logic, because you need something to translate, for example, two PCIe 2.0 signals into one PCIe 3.0 signal.

 

Computers aren't magical, and neither is copper.


13 minutes ago, fasauceome said:

Well, I'm not going to be doing this myself, it's just some food for thought; more of a question about how motherboards interpret that incoming data and whether you could create a pseudo PCIe 3.0 bus. It might be "practical" if you needed more bandwidth than is available on any current bus, even though that's unlikely.

Yeah, I mean PCIe 3.0 is ~1 GB/s per lane, so that's already pretty fast.

Now if you inverted the problem, it would be interesting to hook up a PCIe 2.0 x16 GPU to PCIe 4.0 x4 (if inserted directly it would only run at PCIe 2.0 x4).

This would require the data pins of PCIe 4.0 to be split into 4 data lanes, but for what you originally asked: if your card needs more PCIe bandwidth, get a better motherboard.

So that's why splitting is more interesting than combining data links: combining is solved by each new generation simply being faster (PCIe 1.0, 2.0, 3.0; SATA I, II, III).
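For reference, a rough sketch of the per-lane numbers behind that (approximate usable rates after encoding overhead, ignoring protocol overhead):

```python
# Approximate usable bandwidth per PCIe lane, per generation.
# Raw rate is in GT/s; Gen 1/2 use 8b/10b encoding, Gen 3/4 use 128b/130b.
generations = {
    "1.0": (2.5, 8 / 10),
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
    "4.0": (16.0, 128 / 130),
}

for gen, (gt_per_s, efficiency) in generations.items():
    mb_per_s = gt_per_s * efficiency * 1000 / 8   # ~MB/s per lane
    print(f"PCIe {gen}: ~{mb_per_s:.0f} MB/s per lane, "
          f"~{mb_per_s * 16 / 1000:.1f} GB/s at x16")
```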


6 minutes ago, manikyath said:

Computers aren't magical

Idk about that one


2 minutes ago, fasauceome said:

Idk about that one

Sorry to shatter the bubble: computers aren't magic, and IT pros aren't wizards.


Just now, manikyath said:

Sorry to shatter the bubble: computers aren't magic, and IT pros aren't wizards.

Semiconductors are based on quantum mechanics, which is basically scientific magic.



  • 1 year later...

Well, computers aren't magic, but PCIX can be split and combined. That is, if you can find some piece of hardware to do it, or you can basically jerry-rig it from Chinese PCIX extensions.

Take into consideration that you have the same limitations when splitting as when combining. To sum it up, PCI Express can only be split or combined into 16X, 8X, 4X, 2X or 1X, so 2 x8 can be combined into 16X, and so on.

But take into consideration that this is only valid for PCIX directly connected to the CPU, and you cannot combine PCIX lanes from different CPUs.

 

With that being said, what would be the practical application of this? Motherboards already do splitting and combining to a certain degree, depending on the architecture, but let's not get into Intel's capacity for multiplexing and over-provisioning of PCIX.

 

Take a basic AMD Ryzen architecture (as AMD only supports splitting and combining of PCIX; if memory serves me well, X570 is the only exception to this rule, so treat it separately as it has some quirks):

 

Ryzen (TR not included ;) ) has 24 PCIX lanes; 4 Gen 3 lanes go to the chipset, which multiplexes them and spits out up to 7 PCIX 2.0 lanes or various peripherals, depending on the motherboard.

So we are left with 20 PCIX 3.0 lanes directly connected to the CPU; these are allocated as 4X to NVMe and 16X to the PCIE slots, which can be used like the following (see the sketch after this list):

1 of 16X

2 of 8X

1 of 8X and 2 of 4X

4 of 4X
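A toy sketch of that 16-lane budget rule (purely illustrative; the allowed widths and the check are simplified, and this is not how any real BIOS implements bifurcation):

```python
# Check whether a requested slot arrangement fits the CPU's 16-lane slot
# budget, following the combinations listed above (16X, 2x 8X, 8X + 2x 4X,
# 4x 4X).
ALLOWED_WIDTHS = {16, 8, 4}

def valid_bifurcation(widths, budget=16):
    """True if every width is an allowed size and the total fits the budget."""
    return all(w in ALLOWED_WIDTHS for w in widths) and sum(widths) <= budget

print(valid_bifurcation([16]))          # True  - 1 of 16X
print(valid_bifurcation([8, 8]))        # True  - 2 of 8X
print(valid_bifurcation([8, 4, 4]))     # True  - 1 of 8X and 2 of 4X
print(valid_bifurcation([4, 4, 4, 4]))  # True  - 4 of 4X
print(valid_bifurcation([16, 16]))      # False - can't combine past the budget
```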

 

 

What would be the purpose of combining PCIX? The motherboard already does this; better to look for a motherboard architecture that suits your needs...

 

 


  • 3 years later...
On 4/19/2020 at 9:48 AM, CrimsonMars said:

Well, computers aren't magic, but PCIX can be split and combined. That is, if you can find some piece of hardware to do it, or you can basically jerry-rig it from Chinese PCIX extensions. [...]

Just a note: PCI-X and PCIe are not the same thing.

