PCIe Splitter/SLI?

Hi all,

 

I've put this here as I think it would get more accurate feedback than posting in the video card sub-forum, and I would assume there's more experience with PCIe splitters in mining; however, mods, please feel free to relocate it if it's unsuitable.

 

So I'm toying with the idea of putting dual GPUs onto an mITX board. The first reaction is that it can't be done, but why not?

 

So there are these PCIe risers and splitters that exist; if two cards are put onto the end of a splitter and connected via an SLI bridge, is there any reason why they wouldn't work? We have dual-GPU cards, and we've had a GTX 690 split into a 680 and a Quadro on the same PCIe slot.

 

I understand the bandwidth would be shared across the two cards, but if a PCIe 3.0 x16 slot is being split into two PCIe 3.0 x8 links, there should be next to no hit here.
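
For rough numbers, here's a quick back-of-the-envelope sketch (my own figures, assuming PCIe 3.0's 8 GT/s per lane and 128b/130b encoding, before protocol overhead):

```python
# Rough per-direction PCIe 3.0 throughput per link width.
GT_PER_SEC = 8.0        # giga-transfers per second per lane (PCIe 3.0)
ENCODING = 128 / 130    # 128b/130b line-code efficiency

def throughput_gb_s(lanes):
    """Raw per-direction throughput in GB/s, before protocol overhead."""
    return lanes * GT_PER_SEC * ENCODING / 8  # /8 converts bits to bytes

for lanes in (16, 8):
    print(f"PCIe 3.0 x{lanes}: ~{throughput_gb_s(lanes):.2f} GB/s per direction")
# x16 ~15.75 GB/s, x8 ~7.88 GB/s: each card in an x8/x8 split still gets
# roughly what a full PCIe 2.0 x16 slot offers, which is why the hit is small.
```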

 

If anyone has tried this I would be very interested in your results. I have an ITX board now; I just don't have two matching cards to test with myself (only a 750 Ti, which can't SLI, and a 780 Ti).

 

For reference, this is the kind of riser/splitter I'm talking about:

[Image: the PCIe riser/splitter in question]

 

On one hand I think there's no way this would work; on the other, I don't see why not.

 

I actually wouldn't mind picking up some cheap second-hand cards like 660s to test, but if anyone knows, I'd love to hear your input.


Well, I don't see much circuitry on that board... that thing would just "passively" divide bandwidth, so expect losses.

Try to find a splitter that's an actual switch (something with a PLX chip).

Actually, that is meant for server boards which have PLX chips already on board. The male PCIe connector is just that, a connector; it doesn't actually carry a standard PCIe slot pinout.

 

For example: I have a Dell PowerEdge 2950, which has these PCIe risers:

[Image: Dell PowerEdge 2950 PCIe riser card]

 

The riser itself uses a PCIe x16 male connector, but the signal isn't split on the riser; it's split beforehand by the motherboard.

 

This is going to be the case with anything passive that the OP is going to try and use.

 

TL;DR: The pin layout isn't compatible; the PCIe lanes need to be split before the riser.



Actually, that is meant for server boards which have PLX chips already on board. -snip-

Came here to say that. 

I actually think you can get away with CrossFire on an active PCIe backplane, though I bet that does hurt performance, but I doubt SLI would work... someone correct me if I'm wrong.



To the OP, if you have the cash and will, look into: http://www.cyclone.com/products/expansion_backplanes/

They also have enclosed systems: http://www.cyclone.com/products/expansion_systems/index.php

 

Basically, a PCIe host card is added to the actual computer, and then a PCIe bridge sits on the expansion board.

 

Or you can buy a switched riser: http://www.cyclone.com/products/expansion_backplanes/pcie2-437.php

[Image: Cyclone PCIe2-437 switched riser backplane]



To the OP, if you have the cash and will, look into: http://www.cyclone.com/products/expansion_backplanes/ -snip-

 

 

That's cool, and this is the closest anyone has actually come to answering my query, so to everyone who has contributed to this thread already, thank you so much, sincerely.

 

That card splits out to PCIe 2.0 @ x4 from my understanding. Let me state what I'm trying to achieve, though; coming from the above, I don't think it's going to happen anymore.

 

I wanted to fit dual GPUs (970s) into an NCASE. It seems I'm going to have to wait for a 990 or MARS card to come out to get what I'm trying to achieve. The other option is to cut in an mATX board instead of trying to split off the single PCIe x16 slot on an mITX board, which is what I was hoping to do.


That's cool, and this is the closest anyone has actually come to answering my query. -snip-

Yes, the card does split at gen 2.0 x4 link speeds. There aren't any gen 3.0 x8 switched risers that I know about. They also have the same riser with a ribbon cable, so you could potentially place it any way you want in the case. Fitting dual 970s, I don't think it will happen with the NCASE M1 regardless, unless you put waterblocks on the 970s and make them single-slot by modding off the top DVI connector on each card. It is possible to do that; as a matter of fact, there was someone on the EVGA forums who provided aftermarket backplates and has instructions on how to mod the top DVI connector off of the card. I know the guy has them for the Titan and 700 series cards, not so sure if he got around to the 900 series yet.

 

Of note: if you do use a switched riser, you won't be able to use SLI. Almost all risers split to x4/x4/x4. Nvidia requires x8/x8 for SLI; AMD, on the other hand, doesn't have this requirement. The only real way to get around this is if you want an external PCIe enclosure, like: http://www.netstor.com.tw/_03/03_02.php?MTEx

 

EDIT: You're probably better off trying to fit an mATX motherboard, see: http://hardforum.com/showthread.php?t=1812629



I've always wanted to try ghetto xfire on a mini-ITX board like this. I do think that's possible, but I would have my doubts about SLI, even with an active x16-to-x8 board, if that exists (which I actually think it does).



I've always wanted to try ghetto xfire on a mini-ITX board like this. -snip-

It's no different than having 8x/8x on a motherboard running through a PLX chip. The nice thing about external PCIe enclosures is that, unlike Thunderbolt, they're driver transparent, meaning the hardware is basically invisible to GPU drivers.

The only exception is that you won't be able to run SLI on 4x/4x due to Nvidia's restrictions, but with AMD you can run CrossFire. I've seen people use this for setting up small GPU clusters for rendering.



I've actually found someone who manufactures custom risers. He's currently running 4x 290Xs on an mITX.

 

Basically, as has been stated, I need a 2-slot riser with a PLX chip to do the negotiation before it goes into the x16 slot on the motherboard.

 

Time will tell.

 

 

Fitting dual 970s, I don't think it will happen with the NCASE M1 regardless, unless you put waterblocks on the 970s and make them single-slot. -snip-

 

 

If it was in the standard horizontal format you'd be right, but because of the riser needed you couldn't mount them this way anyway. I intend on flipping them 90 degrees and mounting them where the rear 'fan mount' currently is, then the radiator will be mounted in the floor.


It's no different than having 8x/8x on a motherboard running through a PLX chip. -snip-

I know about backplanes too, but I know that Windows has a 6 GPU limit and I believe most versions of Linux can do 8.

[Image: multi-GPU backplane build]



I know about backplanes too, but I know that Windows has a 6 GPU limit and I believe most versions of Linux can do 8.

-snip-

Actually, Windows can do more than 6 (in the Professional versions) past Windows 7; it also has to be the 64-bit edition. In most cases, it's the motherboard's BIOS that has trouble allocating IRQ resources and shared memory for many GPUs or PCIe devices. About 2-3 years ago, setups like the one you posted were only really doable with workstation or enterprise grade motherboards, but with the advent of UEFI there's a much better, more efficient, and cleaner way of initializing PCIe devices.

 

Most of the '8' GPU limit was an artificial driver restriction, at least for AMD and Nvidia consumer cards. Both companies' professional series of GPUs and compute cards, like Nvidia Tesla and AMD FirePro, do not have this limitation. The best way to get around the driver limitation is with hardware virtualization, such as Linux KVM, where the physical PCIe devices get passed through to the guest OS. At that point the host OS mainly keeps track of which PCIe devices belong to which guest. As you can guess, people usually run numerous guest OSs to take advantage of this.
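
If anyone wants to go down the KVM route, here's a minimal sketch (my own example, assuming a Linux host with VT-d/IOMMU enabled) that lists the IOMMU groups the kernel exposes, since a GPU can only be passed through together with the rest of its group:

```python
# List IOMMU groups and the PCI devices in each (standard Linux sysfs paths).
import os

BASE = "/sys/kernel/iommu_groups"

if not os.path.isdir(BASE):
    raise SystemExit("No IOMMU groups found; enable VT-d/AMD-Vi in the BIOS and kernel.")

for group in sorted(os.listdir(BASE), key=int):
    devices = os.listdir(os.path.join(BASE, group, "devices"))
    print(f"IOMMU group {group}: {' '.join(sorted(devices))}")
```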

 

I've actually found someone who manufactures custom risers. -snip- I intend on flipping them 90 degrees and mounting them where the rear 'fan mount' currently is, then the radiator will be mounted in the floor.

[Image: Supermicro RSC-R2UG-A2E16-A riser card]

So you'd mod the case, meaning that the GPUs can be mounted vertically rather than horizontally. You might be able to use this: http://www.compsource.com/pn/RSCR2UGA2E16A/Supermicro_428/RscR2ugA2e16A-1u-Pas--Pcie-To-Pcie16-RSCR2UGA2E16A/

It is active; even though it says passive on the seller's website, the information from SuperMicro found here says otherwise: http://www.supermicro.com/resourceapps/riser.aspx

 

Check around eBay; they tend to go for under $50.

 

It is also PCIe Gen3 x16. The only thing you would need is a PCIe ribbon cable to mount the riser any way you want in the case: http://www.digikey.com/catalog/en/partgroup/pci-express/30025. The riser also has mounting points, so you could attach it to the case if you wanted to.

 

PLEASE keep in mind that this may or may not work for you. SuperMicro's documentation on this part is non-existent, so please email their technical inquiry department. Basically, check that it does use a PLX chip. YMMV.



I'd like to preface this by saying I skipped through the comments, so if someone else already recommended this then I'm sorry.

 

With the extra space you'd need to accommodate in an mITX enclosure, why not just go mATX? They're very similar in size, and whatever case you'd be getting that has enough room for two dual-slot cards, you might as well just go mATX anyway.


So you'd mod the case, meaning that the GPUs can be mounted vertically rather than horizontally. You might be able to use this: http://www.compsource.com/pn/RSCR2UGA2E16A/Supermicro_428/RscR2ugA2e16A-1u-Pas--Pcie-To-Pcie16-RSCR2UGA2E16A/ -snip-

I think on this one, though, the connector to and from the board is PCI-X, which from what I've read is completely different from PCIe, and even though the slot is the same, it won't work. But for $50, I'll give it a go. I'm just on the lookout for a pair of cheap cards to test with, some 660s or something similar. 970s (ASUS Strix) have been specifically chosen for their single 8-pin power connectors, which makes things easy off the 600W SFX PSU; also, from my calculations of about 540W full load, I'll be right on the comfortable sweet spot of this PSU. I have a removable mATX motherboard tray with the 4x PCI brackets that I can cut out and set into the back of the NCASE, giving secure mounting. And yes, your thoughts are the same as mine: a PCIe riser and then flexible extenders to allow placement of the GPUs where I want them and not where the riser card dictates.
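
For what it's worth, this is the kind of budget I'm working from, as a rough sketch with assumed (not measured) per-component figures:

```python
# Rough full-load power budget against a 600 W SFX PSU (assumed figures only).
loads_watts = {
    "GTX 970 Strix #1 (gaming load)": 180,
    "GTX 970 Strix #2 (gaming load)": 180,
    "CPU under load": 120,
    "Board, fans, drives, pump": 60,
}

total = sum(loads_watts.values())
psu = 600
print(f"Estimated full load: {total} W ({total / psu:.0%} of the {psu} W PSU)")
```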

 

With the extra space you'd need to accommodate in an mITX enclosure, why not just go mATX? -snip-

Normally, yes; I've thought about mATX and am still considering it, mostly because if I go mATX it gets me around the dual PCIe slot issue, and it also allows me to go X99 (EVGA Micro X99). However, in this case millimetres matter and centimetres matter a lot more. An mITX board is 170x170mm, mATX is 244x244mm, which is a significant difference, especially when the vertical interior space of the NCASE is only 240mm tall. I can get around this by creating a bottom compartment, about 60mm high; this gives room for the mATX board and also to mount a 240mm radiator. The issue is that it then also starts to impede on the PSU mounting space, so modding will need to be done there as well.
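
Just to lay the dimensions out side by side (the figures quoted above; the 60mm compartment is my proposed extension):

```python
# Board footprint vs. NCASE interior height, all in mm (figures from this post).
mitx = (170, 170)
matx = (244, 244)
ncase_interior_height = 240
bottom_compartment = 60   # proposed case extension

print(f"mATX is {matx[0] - mitx[0]} x {matx[1] - mitx[1]} mm larger than mITX")
print(f"mATX vs interior height: {matx[1]} mm vs {ncase_interior_height} mm "
      f"({matx[1] - ncase_interior_height} mm too tall as the case stands)")
print(f"With a {bottom_compartment} mm bottom compartment: "
      f"{ncase_interior_height + bottom_compartment} mm of height to work with")
```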

 

 

***

By no means do I think this will be an easy achievement no matter which route I take; significant work will be required, and sacrifices made.

 

I would like to thank everyone again for their contributions and for humoring me while I figure this out. I'm not saying it will happen; I'm trying to find out if it can happen. The practicality and cost of it are irrelevant (I'm not saying I have an unlimited budget, just that I don't want cost to determine whether it is or is not possible).

 

So far this thread has been far more constructive than I ever thought it would be, so thank you all. I think I made an excellent choice posting it in the folding sub-forum rather than the GPU one.


I know about backplanes too, but I know that Windows has a 6 GPU limit and I believe most versions of Linux can do 8. -snip-

[Image: multi-GPU backplane build]

Holy freaking cards. If I may ask, what are they?


  • 1 year later...

What's this shit about relying on a chip to do the splitting? A mini-ITX motherboard has one PCIe x16 slot which connects directly to the CPU. The chipset and CPU dictate the speed and configuration (number of lanes) of each slot. The only challenge I can imagine is the initial setup, where the BIOS would need to identify two cards connected to a single slot on start-up. I can't imagine that initial identification would take much to make it work. I've recently been looking into the same sort of thing: I want to split a single x16 slot into four x4 slots, one for a GPU, one for NVMe storage, one for a SAS card, and one for future use. Gen 3 x4 is fine even for modern high-end GPUs, according to anyone that seems to know wtf they are talking about anyway.
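
For anyone playing with this, a quick way to see what link each device actually negotiated after splitting (a sketch assuming a Linux box; these are the standard sysfs attributes):

```python
# Print the negotiated PCIe link speed and width for every PCI device (Linux).
import os

BASE = "/sys/bus/pci/devices"

for dev in sorted(os.listdir(BASE)):
    path = os.path.join(BASE, dev)
    try:
        with open(os.path.join(path, "current_link_speed")) as f:
            speed = f.read().strip()
        with open(os.path.join(path, "current_link_width")) as f:
            width = f.read().strip()
    except OSError:
        continue  # not every PCI function exposes link attributes
    print(f"{dev}: {speed}, x{width}")
```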


3 minutes ago, MarcWolfe said:

What's this shit about relying on a chip to do the splitting? -snip-

You Necro'd this thread.

 

Also, it's not possible. Although PCIe x16 has 16 physical data lanes, there's only one set of control signals. If you want to split PCIe lanes, you need a multiplexer, i.e. a PLX chip.



That's all a PLX chip does? The "control" signals? I thought those were only for initial ID, not control, since they aren't hot-swappable. Yes, I realize this was 2 years old; it piqued my interest.


1 hour ago, MarcWolfe said:

That's all a PLX chip does? The "control" signals? -snip-

Well, the PLX chip does a lot more than that. It technically takes one logical PCIe 'slot' and acts like a switch between downstream slots.

See for more info: http://xillybus.com/tutorials/pci-express-tlp-pcie-primer-tutorial-guide-1

http://xillybus.com/tutorials/pci-express-tlp-pcie-primer-tutorial-guide-2

Essentially, it switches PCIe packets on a first-come, first-served basis.
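
As a very loose analogy (a toy model of the arbitration idea, not the actual PCIe protocol), think of it like this:

```python
# Toy model: one upstream link, several downstream ports, packets forwarded
# in arrival order, so both cards share the upstream bandwidth.
from collections import deque

class ToySwitch:
    def __init__(self):
        self.upstream_queue = deque()

    def receive(self, port, packet):
        # Queue the packet with its ingress port, first come, first served.
        self.upstream_queue.append((port, packet))

    def forward_all(self):
        while self.upstream_queue:
            port, packet = self.upstream_queue.popleft()
            print(f"upstream <- port {port}: {packet}")

switch = ToySwitch()
switch.receive(1, "read request from GPU A")
switch.receive(2, "read request from GPU B")
switch.receive(1, "write from GPU A")
switch.forward_all()
```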




I sure as hell did not read all that (maybe I will later, it is interesting). That seems to have mainly been the software side; it talks about sharing each lane, having each lane "simultaneously" share bandwidth between two cards. I'm only looking to split the physical total between each card. Think less "networking" (like an internet router) and more just splitting/dividing.


2 hours ago, MarcWolfe said:

I sure as hell did not read all that (maybe I will later, it is interesting). -snip-

That's the thing: it's not possible to just split/divide lanes. PCIe is very similar to how a TCP/IP network stack works.

 

Think of it like this: can you just split an Ethernet cable and expect a 1Gb connection to divide into 500Mb per branch? No! You need some logic to direct traffic flow. PLX chips function the same way a network switch would.

 

PLX chips in essence 'route' traffic to the proper destination. The CPU/chipset will see the PLX chip as a root device that further enumerates everything downstream of it. E.g. think of a tree diagram:

[Image: PCIe switch tree topology diagram]

 

There's no simple way to just split the total bandwidth of one x16 link into two x8 links. You need a switch/PLX chip in there somewhere. Otherwise, it just isn't possible.

 

^^^^ Note: You may not like that I say it isn't possible, but there's no way around it. Look it up all you want; this is the best answer I can give you, unless someone else would like to chime in.



I can believe it. I'm just hoping you're wrong. I ran an idea past MSI's sales department a few weeks ago that would address this issue in a future motherboard. Hopefully they run with the idea.


  • 1 year later...

7th generation Intel Core (Kaby Lake) supports splitting the CPU lanes into one PCIe x8 plus two PCIe x4: three ports without a PLX switch.

Spoiler

[Image: Kaby Lake platform PCIe lane diagram]

 

