
Dual PCI with 16 lanes

I like that motherboards now have so many M.2 slots, but in all honesty they are great only until an M.2 fails or needs an upgrade.

With all the price drops in storage, I find myself wondering why on earth there are not more motherboards with two full 16-lane PCI slots, so that daughter cards that can carry 4 x M.2 drives could be more prevalent.

I wanted to purchase such a card a while back, but while it would fit in my system, I had to drop my PCI 4 x16 slot down to x8 if I populated the second slot. The loss on the GPU side of things was non-existent with a 3080, as it cannot max out PCI gen 3 at 16 lanes. The reason I did not buy the card was that I could only power 2 out of the 4 M.2 drives, as each requires 4 lanes.
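The drive limit above follows from simple lane arithmetic: a passive 4-drive carrier card needs the slot bifurcated into x4 links, one per NVMe drive. A quick sketch in Python (my own illustration, not from any vendor documentation):

```python
# A passive multi-M.2 carrier card relies on slot bifurcation:
# each NVMe drive needs its own x4 link, so the number of usable
# drives is the electrical slot width divided by 4.

LANES_PER_NVME = 4

def usable_drives(slot_lanes: int, card_slots: int = 4) -> int:
    """Drives a passive bifurcation card can actually run in a slot."""
    return min(card_slots, slot_lanes // LANES_PER_NVME)

print(usable_drives(16))  # full x16 slot: all 4 drives work
print(usable_drives(8))   # slot dropped to x8: only 2 drives
```

This is why the card was only half-usable once the second slot forced the GPU/card split down to x8/x8.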

So I simply populated the 3 onboard M.2 slots, added a low-cost single PCI x4 daughter card for a 4th drive, and then, in the mini PCIe slot that disables 2 of the 6 onboard SATA ports, added another dirt-cheap gen 3 2TB M.2 for local backups.

Is there a technical difficulty I am overlooking as to why I can comfortably run 5 M.2 SSDs in my system but am unable to run them via a single PCI card? Or is this something simply denied to the consumer to protect the HEDT market, where many motherboards were delivered with an in-the-box PCI card for 4 x M.2, as Gigabyte and Asus have done in the past?


The reason you don't get 2 pci-e x16 slots with 16 lanes each is that the CPU doesn't offer that many pci-e lanes.

For example, socket AM4 has 24 pci-e lanes: 16 lanes for the video card, 4 to connect to the chipset, and 4 for an m.2 connector. Any other pci-e lanes on the board come from the chipset.
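The lane budget above can be tallied in one line; the point is that a second x16 slot would need 32 CPU lanes and the budget is only 24 (a back-of-envelope sketch using the commonly cited AM4 allocation, not an official AMD table):

```python
# Commonly cited AM4 CPU lane allocation (illustration only).
am4_lanes = {
    "x16 GPU slot": 16,
    "chipset link": 4,
    "CPU-attached M.2 slot": 4,
}

total = sum(am4_lanes.values())
print(total)  # 24 CPU lanes in total

# Two full x16 slots would need 32 lanes from the CPU alone:
needed_for_dual_x16 = 2 * 16
print(needed_for_dual_x16 > total)  # True: the budget doesn't stretch
```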

 

AM5... I didn't keep up with the specs, but I think it has an extra 4 lanes, so 28 lanes or something like that.

 

Going to pci-e 4.0 and pci-e 5.0 makes it harder and harder to control signal quality, and the maximum allowed trace length (the distance from the CPU pin to the actual chip on a card) gets smaller and smaller. You also need to leave some spacing after the first pci-e x16 slot because modern video cards are thick, which means the second pci-e x16 slot must sit quite a distance from the CPU. So you need pci-e buffers/redrivers/amplifiers to extend the maximum length and keep signals clean... and those chips are expensive.


Mainstream platforms don't even have 32 PCIe lanes (yes, there's an e; it hasn't been PCI for many years). That's why.

 

Ryzen has slightly more than Intel IIRC, but they're still under 32 overall.

My Folding Stats - Join the fight against COVID-19 with FOLDING! - If someone has helped you out on the forum don't forget to give them a reaction to say thank you!

 

The only true wisdom is in knowing you know nothing. - Socrates
 

Please put as much effort into your question as you expect me to put into answering it. 

 

  • CPU
    Ryzen 9 5950X
  • Motherboard
    Gigabyte Aorus GA-AX370-GAMING 5
  • RAM
    32GB DDR4 3200
  • GPU
    Inno3D 4070 Ti
  • Case
    Cooler Master - MasterCase H500P
  • Storage
    Western Digital Black 250GB, Seagate BarraCuda 1TB x2
  • PSU
    EVGA Supernova 1000w 
  • Display(s)
    Lenovo L29w-30 29 Inch UltraWide Full HD, BenQ - XL2430(portrait), Dell P2311Hb(portrait)
  • Cooling
    MasterLiquid Lite 240

Thank you for the info...

I was curious because it's a devil of an issue with a full custom water loop, and a simple PCI card carrying 4 M.2 drives would be such a godsend: simply pull it, swap the SSD, and put it back.

No major work involved. I can only hope that future designs can overcome the present difficulties, then.


3 minutes ago, johnno23 said:

Thank you for the info...

I was curious because it's a devil of an issue with a full custom water loop, and a simple PCI card carrying 4 M.2 drives would be such a godsend: simply pull it, swap the SSD, and put it back.

No major work involved. I can only hope that future designs can overcome the present difficulties, then.

The boards and CPUs do exist; they're just reserved for the workstation platforms. Threadripper, for example, has loads of PCIe lanes.

 

They don't include loads of PCIe connectivity on mainstream platforms partly to save money, as the majority of people only really have a single GPU and a couple of NVMe drives, so they don't need the extra slots anyway. They also do it to avoid eating into their workstation/HEDT platform sales. Typically, those who need large amounts of PCIe lanes will be using the system for commercial purposes, so they'll be willing to spend more for the features they need.


5 minutes ago, johnno23 said:

Thank you for the info...

I was curious because it's a devil of an issue with a full custom water loop, and a simple PCI card carrying 4 M.2 drives would be such a godsend: simply pull it, swap the SSD, and put it back.

No major work involved. I can only hope that future designs can overcome the present difficulties, then.

That's unlikely to happen. With the bandwidth increases of PCIe 5.0, if anything it would make sense to maintain or even reduce the lane count on consumer platforms.
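The reasoning holds because per-lane bandwidth roughly doubles each generation, so fewer lanes can carry the same traffic. A quick sketch with rounded per-lane figures (approximate throughput after encoding overhead; see the PCIe specs for exact numbers):

```python
# Approximate per-lane throughput by PCIe generation,
# in GB/s per direction after encoding overhead (rounded figures).
PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}

def link_bandwidth(gen: int, lanes: int) -> float:
    """Total one-direction bandwidth of a link, in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

# A gen 5 x8 link moves as much data as a gen 4 x16 link,
# which is why newer platforms can get away with fewer lanes.
print(round(link_bandwidth(5, 8), 1))   # ~31.5 GB/s
print(round(link_bandwidth(4, 16), 1))  # ~31.5 GB/s
```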


12 minutes ago, Oshino Shinobu said:

They also do it to prevent eating into their workstation/HEDT platform sales. Typically those that need large amounts of PCIe lanes will be using the system for commercial purposes so will be willing to spend more for the features they need.

That's what I was thinking, but I was hoping that over time they might start to trickle into the more consumer-based arena... seems like wishful thinking on my part, then.

I was hoping the transition from HDDs to SSDs and the smaller form factor of M.2 might create a push towards the PCI card offerings, especially when today I can buy 4 x 2TB of M.2 storage for under 400USD. I paid over 700USD for my first 2GB SCSI HDD, LOL.

11 minutes ago, GOTSpectrum said:

That's unlikely to happen. With the bandwidth increases of PCIe 5.0, if anything it would make sense to maintain or even reduce the lane count on consumer platforms.

And there go my dreams of kitting out a daughter card with 4 x 4TB in the future. Oh well.

Thanks again for the info and insight.


More lanes on desktop isn't gonna happen.

 

If you pay attention to the motherboard market at all, it becomes apparent that breaking those lanes out into many slots is becoming a rare sight. Every board is just one x16 slot, one x16-sized slot in the second position, one or two x1 slots, and the rest is M.2 and plastic "heatsinks".

 

Meanwhile, insert AMD EPYC under my desk here...

