Why not give NVMe more PCIe bandwidth?

Hensen Juang

If Gen 5 drives can perform way faster than Gen 4 just by upgrading the PCIe bus speed (and I know there are upgraded controllers and such as well), why not just give M.2 slots more PCIe bandwidth if they really are being bottlenecked? I know NVMe drives on PCIe x16 expansion cards exist, but even then, a single drive is still limited to 4 lanes. Our CPUs would need to support more lanes, and if they don't, we could trade GPU lanes or run from the chipset. But server CPUs should have enough lanes. So, why not more bandwidth?

Microsoft owns my soul.

 

Also, Dell is evil, but HP kinda nice.


Just now, Hensen Juang said:

But server CPUs should have enough lanes. So, why not more bandwidth?

Yes, and they cost a lot more due to the R&D needed to implement more lanes at the silicon level. It's costly to build more lanes into consumer processors, so prices would have to increase, and the mainstream consumer doesn't care about that; you'll have to purchase server-grade hardware if you want more PCIe lanes.

CPU Cooler Tier List  || Motherboard VRMs Tier List || Motherboard Beep & POST Codes || Graphics Card Tier List || PSU Tier List 

 

Main System Specifications: 

 

CPU: AMD Ryzen 9 5950X ||  CPU Cooler: Noctua NH-D15 Air Cooler ||  RAM: Corsair Vengeance LPX 32GB(4x8GB) DDR4-3600 CL18  ||  Mobo: ASUS ROG Crosshair VIII Dark Hero X570  ||  SSD: Samsung 970 EVO 1TB M.2-2280 Boot Drive/Some Games)  ||  HDD: 2X Western Digital Caviar Blue 1TB(Game Drive)  ||  GPU: ASUS TUF Gaming RX 6900XT  ||  PSU: EVGA P2 1600W  ||  Case: Corsair 5000D Airflow  ||  Mouse: Logitech G502 Hero SE RGB  ||  Keyboard: Logitech G513 Carbon RGB with GX Blue Clicky Switches  ||  Mouse Pad: MAINGEAR ASSIST XL ||  Monitor: ASUS TUF Gaming VG34VQL1B 34" 

 


5 minutes ago, Hensen Juang said:

If Gen 5 drives can perform way faster than Gen 4 just by upgrading the PCIe bus speed (and I know there are upgraded controllers and such as well), why not just give M.2 slots more PCIe bandwidth if they really are being bottlenecked? I know NVMe drives on PCIe x16 expansion cards exist, but even then, a single drive is still limited to 4 lanes. Our CPUs would need to support more lanes, and if they don't, we could trade GPU lanes or run from the chipset. But server CPUs should have enough lanes. So, why not more bandwidth?

But what do you need that bandwidth for?

Router:  Intel N100 (pfSense) WiFi: Zyxel NWA210AX (1.44Gbit peak at 160MHz 2x2 MIMO, ~900Mbit at 80MHz)

Switches: Netgear MS510TXUP, Netgear MS510TXPP, Netgear GS110EMX
ISPs: Zen Full Fibre 900 (~915Mbit down, 115Mbit up) + Three 5G (~900Mbit down, 115Mbit up)

Folding@home Recent WUs               
Upgrading Laptop CNVIo WiFi cards to PCIe


Typically, few workloads actually max out SSD bandwidth, so getting the maximum possible isn't normally a huge issue, especially on servers, where it's really hard to push 20+ drives to peak bandwidth at the same time. Random I/O and latency make a much bigger difference.
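To put rough numbers on that, here's a toy calculation. The drive figures below are illustrative assumptions, not benchmarks of any real drive, but they show why even a drive with very strong random-read IOPS can't come close to its sequential ceiling on a 4K random workload:

```python
# Toy model of why random I/O, not sequential bandwidth, is the usual limiter.
# All numbers are illustrative assumptions, not measurements of a real drive.
SEQ_BW = 7_000_000_000   # ~7 GB/s sequential, Gen 4 x4 class
RAND_IOPS = 1_000_000    # ~1M random 4K IOPS, a high-end figure
BLOCK = 4096             # 4 KiB blocks

# Effective throughput when the workload is pure 4K random reads:
rand_bw = RAND_IOPS * BLOCK
print(f"4K random: {rand_bw / 1e9:.1f} GB/s vs {SEQ_BW / 1e9:.1f} GB/s sequential")
```

Even with a million IOPS, a pure 4K random workload moves only ~4 GB/s, well under the sequential ceiling, so extra link bandwidth wouldn't help it at all.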

 

There have been PCIe Gen 3 x8 SSDs that are nice for some workloads, but I don't know of a Gen 4 version.

 

 


37 minutes ago, Hensen Juang said:

why not just give M.2 slots more PCIe bandwidth if they are really being bottlenecked?

That's the thing, though: SSDs really aren't being bottlenecked, and the fast SSDs have to do all sorts of trickery to even saturate the bandwidth we have at PCIe 4.0 x4.

 

The PCIe x16 add-in-card SSDs are usually brilliantly expensive, because they need piles of flash just to saturate that link.


41 minutes ago, Hensen Juang said:

just give M.2 slots more PCIe bandwidth if they are really being bottlenecked?

Spoiler: They aren't.

Name me one consumer application that is constrained by PCIe 3.0 x4 M.2 slots.

 


There are few applications for consumers that benefit from or are constrained by PCIe 4.0 x4 NVMe bandwidth, let alone 5.0 bandwidth. So there's no real point.

 

On the server side, you probably want to use those lanes for other things too, so you want to waste as few of them on storage as possible. With PCIe 5.0 you already get ~10 GB/s. If a use case needs to access a ton of data even faster, it'll probably just use RAM as a cache instead.
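For scale, the raw link-rate arithmetic per generation works out like this. This is a rough sketch of the theoretical ceilings; real drives land below them due to protocol overhead and controller/flash limits, which is why ~10 GB/s is a typical real-world Gen 5 figure:

```python
# Theoretical PCIe bandwidth for an x4 link (as used by M.2 NVMe).
# Gen 3 and later use 128b/130b encoding, so usable bytes/s is slightly
# below the raw transfer rate. Real drives land below these ceilings.

GT_PER_S = {3: 8.0, 4: 16.0, 5: 32.0}  # raw transfer rate per lane, GT/s

def x4_bandwidth_gbps(gen: int, lanes: int = 4) -> float:
    """Theoretical usable bandwidth in GB/s after 128b/130b encoding."""
    return GT_PER_S[gen] * (128 / 130) / 8 * lanes  # bits -> bytes per lane

for gen in (3, 4, 5):
    print(f"PCIe {gen}.0 x4: ~{x4_bandwidth_gbps(gen):.1f} GB/s")
```

Each generation doubles the per-lane rate, so an x4 Gen 5 link tops out just under 16 GB/s in theory.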

Remember to either quote or @mention others, so they are notified of your reply


I would be really pleased if they started doing motherboards with M.2 slots that only get 2 PCIe 5.0 lanes each. I really don't need that much sequential speed, but I would like to have more M.2 slots.


13 hours ago, CommanderAlex said:

Yes, and they cost a lot more due to the R&D needed to implement more lanes at the silicon level. It's costly to build more lanes into consumer processors, so prices would have to increase, and the mainstream consumer doesn't care about that; you'll have to purchase server-grade hardware if you want more PCIe lanes.

It's not really more costly though, is it? The work is already done; HEDT has made a return with plenty of PCIe lanes. It's just artificial gatekeeping on the chip makers' part. Case in point: the $359 Xeon w3-2423 (112 lanes) vs. the $599 i9-13900K (20 lanes).
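Taking the list prices and lane counts quoted above at face value, the per-lane arithmetic comes out like this (a back-of-envelope sketch, not a full platform comparison):

```python
# Rough $/PCIe-lane comparison using the prices and lane counts quoted above.
w3_2423 = {"price": 359, "lanes": 112}
i9_13900k = {"price": 599, "lanes": 20}

for name, cpu in (("Xeon w3-2423", w3_2423), ("i9-13900K", i9_13900k)):
    print(f"{name}: ${cpu['price'] / cpu['lanes']:.2f} per PCIe lane")
```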

 

You pay what they want you to pay, be it a gaming CPU or a workstation CPU.


2 hours ago, Blue4130 said:

It's not really more costly though, is it? The work is already done; HEDT has made a return with plenty of PCIe lanes. It's just artificial gatekeeping on the chip makers' part. Case in point: the $359 Xeon w3-2423 (112 lanes) vs. the $599 i9-13900K (20 lanes).

And have you seen the motherboard costs associated with fully utilizing a w3-2423? $900 and up, so $1259 for 112 lanes, versus $599 plus ~$250 for a typical Z790 motherboard = $849 for 20 lanes, and that's assuming the CPU is still full price, which it isn't. (Lol, I can't even build a PC around a w3-2423 and a W790 motherboard in PCPartPicker from the CPU and motherboard options.) Again, mainstream consumers aren't going to pay $1259 for 112 lanes if all they want to do is game or build a 9-to-5 work PC. "Costly" is in the sense of semiconductor implementation, not consumer dollars.
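The platform totals behind that comparison, using the figures quoted above (assumed list prices for CPU plus motherboard, so treat them as illustrative):

```python
# Platform cost totals from the figures quoted above (assumed list prices).
w790_platform = 359 + 900   # Xeon w3-2423 + entry W790 board -> 112 lanes
z790_platform = 599 + 250   # i9-13900K + typical Z790 board  -> 20 lanes
print(f"W790: ${w790_platform} for 112 lanes; Z790: ${z790_platform} for 20 lanes")
```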

 

2 hours ago, Blue4130 said:

You pay what they want you to pay, be it a gaming CPU or a workstation CPU.

That's called market segmentation. 



15 minutes ago, CommanderAlex said:

And have you seen the motherboard costs associated with fully utilizing a w3-2423? $900 and up, so $1259 for 112 lanes, versus $599 plus ~$250 for a typical Z790 motherboard = $849 for 20 lanes, and that's assuming the CPU is still full price, which it isn't. (Lol, I can't even build a PC around a w3-2423 and a W790 motherboard in PCPartPicker from the CPU and motherboard options.) Again, mainstream consumers aren't going to pay $1259 for 112 lanes if all they want to do is game or build a 9-to-5 work PC. "Costly" is in the sense of semiconductor implementation, not consumer dollars.

 

That's called market segmentation. 

But motherboard costs are not set by the chip manufacturer, so when you said "it's costly to build more lanes in for consumer processors," I questioned it. I agree that the platform as a whole is more expensive, 100%, but it is not Intel or AMD setting the HEDT motherboard prices; they don't even make the boards. (Though I guess they contribute by setting the chipset price, but I have no idea what they charge for that.)

 

 

15 minutes ago, CommanderAlex said:

 

That's called market segmentation. 

Which is EXACTLY what I said with my artificial gatekeeping comment.


4 minutes ago, Blue4130 said:

But motherboard costs are not set by the chip manufacturer, so when you said "it's costly to build more lanes in for consumer processors," I questioned it.

I'm not the one that started comparing low-end server CPU prices to high-end consumer CPU prices.

 

17 hours ago, CommanderAlex said:

Yes, and they cost a lot more due to the R&D needed to implement more lanes at the silicon level. It's costly to build more lanes into consumer processors...

 

7 minutes ago, Blue4130 said:

Which is EXACTLY what I said with my artificial gatekeeping comment.

I don't get why you're becoming aggressive in tone when I had already defined what I meant by "costly."



24 minutes ago, CommanderAlex said:

I'm not the one that started comparing low-end server CPU prices to high-end consumer CPU prices.

 

 

I don't get why you're becoming aggressive in tone when I had already defined what I meant by "costly."

 

18 hours ago, CommanderAlex said:

Yes, and they cost a lot more due to the R&D needed to implement more lanes at the silicon level. It's costly to build more lanes into consumer processors, so prices would have to increase, and the mainstream consumer doesn't care about that; you'll have to purchase server-grade hardware if you want more PCIe lanes.

At the silicon level, it is not more expensive to implement. It is the same cost to the chip maker regardless of whether it is a consumer or server chip. The cost to the consumer, on the other hand...

 

That is the only part of your statement that I disagree with. To Intel, the cost is the same. To the consumer, it is not.


You need more layers in the PCB and a more complicated PCB design; routing all those PCIe lanes is not that easy. It would roughly double the motherboard cost, as just one or two layers are not enough to route 112 PCIe lanes without them interfering with one another.

