
"Downsampling" PCIe G5 lanes for multiple PCIe G4 NVME SSDs?

Baciere

Trying to come up with a title for this was a pain, and I'm still not sure that it fully encompasses what I'm actually asking for here.

But, to dive into the question at hand:
I'm looking to see if this is a sound concept: effectively taking 4x PCIe Gen4 NVMe SSDs and allowing them to be accessed over fewer, though higher-bandwidth, PCIe Gen5 lanes, saving on semi-valuable PCIe lanes and on new hardware costs (as in my use case, I already have 2 PCIe Gen4 NVMe SSDs). Instead of burning 16 lanes at Gen4 speeds, I could use 8 at Gen5 speeds and get the same (or close enough) bandwidth to pull most of the performance out of the drives while not needing to shell out cash on Gen5 SSDs themselves.
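
Rough napkin math for what I mean by "close enough" bandwidth (the per-lane figures are approximate, after encoding overhead, so treat this as a sketch rather than gospel):

```python
# Rough per-lane throughput after 128b/130b encoding overhead (approximate).
GEN4_GBPS_PER_LANE = 1.97   # PCIe 4.0: ~16 GT/s per lane
GEN5_GBPS_PER_LANE = 3.94   # PCIe 5.0: ~32 GT/s per lane

drives = 4
lanes_per_drive = 4          # each NVMe drive is an x4 Gen4 device

gen4_total = drives * lanes_per_drive * GEN4_GBPS_PER_LANE   # 16 Gen4 lanes
gen5_total = 8 * GEN5_GBPS_PER_LANE                          # 8 Gen5 lanes

print(f"16x Gen4 lanes: ~{gen4_total:.1f} GB/s")   # ~31.5 GB/s
print(f" 8x Gen5 lanes: ~{gen5_total:.1f} GB/s")   # ~31.5 GB/s
```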

The goal would not be a hardware RAID controller, but to allow software direct access to the drives themselves. In my personal use case this *would* be an array of four 2TB NVMe SSDs in RAID 0 for maximum speed, so a RAID controller wouldn't be out of the question, however.
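
For what I mean by software access, something along these lines is the plan (just a sketch; it assumes a Linux host with mdadm, and the device names are placeholders for whatever the four drives enumerate as):

```python
# Sketch: assemble four NVMe drives into a software RAID 0 array with mdadm.
# Assumes a Linux host with mdadm installed; needs root. Device names are placeholders.
import subprocess

drives = ["/dev/nvme0n1", "/dev/nvme1n1", "/dev/nvme2n1", "/dev/nvme3n1"]

subprocess.run(
    ["mdadm", "--create", "/dev/md0",
     "--level=0",                        # RAID 0: striping, no redundancy
     f"--raid-devices={len(drives)}",
     *drives],
    check=True,
)

# Confirm the array came up and list the member devices.
subprocess.run(["mdadm", "--detail", "/dev/md0"], check=True)
```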


I've been made aware of the Aorus Gen5 AIC Adaptor, which seems like almost exactly what I'm interested in for this use case, but I have a few questions:


1: Will this actually save me Gen5 lanes, or will they operate as Gen4 lanes and I'm back at square one, just with a fancy hunk of metal in the way?
2: Is this even something reasonable to care about on Zen 4 with an X670 board, given the number of available general-purpose PCIe lanes?
3: Am I absolutely mad for attempting something like this in the first place, or should I just get a larger, single Gen5 NVMe SSD for a less 'messy' build?

~ Al Baciere Al Lupo
 


I may be wrong here, but I believe the correct term for the "downsampling" you were thinking of is PCIe bifurcation.

CPU Cooler Tier List  || Motherboard VRMs Tier List || Motherboard Beep & POST Codes || Graphics Card Tier List || PSU Tier List 

 

Main System Specifications: 

 

CPU: AMD Ryzen 9 5950X ||  CPU Cooler: Noctua NH-D15 Air Cooler ||  RAM: Corsair Vengeance LPX 32GB(4x8GB) DDR4-3600 CL18  ||  Mobo: ASUS ROG Crosshair VIII Dark Hero X570  ||  SSD: Samsung 970 EVO 1TB M.2-2280 (Boot Drive/Some Games)  ||  HDD: 2X Western Digital Caviar Blue 1TB (Game Drive)  ||  GPU: ASUS TUF Gaming RX 6900XT  ||  PSU: EVGA P2 1600W  ||  Case: Corsair 5000D Airflow  ||  Mouse: Logitech G502 Hero SE RGB  ||  Keyboard: Logitech G513 Carbon RGB with GX Blue Clicky Switches  ||  Mouse Pad: MAINGEAR ASSIST XL ||  Monitor: ASUS TUF Gaming VG34VQL1B 34"

 


1 minute ago, CommanderAlex said:

I may be wrong here, but I believe the correct term for the "downsampling" you were thinking of is PCIe bifurcation.

Ah, y'know, now that I have the word, that is exactly the word, thank you.

~ Al Baciere Al Lupo
 


Just now, Baciere said:

Ah, y'know, now that I have the word, that is exactly the word, thank you.

No problem. I would attempt to answer more of the questions you were asking, but I don't have a full understanding of the X670 chipset and the new PCIe Gen 5.0 lanes, unless it works similarly to previous PCIe generations.



As I understood it, they are capable of running at lower speeds, but the slot wouldn't bifurcate on its own, meaning I'd be eating 16x PCIe Gen5 lanes while only running them at Gen4 speeds.
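
(For reference, this is roughly how I'd check what each device actually negotiates; just a sketch that assumes a Linux host, since the kernel exposes link speed/width under /sys/bus/pci:)

```python
# Sketch: print negotiated vs. maximum PCIe link speed/width for each device.
# Assumes a Linux host; these sysfs attributes come from the kernel's PCI core.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        cur_speed = (dev / "current_link_speed").read_text().strip()
        cur_width = (dev / "current_link_width").read_text().strip()
        max_speed = (dev / "max_link_speed").read_text().strip()
        max_width = (dev / "max_link_width").read_text().strip()
    except (FileNotFoundError, OSError):
        continue  # not every PCI function exposes link attributes
    print(f"{dev.name}: running x{cur_width} @ {cur_speed} "
          f"(device max: x{max_width} @ {max_speed})")
```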

That said, even a RAID controller that would let me install 4x NVMe SSDs and present them over 8x Gen5 lanes doesn't seem to exist, so I may just have to bite the loss of lanes.

Given that we only have 28 Gen5 lanes from the CPU, of which 16 are available for slots, plus 12 Gen4 lanes available from the X670E, my intent was to have 8 Gen5 lanes for the GPU and 8 Gen5 lanes for NVMe storage as originally planned, and then use the 12 Gen4 lanes for an SFP+ NIC, a spare monitor GPU, and other misc things.
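
Roughly how that budget tallies up in my head (my own numbers, so take them with a grain of salt):

```python
# Rough lane budget for the planned build (my own tally, approximate).
cpu_gen5_slot_lanes = 16       # Gen5 lanes the CPU exposes to the slots
chipset_gen4_lanes = 12        # general-purpose Gen4 lanes off the X670E

gen5_used = 8 + 8              # 8 for the GPU, 8 for the NVMe adapter
gen4_used = 12                 # SFP+ NIC, spare monitor GPU, misc

print(f"Gen5 slot lanes: {gen5_used}/{cpu_gen5_slot_lanes} used")
print(f"Chipset Gen4 lanes: {gen4_used}/{chipset_gen4_lanes} used")
```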

~ Al Baciere Al Lupo
 


20 minutes ago, CommanderAlex said:

I may be wrong here, but I believe the correct term for the "downsampling" you were thinking of is PCIe bifurcation.

Bifurcation is just the act of splitting a single slot into multiple slots, each with a fraction of the lanes.

 

25 minutes ago, Baciere said:

Trying to come up with a title for this was a pain, and I'm still not sure that it fully encompasses what I'm actually asking for here.

But, to dive into the question at hand:
I'm looking to see if this is a sound concept: effectively taking 4x PCIe Gen4 NVMe SSDs and allowing them to be accessed over fewer, though higher-bandwidth, PCIe Gen5 lanes, saving on semi-valuable PCIe lanes and on new hardware costs (as in my use case, I already have 2 PCIe Gen4 NVMe SSDs). Instead of burning 16 lanes at Gen4 speeds, I could use 8 at Gen5 speeds and get the same (or close enough) bandwidth to pull most of the performance out of the drives while not needing to shell out cash on Gen5 SSDs themselves.

The goal would not be a hardware RAID controller, but to allow software direct access to the drives themselves. In my personal use case this *would* be an array of four 2TB NVMe SSDs in RAID 0 for maximum speed, so a RAID controller wouldn't be out of the question, however.


I've been made aware of the Aorus Gen5 AIC Adaptor, which seems like almost exactly what I'm interested in for this use case, but I have a few questions:


1: Will this actually save me Gen5 lanes, or will they operate as Gen4 lanes and I'm back at square one, just with a fancy hunk of metal in the way?
2: Is this even something reasonable to care about on Zen 4 with an X670 board, given the number of available general-purpose PCIe lanes?
3: Am I absolutely mad for attempting something like this in the first place, or should I just get a larger, single Gen5 NVMe SSD for a less 'messy' build?

You want a board that not only supports bifurcation, as mentioned before, but also acts as a PCIe switch, pretty much like the one in the chipset of modern mobos (for example, the X570 chipset is connected to the CPU through PCIe 4.0 and gives you extra PCIe 4.0/3.0/2.0 lanes). Finding such a board is hard, and using keywords such as "switch" on Google brings up awful results, so good luck with that. I believe that's only a thing on server motherboards, unless you're willing to design your own PCB for it.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


So what you're trying to do is trick 4 PCIe 4.0 lanes into running on 2 PCIe 5.0 lanes.

 

AFAIK that tech doesn't exist.

 

You can bifurcate an x16 slot into 4x4 lanes for 4 drives, but that's about it. There are rumors that PCIe 6 or 7 will include the option of x2 drive lanes (cuz they're gonna be bonkers fast), but that's not a thing yet.


24 minutes ago, tkitch said:

So what you're trying to do is trick 4 PCIe 4.0 lanes into running on 2 PCIe 5.0 lanes.

 

AFAIK that tech doesn't exist.

 

You can bifurcate an x16 slot into 4x4 lanes for 4 drives, but that's about it. There are rumors that PCIe 6 or 7 will include the option of x2 drive lanes (cuz they're gonna be bonkers fast), but that's not a thing yet.

I'm not sure if one that does 5.0 -> 4.0 exists, but 3.0 -> 2.0 ones are a thing, like this one:

https://c-payne.com/products/pex-8747-plx-pcie-switch-card-x8x8x8x8-3w



Besides bifurcation, there are PCIe switch chips, which take in a number of PCIe lanes and create a bunch of PCIe lanes that can be arranged as x1, x2, x4, and sometimes even wider than x4.

The PEX switch above is made by Broadcom (which bought PLX, the original company that made them, and raised the price several times as soon as they bought the company), so those switches are now quite expensive.

There's a smaller company, Pericom, which also makes switches; you'll find these chips in eBay products that take 1-4 PCIe lanes and make 2 PCIe x16 slots, each with 1-4 PCIe lanes. Those chips are also expensive, in the $40-60 range each.

 

Microchip (mostly known for microcontrollers) makes switches too, and they actually have PCIe 5.0 switch chips with up to 100 lanes across 52 ports, but they would be very expensive, and I don't know what the deal is ... there's a huge gap between announcements and actual products going on sale. For example, they had a PCIe 3.0 or 4.0 switch (not sure which now) released 2-3 years ago, and the first actual product with such a chip from a company was only released a few months ago. Those would also be super expensive.

 

Here's their PCIe 5.0 switch page: PCIe® Switches | Microchip Technology

 

The older generation PCIe 4.0 switches are super expensive as well ... for example, their 28-lane (max 16 ports) switch, the PM40028, is $190 each: https://www.digikey.com/en/products/detail/microchip-technology/PM40028B1-F3EI/14291782

 

Even if a company buys these in volume to make a PCIe x4/x8 -> 4-8 NVMe M.2 slot adapter card, I doubt they'd get a price better than $100 ... and that would make such adapter cards sell for $150-200, and most people won't pay that much for a product.

 

 

