
What is the maximum number of NVMe drives?

OuterMarker

Planning a future AMD 5950X build and I'm having a hard time wrapping my head around PCIe lane usage.

 

All the common articles address normal PC builds, but I am planning a unique build: a gaming machine that is also used for photo and video editing. My current build has much more sensible storage options (a mix of NVMe and SATA drives), but I've gotten to the point where I have too many drives, which makes it a challenge to find stuff. While a 1TB drive seemed like enough for photo editing a few years ago, I now have a 1TB drive and a 3TB drive just for photos.

 

On the new build, I want to max out the NVMe drives, but I can't figure out how to count PCIe lanes. Ideally, I would like an NVMe drive on the motherboard, a GPU, and 4 NVMe drives in a single volume on an expansion card in a PCIe slot. Does running all 4 drives in 1 volume reduce the number of lanes I'm using? Am I safe to use gen 4 drives, or will that be a bottleneck? I know there are cheaper/more balanced solutions, but that isn't my concern. I want a solution with the best performance that is simple (i.e. not a ton of different drives) and expandable (right now, all my SATA ports are occupied).

 

And yes, I do have a NAS that I back everything up on daily, but I like to work on projects locally.


I have 8 NVMe M.2 drives in my system via 2 PCIe cards holding 4 each (I am also a photographer),

but I use Xeon CPUs with crazy amounts of PCIe lanes.

 



The 5950X comes with 24 PCIe lanes, 4 of which are dedicated to the chipset. That leaves 20 lanes for everything else. 

 

Your GPU takes 16 lanes by default, but it can usually run on 8 lanes with little to no performance penalty depending on the workload, which would leave you with 12 CPU-attached lanes. Most fast NVMe drives use 4 PCIe lanes, so that means you could have 3 NVMe drives running directly off the CPU at full speed. On top of that, the chipset provides its own PCIe lanes, all shared over its x4 uplink to the CPU, so depending on the motherboard you can get even more: 3 drives at full speed plus 1-2 drives running at 50%-90% of full speed, depending on the drive and how much other traffic (USB, SATA, etc.) is going over that chipset link.
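If it helps, here's the same lane arithmetic written out as a quick sanity check. It's just a sketch assuming the 5950X's 24 general-purpose lanes (4 reserved for the chipset uplink), the GPU dropped to x8, and x4 per NVMe drive; the constants are assumptions for illustration, not a spec sheet.

```python
# Rough PCIe lane budget for the build described above (assumed numbers).
CPU_LANES_TOTAL = 24      # Ryzen 5000 general-purpose lanes
CHIPSET_LINK = 4          # reserved for the CPU <-> chipset uplink
GPU_LANES = 8             # GPU dropped from x16 to x8
LANES_PER_NVME = 4        # typical x4 NVMe drive

usable = CPU_LANES_TOTAL - CHIPSET_LINK              # 20 lanes for GPU + storage
left_for_storage = usable - GPU_LANES                # 12 lanes
cpu_attached_drives = left_for_storage // LANES_PER_NVME  # 3 drives at full x4

print(f"CPU-attached lanes free for NVMe: {left_for_storage}")
print(f"NVMe drives at full x4 speed off the CPU: {cpu_attached_drives}")
# Any further drives hang off the chipset and share its x4 uplink bandwidth.
```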

 

To answer some of your more specific questions:

1 hour ago, OuterMarker said:

Does running all 4 drives in 1 volume reduce the number of lanes I'm using?

No. Each drive uses the same number of lanes at all times, and if you want full speed, each one needs its full link. Putting them in one volume doesn't change how many lanes they consume.

1 hour ago, OuterMarker said:

Am I safe to use gen 4 drives or will that be a bottleneck?

You can use gen 4 drives, but those drives are almost too fast for their own good: unless you're doing a ton of throughput (and by a ton, I mean regularly reading/writing 100GB+ multiple times a day), you're unlikely to ever come close to their full potential. The only real argument for gen 4 drives here is that one can run on 2 lanes at about the same speed as a gen 3 drive on 4 lanes, but you'd still be spending double on a drive for the same performance. 
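For a rough feel of the ceilings involved, here's the per-lane math. The per-lane figures are the usual theoretical maximums after encoding overhead, assumed here for illustration rather than measured on any particular drive.

```python
# Approximate usable PCIe bandwidth per lane (GB/s), after encoding overhead.
GEN3_PER_LANE = 0.985
GEN4_PER_LANE = 1.969

print(f"Gen 3 x4 : {4 * GEN3_PER_LANE:.1f} GB/s")   # ~3.9 GB/s
print(f"Gen 4 x4 : {4 * GEN4_PER_LANE:.1f} GB/s")   # ~7.9 GB/s
print(f"Gen 4 x2 : {2 * GEN4_PER_LANE:.1f} GB/s")   # ~3.9 GB/s, same ballpark as gen 3 x4
```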

1 hour ago, OuterMarker said:

I want a solution with the best performance that is simple (i.e. not a ton of different drives) and expandable (right now, all my SATA ports are occupied).

The solution I'd recommend for this probably isn't a pile of NVMe drives in a RAID, which is what it sounds like you're trying to do. I would buy an HBA and use a bunch of SATA drives. It's more cost effective, much easier to set up and use, and for this kind of workload the performance is going to be practically identical. 
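As a rough sanity check on that claim, here's the back-of-envelope throughput comparison. The numbers are typical spec-sheet figures assumed for illustration, not benchmarks of any specific drive.

```python
# Back-of-envelope sequential throughput comparison (assumed typical figures).
SATA_SSD_GBPS = 0.55   # per drive, SATA interface-limited
NVME_GEN3_GBPS = 3.5   # a fast single gen 3 NVMe drive

drives = 4
print(f"{drives} SATA SSDs striped: ~{drives * SATA_SSD_GBPS:.1f} GB/s aggregate")
print(f"1 gen 3 NVMe drive:  ~{NVME_GEN3_GBPS:.1f} GB/s")
# Photo/video editing rarely sustains either number; the application and
# random I/O are usually the limit long before the bus is.
```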


19 minutes ago, RONOTHAN## said:

Your GPU takes 16 lanes by default, but it can usually run on 8 lanes with little to no performance penalty depending on the workload, which would leave you with 12 CPU-attached lanes. Most fast NVMe drives use 4 PCIe lanes, so that means you could have 3 NVMe drives running directly off the CPU at full speed.

I guess what I don't understand about PCIe lanes is whether they are allotted by the physical hardware or configured based on what the system needs. The chances of me using all the lanes for the GPU and all the lanes for my drives at the same time are 0%. So does that mean that as long as I don't use a lot of lanes for my drives my GPU will have all 16 lanes, or does the fact that I have 4 NVMe drives on a PCIe expansion card mean the GPU will always be restricted to 8 lanes even when those drives aren't in use?


2 minutes ago, OuterMarker said:

I guess what I don't understand about PCIe lanes is whether they are allotted by the physical hardware or configured based on what the system needs. The chances of me using all the lanes for the GPU and all the lanes for my drives at the same time are 0%. So does that mean that as long as I don't use a lot of lanes for my drives my GPU will have all 16 lanes, or does the fact that I have 4 NVMe drives on a PCIe expansion card mean the GPU will always be restricted to 8 lanes even when those drives aren't in use?

Lanes are dished out by the motherboard depending on how everything is plugged into it, and the allocation is fixed at boot. Technically, there is something called a PLX chip that can do roughly what you're describing, handing different amounts of bandwidth to a device depending on how much is being asked for, but they're very expensive to add to a board, they add unnecessary latency, and I don't know of any X570 boards that come with one since 3- and 4-way SLI went the way of the Dodo. Also, with a PLX chip, both devices will think they have access to, say, 16 lanes of traffic if the chip has that much bandwidth, but only one of them can actually use it at a time. 
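If you ever want to see what your board actually allocated, on Linux each PCIe device exposes its negotiated link width and speed in sysfs. A minimal sketch (Linux-only, and not every device exposes these attributes):

```python
# Print the negotiated PCIe link width/speed for every device (Linux sysfs).
# Widths are fixed when the board enumerates devices at boot, which is why
# a GPU stays at x8 once the slot is split, whether or not the drives are busy.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        width = (dev / "current_link_width").read_text().strip()
        speed = (dev / "current_link_speed").read_text().strip()
        max_width = (dev / "max_link_width").read_text().strip()
    except (FileNotFoundError, OSError):
        continue  # some devices don't expose link attributes
    print(f"{dev.name}: x{width} of x{max_width} at {speed}")
```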

