
RAID 0 960 Pro NVMe vs Intel VROC Speed

Needing clarification on this topic. Read speeds top out around 3,500 MB/s on a Samsung 960 Pro NVMe whether I run one drive or two drives in RAID 0, so there's absolutely no performance gain on reads from RAID 0. Writes get a bit of a boost but also hit a speed wall.

 

The much slower Intel M.2 NVMe drives using VROC scale up to extremely fast speeds as a boot drive, granted you need to buy at least four of these slow, expensive drives to match a 960 Pro.

 

So my question is: can a motherboard with the option for NVMe RAID 0 ever scale past the 3,500 MB/s threshold? Why the bottleneck? Are there plans from any manufacturer to unlock the full potential of 960 Pros in RAID 0? And what happens to my SATA ports if I fill all three M.2 slots? I have heard they shut off. Yes, the 960 Pro is super fast already, but Intel VROC is able to blow right past it in RAID 0.


The bottleneck is the x4 PCIe 3.0 link from the chipset to the CPU. Four lanes of PCIe 3.0 are limited to roughly 4 GB/s (roughly 1 GB/s per lane); factor in encoding and protocol overhead and I can see them topping out around 3.5 GB/s.
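A rough back-of-the-envelope sketch of that math (illustrative only; the exact overhead depends on payload size and whatever else shares the chipset link):

```python
# Rough PCIe 3.0 bandwidth estimate for the x4 chipset uplink.
# Illustrative numbers only; real overhead varies with TLP payload size.

GT_PER_SEC = 8e9            # PCIe 3.0 raw rate: 8 gigatransfers/s per lane
ENCODING = 128 / 130        # 128b/130b line encoding
LANES = 4

raw_bytes_per_lane = GT_PER_SEC * ENCODING / 8     # bits -> bytes
link_bw = raw_bytes_per_lane * LANES               # ~3.94 GB/s theoretical

# Assume roughly 10-15% lost to packet headers, flow control, and the fact
# that SATA, USB, and LAN traffic also ride the same chipset uplink.
usable = link_bw * 0.88

print(f"Theoretical x4 link: {link_bw / 1e9:.2f} GB/s")
print(f"Usable after ~12% protocol overhead: {usable / 1e9:.2f} GB/s")
```

That lands right around the 3.5 GB/s wall a single 960 Pro already hits on its own, which is why adding a second drive behind the chipset changes nothing on reads.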


Hmm, so I wonder how Intel can bypass this bottleneck? Those promo videos on YouTube show almost 11 GB/s. It seems like it's only done with Intel products. But those drives are super lame, with write speeds down near SATA levels.


Just now, Brock Samson Killer said:

Hmm, so I wonder how Intel can bypass this bottleneck? Those promo videos on YouTube show almost 11 GB/s. It seems like it's only done with Intel products. But those drives are super lame, with write speeds down near SATA levels.

It depends on where the lanes are coming from and how they are configured. If the drives are using PCIe 3.0 lanes direct to the CPU and not going through the chipset, then you can get much higher throughput, since a PCIe 3.0 x16 slot that's actually wired at x16 will give the full bandwidth.
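A quick illustration of why CPU-attached lanes scale and the chipset link doesn't. The per-drive figure below is the 960 Pro's rated sequential read, and the x16 carrier card is just an example of how four M.2 drives typically get wired to CPU lanes; treat the numbers as ceilings, not benchmarks:

```python
# Why CPU-attached lanes scale: each drive gets its own x4, and the x16 slot
# isn't funneled through the shared chipset uplink.

per_lane_bytes_s = 8e9 * (128 / 130) / 8   # ~0.985 GB/s per PCIe 3.0 lane

x4_chipset_uplink = 4 * per_lane_bytes_s   # shared by EVERY drive behind the chipset
x16_cpu_slot = 16 * per_lane_bytes_s       # e.g. a Hyper M.2-style x16 carrier card

drives = 4
drive_seq_read = 3.5e9                     # per-drive sequential read in bytes/s (~3.5 GB/s)

cpu_raid_ceiling = min(drives * drive_seq_read, x16_cpu_slot)
pch_raid_ceiling = min(drives * drive_seq_read, x4_chipset_uplink)

print(f"4 drives behind the chipset: capped at ~{pch_raid_ceiling / 1e9:.1f} GB/s")
print(f"4 drives on CPU x16 lanes:   up to ~{cpu_raid_ceiling / 1e9:.1f} GB/s")
```

Behind the chipset, the array caps at the uplink's ~3.9 GB/s no matter how many drives you stripe; on CPU lanes the ceiling is the drives themselves, which is roughly the scaling those VROC demos are showing off.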


2 minutes ago, Lurick said:

It depends on where the lanes are coming from and how they are configured. If the drives are using PCIe 3.0 lanes direct to the CPU and not going through the chipset, then you can get much higher throughput, since a PCIe 3.0 x16 slot that's actually wired at x16 will give the full bandwidth.

Direct to CPU must be what they are doing. It's Intel only, their drives suck and cost as much as a 960 Pro, insert sad face here.


I believe there is more to it than just the number of lanes. The PCH itself must be introducing some latency as well. I just did a comparison in CrystalDiskMark of an Intel 600p and a Samsung 960 Evo, first running directly off CPU lanes and then going through the PCH.

 

[Attached image: CrystalDiskMark results comparing the Intel 600p and Samsung 960 Evo on CPU lanes vs. behind the PCH]
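For anyone curious why that extra hop shows up even when bandwidth is nowhere near saturated, here's a minimal sketch of how per-IO latency translates into QD1 4K throughput. The latency figures are made-up placeholders to show the relationship, not numbers from the screenshot:

```python
# How a small amount of extra per-hop latency shows up in QD1 4K results.
# Latencies below are hypothetical round numbers, not measurements.

BLOCK = 4096  # bytes per random read

def qd1_throughput_mb_s(latency_us: float) -> float:
    """At queue depth 1, throughput is just block size divided by per-IO latency."""
    ios_per_sec = 1e6 / latency_us
    return ios_per_sec * BLOCK / 1e6

direct_lat_us = 100.0                 # drive on CPU lanes (hypothetical)
pch_lat_us = direct_lat_us + 10.0     # same drive behind the PCH, extra hop

print(f"CPU lanes: {qd1_throughput_mb_s(direct_lat_us):.1f} MB/s at QD1 4K")
print(f"Via PCH:   {qd1_throughput_mb_s(pch_lat_us):.1f} MB/s at QD1 4K")
```

At low queue depths the link is mostly idle, so a few extra microseconds per request is the whole story, which is why the 4K numbers move even when sequential reads don't.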

