Intel Raid 0 Underperforming?

Solved by NewMaxx.

Hey all. I've been running a Samsung 970 Evo 500 GB for a while and was inspired to try running my OS off a RAID 0 setup. So I purchased a Samsung 970 Evo Plus 500 GB, followed a simple guide to put both drives in Intel RAID 0 in my motherboard's BIOS, created a 1 TB volume, and booted Windows from it. Everything has worked pretty much as expected, but I haven't seen the raw performance I was expecting. In both CrystalDiskMark and Samsung Magician, at default settings, I was seeing numbers that surpassed the 970 Evo's specs but didn't quite meet the 970 Evo Plus's. Regardless, I'd expect something in the neighborhood of double the slower drive's speeds. I'm not really sure why this RAID volume is so slow. This is my first experience with RAID.

 

System specs: i7-8700K, Asus Prime Z390, 32 GB Trident Z RGB 3200 MHz, GTX 1080, Intel RAID 0: Samsung 970 Evo 500 GB + Samsung 970 Evo Plus 500 GB.

 

Suggestions or comments? 

Screenshot (5).png

Screenshot (4).png


Why are you running 970s in RAID 0? You will NEVER notice the difference in real-world use unless you're moving thousands of gigabytes. At that point, the bottleneck is probably the IOPS.


12 minutes ago, PittonKris said:

Hey all. I've been running a Samsung 970 Evo 500 GB for a while and was inspired to try running my OS off a RAID 0 setup. So I purchased a Samsung 970 Evo Plus 500 GB, followed a simple guide to put both drives in Intel RAID 0 in my motherboard's BIOS, created a 1 TB volume, and booted Windows from it. Everything has worked pretty much as expected, but I haven't seen the raw performance I was expecting. In both CrystalDiskMark and Samsung Magician, at default settings, I was seeing numbers that surpassed the 970 Evo's specs but didn't quite meet the 970 Evo Plus's. Regardless, I'd expect something in the neighborhood of double the slower drive's speeds. I'm not really sure why this RAID volume is so slow. This is my first experience with RAID.

 

System specs: i7-8700K, Asus Prime Z390, 32 GB Trident Z RGB 3200 MHz, GTX 1080, Intel RAID 0: Samsung 970 Evo 500 GB + Samsung 970 Evo Plus 500 GB.

 

Suggestions or comments? 

Screenshot (5).png

Screenshot (4).png

I wouldn't have thought that RAID 0 of two 500 GB 970 EVOs would be significantly faster than a stand-alone 1 TB unit; on spinners it may give a bigger performance boost, but RAID levels of storage aren't something I have a lot of practical experience with.

 

I would also suggest that a RAID array for a boot drive isn't a wise idea... for bulk storage, yes...



It's pretty simple: all of Intel's M.2 sockets on consumer boards go over the chipset. Not only is this limited to x4 PCIe 3.0, it's shared with most other devices as well (any SATA devices, for example; any cards in non-GPU PCIe slots; etc.), so you're bottlenecked by that. AMD boards have a dedicated M.2 socket using CPU lanes, and furthermore the X570 has twice the chipset bandwidth. It's also possible to take lanes from the GPU if you're so inclined.
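To put rough numbers on that bottleneck (the MB/s figures below are assumed spec-sheet values, not measurements from this thread), the array's sequential ceiling is whichever is smaller: the summed drive speeds or the shared chipset uplink:

```python
# Sketch: a striped array behind one shared chipset (DMI) uplink is capped
# by that uplink, not by the sum of the drives.
# All MB/s figures are approximate, assumed values.

DMI_LINK_MBPS = 3550          # ~usable x4 PCIe 3.0 uplink after overhead
DRIVE_SEQ_READ_MBPS = 3500    # assumed 970 Evo Plus-class sequential read

def raid0_ceiling(num_drives, per_drive=DRIVE_SEQ_READ_MBPS, link=DMI_LINK_MBPS):
    """Best-case sequential read for a stripe behind one shared link."""
    return min(num_drives * per_drive, link)

print(raid0_ceiling(1))   # one drive: 3500 MB/s
print(raid0_ceiling(2))   # two drives striped: still capped at 3550 MB/s
```

Which is why adding a second drive behind the same link barely moves the benchmark number.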


14 hours ago, 5x5 said:

Why are you running 970s in RAID 0? You will NEVER notice the difference in real-world use unless you're moving thousands of gigabytes. At that point, the bottleneck is probably the IOPS.

Honestly, I just wanted to see if I could. I wanted a fat 7 GB/s transfer that I could never actually use. Originally I was running out of space, so I started moving things to other drives and decided to expand; rather than getting a separate drive to manage, I made my C drive larger.


4 hours ago, NewMaxx said:

It's pretty simple: all of Intel's M.2 sockets on consumer boards go over the chipset. Not only is this limited to x4 PCIe 3.0, it's shared with most other devices as well (any SATA devices, for example; any cards in non-GPU PCIe slots; etc.), so you're bottlenecked by that. AMD boards have a dedicated M.2 socket using CPU lanes, and furthermore the X570 has twice the chipset bandwidth. It's also possible to take lanes from the GPU if you're so inclined.

I don't have screenshots from before, but my speed hasn't really increased. It makes sense that I'd be limited to a maximum of about 3.9 GB/s per drive, but in RAID 0 shouldn't my maximum be almost 8 GB/s across two drives?


14 hours ago, Eighjan said:

I wouldn't have thought that RAID 0 of two 500 GB 970 EVOs would be significantly faster than a stand-alone 1 TB unit; on spinners it may give a bigger performance boost, but RAID levels of storage aren't something I have a lot of practical experience with.

 

I would also suggest that a RAID array for a boot drive isn't a wise idea... for bulk storage, yes...

Yeah, I'm still chasing the theoretical maximum, and you're right, it probably isn't wise... but it is fun, and I make a point of backing up all my data to my 8 TB HDD.


1 hour ago, PittonKris said:

I don't have screenshots from before, but my speed hasn't really increased. It makes sense that I'd be limited to a maximum of about 3.9 GB/s per drive, but in RAID 0 shouldn't my maximum be almost 8 GB/s across two drives?

No, because they're both behind the chipset, which has just an x4 PCIe 3.0 uplink for all devices. One drive or ten drives, you get the same maximum sequential bandwidth in CDM. x4 PCIe 3.0 is nominally 32 Gbps, but it's actually less than that: the rate is really 8 GT/s per lane (gigatransfers, decimal, not binary), there's 128b/130b encoding, plus protocol overhead. It works out to more like 3550 MB/s total (writes have more overhead and are slower).
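That arithmetic can be sketched out like so (the ~10% protocol-overhead figure is an assumption here, just to illustrate how you get from the nominal 32 Gbps down to roughly 3550 MB/s):

```python
# x4 PCIe 3.0: 8 GT/s per lane, 1 bit per transfer per lane, decimal giga.
LANES = 4
GT_PER_S = 8.0

raw_gbps = LANES * GT_PER_S              # 32 Gbps nominal
payload_gbps = raw_gbps * 128 / 130      # 128b/130b encoding -> ~31.5 Gbps
payload_mbps = payload_gbps * 1000 / 8   # -> ~3938 MB/s (decimal MB)

PROTOCOL_OVERHEAD = 0.10                 # assumed ~10% for TLP headers, ACKs, etc.
effective_mbps = payload_mbps * (1 - PROTOCOL_OVERHEAD)

print(round(effective_mbps))             # ~3545 MB/s, in line with the ~3550 quoted
```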

 

If you want it faster than that on a consumer board, you have to take lanes away from the GPU and/or go AMD. Again, the chipset uplink is only four lanes no matter what. Here is an example of my striped SN750s on X570.

