
3 M.2 NVMe SSDs in RAID 0 - Better REAL WORLD Performance or Not & How Much?

Hello All,

 

First post on these forums, so I will make this short, sweet, and to the point. Take three Samsung 960 EVOs, with read speeds of 3000MB/s+ and write speeds of ~2000MB/s. It has been shown that putting three of these drives in a RAID 0 configuration does indeed increase read/write speeds in synthetic benchmarks, but my question is: how much real-world performance increase do we see? I'm talking numbers here.

 

I have read multiple statements on different forums, some saying there will be next to zero performance increase for real-world applications and others saying there should be tons. Can someone throw down some experimental data that confirms one side or the other? Any help would be greatly appreciated. The real-world applications I'm interested in fall into the consumer range, not industrial applications.

 

Yeah, I've seen people say that roughly 3900MB/s read is the limit for this kind of thing right now on most builds - something to do with maximum PCIe bus speeds or the maximum bandwidth current Intel chipsets can handle. Correct me if I'm not citing this quite right - can anyone knowledgeable comment on the validity of this?


Unless you're trying to do dual 10GbE to more than two peers, don't try it. 

 

Edit: forgot to mention that you would have to be delivering terabytes for this to be economically useful.

Cor Caeruleus Reborn v6


CPU: Intel - Core i7-8700K

CPU Cooler: be quiet! - PURE ROCK 
Thermal Compound: Arctic Silver - 5 High-Density Polysynthetic Silver 3.5g Thermal Paste 
Motherboard: ASRock Z370 Extreme4
Memory: G.Skill TridentZ RGB 2x8GB 3200/14
Storage: Samsung - 850 EVO-Series 500GB 2.5" Solid State Drive 
Storage: Samsung - 960 EVO 500GB M.2-2280 Solid State Drive
Storage: Western Digital - Blue 2TB 3.5" 5400RPM Internal Hard Drive
Storage: Western Digital - BLACK SERIES 3TB 3.5" 7200RPM Internal Hard Drive
Video Card: EVGA - 970 SSC ACX (1080 is in RMA)
Case: Fractal Design - Define R5 w/Window (Black) ATX Mid Tower Case
Power Supply: EVGA - SuperNOVA P2 750W with CableMod blue/black Pro Series
Optical Drive: LG - WH16NS40 Blu-Ray/DVD/CD Writer 
Operating System: Microsoft - Windows 10 Pro OEM 64-bit and Linux Mint Serena
Keyboard: Logitech - G910 Orion Spectrum RGB Wired Gaming Keyboard
Mouse: Logitech - G502 Wired Optical Mouse
Headphones: Logitech - G430 7.1 Channel  Headset
Speakers: Logitech - Z506 155W 5.1ch Speakers

 


4 minutes ago, ARikozuM said:

Unless you're trying to do dual 10GbE to more than two peers, don't try it. 

 

Edit: forgot to mention that you would have to be delivering terabytes for this to be economically useful.

Hi ARikozuM - this goes over my head at this point. I'm speaking more towards consumer purposes and the day-to-day usage of an average PC user.


You will get bottlenecked by the DMI link that connects the PCH (which these drives connect to) to the CPU. Its bandwidth is equivalent to PCIe 3.0 x4.
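If you want to see where that oft-quoted ~3900MB/s figure comes from, here is a quick back-of-the-envelope sketch (assuming the usual ~985MB/s of usable bandwidth per PCIe 3.0 lane after encoding overhead; the exact real-world ceiling varies by platform and usually lands a bit lower):

```python
# Rough estimate of the DMI 3.0 ceiling (equivalent to PCIe 3.0 x4).
# ~985 MB/s usable per lane after 128b/130b encoding; real NVMe results
# over the chipset usually land closer to ~3500 MB/s once protocol and
# controller overhead are included.

LANES = 4
USABLE_MB_PER_LANE = 985  # MB/s, approximate

theoretical = LANES * USABLE_MB_PER_LANE
print(f"Theoretical DMI ceiling: ~{theoretical} MB/s")  # ~3940 MB/s
```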

[Out-of-date] Want to learn how to make your own custom Windows 10 image?

 

Desktop: AMD R9 3900X | ASUS ROG Strix X570-F | Radeon RX 5700 XT | EVGA GTX 1080 SC | 32GB Trident Z Neo 3600MHz | 1TB 970 EVO | 256GB 840 EVO | 960GB Corsair Force LE | EVGA G2 850W | Phanteks P400S

Laptop: Intel M-5Y10c | Intel HD Graphics | 8GB RAM | 250GB Micron SSD | Asus UX305FA

Server 01: Intel Xeon D 1541 | ASRock Rack D1541D4I-2L2T | 32GB Hynix ECC DDR4 | 4x8TB Western Digital HDDs | 32TB Raw 16TB Usable

Server 02: Intel i7 7700K | Gigabyte Z170N Gaming5 | 16GB Trident Z 3200MHz


2 minutes ago, robedude said:

Hi ARikozuM - this goes over my head at this point. I'm speaking more towards consumer purposes and the day-to-day usage of an average PC user.

 

8 minutes ago, ARikozuM said:

Unless you're trying to do dual 10GbE to more than two peers, don't try it. 

Read it. 

 

I understand that you won't do this and that this is merely theoretical, but the money, time, and effort would be in vain unless you're putting it to use.

 

Edit: Unless you're trying to run Skyrim with every mod under the sun at 4K resolution and hyper-realistic physics.


1 minute ago, robedude said:

Hi ARikozuM - this goes over my head at this point. I'm speaking more towards consumer purposes and the day-to-day usage of an average PC user.

You should quote people instead. Otherwise, if you're gaming, watch this video:

 

PSU Nerd | PC Parts Flipper | Cable Management Guru

Helpful Links: PSU Tier List | Why not group reg? | Avoid the EVGA G3

Helios EVO (Main Desktop) Intel Core™ i9-10900KF | 32GB DDR4-3000 | GIGABYTE Z590 AORUS ELITE | GeForce RTX 3060 Ti | NZXT H510 | EVGA G5 650W

 

Delta (Laptop) | Galaxy S21 Ultra | Pacific Spirit XT (Server)

Full Specs


 

Helios EVO (Main):

Intel Core™ i9-10900KF | 32GB G.Skill Ripjaws V / Team T-Force DDR4-3000 | GIGABYTE Z590 AORUS ELITE | MSI GAMING X GeForce RTX 3060 Ti 8GB GPU | NZXT H510 | EVGA G5 650W | MasterLiquid ML240L | 2x 2TB HDD | 256GB SX6000 Pro SSD | 3x Corsair SP120 RGB | Fractal Design Venturi HF-14

 

Pacific Spirit XT - Server

Intel Core™ i7-8700K (Won at LTX, signed by Dennis) | GIGABYTE Z370 AORUS GAMING 5 | 16GB Team Vulcan DDR4-3000 | Intel UrfpsgonHD 630 | Define C TG | Corsair CX450M

 

Delta - Laptop

ASUS TUF Dash F15 - Intel Core™ i7-11370H | 16GB DDR4 | RTX 3060 | 500GB NVMe SSD | 200W Brick | 65W USB-PD Charger

 


 



No difference at all.

Even NVMe and SATA SSDs aren't really different for everyday use and gaming, so RAID 0 of the best NVMe SSDs won't make a difference in any usage scenario except for ones involving transferring large files fast.
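To put some rough numbers on the "transferring large files fast" case, here is a sketch with assumed sustained speeds (it also assumes the destination can keep up, which in practice it often can't):

```python
# Time to move 50 GB at different sustained read speeds (assumed figures).

FILE_GB = 50
speeds_mbps = {
    "SATA SSD": 550,
    "Single NVMe (960 EVO class)": 3000,
    "3x NVMe RAID 0 (DMI-capped)": 3500,
}

for name, mbps in speeds_mbps.items():
    seconds = FILE_GB * 1000 / mbps  # GB -> MB, divided by MB/s
    print(f"{name:28s} ~{seconds:5.1f} s")
```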

QUOTE/TAG ME WHEN REPLYING

Spend As Much Time Writing Your Question As You Want Me To Spend Responding To It.

If I'm wrong, please point it out. I'm always learning & I won't bite.

 

Desktop:

Delidded Core i7 4770K - GTX 1070 ROG Strix - 16GB DDR3 - Lots of RGB lights I never change

Laptop:

HP Spectre X360 - i7 8560U - MX150 - 2TB SSD - 16GB DDR4


I put the first generation of SATA-based SSDs in RAID 0 when they first came out, and nothing at that time benefited. This was over 5 years ago.

RAID 0 benefits mechanical drives because striping theoretically doubles their sequential throughput, while access times stay the same.

SSDs already have incredibly low access times, so whatever benefit there is from SSD RAID 0 is not as noticeable as it is with mechanical drives.
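For anyone curious what RAID 0 actually does under the hood, here is a minimal striping sketch (hypothetical 128KiB stripe size, not a real driver) showing why sequential throughput scales with drive count while per-request access time does not:

```python
# Minimal RAID 0 striping illustration (hypothetical values, not a real driver).
# A write is chopped into fixed-size stripes distributed round-robin across the
# member drives, so large sequential transfers scale with the number of drives
# while the access time of any single small request does not improve.

STRIPE_SIZE = 128 * 1024  # 128 KiB stripe, a common default

def stripe_write(data: bytes, num_drives: int):
    """Return, per drive, the list of stripe chunks that drive would receive."""
    drives = [[] for _ in range(num_drives)]
    for i in range(0, len(data), STRIPE_SIZE):
        chunk = data[i:i + STRIPE_SIZE]
        drives[(i // STRIPE_SIZE) % num_drives].append(chunk)
    return drives

# Example: a 1 MiB write across 3 drives lands as roughly one third on each.
layout = stripe_write(b"\x00" * (1024 * 1024), num_drives=3)
for n, chunks in enumerate(layout):
    print(f"drive {n}: {len(chunks)} stripes, {sum(len(c) for c in chunks)} bytes")
```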

             ☼

ψ ︿_____︿_ψ_   


19 minutes ago, DeadEyePsycho said:

You will get bottlenecked by the DMI link that connects the PCH (which these drives connect to) to the CPU. Its bandwidth is equivalent to PCIe 3.0 x4.

OK, so this being the case, it could still make sense to take some older NVMe drives that may be cheaper, put them in RAID 0, and hope we get as close as possible to the cap without going over, to maximize performance?

11 minutes ago, RadiatingLight said:

No difference at all.

Even NVMe and SATA SSDs aren't really different for everyday use and gaming, so RAID 0 of the best NVMe SSDs won't make a difference in any usage scenario except for ones involving transferring large files fast.

Makes sense to me. The speeds we are talking about now are so ridiculously fast that in almost every consumer application they are far beyond what is needed, and other bottlenecks come into play (for things such as game load times or OS boot times).

 

My question now becomes: why are people even considering M.2 NVMe drives a worthwhile investment? Does NVMe storage offer performance over SATA-based storage that is worth the price premium for the average consumer?

 

EDIT: Perhaps form factor/size and convenience are the important factors here, as opposed to just raw performance.


2 minutes ago, robedude said:

OK, so this being the case, it could still make sense to take some older NVMe drives that may be cheaper, put them in RAID 0, and hope we get as close as possible to the cap without going over, to maximize performance?

 

One drive gets pretty close by itself; two in RAID 0 will definitely cap it.


1 minute ago, robedude said:

My question now becomes: why are people even considering M.2 NVMe drives a worthwhile investment? Does NVMe storage offer performance over SATA-based storage that is worth the price premium for the average consumer?

No. It does not provide speeds that are worth it, which is why the overwhelming majority of people are still using SATA SSDs.

 

But the price premium associated with NVMe is going away, and NVMe drives are simply cool to have.

People who buy NVMe drives either need them for stuff like 8K video editing with real-time timeline scrubbing, or they just want bragging rights. If paying an extra $10-30 gives you double the read/write speeds, it's still cool enough to spend a bit more on, even if it doesn't give a noticeable real-world boost.


1 hour ago, robedude said:

I have read multiple statements on different forums, some saying there will be next to zero performance increase for real world applications and others saying there should be tons

It depends entirely on your applications. If you're a basic web-browsing kind of guy, or a gamer, there won't be any tangible benefits to be had as far as that goes. If, on the other hand, you're the kind of guy who does professional work with large files, often imports/exports numerous files into various rendering and production programs, or deals with frequent local-network file transfers (assuming your network can handle these kinds of speeds without slowing you down), then yes, the increased speeds can definitely make a noticeable/measurable difference in workflow... but this sort of thing wouldn't apply to most people, and would never apply to a casual user or home PC scenario.
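A quick sense check on the network side (nominal line rates, ignoring protocol overhead): a single 960-class drive already outruns a 10GbE link, so the array only pays off if the pipe is even fatter than that.

```python
# Nominal link speeds vs. a single NVMe drive's rated sequential read.

links_gbps = {"1GbE": 1, "10GbE": 10, "Dual 10GbE": 20}
nvme_read_mbps = 3000  # e.g. a 960 EVO's rated sequential read

for name, gbps in links_gbps.items():
    mbps = gbps * 1000 / 8  # Gbit/s -> MB/s
    print(f"{name:11s} ~{mbps:6.0f} MB/s  (single NVMe is {nvme_read_mbps / mbps:4.1f}x faster)")
```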


 

 

1 hour ago, robedude said:

Yeah, I've seen people say that roughly 3900MB/s read is the limit for this kind of thing right now on most builds - something to do with maximum PCIe bus speeds or the maximum bandwidth current Intel chipsets can handle. Correct me if I'm not citing this quite right - can anyone knowledgeable comment on the validity of this?

This hard limit is caused by a PCIe lane limit, due to the way your chipset typically handles NVMe storage devices via DMI. The theoretical limit of that link is about 4GB/s (32Gbps), but the real-world limit most people actually run into is approximately 3.5GB/s sequential. Random reads/writes (IOPS) are different from sequential speeds and do not have a hard limit the same way your raw bandwidth does. Typically your IOPS will scale better with additional drives than normal sequential speeds do (which usually suffer extreme diminishing returns beyond 2-3 drives in RAID 0). IOPS performance is also the kind of thing you can feel in day-to-day use - whether it's booting your machine, loading a web page, or anything like that where you're loading many small individual things rather than single massive files. Basically, this will make your entire experience feel more "snappy" or responsive. It's going to be a marginal improvement (for example, if it takes 0.5 seconds to open a browser on an NVMe drive, would you really care if that now takes 0.3 seconds with RAID 0?), but it IS going to be there, and you ARE likely going to feel it.
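To illustrate the diminishing-returns point on sequential speed behind the chipset, here is a sketch with assumed per-drive and DMI figures (in line with the numbers above):

```python
# Sequential read scaling of RAID 0 behind the chipset's DMI link.
# Assumes ~3000 MB/s per 960 EVO and a ~3500 MB/s practical DMI ceiling.

PER_DRIVE_MBPS = 3000
DMI_CEILING_MBPS = 3500

for drives in range(1, 5):
    raw = drives * PER_DRIVE_MBPS       # what the drives could deliver together
    seen = min(raw, DMI_CEILING_MBPS)   # what actually gets through the DMI
    print(f"{drives} drive(s): raw {raw:>5} MB/s -> observed ~{seen} MB/s")
```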





ALL THAT BEING SAID! You CAN get around that 4GB/s sequential bandwidth wall if you use your primary CPU PCIe lanes and do not rely on the DMI/chipset PCIe lanes. Installing your M.2 drives on riser cards in your primary motherboard PCIe slots (rather than the board's designated M.2 slots), and then configuring those drives/ports correctly, will let you break the 4-lane limit the motherboard holds you to. The benefit of doing it this way is that it allows you to get a lot more performance out of your NVMe drives; the downside is that you have fewer lanes left for other things that normally run off your CPU (such as your GPU). Unfortunately I do not have first-hand experience setting things up this way, so I can't be of much more help than this, but I have seen multiple other sources doing it to get around the DMI wall (this was popular on the first X99 motherboards, since they had a mere 2GB/s DMI limit, so when the first NVMe drives hit the market it was already bottlenecking single drives, let alone people who wanted to RAID), so it shouldn't be too hard to find a guide/walkthrough, assuming that is something you're interested in. I would not recommend doing this on a 16-lane CPU (such as LGA1151) as you do not really have a lot of lanes to go around even before you add extra devices.
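On the 16-lane caveat, a quick lane-budget sketch (assumed allocations; boards that support bifurcation typically split the x16 slot as x8/x4/x4, and not every board exposes that option):

```python
# Lane budget on a mainstream 16-lane CPU (e.g. a desktop LGA1151 part)
# when NVMe drives are moved onto CPU PCIe lanes via riser cards.

configs = {
    "GPU only":                  {"GPU": 16},
    "GPU + 1 NVMe on CPU lanes": {"GPU": 8, "NVMe #1": 4},  # x8/x4/x4 split, one x4 unused
    "GPU + 2 NVMe on CPU lanes": {"GPU": 8, "NVMe #1": 4, "NVMe #2": 4},
}

for name, alloc in configs.items():
    detail = ", ".join(f"{dev} x{lanes}" for dev, lanes in alloc.items())
    print(f"{name:27s} -> {detail}  ({sum(alloc.values())}/16 lanes)")
```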


25 minutes ago, Zyndo said:

It depends entirely on your applications... I would not recommend doing this on a 16-lane CPU (such as LGA1151) as you do not really have a lot of lanes to go around even before you add extra devices.

Awesome - thanks for the detailed reply. It's all very interesting, really. I'll have to do more research on this and see. These NVMe drives are amazing as far as maximum potential goes; the problem becomes harnessing ALL that power!

 

Anyways, thanks for all the great/thoughtful answers so far guys! I think I like these forums haha.

