
Help: slow RAID 0 speeds with NVMe drives

Valkyrie743

Hey all. I have a Gigabyte Z390 Master board and two SK hynix Gold P31 1TB drives in RAID 0, and I'm getting the same speeds as if I were just using one drive. Is this a limitation of the Z390 chipset or of my motherboard specifically? I would have thought I'd be hitting 5 GB/s read and write in RAID 0.

[attached benchmark screenshot: fxPjoPX.png]


That's a DMI limit on Z390. The DMI is basically a PCIe Gen 3 x4 link, so running the drives in RAID won't help sequential speeds.

 

Random I/O at higher queue depths is still higher, but that won't matter much in practice.

 

But even a single drive is still more than fast enough for almost all tasks.
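
Rough math on that ceiling, treating DMI 3.0 as a PCIe 3.0 x4 link (8 GT/s per lane, 128b/130b encoding) and using the P31's advertised ~3.5 GB/s sequential read as a ballpark:

```python
# Back-of-the-envelope: why two NVMe drives behind the Z390 DMI can't beat one drive's
# sequential speed. Assumes DMI 3.0 behaves like PCIe 3.0 x4 with 128b/130b encoding.
RAW_RATE = 8e9            # 8 GT/s raw signalling rate per PCIe 3.0 lane
ENCODING = 128 / 130      # 128b/130b line-code efficiency
LANES = 4                 # DMI 3.0 is effectively a x4 link

link_ceiling = RAW_RATE * ENCODING * LANES / 8      # bytes per second
single_p31 = 3.5e9                                  # ~3.5 GB/s rated sequential read per P31
raid0_demand = 2 * single_p31

print(f"DMI / PCIe 3.0 x4 ceiling:  {link_ceiling / 1e9:.2f} GB/s")   # ~3.94 GB/s
print(f"Two P31s could supply:      {raid0_demand / 1e9:.2f} GB/s")   # ~7.00 GB/s
print(f"What the link lets through: {min(raid0_demand, link_ceiling) / 1e9:.2f} GB/s")
```

Real-world results land a bit below that ~3.9 GB/s figure once protocol overhead and whatever else hangs off the chipset (SATA, USB, LAN) take their cut.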


3 minutes ago, Electronics Wizardy said:

That's a DMI limit on Z390. The DMI is basically a PCIe Gen 3 x4 link, so running the drives in RAID won't help sequential speeds.

 

Random I/O at higher queue depths is still higher, but that won't matter much in practice.

 

But even a single drive is still more than fast enough for almost all tasks.

 

Oh well, that blows 😞 Well, I already had one of the drives and bought another for more game and Adobe Lightroom RAW image storage. I wanted the drives to show up as one drive rather than two 1TB drives, and figured I'd get free performance out of RAID 0.

  


 


2 minutes ago, Valkyrie743 said:

 

Oh well, that blows 😞 Well, I already had one of the drives and bought another for more game and Adobe Lightroom RAW image storage. I wanted the drives to show up as one drive rather than two 1TB drives, and figured I'd get free performance out of RAID 0.

  


 

Yeah, it's still going to be faster since random I/O is faster, and the sequential speeds don't really matter for most uses. I'd keep the RAID 0, why not.

 

I'd also use Storage Spaces in Windows for the RAID; the RAID built into the motherboard sucks and is much worse.


44 minutes ago, Electronics Wizardy said:

Yeah, it's still going to be faster since random I/O is faster, and the sequential speeds don't really matter for most uses. I'd keep the RAID 0, why not.

 

I'd also use Storage Spaces in Windows for the RAID; the RAID built into the motherboard sucks and is much worse.

 

Why is the RAID that's built into the motherboard bad?

I hear there are many issues with games when using Storage Spaces, so I avoided it.


Mobo RAID isn't bad, that's an urban legend/outdated info people keep spreading for some reason. 

 

 


8 hours ago, Valkyrie743 said:

 

Why is the RAID that's built into the motherboard bad?

I hear there are many issues with games when using Storage Spaces, so I avoided it.

What issues have you heard about with Storage Spaces? I have used Storage Spaces for all my games without problems. It shows up like any other drive in Windows.

 

8 hours ago, Kilrah said:

Mobo RAID isn't bad, that's an urban legend/outdated info people keep spreading for some reason. 

 

 

Motherboard RAID really isn't great. It still uses CPU power, it doesn't handle failure well, there are weird bugs, it's harder to move between systems, and it's less flexible.

But also, why use it when there is a much better solution built into Windows already?


19 hours ago, Electronics Wizardy said:

there are weird bugs

I've always read people saying "there are weird bugs", but never anyone actually reporting any or explaining what they are...

 

19 hours ago, Electronics Wizardy said:

It's harder to move between systems

I've moved mine between machines with no issues

 

19 hours ago, Electronics Wizardy said:

But also why use it when there is a much better solution built into windows already?

Why use a Windows-specific solution when there's a universal low-level one that doesn't depend on OS support?

I can boot linux and access my mobo RAID...

 

Been using it for close to a decade on several machines (all Intel though, don't know about how AMD's stuff is in comparison) and it's always worked fine.


4 hours ago, Kilrah said:

I've always read people saying "there are weird bugs", but never anyone actually reporting any or explaining what they are...

 

Things like handling hard power-offs, surprise removal, drives that aren't in sync, and lots of small issues that hardware RAID cards and good software RAID handle better.

 

4 hours ago, Kilrah said:

I've moved mine between machines with no issues

 

But that's Intel only; you can move Storage Spaces to any Windows 8+ system just fine. You can also move the array to a VM if you want.

 

4 hours ago, Kilrah said:

Been using it for close to a decade on several machines (all Intel though, don't know about how AMD's stuff is in comparison) and it's always worked fine.

Yeah, the AMD one is a bit worse and a lot more hacky currently, especially for things like NVMe RAID.

 

The other big thing is that Storage Spaces has tons of features and is much better at keeping data safe than motherboard RAID: data checksumming, mixed RAID levels, SSD tiering, good command-line utilities, and many more. I just don't see a reason to use motherboard RAID when Windows has a better way to do it built in.
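
For reference, a basic two-drive stripe from the command line is only a couple of cmdlets. Here's a minimal sketch, wrapped in Python since it's easy to script around: "FastPool" and "Stripe" are placeholder names, it assumes two empty, poolable drives and an elevated prompt, and the same thing can be done from the Storage Spaces control panel.

```python
# Minimal sketch: build a two-drive striped ("Simple") Storage Space by calling the
# built-in PowerShell Storage cmdlets. "FastPool"/"Stripe" are placeholder names;
# run elevated on Windows with two empty drives that Windows reports as poolable.
import subprocess

def ps(command: str) -> None:
    """Run one PowerShell command and raise if it exits non-zero."""
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)

# Gather every poolable (empty, non-boot) drive into a new pool.
ps('New-StoragePool -FriendlyName "FastPool" '
   '-StorageSubSystemFriendlyName "Windows Storage*" '
   '-PhysicalDisks (Get-PhysicalDisk -CanPool $true)')

# "Simple" resiliency = striping with no redundancy, i.e. the Storage Spaces take on RAID 0.
ps('New-VirtualDisk -StoragePoolFriendlyName "FastPool" -FriendlyName "Stripe" '
   '-ResiliencySettingName Simple -UseMaximumSize')

# The resulting virtual disk shows up in Disk Management to initialize and format as usual.
```

Like any RAID 0, "Simple" has no redundancy, so losing either drive loses the whole volume.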


Well, it's more powerful for sure, but none of that matters if you just want a basic RAID 0 like OP asked about, which is what I'm using it for as well.


Mobo RAID is softRAID. You can move it between systems with the same storage controller (e.g. Intel to Intel), but Windows-based is more universal (though not for booting). Storage Spaces is actually flexible, but requires PowerShell knowledge to get the most out of it. In any case, RAID-0 will help with IOPS at higher QD/T even if you're limited in sequentials, but neither is really beneficial for the average person.

With SSDs specifically you might be misled on sequentials anyway, because of SLC caching. For example, I have dual 1TB SN750s in RAID-0 and the actual sustained TLC speed (and if you're in a RAID, are you NOT looking for sustained performance?) is 3 GB/s or less. So that giant 6.5-7.0 GB/s from SLC doesn't mean anything anyway - especially as you'd have to be moving to yet another fast array/drive to make the most use of it.

 

(the 1TB P31 has a long-tail seq. TLC write speed of around 1.7 GB/s, but two of them still won't quite exceed x4 PCIe 3.0)
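
Quick sanity check on that, using the same x4 Gen3 ceiling as the DMI math earlier in the thread and the ~1.7 GB/s long-tail figure:

```python
# Sustained (post-SLC-cache) writes: two Gold P31s striped still fit under a PCIe 3.0 x4 link.
link_ceiling = 8 * (128 / 130) * 4 / 8      # ~3.94 GB/s theoretical for PCIe 3.0 x4
p31_tlc_write = 1.7                         # long-tail sequential TLC write per 1TB P31, GB/s
raid0_sustained = 2 * p31_tlc_write         # 3.4 GB/s combined

print(f"x4 Gen3 ceiling:         {link_ceiling:.2f} GB/s")
print(f"2x P31 sustained write:  {raid0_sustained:.2f} GB/s")
print("bottleneck:", "link" if raid0_sustained > link_ceiling else "the drives themselves")
```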


3 hours ago, Kilrah said:

Well, it's more powerful for sure, but none of that matters if you just want a basic RAID 0 like OP asked about, which is what I'm using it for as well.

One thing you can do with software RAID here is have one drive on the CPU lanes and one on the chipset lanes, giving you the full speed boost; Intel RAID needs all NVMe drives connected to the chipset (unless you're using VROC, which I don't think the 115x or 1200 platforms support). But for OP's use it's probably not worth getting a PCIe to M.2 adapter.

 

 


Yep, you can do cross-RAID (PCH + CPU), although the RAID will be as slow as the slowest drive and the PCH does add a bit of latency. You can of course also take GPU lanes for an adapter to get more bandwidth, for example 1x8/2x4 with a GPU and ASUS Hyper, if your motherboard supports PCIe bifurcation.


Well, I found a PDF of my motherboard's PCIe lanes, and the only lanes that go directly to the CPU are the first two PCIe x16 slots. The first slot is x16; if the second one is filled they run as x8/x8 or x8/x4.

I'm trying to find a PCIe riser card that has slots for two NVMe drives. So far I've only found one, and it's 150 bucks, which I don't understand given that you can get single NVMe to PCIe x4 risers for cheap. I figured I'd find a riser card that puts two NVMe slots on a PCIe x8 card.

If I found one I could run the RAID from there and avoid the chipset bottlenecking performance, but I would have to run my 2080 Ti at x8 instead of x16. From what I've read that's not really a big deal; at most I'd lose about 3% GPU performance, give or take.

 


Running the RAID from the CPU's PCIe lanes instead of the chipset would make about as minimal a difference as dropping the GPU to x8 does.


18 minutes ago, Valkyrie743 said:

Well, I found a PDF of my motherboard's PCIe lanes, and the only lanes that go directly to the CPU are the first two PCIe x16 slots. The first slot is x16; if the second one is filled they run as x8/x8 or x8/x4.

I'm trying to find a PCIe riser card that has slots for two NVMe drives. So far I've only found one, and it's 150 bucks, which I don't understand given that you can get single NVMe to PCIe x4 risers for cheap. I figured I'd find a riser card that puts two NVMe slots on a PCIe x8 card.

If I found one I could run the RAID from there and avoid the chipset bottlenecking performance, but I would have to run my 2080 Ti at x8 instead of x16. From what I've read that's not really a big deal; at most I'd lose about 3% GPU performance, give or take.

 

If your motherboard lacks PCIe bifurcation support, you have to get an adapter with a RAID controller or PCIe switch on the PCB, which is a lot more expensive. And correct, you likely won't see a large FPS drop.


8 minutes ago, NewMaxx said:

If your motherboard lacks PCIe bifurcation support, you have to get an adapter with a RAID controller or PCIe switch on the PCB, which is a lot more expensive. And correct, you likely won't see a large FPS drop.

I have a Gigabyte Z390 Aorus Master. How can I tell if my motherboard supports PCIe bifurcation?


6 minutes ago, Valkyrie743 said:

I have a Gigabyte Z390 Aorus Master. How can I tell if my motherboard supports PCIe bifurcation?

https://www.asus.com/us/support/FAQ/1037507/

 

Read the Z390 parts carefully...

 

"Only Intel SSDs can active Intel RAID on CPU function in Intel platform."


15 minutes ago, NewMaxx said:

https://www.asus.com/us/support/FAQ/1037507/

 

Read the Z390 parts carefully...

 

"Only Intel SSDs can active Intel RAID on CPU function in Intel platform."

Oh, that's dumb 😞 Well, I'll just live with what I have. It's not like I need the extra speed; I just thought it would be a free bonus.

