Logankilby

How many Storage devices can I put in a raid 0 configuration?

Recommended Posts

Posted · Original Poster

Theoretically, is there a limit to how many drives I can put into a RAID 0 configuration with an SSD cache? Is there a limit, or can I have an unlimited number of hard drives?


In theory, assuming you can house and power all the drives and you have enough ports on your controller(s), it is limitless. However, since RAID 0 is non-redundant, you wouldn't want to use too many drives, as each one increases the chance that a single failure destroys the whole array.
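
For illustration, a quick back-of-the-envelope sketch in Python (the 97% per-drive survival rate is an assumed figure, not a real drive spec):

```python
# Back-of-the-envelope: RAID 0 only survives if *every* drive survives,
# so the array's survival probability is p ** n for n independent drives.
def raid0_survival(per_drive_survival: float, drives: int) -> float:
    return per_drive_survival ** drives

# 97% annual survival per drive is an assumed, illustrative figure
for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} drives: {raid0_survival(0.97, n):.1%} chance of no data loss")
```

With those numbers, sixteen drives drop you from 97% to roughly a 61% chance of making it through the year without losing everything.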


       My PC Specs

 

  • Ryzen 3 2200g
  • MSI X470 Plus MAX
  • CORSAIR VENGEANCE LPX DDR4 2x8GB @2933MHz
  • Radeon Vega 8 Graphics
  • NZXT H511 White
  • Crucial BX500 240GB SSD (2x) & WD Blue 1TB 7200RPM
  • Corsair RM850x (2018)
  • Wraith Stealth Cooler
5 minutes ago, Unstoppablechicken said:

Uh, fair, but what about RAID 5?

Same thing. Unlimited in theory, limited by how many drives you can fit inside your PC.


Remember to quote or @mention others, so they are notified of your reply


Well... as it is controlled by the controller (I know that sounds ridiculous), it is absolutely up to the controller. But RAID 0 has diminishing returns, because there still has to be a controller to sort out which data comes from where. Two drives do not perform at 200% the speed of a single drive, and with three drives you will be lucky to get 250% the speed of a single drive. There would be no difference (except reliability) between 6 and 7 drives.
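
A toy model of that diminishing-returns curve, assuming a fixed 20% controller/coordination overhead (an illustrative number, not a measurement):

```python
# Amdahl-style toy model: a fixed per-request controller overhead caps
# the benefit of adding stripes. The 20% figure is assumed for illustration.
def effective_speedup(drives: int, overhead: float = 0.2) -> float:
    return 1.0 / (overhead + (1.0 - overhead) / drives)

for n in (1, 2, 3, 6, 7):
    print(f"{n} drives: ~{effective_speedup(n) * 100:.0f}% of single-drive speed")
```

Under this model, two drives land around 167%, three around 214%, and the step from six to seven drives is barely visible.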


I'm very generous with the likes I give out on here; you should be, too.

 

I never believed in orthopedic inserts, but then I got a pair.  Now, I stand corrected. 

 

My friend gave me his Epi-pen as he was dying.  It seemed very important to him that I have it.

 

Why do cows wear bells?

Because their horns don't work.

1 minute ago, Eigenvektor said:

Same thing. Unlimited in theory, limited by how many drives you can fit inside your PC.

RAID 5 is where it's at. I don't know why it isn't the standard for upper-level consumers.


7 minutes ago, shoutingsteve said:

Well... as it is controlled by the controller (I know that sounds ridiculous), it is absolutely up to the controller. But RAID 0 has diminishing returns, because there still has to be a controller to sort out which data comes from where. Two drives do not perform at 200% the speed of a single drive, and with three drives you will be lucky to get 250% the speed of a single drive. There would be no difference (except reliability) between 6 and 7 drives.

Realistically, it's much lower. I use RAID 0 for my two 860 EVOs, and in real-world gaming I see a 25-30% faster load time vs. a single 860 in long-load-time games (heavily modded games where load times are in minutes rather than seconds).

 

Would I do it again? Sure, but is it really worth it? Depends on the user and use case.


Emma : i7 8700K @5.0Ghz - Gigabyte AORUS z370 Gaming 5 - ThermalTake Water 3.0 Ultimate - G. Skill Ripjaws V 32GB 3200Mhz - Gigabyte AORUS 1080Ti - 750 EVO 512GB + 2x 860 EVO 1TB M.2 (RAID 0) - EVGA Supernova 650 P2 - Fractal Design Define R6 - AOC AGON 35" 3440x1440 100Hz - Mackie CR5BT - Logitech G910, G502, G933 - Cooler Master Universal Graphics Card Holder

 

Plex : Ryzen 5 1600 (stock) - Gigabyte B450M-DS3H - G. Skill Ripjaws V 8GB 2400Mhz - GeForce 8800GTS 640MB - 840 EVO 256GB + Toshiba P300 3TB - TP-Link AC1900 PCIe Wifi - Cooler Master MasterWatt Lite 500 - Antec Nine Hundred - Dell 19" 4:3

 

Lenovo 720S Touch 15.6" - i7 7700HQ, 16GB RAM, 512GB NVMe SSD, 1050Ti, 4K touchscreen

 

MSI GF62 - i7 7700HQ, 16GB 2400 MHz RAM, 256GB NVMe SSD + 1TB 7200rpm HDD, 1050Ti


Theoretically infinite, depending on the controller, but there are diminishing returns due to the computational overhead of striping data across devices.

 

RAID, IMO, is only worth it for QLC NAND. SLC should already be fast enough that the SATA controller is the bottleneck.
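
For a sense of what that striping overhead involves, here is a minimal sketch of the address math a RAID 0 layer performs on every request (the 64 KiB stripe size is just a common default, assumed here):

```python
# Map a logical byte offset to (drive index, offset on that drive)
# for a simple round-robin RAID 0 stripe layout.
STRIPE = 64 * 1024  # 64 KiB stripe unit, a common default (assumed)

def locate(logical_offset: int, num_drives: int) -> tuple[int, int]:
    stripe_index = logical_offset // STRIPE
    drive = stripe_index % num_drives  # round-robin across member drives
    offset_on_drive = (stripe_index // num_drives) * STRIPE + logical_offset % STRIPE
    return drive, offset_on_drive

print(locate(200 * 1024, num_drives=4))  # (3, 8192): fourth drive, 8 KiB in
```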

4 minutes ago, Dedayog said:

Realistically, it's much lower. I use RAID 0 for my two 860 EVOs, and in real-world gaming I see a 25-30% faster load time vs. a single 860 in long-load-time games (heavily modded games where load times are in minutes rather than seconds).

 

Would I do it again? Sure, but is it really worth it? Depends on the user and use case.

That's because you're using drives that are already very fast, very close to the limits of throughput on the controller and/or bus.

 

Load times don't necessarily scale linearly with drive speed and may be limited by CPU and/or memory speed, e.g. if data needs to be decompressed. So a better test would probably be copying files to/from other fast storage.
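
A minimal sketch of such a test in Python, with placeholder paths (note the OS page cache can skew results unless the file is much larger than RAM):

```python
# Time a large sequential copy between the array and other fast storage.
# Paths are hypothetical placeholders; use a file much larger than RAM
# (or drop OS caches first) for honest numbers.
import os
import shutil
import time

SRC = "/mnt/fast_nvme/testfile.bin"  # hypothetical source on other fast storage
DST = "/mnt/raid0/testfile.bin"      # hypothetical destination on the array

start = time.perf_counter()
shutil.copyfile(SRC, DST)
elapsed = time.perf_counter() - start
size_gb = os.path.getsize(DST) / 1e9
print(f"{size_gb:.1f} GB in {elapsed:.1f} s -> {size_gb / elapsed:.2f} GB/s")
```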



25 minutes ago, Eigenvektor said:

That's because you're using drives that are already very fast, very close to the limits of throughput on the controller and/or bus.

 

Load times don't necessarily scale linearly with drive speed and may be limited by CPU and/or memory speed, e.g. if data needs to be decompressed. So a better test would probably be copying files to/from other fast storage.

Right, that's why I outlined my use case and said it depends on that.

 

For file transfers RAID 0 is very nice, but only a very limited set of folks will truly benefit from that.

 

 


23 hours ago, Logankilby said:

Theoretically, is there a limit to how many drives I can put into a RAID 0 configuration with an SSD cache? Is there a limit, or can I have an unlimited number of hard drives?

Theoretically, the limit is 65535 devices in a single configuration.

Practically, most high-end hardware controllers have a limit of around 250 drives per controller with the help of expander cards/backplanes.

 

So say you had 4 x LSI 9305-16e's, each feeding a Storinator 60XL enclosure full of disks per port.

That system, with its 16 Storinator enclosures, would have a total of 960 drives and would take up two full-height server cabinets.
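
The arithmetic behind that 960-drive figure, with the counts taken from the post:

```python
# Total drives = controllers x enclosures per controller x drives per enclosure
controllers = 4            # LSI 9305-16e cards
enclosures_per_card = 4    # one Storinator 60XL per external port
drives_per_enclosure = 60  # each 60XL holds 60 drives

print(controllers * enclosures_per_card * drives_per_enclosure)  # 960
```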

 

Once you get into SANs with fiber-switched storage, it's a whole different game.

 



Desktop: Ryzen 7 2700x | Aorus X470 Gaming Ultra | EVGA RTX2080 Super | 32GB (4x8GB) Corsair Vengeance RGB Pro 3200Mhz | Corsair H105 AIO, NZXT Sentry 3 | Corsair SP120's | 1TB Crucial P1 NVMe, 4TB WD Black | Phanteks Enthoo Pro | Corsair RM650v2 PSU | LG 32" 32GK850G Monitor | Ducky Shine 3 Keyboard, Logitech G502, MicroLab Solo 7C Speakers, Razer Goliathus Extended, X360 Controller | Windows 10 Pro | SteelSeries Siberia 350 Headphones

 


Server 1: Fractal Design Define R6 | Ryzen 3950x | ASRock X570 Taichi | EVGA GTX1070 FTW | 64GB (4x16GB) Corsair Vengeance LPX 3000Mhz | Corsair RM650v2 PSU | Fractal S36 Triple AIO | 10 x 8TB HGST Ultrastar He10 (WD Whitelabel) | 500GB Aorus Gen4 NVMe | 2 x 1TB Crucial P1 NVMe | LSI 9211-8i HBA

 

Server 2: Corsair 400R | IcyDock MB998SP & MB455SPF | Seasonic Focus Plus 650w PSU | 2 x Xeon X5650's | 48GB DDR3-ECC | Asus Z8NA-D6C Motherboard | AOC-SAS2LP-MV8 | LSI MegaRAID 9271-8i | RES2SV240 SAS Expander | Samsung 840Evo 120GB | 5 x 8TB Seagate Archives | 10 x 3TB WD Red

 


Generally, RAID cards have a limit of up to 16 drives per RAID array.

Larger pools are usually a mix: RAID 5 on, let's say, 8 drives, and then a stripe or simple spanning on top of that. Also, larger pools (like storage arrays) use those RAID 5s (or RAID 6, or RAID 1) to create a logical pool, and then extents are used to create a LUN.

 

As has been said, it is possible with software solutions to go beyond 16 drives, but even with RAID 0 you will get diminishing returns.
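
A rough sketch of the usable-capacity math for that kind of nested layout (RAID 5 groups with a stripe on top, commonly called RAID 50); the drive counts and sizes below are illustrative:

```python
# Each RAID 5 group gives up one drive's worth of space to parity;
# striping the groups together adds no further capacity cost.
def raid50_usable_tb(groups: int, drives_per_group: int, drive_tb: float) -> float:
    return groups * (drives_per_group - 1) * drive_tb

# e.g. four 8-drive RAID 5 groups of 4 TB drives -> 32 drives, 112 TB usable
print(raid50_usable_tb(4, 8, 4.0))
```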

On 9/9/2020 at 12:42 PM, shoutingsteve said:

RAID 5 is where it's at. I don't know why it isn't the standard for upper-level consumers.

Its reliability is getting worse as drives grow larger. If one drive fails, you are vulnerable during the rebuild, and with something like 4 TB or larger, rebuilds can take serious time. During that time, all of the disks are working hard, exactly the sort of activity that promotes HDD failure. It's been sidelined by more efficient and safer redundant solutions like RAID 6, ZFS, and RAID 10.
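
A rough lower bound on that rebuild time, assuming an optimistic sustained rebuild speed (the 150 MB/s figure is assumed, not measured):

```python
# A RAID 5 rebuild must read every surviving drive end to end, so it
# takes at least capacity / sustained speed.
def rebuild_hours(drive_tb: float, mb_per_s: float) -> float:
    return drive_tb * 1e6 / mb_per_s / 3600

# 4 TB drive at an optimistic 150 MB/s sustained; real arrays
# rebuilding while serving load are often far slower.
print(f"{rebuild_hours(4, 150):.1f} hours minimum")  # ~7.4
```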

7 hours ago, Blue4130 said:

Its reliability is getting worse as drives grow larger. If one drive fails, you are vulnerable during the rebuild, and with something like 4 TB or larger, rebuilds can take serious time. During that time, all of the disks are working hard, exactly the sort of activity that promotes HDD failure. It's been sidelined by more efficient and safer redundant solutions like RAID 6, ZFS, and RAID 10.

Yes, with larger drives you are better off with RAID 6. That's what's most commonly used nowadays in storage systems built on HDDs.

However, for home use, RAID 5 is generally more than adequate, unless you are using something like 10+ drives.

ZFS is NOT RAID. ZFS uses RAID to achieve redundancy.

RAID 10 is much less safe than RAID 6. In RAID 10, when one drive dies you are fine; however, if another one dies that was the mirror of the first, you're screwed. RAID 6 can tolerate the failure of any two drives simultaneously.

The only advantage of RAID 10 is faster rebuild, but this also depends on the number of drives, etc. (RAID 10 will have the fastest rebuild times, but depending on the number of drives, load, and so on, RAID 5/6 can be almost as fast).
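
A quick sketch of the odds behind that point:

```python
# After the first failure in an n-drive RAID 10, only the dead drive's
# mirror partner is fatal, so a random second failure kills the array
# with probability 1/(n-1). RAID 6 survives any two simultaneous failures.
def raid10_fatal_second_failure(total_drives: int) -> float:
    return 1 / (total_drives - 1)

for n in (4, 8, 16):
    print(f"RAID 10 with {n} drives: {raid10_fatal_second_failure(n):.0%} "
          f"of second failures are fatal (RAID 6: 0%)")
```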
