Am I Missing the Point - Raid 1/Raid 10 question

Hi,

 

First post on the forum, but a long time viewer!

 

I've been looking into RAID recently in anticipation of setting up a small NAS for home bits and some of my small business files. I'm having trouble seeing why anyone would go with RAID 1 as opposed to RAID 10. Hoping someone can tell me what I'm missing!

 

As I understand it, RAID 1 mirrors 2 drives. One drive can fail, read speeds may be slightly better than a single drive, but writes will be no different or slightly worse.

 

So why would you ever use RAID 1 instead of RAID 10, which offers the same redundancy but more performance, since it stripes its data when writing? Performance isn't a deciding factor at all in my situation, but if it's the same level of redundancy, then why not?

 

I'll probably be using RAID 6 anyway, but just wondered about the above!

 

Thanks

Ben

RAID 10 needs 4 disks; RAID 1 does not. RAID 10 is theoretically faster but gains no extra redundancy: both setups are guaranteed to survive only 1 drive failure. While RAID 10 can survive 2 drives failing, that requires the failed drives to be in opposite mirrored pairs.
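
For a rough picture of the difference, here's a minimal Linux software RAID sketch (the /dev/sdX device names are hypothetical; a hardware controller or NAS UI does the equivalent behind the scenes):

  # RAID 1: two disks in a single mirror
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

  # RAID 10: four disks, two mirrored pairs striped together
  mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sdd /dev/sde /dev/sdf /dev/sdg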

Enterprise gravitates to RAID on mirrors due to its reliability and extra redundancy.

"Only proprietary software vendors want proprietary software." - Dexter's Law

Link to comment
Share on other sites

Link to post
Share on other sites

OK. Thanks for your replies.

 

So having multiple RAID 1s instead of one RAID 10, for instance, means that if one of the RAID 1s fails, not all of your data is gone, only some?

There are also read performance gains for RAID 1.

 

Yeah, you'll see 2, 3, sometimes 4 or more drives all mirrored, allowing for n-1 failures at the same time.
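
For example, a three-way mirror with Linux mdadm looks like this (hypothetical device names); every disk holds a full copy, so any two of the three can fail:

  # three-way mirror: survives the loss of any 2 of the 3 disks
  mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd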

"Only proprietary software vendors want proprietary software." - Dexter's Law

Link to comment
Share on other sites

Link to post
Share on other sites

OK, that makes sense, thank you. I can see how in an enterprise (with an enterprise budget) that would be a safe way of handling things.

 

Thanks!

Keep in mind RAID is NOT a backup. If a RAID 1 drive fails or corrupts data, it will mirror the issues over to the working drive for as long as it stays active and isn't flagged as damaged.

RAID is used for quick continuation of operations. It's there to prevent a drive failure from breaking the workflow.

I understand, RAID is not a backup. I will back up elsewhere as well.

 

For me it's another level of protection; it's not continuity of service that I'm interested in. It just means I can have one or two drives fail (depending on which RAID type) and not lose the data on that NAS, as opposed to just having a standalone drive which, if it fails, has no redundancy.

With RAID 10, if you lose the wrong 2 drives your whole array is done for.
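
To illustrate with a hypothetical 4-disk RAID 10 (two mirrored pairs striped together):

  mirror A: disk1 + disk2
  mirror B: disk3 + disk4

  Lose disk1 and disk3 (one from each mirror): the array survives.
  Lose disk1 and disk2 (both halves of mirror A): the array is lost.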

Simply put, RAID 1 is something I would use for a small server with minimal storage to run DNS, routers and things like that. Things that are not mission critical.

RAID 10, on the other hand, might be perfect for VOD streaming servers, as you can add so much read/write speed with the drives.

 

So in the end it all depends on what you are doing. As in the NAS example, RAID 10 would be good if you have 10+ drives and you want to use it for Plex or something like that.

2 hours ago, Ben3991 said:

I've been looking into RAID recently in anticipation of setting up a small NAS for home bits and some of my small business files. I'm having trouble seeing why anyone would go with RAID 1 as opposed to RAID 10. Hoping someone can tell me what I'm missing!

 

As I understand it, RAID 1 mirrors 2 drives. One drive can fail, read speeds may be slightly better than a single drive, but writes will be no different or slightly worse.

 

So why would you ever use RAID 1 instead of RAID 10, which offers the same redundancy but more performance, since it stripes its data when writing? Performance isn't a deciding factor at all in my situation, but if it's the same level of redundancy, then why not?

The better question is actually why would you pick RAID 10 over RAID 5 or 6.

 

RAID 1 is generally only used for an OS disk mirror or in a NAS that only has 2 bays.

 

Everything below is with respect to hardware RAID, e.g. an LSI 9361.

 

RAID 10 has its own set of benefits and drawbacks.

  • Pros: Low write latency; write-back cache not required
  • Cons: Poor storage efficiency, as you only get 50% usable space; cannot be expanded with more disks once created; if a single mirror fails (both disks), the whole array is lost

RAID 10 isn't actually a good choice on hardware RAID controllers and should only be used for applications that require low write latency and have small data growth, e.g. databases and DB logs.

 

RAID 5/6 again has its own set of benefits and drawbacks.

  • Pros: Better storage efficiency; can be expanded with more disks
  • Cons: Write-back cache required (in my opinion); rebuilds from disk failures stress all disks in the array, increasing the chance of another failure before the rebuild completes

RAID 5 and 6 actually do have excellent read and write performance so long as you have an active and working write-back cache. Due to having more active spindles, larger arrays will often outperform a RAID 10 array with the same number of disks, with the exception of highly random, large queue depth writes.
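
As a rough worked example of the storage efficiency point (hypothetical drive count and size, ignoring formatting overhead), take eight 4 TB disks:

  RAID 10: 8 x 4 TB / 2            = 16 TB usable (50%)
  RAID 6:  (8 - 2 parity) x 4 TB   = 24 TB usable (75%)
  RAID 5:  (8 - 1 parity) x 4 TB   = 28 TB usable (87.5%)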

 

That's all great information for hardware RAID, but what about software solutions like ZFS, btrfs or Storage Spaces? Well, unfortunately, because they are software, results and recommendations vary between them, so generalizations like the above don't apply universally to all of them.

21 minutes ago, leadeater said:

The better question is actually why would you pick RAID 10 over RAID 5 or 6.

If implemented in software, lower computational cost? I think I prefer software solutions so you're not hardware dependent.

 

I don't know if it has a name: the system used in Unraid. Performance sucks, but it is really simple. Read performance is comparable to a single drive; writes are worse in the standard configuration (limited by the parity read-write operation), although you can mitigate this if you want by using a separate cache. If drives fail beyond the recovery capacity, the remaining drives still have retrievable data on them.

3 minutes ago, porina said:

If implemented in software, lower computational cost? I think I prefer software solutions so you're not hardware dependent.

Even in software RAID the inability to expand RAID 10 is sometimes there. ZFS is one where you can just keep adding mirrors to the pool and it'll build out the stripe happily. Storage Spaces can too; not sure about btrfs though.
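
For example, a minimal ZFS sketch (the pool name "tank" and the /dev/sdX device names are hypothetical):

  # pool striped across two mirrors (the RAID 10 equivalent)
  zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde

  # later, grow the stripe by adding a third mirror vdev
  zpool add tank mirror /dev/sdf /dev/sdg
  zpool status tank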

 

Parity calculation nowadays isn't that big of a demand unless you're pushing really high throughput.

 

For things like 4-8 disk arrays I still prefer hardware RAID: simple, rock solid, it just works with next to no effort, and it will outperform a comparable system doing it in software. Software-based solutions easily take the win, though, with scaling and performance at scale.

4 hours ago, leadeater said:

Even in software RAID the inability to expand RAID 10 is sometimes there. ZFS is one where you can just keep adding mirrors to the pool and it'll build out the stripe happily. Storage Spaces can too; not sure about btrfs though.

 

Parity calculation nowadays isn't that big of a demand unless you're pushing really high throughput.

 

For things like 4-8 disk arrays I still prefer hardware RAID: simple, rock solid, it just works with next to no effort, and it will outperform a comparable system doing it in software. Software-based solutions easily take the win, though, with scaling and performance at scale.

Sorta (mirrors yes, disks no). So if you have 3 drives in a RAIDZ1 you can't add a 4th; you need to add 3 more as another vdev to expand it (like RAID 50, e.g. http://alp-notes.blogspot.com/2011/09/adding-vdev-to-raidz-pool.html), currently that is. This has been solved with ZFS reflow and a 4th can now be added, but I don't think it's actually implemented in anything yet. That type of configuration is very fast, hardware depending.
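
A quick sketch of what that expansion looks like (hypothetical pool and device names):

  # existing pool: a single 3-disk raidz1 vdev
  zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd

  # you can't add a lone 4th disk to that vdev; instead you stripe
  # another raidz1 vdev onto the pool, RAID 50 style
  zpool add tank raidz1 /dev/sde /dev/sdf /dev/sdg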

 

Hardware RAID is just a computer that runs software (firmware) and lies to your OS about being a disk. Yes, it has cache and logic, but so does your OS. Personally I'd trust ZFS over anything if it were an option.

"Only proprietary software vendors want proprietary software." - Dexter's Law

Link to comment
Share on other sites

Link to post
Share on other sites

8 hours ago, jde3 said:

Sorta (mirrors yes, disks no). So if you have 3 drives in a RAIDZ1 you can't add a 4th.

Yes, I know; I only said that you can expand it with more disks, not how. Hardware RAID 10 cannot be expanded in any way at all without destroying and recreating the array.

 

8 hours ago, jde3 said:

You need to add 3 more as another vdev to expand it.

Actually you can add any vdev configuration to a pool; it doesn't need to match the other vdev configurations, but it should.
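
For instance (hypothetical names again), ZFS will let you do this but complains unless you force it:

  # pool built from a raidz1 vdev
  zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd

  # adding a plain mirror vdev works, but zpool warns about the
  # mismatched replication level unless you pass -f
  zpool add -f tank mirror /dev/sde /dev/sdf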

 

As I said, software solutions are quite different to hardware RAID cards: they all work slightly differently, whereas hardware RAID cards, even from different vendors, all work the same. Some might lack online expansion features, but the way the RAID levels function, and how the data is distributed, accessed and written, is the same whether it's an LSI or Adaptec card. The same isn't true of software solutions.

 

8 hours ago, jde3 said:

Hardware RAID is just a computer that runs software (firmware) and lies to your OS about being a disk. Yes, it has cache and logic, but so does your OS. Personally I'd trust ZFS over anything if it were an option.

ZFS isn't the best solution for all situations though, and if you don't want or need externalized storage then it's not even an option for something like ESXi. As far as data integrity goes, which is what people harp on about for ZFS, the real-world situation isn't anything like what is portrayed. I have never encountered, or heard of, a hardware RAID array being corrupted due to silent errors on disks, and I've seen and configured a hell of a lot. Hardware RAID cards do patrol reads for a reason and can correct errors. If a 1056-disk hardware RAID array (please do not do this nowadays!) didn't suffer from this after running continuously for 10 years, surviving multiple disk failures and rebuilds, then I'd like to see someone with a configuration even more at risk than that one was.

 

I have seen and had to fix failed RAID controllers, and sometimes when they fail they corrupt the array, but I've only seen cheap 3ware controllers do that.

 

Not that I'm anti-ZFS or anything, but nothing annoys me more than the rhetoric a lot of people put out about hardware RAID when they have never used it over the years and have no idea what they are actually talking about. Theoretical issues that can be explicitly demonstrated aren't the same thing, by a long shot, as real-life usage and risk. It's important to know about that risk with hardware RAID, but the solution to it is backups, which should already be happening anyway.

About 10 years ago, when ZFS hit the market on Sun systems, I pretty much stopped using / started replacing hardware RAID controllers. Being that I was in a Sun-friendly shop, though, that makes sense. You're right on a lot of aspects here; life did go on before that, and don't get me wrong, ZFS doesn't make sense in every type of configuration. (Though it can do KVM to a zvol if your infrastructure is KVM-based as opposed to ESXi.)

 

I'm simply saying the benefits that you get outweigh a lot of the cons. Yes, it may be a bit slower, but it's also doing more work (and I'm not even sure that's true with LZ4 and compressed ARC now). The one thing you never need to worry about with ZFS is needing a specific piece of hardware, or even a specific firmware on that hardware, to restore an array.
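
For reference, compression is just a dataset property in ZFS (hypothetical pool/dataset names):

  # enable lz4 compression on a dataset
  zfs set compression=lz4 tank/data

  # check how well it's compressing
  zfs get compression,compressratio tank/data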

"Only proprietary software vendors want proprietary software." - Dexter's Law

Link to comment
Share on other sites

Link to post
Share on other sites

4 minutes ago, jde3 said:

Yes, it may be a bit slower but it's also doing more work.

Hardware RAID always sucked at scaling though; ZFS does a much better job at the much larger disk counts from what I've seen. Then you have all the extra nice stuff like snapshots, clones, dedup etc. that are all there ready to be used and don't require extra software or plugins.
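
For example (hypothetical pool and dataset names):

  # instant point-in-time snapshot
  zfs snapshot tank/vms@pre-upgrade

  # writable clone built from that snapshot
  zfs clone tank/vms@pre-upgrade tank/vms-test

  # deduplication is a per-dataset property (very RAM hungry)
  zfs set dedup=on tank/vms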

 

Not really a ZFS person though; I'm in the other corner of the ring in the fight over who is copying whom, in other words NetApp.

They scare me. Around, oh, say the 2000-ish era, I had a Compaq RAID controller die on an older model system (of course, right, why would the new one die? lol), and getting a replacement from HP after they bought them out was a total bitch. I think I got it working, got the data off, and sent that thing off to the fires of hell. Once bitten, twice shy, I guess. :)

 

Edit: oh yeah, the other Compaq system caught fire. Interesting place to work.

"Only proprietary software vendors want proprietary software." - Dexter's Law

Link to comment
Share on other sites

Link to post
Share on other sites

Create an account or sign in to comment

You need to be a member in order to leave a comment

Create an account

Sign up for a new account in our community. It's easy!

Register a new account

Sign in

Already have an account? Sign in here.

Sign In Now

×