6x 4TB drives: RAID 6 or 10?

Dybre

I have 6 4TB drives I'm adding to my system for storage/file sharing. I want good speed and a good safety net against data loss. My onboard RAID controller can do 0, 1, 5, 6, 10, and JBOD. I was thinking of going RAID 5 but have heard that RAID 6 is better and RAID 10 is faster. Just need some good pointers on how to set these drives up best.


My vote goes to RAID 6, as it's more than likely fast enough, and RAID 5 doesn't offer all that much security when you consider the likelihood of a URE while you're rebuilding the array (if one drive fails).



~snip~

 

Hey there Dybre,
 
It really depends on what you're after. Different RAID types offer different pros and have different cons. Some use one or more drives for redundancy, others don't have any redundancy whatsoever. How much usable space would you like to have (as some RAID types will take one or more drives' worth of capacity for redundancy)?
Here's a brief explanation of each RAID type:
 
- RAID0 (Striping) offers a good boost to your sequential speeds while it has little effect on random ones. Data is striped across all drives simultaneously. It offers no fault tolerance, so if any of the drives fail you lose everything on the array. 6x4TB in RAID0 would give you 24TB of usable space and no safety against drive failures.
 
- RAID1 (Mirroring) offers data safety and some increase in read speeds. It basically creates exact copies of your main drive and can retain your data even if one drive fails. In your case, 6x4TB would give you 4TB of usable storage space and you could sustain up to 5 drive failures without losing your data. It would basically create 5 copies of your main drive.
 
- RAID5 is a kind of mixture. It uses a mathematical algorithm to distribute parity. Your data is striped across (in your case) five drives' worth of capacity while the equivalent of one drive is used for parity. The parity is distributed among all six drives, meaning it doesn't sit on a single dedicated drive but rather on all of them. 6x4TB in RAID5 should give you 20TB of usable storage space, some speed gain and one-drive fault tolerance before your data is in danger. Keep in mind that RAID5 is considered by many a risky option when it comes to array rebuilding.
 
- RAID6 acts pretty much like RAID5 but uses the capacity of two drives (unlike RAID5, which uses the capacity of one drive) for parity. It is considered more stable and safer to rebuild, but puts more work on the CPU or controller due to the double parity. 6x4TB in RAID6 will give you 16TB of usable space plus fault tolerance of any two drives before your data is in danger.
 
- RAID10 is basically a combination of the first two types (RAID0 and RAID1). The data is striped across half of the drives and mirrored on the other half. It's a pretty stable solution, but it uses half of your drives for mirroring rather than capacity. It can sustain at most half of the drives failing, but it really depends on which drives fail: if you lose a drive and its respective mirror, you won't be able to rebuild the array. In your case, 6x4TB should give you 12TB of usable storage space and up to 3 drive failures (as long as no two failed drives belong to the same mirrored pair) before your data is in danger.
 
- JBOD is simply all the drives working separately. You have 6 physical drives, meaning you will see 6 separate drives in your Disk Management. :)
 
- JBOD span, on the other hand, simply merges all the drives into one huge volume, filling them one by one and offering no redundancy or speed increase whatsoever. :)
 
I would recommend RAID6 out of these as it, I personally think, offers the best balance between data redundancy, speed and usable capacity. :)
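To put those capacity and fault-tolerance numbers side by side, here's a quick back-of-the-envelope sketch (purely illustrative; it uses the simple textbook definitions above, the OP's 6x4TB drives, and ignores formatted-capacity overhead):

```python
# Usable capacity and worst-case fault tolerance for n identical drives,
# using the simple textbook definitions of each RAID level described above.
def raid_summary(n_drives, drive_tb):
    total = n_drives * drive_tb
    return {
        "RAID0":  (total,                     "nothing (any failure kills the array)"),
        "RAID1":  (drive_tb,                  f"{n_drives - 1} drives"),
        "RAID5":  ((n_drives - 1) * drive_tb, "any 1 drive"),
        "RAID6":  ((n_drives - 2) * drive_tb, "any 2 drives"),
        "RAID10": (total / 2,                 "1 drive per mirrored pair"),
    }

for level, (usable, tolerance) in raid_summary(6, 4).items():
    print(f"{level}: {usable} TB usable, can lose {tolerance}")
# RAID5 -> 20 TB, RAID6 -> 16 TB, RAID10 -> 12 TB, matching the figures above
```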
 
Feel free to ask if you happen to have any questions!  
 
Captain_WD.

If this helped you, like and choose it as best answer - you might help someone else with the same issue. ^_^
WDC Representative, http://www.wdc.com/ 


@Captain_WD When I went to build my NAS I calculated the likelihood of a URE when rebuilding a RAID 5 array of three 4TB WD Red drives and found that it was basically guaranteed to happen. So how much lower would the risk be with four drives in RAID 6 vs three drives in RAID 5?

P.S. Regarding your RAID 5 description: it's only viewed as bad depending on the size of the drives, since if they're small enough the likelihood of a URE is very small.
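For anyone wondering where that "basically guaranteed" feeling comes from, the naive URE math is a simple per-bit probability calculation. A rough sketch, assuming the commonly quoted consumer-drive spec of one unrecoverable read error per 10^14 bits (the real figure is on each drive's datasheet):

```python
# Naive probability of hitting at least one URE while reading `tb_read`
# terabytes during a rebuild, assuming one URE per `ure_bits` bits read.
# This is the textbook calculation only; later posts in this thread argue
# why real-world rebuild failures are much rarer than this suggests.
def p_ure_during_rebuild(tb_read, ure_bits=1e14):
    bits_read = tb_read * 1e12 * 8          # decimal TB -> bits
    return 1.0 - (1.0 - 1.0 / ure_bits) ** bits_read

# RAID5 of 3x4TB: rebuilding one drive means reading both survivors (8TB).
print(f"3x4TB RAID5 rebuild: {p_ure_during_rebuild(8):.0%} chance of a URE")
# RAID6 keeps a second lot of parity, so a single URE during the rebuild
# can still be recovered from instead of failing the whole array.
print(f"4x4TB RAID6 rebuild (12TB read): {p_ure_during_rebuild(12):.0%} chance of a URE")
```

By this math a 3x4TB RAID 5 rebuild has roughly a coin-flip chance of hitting a URE, though as the posts further down argue, that number shouldn't be taken at face value.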



~snip~

 

Hey djdwosk97 :)
 
You can find the data for all drives on their spec sheets. Here's an example for the WD Red drives: http://products.wdc.com/support/kb.ashx?id=GnGt0Y
 
Cheers,
 
Captain_WD. 



Since you're wanting to do this for storage and sharing, I'd lean toward RAID 10. You'll have better write and read performance compared to RAID 6, and better redundancy and chance of recovery. RAID 10 means you'd have three RAID 1 arrays with a RAID 0 striped across them. You can lose at most 1 drive from each RAID 1, and so long as you don't lose both drives in a RAID 1 pair, the data is recoverable -- although the recovery time is going to be long given you're talking about 4TB drives. But it'll likely still be significantly faster than rebuilding one of those 4TB drives in a RAID 6 setup.



 

Hey djdwosk97 :)
 
You can find the data for all drives on their spec sheets. Here's an example for the WD Red drives: http://products.wdc.com/support/kb.ashx?id=GnGt0Y
 
Cheers,
 
Captain_WD. 

 

 

 

Well, even though the math in the many URE-calculation blogs and such is rather sound, the reality is far from it. Unless you are using a RAID card with absolutely zero features other than doing RAID itself (rare, and one's own fault for using one), rebuild failures or corruption from UREs are nowhere near as common as those numbers indicate.

 

For a very long time now, hardware RAID cards have done preventative maintenance and actively fixed errors so a URE does not cause array failure. Background scrubbing/patrol reads check for bad blocks and fix them from another copy or rebuild them from parity data. Also, during a rebuild, if a bad block is found that would cause the rebuild to stall or fail, the controller will try another copy if it can (RAID10 & 6).

 

Even Linux mdadm software RAID can do these checks on a schedule to prevent UREs/bit rot from causing long-term damage, fixing the bad block while it is still possible to do so.
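For reference, kicking off one of those md scrub passes is just a write to sysfs (a minimal sketch, assuming an md array at /dev/md0 and root privileges; most distributions already schedule this for you via a cron job or systemd timer):

```python
from pathlib import Path

# Start a "check" pass on /dev/md0: md reads every block in the array and
# repairs anything it can from the redundant copies/parity, which is the
# scheduled scrubbing described above. Requires root.
Path("/sys/block/md0/md/sync_action").write_text("check\n")

# Progress shows up in /proc/mdstat while the check runs.
print(Path("/proc/mdstat").read_text())
```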

 

People need to stop and think before just looking at the raw probability numbers and accepting them as true fact for the likelihood of actual failure. If those numbers were actually the case, RAID would not have been used for so long, and it is still in use now. Also, hardware manufacturers would not sit around doing nothing about the issue; they have come up with tools to help prevent it.

 

The only array that I have seen fail to rebuild was a 16x1TB RAID 5 on the cheapest 3Ware controller possible, built in 2008. What actually happened was the card failed during a rebuild, so it was replaced, and the array would then get stuck at 90% rebuilt. That could have been a URE or corruption from the failed card; I cannot know for certain which it was.

 

I have built and maintained many RAID arrays (hundreds, really) of many different disk counts and sizes, from 3 disks to 32+ and capacities beyond 200TB. If those numbers were the real-world probability of failure then almost every single array I have built should have failed, and that has not been the case.

 

This is not to say there hasn't been real concern over the issue in the IT industry; that is why RAID 6 is now the preferred parity RAID, and why other storage technologies like ZFS and other custom software features on enterprise storage systems have been introduced.

 

TL;DR: The quoted failure probability is not the real-world likelihood of failure. Don't take the numbers at face value, but do keep them in mind, as it is important to know about UREs and how to prevent damage from them.


.....You'll have better write and read performance compared to RAID 6, and better redundancy and chance of recovery. ......

 

We're not sure what sort of RAID solution the OP is going for, so we'll discount the parity calculation overhead. Writing to a six-drive RAID10 will NOT be faster than writing to a six-drive RAID6. With RAID10, you are striping the data across three drives; with a six-drive RAID6, you are striping data across four drives. Obviously striping across four will be faster. The only time the RAID10 would be faster is if it is a software RAID array running on woefully underpowered hardware that cannot calculate parity fast enough. I'm running FreeNAS on an i3 with RAIDZ2 (RAID6) and I can easily achieve 700MB/s write speeds.
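To make the stripe-width argument concrete, here's the rough arithmetic (a sketch only; it assumes large sequential writes, a controller or CPU fast enough that parity is effectively free, and a made-up ~150MB/s per-drive figure):

```python
# Back-of-the-envelope sequential write estimate for 6 drives.
# Real results depend heavily on the controller, caching and workload.
PER_DRIVE_MBPS = 150   # assumed sustained throughput of one platter drive

def sequential_write_estimate(n_drives, level):
    if level == "RAID10":
        data_drives = n_drives // 2        # the other half only mirrors the same writes
    elif level == "RAID6":
        data_drives = n_drives - 2         # two drives' worth of capacity hold parity
    else:
        raise ValueError(f"unsupported level: {level}")
    return data_drives * PER_DRIVE_MBPS

print("RAID10:", sequential_write_estimate(6, "RAID10"), "MB/s")  # ~450 MB/s
print("RAID6: ", sequential_write_estimate(6, "RAID6"), "MB/s")   # ~600 MB/s if parity keeps up
```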

 

As far as redundancy, RAID6 can lose any two drives without losing any data; RAID10 could lose one drive, or up to three drives, without any data loss. Or it could lose two drives and you'd lose the array, if they were the wrong two drives. The fact is that RAID10 can only safely and reliably lose one drive; anything after that is a gamble.


-snip-

 

Even with the parity write penalty and underpowered hardware, when it comes to reads the RAID 6 would be faster, as you explained, from the extra stripe width. For heavy small-block random I/O, on the other hand, RAID 10 would be slightly faster at reads, and well-implemented firmware will be able to queue up and read from the second drive in each mirror. In mixed read and write workloads RAID 10 would also perform slightly better.

 

Either way, RAID 6 on hardware RAID with a write-back cache and so on will be faster than RAID 10 in the majority of use cases, especially in a home-user scenario. ZFS has good caching too, and because it is so good (and you can also do SSD caching), using a RAID 10-style configuration with it is just wasting space.


Writing to a six drive RAID10 will NOT be faster than writing to a six drive RAID6. With the RAID10, you are striping the data across three drives; With the six drive RAID6, you are striping data across four drives. Obviously striping across four will be faster. The only time the RAID10 would be faster is if it is a software RAID array running on woefully underpowered hardware that cannot calculate parity fast enough. I'm running FreeNAS on an i3 with RAIDZ2 (RAID6) and I can easily achieve 700MB/s write speeds.

Allow me to call bullshit on this. 700 Megabytes per second write speeds? Let me guess, you're running SSDs in that RAID setup on a 10Gbps LAN. By definition you cannot get 700 megabytes/second of throughput on a Gigabit LAN setup, and there isn't a wireless LAN standard that can sustain that kind of throughput either. So let's be a little more realistic. This person is talking about platter drives. The fastest platter drives are going to max out at around 150 megabytes/sec sustained write speeds -- maxing out a SATA I connection, meaning it's fast enough to saturate a Gigabit LAN connection.
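The line-rate arithmetic behind that claim is straightforward (decimal units, before protocol overhead):

```python
# Theoretical maximum payload throughput of common Ethernet speeds.
def link_mb_per_s(gigabits_per_s):
    return gigabits_per_s * 1000 / 8   # Gbit/s -> MB/s

print(" 1 GbE:", link_mb_per_s(1), "MB/s")    # 125 MB/s on the wire, ~110-118 MB/s in practice
print("10 GbE:", link_mb_per_s(10), "MB/s")   # 1250 MB/s
```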

And when discussing RAID 5/6 versus RAID 10, you have to account for the parity calculation. In RAID 5 and 6, there is a write performance penalty due to the parity calculation. It doesn't matter if you're talking hardware or software driven RAID 6.

The write penalty in RAID 1 and 10, however, is limited to just the drives being written, since there is no parity calculation; the data only needs to be written to, at minimum, two drives. A well-implemented controller should be able to keep that penalty to a minimum by keeping all drive seeks in relative parallel. Even software-implemented RAID 1 can do that well.
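For the random-I/O side of this, the usual rule of thumb is a write penalty of 2 for RAID 1/10, 4 for RAID 5 and 6 for RAID 6, i.e. each small logical write turns into that many physical I/Os. A rough sketch, using an assumed ~75 IOPS for a single NAS-class platter drive (illustrative only, and ignoring controller caching):

```python
# Classic rule-of-thumb random-write IOPS estimate for spinning disks.
WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}   # physical I/Os per logical write
PER_DRIVE_IOPS = 75                                      # assumed for one platter drive

def random_write_iops(n_drives, level):
    return n_drives * PER_DRIVE_IOPS / WRITE_PENALTY[level]

for level in ("RAID10", "RAID5", "RAID6"):
    print(f"{level}: ~{random_write_iops(6, level):.0f} IOPS")
# RAID10 ~225, RAID5 ~112, RAID6 ~75 -- the gap only matters for small random writes
```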

And of course data being striped across 4 drives will be faster than striping data across 3. What a concept!!! You mean to tell me that a 4-drive RAID 0 is going to be faster than a 3-drive RAID 0? I never knew it till you pointed that out. Thanks for the education!

(...that last bit is sarcasm, by the way)

 

As far as redundancy, RAID6 can lose two drives without losing any data; RAID10 could lose one drive or up to three drives without any data loss. Or it could lose two drives and you'd lose the array, if they were the wrong two drives. The fact is that RAID10 can only safely and reliably lose one drive; anything after that is a gamble.

The fact is that RAID 10 can lose up to one drive in each pair without loss of data. Once you lose a full pair of drives, you lose the array, but that's true in RAID 1 as well as RAID 10. That's why backups are important.

But RAID 1 -- and by extension, RAID 10 -- doesn't limit you to just having a pair of drives. You can spread the mirroring out to as many drives as you want, depending on how much redundancy you're after and the configuration the controller supports. This means you could have a 6-drive RAID 10 setup comprised of two three-drive RAID 1 arrays with a RAID 0 striped across them. Now you can lose 2 drives in each RAID 1 and so long as you don't lose all 3 in any of the RAID 1 arrays, you're safe. In this instance the total storage would be only 8TB out of an available 24TB, which is one hell of a trade-off in the name of redundancy.

Now once you lose 1 drive, it's highly probable that you'll lose another in short order, especially if the drives are all from the same lot -- so if you buy your drives at different times or from different retailers, you'll lessen that likelihood. This is true, though, regardless of your RAID setup which is why backups are still important -- a lesson that Linus had to learn very, very quickly. You must still plan for contingencies.

So should a drive die, which RAID setup will rebuild faster: 6 or 10? If you're going to say RAID 6, you're completely clueless on that mark. Rebuilding 1 drive in a RAID 10 will always work faster than rebuilding 1 drive in RAID 6 since you're just directly copying data from the mirror instead of rebuilding from parity. This means that rebuilding 2 drives on a RAID 10 will be significantly faster than rebuilding 2 drives in a RAID 6 since the RAID 10 drives can be rebuilt in parallel without any reliance on any other RAID pairs in the array.

The downside is that you can't really use a RAID 1 or 10 while you're trying to rebuild it, or at least you shouldn't use it as that'll slow down the rebuild, and the rebuild may need to completely restart if any writes are made to the drives. That is where RAID 6 gains over RAID 10, though in both you'd suffer a major read penalty trying to use the array as it's being rebuilt. But where RAID 10 gains is with a phenomenon called a partial media failure that can arise because RAID 6, like RAID 5, doesn't check parity on read. With RAID 1 and RAID 10, any failed data read should cause the drive to be flagged as bad.

The reason to opt for RAID 6 over RAID 10 is capacity. Since RAID 10 involves mirrored drives, your capacity is automatically cut in half at minimum, so in a RAID 10 the OP would have at most 12 TB of storage out of 24TB available. RAID 6 would give him 16 TB out of 24 TB.

But you said that your RAID setup is RAID-Z2. That is not the same as RAID 6. It is different from RAID 6 in a very key way: it can recover from a partial media failure by checking data integrity on reads and attempting "self healing" should there be any problems. RAID 6 and RAID 5 do not do this, leaving open the chance of data corruption that RAID 10 can help avoid since the controller should flag the drive as bad if there are any read errors.



To amend my previous post: RAID 10 exists to add redundancy and quick recovery to a RAID 0. RAID 0 is the least safe RAID; you lose one drive, you lose the entire array, so the performance comes at one hell of a gamble. RAID 1's essence, meanwhile, is quick recovery at the cost of storage capacity. So RAID 10 means adding recovery and redundancy to the RAID 0.

This is why nested RAID 1x setups like RAID 10 exist. The mirroring means quick recovery. To that end, there are additional options: RAID 15 and RAID 16. These add the striping and parity of RAID 5/6 over the mirroring of RAID 1. With RAID 10, you have a RAID 0 striped across at least 2 RAID 1 pairs. With RAID 15, you have a RAID 5 striped across at least 3 RAID 1 pairs, while RAID 16 stripes RAID 6 over at least 4 RAID 1 pairs.

Why do this? An additional level of fault tolerance. Actually, it's a very significant amount of fault tolerance. With RAID 15, you can lose any one RAID 1 pair and still be able to recover. With RAID 16, you can lose any two pairs and be able to recover. Additionally, on top of being able to lose full pairs, you can lose additional single drives and still be able to recover. The recovery is still slower than RAID 10, and a greater loss of drives means significantly greater recovery time, but it's possible.

The only way to have a comparable amount of fault tolerance with RAID 10 is to have each RAID 1 have three or more drives, but you still run the risk of losing all data if you lose just 1 of the RAID 1 sets. With RAID 15 you can lose 1 of the RAID 1 sets and be able to recover. With RAID 16, you can lose two and still recover. Additionally the other RAID 1s mean, again, you can lose additional drives on top of that and still be able to recover -- though at that point you'd have to shut down access to the storage to allow the recovery to proceed and hope you don't lose additional drives in the process.

The trade-off, of course, is additional loss of storage. You start with half since you're starting with RAID 1s, then lose additional capacity on top of that for the parity. But the gain is fault tolerance without any significant increase in recovery time -- provided you're not losing drives left and right. Set it up to stripe a RAID-Z or RAID-Z2 over the RAID 1s and you have even better fault tolerance since you're adding in RAID-Z's parity checking.
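A quick capacity sketch for these nested layouts (textbook arithmetic only; as described above, RAID 15/16 need at least 3 and 4 mirrored pairs respectively, so they don't map onto the OP's six drives, and the overheads shown are the simple idealized ones):

```python
# Usable capacity of mirror-based nested layouts: each RAID1 pair contributes
# one drive's worth of space, and the outer stripe gives up `parity_drives`
# of those pair-capacities for parity (0 for RAID10, 1 for RAID15, 2 for RAID16).
def nested_usable_tb(n_pairs, drive_tb, parity_drives):
    return (n_pairs - parity_drives) * drive_tb

print("RAID10, 3 pairs of 4TB:", nested_usable_tb(3, 4, 0), "TB usable of 24TB raw")
print("RAID15, 4 pairs of 4TB:", nested_usable_tb(4, 4, 1), "TB usable of 32TB raw")
print("RAID16, 4 pairs of 4TB:", nested_usable_tb(4, 4, 2), "TB usable of 32TB raw")
```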



Allow me to call bullshit on this. 700 Megabytes per second write speeds? Let me guess, you're running SSDs in that RAID setup on a 10Gbps LAN. By definition you cannot get 700 megabytes/second of throughput on a Gigabit LAN setup, and there isn't a wireless LAN standard that can sustain that kind of throughput either. So let's be a little more realistic. This person is talking about platter drives. The fastest platter drives are going to max out at around 150 megabytes/sec sustained write speeds -- maxing out a SATA I connection, meaning it's fast enough to saturate a Gigabit LAN connection........

I really and truly, honestly, could not care one iota less what you call it. You can call bullshit if you like, or you can call the Ghostbusters. I don't care.

 

But if you must know, I run two 128GB Samsung 840 EVOs in RAID0 on an ASUS Z87 ROG Formula motherboard with an i7 4790K. CrystalDiskMark shows performance of about 1.2 gigabytes per second read and write on the RAID array.

 

Transfers are to my FreeNAS server running on a Supermicro X10-SLL motherboard with an i3 4130. My storage setup on the FreeNAS server consists of a pair of Intel SSDs in a mirror as a boot device and a vdev consisting of 11 2TB platter drives. 

 

I have a Mikrotik switch and I have 10Gb/s network cards in my workstation and my FreeNAS server.

 

When transferring large, multi-gigabyte files, I can easily sustain 700-800 megabytes per second. The only reason I'm not able to achieve over one gigabyte per second is that, right now, I'm using SMB for sharing to my Windows computer and SMB has terrible performance. If I had a more powerful CPU in the FreeNAS server I would hit 1GB/s. I'm going to be installing Windows 8.1 Enterprise this weekend and I'm going to try using Windows Services for UNIX to enable NFS sharing and see if I can saturate the 10Gb connection.

 

This is not some sort of voodoo black magic configuration I have. This is all very common equipment that can be had for pretty cheap.

 

As far as your attempted recovery from giving poor/incorrect advice - Thanks for the lesson in RAID, but you're not telling me anything that I didn't already know. You can argue about any scenario that you can conjure up in your imagination, but the fact of the matter was that your original post was SPECIFICALLY about using six drives in a RAID 6 array, versus using six drives in a RAID10 array.

 

You stated that the RAID 10 would be faster for both read and write. That is not necessarily the case.

 

There is of course a write penalty, but the question is whether the write penalty is so great that it would affect real-world performance. If my lowly i3 is capable of calculating double parity data quickly enough to sustain 700MB/s writes, I can't envision a scenario where there would be a visible drop in performance on a transfer over gigabit Ethernet. Reads could potentially be faster on RAID10, but that depends on the specifics of the data being read and of the RAID controller/software. Not really something you can just use a blanket statement to cover.

 

All this talk on performance is a moot point anyway though; You've already made the assumption that OP is using gigabit ethernet and both options will perform faster than OP's  network, so there is no point of even mentioning speed.

 

Reliability - This is simple; I don't understand why you had to write a massive wall of text to try and backpedal. With RAID6, you can reliably drop ANY two drives without losing data. ANY TWO. That is NOT the case with RAID 10. With RAID 10 you can only ever RELIABLY lose one drive. This is inarguable. Yes, you might be able to lose three drives, or you might drop a second drive and lose the array. It's just a simple matter of fact that you cannot walk up to a RAID10 array, pull two random drives out and be 100% certain that you didn't just lose the array - which is something that you CAN do with RAID6.

 

In the end, RAID10 is not going to be any faster, it's going to be less robust to drive failure, and you'll have 25% less available storage for the same amount of money. So, care to explain again why, in this specific scenario of six drives where the available options are RAID6 or RAID10, you think that RAID 10 would be a better option?

 

 


This is why nested RAID 1x setups like RAID 10 exist. The mirroring means quick recovery. To that end, there are additional options: RAID 15 and RAID 16. These add the striping and parity of RAID 5/6 over the 

 

RAID 51/61 is much more common, but sure you could do that. I wouldn't due to far too high $/GB.


I have a Mikrotik switch and I have 10Gb/s network cards in my workstation and my FreeNAS server.

-snip-

This is not some sort of voodoo black magic configuration I have. This is all very common equipment that can be had for pretty cheap.

Where did I say it's a black magic configuration? The 11 drives all in the same array is why you're getting the performance you're seeing. But I knew from what you typed that one of two things was true: you were either lying through your keyboard, or you had a 10Gb setup. But to say that a 10Gb setup can be "had for pretty cheap" means you and I have a far different definition of what constitutes "cheap".

 

Thanks for the lesson in RAID, but you're not telling me anything that I didn't already know.

In case you're forgetting, you're not the only person who might come across this thread. As such, I might not be telling you anything that you didn't already know, but it could help educate someone else who is coming across this.

 

You stated that the RAID 10 would be faster for both read and write. That is not necessarily the case.

True, it may not be the case as there are several determining factors -- the controller is going to be one of the major ones. I can only speak to what has been published. RAID 10 scales in performance similar to RAID 0 -- not directly in line with RAID 0, but pretty close. RAID 6 and RAID 5 don't scale up nearly as cleanly. And the fact is that RAID 10 has been demonstrated to have better write performance over RAID 6 and RAID 5.

Read performance is where things aren't nearly as cut and dry. I did find one article that demonstrates that, between RAID 5 and RAID 10, random reads favor RAID 5. Random writes still favor RAID 10. So the question is what the OP will be using it for. Since he said storage and sharing, it could be a toss-up as to whether to go with RAID 6 or RAID 10. Fault tolerance is going to likely be the concern here, along with recovery.

And in recovery times, RAID 10 wins. And in the event of loss, RAID 10 won't show a performance hit.

 

All this talk on performance is a moot point anyway though; You've already made the assumption that OP is using gigabit ethernet and both options will perform faster than OP's  network, so there is no point of even mentioning speed.

Agreed. Which is why I didn't spend a lot of space talking about it. The difference in performance between a RAID 6 and a RAID 10 on a NAS on a Gigabit network may be largely unnoticeable.

 

Reliability - This is simple; I don't understand why you had to write a massive wall of text to try and backpedal.

Backpedal? Referring to my point above about you not being the only one coming across this thread, I wrote that "wall of text" for others who may come across it. In that "wall of text", you'll also see that I agree with this point:

 

It's just a simple matter of fact that you cannot walk up to a RAID10 array, pull two random drives out and be 100% certain that you didn't just lose the array - which is something that you CAN do with RAID6.

In a situation of a random failure, yes it's possible to lose the array. In a random failure, it's possible to lose a RAID 6. But what is the likelihood of that catastrophic failure occurring, of losing 3 drives in a RAID 6 or losing the right pair of drives in a RAID 10? It's quite low. That's why a lot of my discussion initially was not on fault tolerance. The amending post went into slightly more detail.

There are trade-offs to RAID 6 and RAID 10. RAID 6 gives you better capacity at the trade-off of longer recovery times. RAID 10 gives you less capacity for better recovery times. With both, random hardware or drive failures could spell doom to your array. Where have I disagreed with this point?

Yes in a RAID 6 you can pull any two drives and be assured that the array is still otherwise intact and recoverable. After losing one drive in a RAID 10 with three RAID 1 pairs, there is a 20% likelihood that the next drive failure takes down the array, a probability that doesn't exist with RAID 6. Again, where did I disagree with that point?
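That 20% figure is just combinatorics over which drives fail. A small sketch (it assumes each surviving drive is equally likely to fail next, which is optimistic when the drives all come from the same batch, as noted earlier):

```python
from itertools import combinations

# Fraction of k-drive failure combinations that kill a 6-drive RAID10
# (three mirrored pairs): the array is lost once both drives of any pair are gone.
def fraction_fatal(n_pairs=3, k=2):
    drives = [(pair, side) for pair in range(n_pairs) for side in (0, 1)]
    fatal = total = 0
    for failed in combinations(drives, k):
        total += 1
        failed_pairs = [pair for pair, _ in failed]
        if any(failed_pairs.count(p) == 2 for p in range(n_pairs)):
            fatal += 1
    return fatal / total

print(f"{fraction_fatal(k=2):.0%} of 2-drive failures are fatal")   # 20%
print(f"{fraction_fatal(k=3):.0%} of 3-drive failures are fatal")   # 60%
# A RAID6 of the same six drives survives any 2-drive failure and no 3-drive failure.
```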

 

In the end RAID10 is not going to be any faster, It's going to be less robust to drive failure and you'll have 25% less available storage for the same amount of money. So, care to explain again why in this specific scenario of six drives, where the available options are RAID6 or RAID10, you think that RAID 10 would be a better option?

Two reasons. First: recovery time. RAID 10 will always be able to recover faster from a drive failure since it's simply mirroring data. Second: performance loss in event of failure. In RAID 10, the performance loss in the event of a drive failure is negligible. In RAID 5 and 6, it's noticeable. The performance loss only occurs when you attempt to rebuild with RAID 10, which will happen in RAID 6 as well. But then, in either instance, access to the array should be cut off during a rebuild to allow for a faster recovery.

In any case, there is always the chance you'll lose the array, so backups are still a must.



@brandishwar @braneopbru Come on guys, spamming this thread with walls of text arguing the exact specifics of who is more technically correct isn't helping the OP or other readers. This is the problem with a lot of IT topics: both viewpoints can be equally correct depending on the premise each person is using.

 

Also, it is never as simple as saying one type is faster than the other. Hardware, write-back cache, disk type, usage profile and so on all influence performance. All one can say is that, generally speaking, RAID 5/6 is faster at reads and RAID 10 is faster at writes.

 

http://www.kendalvandyke.com/2009/02/disk-performance-hands-on-part-5-raid.html


 

The other consideration, which is much more important to the majority of people than performance, is usable capacity and manageability. RAID 5/6 arrays are simpler to add disks to and can do single-disk additions. Unless there is a specific performance and usage profile that requires RAID 10, always use RAID 6. Almost every enterprise disk storage system that I have seen uses RAID 6 or equivalent (double-disk protection) for its disk pooling technologies. If you need more resiliency, create more pools to split data across to reduce the failure domain size, and consider replication to another system or a scale-out system design.


Where did I say it's a black magic configuration? The 11 drives all in the same array is why you're getting the performance you're seeing. But I knew from what you typed that one of two things was true: you were either lying through your keyboard, or you had a 10Gb setup. But to say that a 10Gb setup can be "had for pretty cheap" means you and I have a far different definition of what constitutes "cheap"......

 

When the first sentence of your post is to call somebody a liar for no particularly good reason, the replies tend not to be too friendly. I could have and should have chosen my wording a little better.

 

You can regularly find Mellanox 10Gb cards on ebay for under $15. I bought the two that I have for $28 for the pair. Transceivers will set you back $16 each from fibrestore and 50 feet of optical cable is under $12. That's about $72 to get two computers talking at 10Gb on a private network. If you want a switch, you can get the 48 port Quanta lb4m that has two 10Gb ports for $80 or less if you are patient and you'd need a DAC to connect the NAS to the switch for another $15. 
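Adding that up (a quick tally of the figures quoted above; used-market prices obviously fluctuate):

```python
# Rough cost of a point-to-point 10Gb link between two machines,
# using the secondhand prices quoted above.
point_to_point = {
    "2x Mellanox 10Gb cards (pair)": 28,
    "2x transceivers @ $16":         32,
    "50ft optical cable":            12,
}
print("Two machines talking at 10Gb: ~$", sum(point_to_point.values()))   # ~$72

# Optional: a used 48-port switch with two 10Gb ports, plus a DAC for the NAS.
with_switch = sum(point_to_point.values()) + 80 + 15
print("With the switch added:        ~$", with_switch)                    # ~$167
```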

 

You could have 10Gb networking in your house for between $75 and $170 in a day. Don't know about you, but to me, that's CHEAP.

 

As far as everything else: you make a ton of valid points, but none of them are really relevant. Rebuild time and access speed during a rebuild are hardly reasons to make a decision for a home setup. There might be specific situations in business and industry that would benefit from RAID10, but for a home server that's most likely going to store mostly media, I think the biggest deciding factor should be acceptable redundancy and the most usable storage for the dollars invested.

 

In my opinion, with speed being a moot point, usable storage being greater, and redundancy being superior with RAID6, there is really no reason for a home user to ever pick a nested RAID when using six drives.


You can regularly find Mellanox 10Gb cards on ebay for under $15. I bought the two that I have for $28 for the pair. Transceivers will set you back $16 each from fibrestore and 50 feet of optical cable is under $12. That's about $72 to get two computers talking at 10Gb on a private network. If you want a switch, you can get the 48 port Quanta lb4m that has two 10Gb ports for $80 or less if you are patient and you'd need a DAC to connect the NAS to the switch for another $15.

You could have 10Gb networking in your house for between $75 and $170 in a day. Don't know about you, but to me, that's CHEAP.

Learn something new every day. I'll have to look into all that, so thanks for that info. I'll have to check eBay for used 10GbE switches as well. I think I've found a project for later this year.

Otherwise, I think we've said all that can be said with regard to RAID 6 vs. RAID 10.



  • 2 weeks later...

Thanks for all this info. I'm running RAID 6 right now; copies across from the old FireWire RAID tower on the Mac to my new PC/server are around 100MB/sec. I think the bottleneck is the FireWire tower and the slow drives in it.

