Which RAID card to buy for RAID 6?

Solved by Jarsky.

Hey all.

I'm looking to buy a RAID card that can do RAID 6, but there are so many cards out there and I don't know what I should look for.

Right now I'm running 4x 3TB WD Red in software RAID 5, and the performance is OK but not great.
I want to buy 2 more disks so I'll have 6x 3TB and can run them in RAID 6 for better data protection.

Should I just buy a cheap RAID card on eBay or get a new one? And what brand should I look for?
I've looked at this https://www.tomshardware.com/reviews/sas-6gb-raid-controller,3028-3.html but it's from 2011, although those cards are really cheap on eBay right now.
I don't know whether a new one would be that much better or not.
I'm hoping to stay below $200 if possible.

So, is there anyone here who knows a bit more about RAID cards and can help me with this one? :)


19 minutes ago, Mads-Ejnar said:

and the performance is OK but not great

Is the performance the reason you're looking for a separate RAID card? RAID 5 and RAID 6 are slow by their very nature; the bottleneck is the drives. Have you actually checked whether your server's CPU is idling a lot, waiting for I/O, when you're accessing the RAID? If it is, then a separate RAID card won't do jack shit; it'll only help if your server's CPU can't handle the load.
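To make the "software parity costs CPU" point concrete, here is a minimal, purely illustrative Python sketch of a RAID 5-style XOR parity block being computed on the host CPU (the chunk size and drive count are arbitrary assumptions; this is not how Intel RST or Linux md actually implement it, and real stacks use optimized SIMD XOR routines):

import os
import time

# Toy RAID 5-style parity: XOR the corresponding bytes of each data chunk.
# Pure Python exaggerates the cost, but the point stands: with software
# parity RAID, every full-stripe write spends host CPU cycles on this math.

CHUNK = 1 * 1024 * 1024      # 1 MiB stripe unit (illustrative value)
DATA_DISKS = 5               # e.g. a 6-drive RAID 5: 5 data chunks + 1 parity

def xor_parity(chunks):
    """Return the XOR of equal-length byte chunks (the RAID 5 parity block)."""
    parity = bytearray(chunks[0])
    for chunk in chunks[1:]:
        for i, byte in enumerate(chunk):
            parity[i] ^= byte
    return bytes(parity)

chunks = [os.urandom(CHUNK) for _ in range(DATA_DISKS)]
start = time.perf_counter()
parity = xor_parity(chunks)
print(f"Parity for one {DATA_DISKS * CHUNK // 2**20} MiB stripe took "
      f"{time.perf_counter() - start:.3f} s of CPU time")

RAID 6 adds a second (Reed-Solomon) parity calculation per stripe on top of this, which is why either an offloaded controller or plenty of spare CPU matters.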


You can't move from "software" RAID to "hardware" RAID. You can migrate RAID levels, but you can't migrate between the two types without re-initializing your array.

What software RAID are you running? Windows Server RAID 5? mdadm?


21 hours ago, WereCatf said:

Is the performance the reason you're looking for a separate RAID card? RAID 5 and RAID 6 are slow by their very nature; the bottleneck is the drives. Have you actually checked whether your server's CPU is idling a lot, waiting for I/O, when you're accessing the RAID? If it is, then a separate RAID card won't do jack shit; it'll only help if your server's CPU can't handle the load.

90% of the time I'm using my RAID, the CPU is at around 100%, so right now the CPU is the bottleneck as I see it.
But it's also about having better protection, so I can lose 2 drives and not just 1.
 

 

21 hours ago, Jarsky said:

You can't move from "software" RAID to "hardware" RAID. You can migrate RAID levels, but you can't migrate between the two types without re-initializing your array.

What software RAID are you running? Windows Server RAID 5? mdadm?

I know that. I have a friend I can borrow a NAS from, so I can move my data off, build the new RAID, and then move it all back :)
It's Windows 10 with Intel RST.


1 hour ago, Mads-Ejnar said:

Intel RST

Ouch... I understand now why you want a better solution.

Are you completely set on using a RAID card? Or have you considered better software solutions like Windows Storage Spaces, which allows for two-drive parity?

Personally, I currently have 12x 3TB WD Reds in RAID 6 on my LSI 9271-8i; it's a really good card, but quite expensive.

If you want an LSI RAID card, you might want to lean towards a 9260-8i; they're far cheaper, and you can get the CacheCade key for a fraction of the price, which enables SSD caching. Also make sure that whichever card you get, you get a BBU (battery backup unit) as well, so you can enable write-back caching, which performs far better than write-through.


I just think it will be better and safer to have a real RAID card than software.
The 9260-8i is really cheap on eBay, and the 9271-8i isn't that bad on eBay either.
So if the 9271 is much better, maybe I'll go with that.

What about cables? Can I just buy any SAS-to-SATA cable, or are some better than others?


Not too sure on the difference myself.

Any SFF-8087 to SATA breakout cable should be fine. I've used 4 different types off Amazon and they all work.


Great, thanks for the help :) 


I had the 9261-8i for a while. Got it fairly cheap used on eBay a few years ago; they're much cheaper now. Solid card with solid RAID 6 performance.


10 hours ago, Mads-Ejnar said:

Great, thanks for the help :) 

Make sure to get the battery if you want a RAID card; it will make a huge performance difference.

 

And really look at software RAID. It's gotten much better and is better than hardware RAID for most uses, and it's better at protecting data with things like checksums. It's also much more flexible.


Ended up getting the LSI 9361-8i with CacheVault and battery for $250.
The seller said the card was "new": it was pulled from a workstation and never used.
It looked like a really great deal and a bit more future-proof than the 9271 or 9260.
So I hope I made an OK deal and the card will be fine. All the reviews I could find look really good :)


  • 1 month later...
On 10/18/2018 at 7:42 AM, Mads-Ejnar said:

Ended up getting the LSI 9361-8i with CacheVault and battery for $250.
The seller said the card was "new": it was pulled from a workstation and never used.
It looked like a really great deal and a bit more future-proof than the 9271 or 9260.
So I hope I made an OK deal and the card will be fine. All the reviews I could find look really good :)

Please, please, PLEASE post transfer speeds! The LSI 9361 is arguably THE BEST SAS HBA ever made!

I hate how NO ONE talks about how well RAID 6 performs on the newest technology... The buzzwords are still the common RAID 10 and the very obsolete RAID 5. You said you were doing RAID 6 across 6x 3TB drives? I would LOVE to hear how that performs. What's the fastest speed writing to the drives? I don't care about reads; I need to know about write performance, full sequential with at least 10GB of data.


6 hours ago, Phas3L0ck said:

Please, please, PLEASE post transfer speeds! The LSI 9361 is arguably THE BEST SAS HBA ever made!

I hate how NO ONE talks about how well RAID 6 performs on the newest technology... The buzzwords are still the common RAID 10 and the very obsolete RAID 5. You said you were doing RAID 6 across 6x 3TB drives? I would LOVE to hear how that performs. What's the fastest speed writing to the drives? I don't care about reads; I need to know about write performance, full sequential with at least 10GB of data.

The 9361 isn't an HBA, it's a RAID controller; I have one btw and it's very good. Just note that the advice and general performance figures that get talked about are quite different for software RAID versus hardware RAID. With hardware RAID, parity configurations perform far better than they do in software, especially at lower drive counts.

 

Most people talk about software RAID nowadays, so be cautious about the advice you read and are given.

 

I've got 4x 10k RPM SAS disks in RAID 5 and it'll do over 1GB/s sequential without much issue; worst case, if you fill up the write cache, is ~300 MB/s write. RAID 6 performance is very similar, just slightly slower writes, mostly due to having one less write spindle than RAID 5.

 

On hardware RAID, parity will outperform RAID 10 in sequential reads and writes, and in a fair number of cases in IOPS as well, if you're around a 50%-or-higher read workload. RAID 10 is only good for heavy sustained writes with applications that are sensitive to latency. Even the redundancy benefit of RAID 10 over RAID 6 isn't really a thing: RAID 10 runs the risk of losing a mirror pair, which destroys the array, whereas RAID 6 does not have that risk. Just be sensible with how many disks you put into a single RAID 6 drive group.

 

For software RAID, parity typically performs poorly and you need to stripe across many arrays/vdevs to get good performance out of such setups, and by that point you might as well be doing RAID 10 rather than RAID 60 and get better performance all round.


16 hours ago, leadeater said:

The 9361 isn't an HBA, it's a RAID controller; I have one btw and it's very good. Just note that the advice and general performance figures that get talked about are quite different for software RAID versus hardware RAID. With hardware RAID, parity configurations perform far better than they do in software, especially at lower drive counts.

Most people talk about software RAID nowadays, so be cautious about the advice you read and are given.

I've got 4x 10k RPM SAS disks in RAID 5 and it'll do over 1GB/s sequential without much issue; worst case, if you fill up the write cache, is ~300 MB/s write. RAID 6 performance is very similar, just slightly slower writes, mostly due to having one less write spindle than RAID 5.

On hardware RAID, parity will outperform RAID 10 in sequential reads and writes, and in a fair number of cases in IOPS as well, if you're around a 50%-or-higher read workload. RAID 10 is only good for heavy sustained writes with applications that are sensitive to latency. Even the redundancy benefit of RAID 10 over RAID 6 isn't really a thing: RAID 10 runs the risk of losing a mirror pair, which destroys the array, whereas RAID 6 does not have that risk. Just be sensible with how many disks you put into a single RAID 6 drive group.

For software RAID, parity typically performs poorly and you need to stripe across many arrays/vdevs to get good performance out of such setups, and by that point you might as well be doing RAID 10 rather than RAID 60 and get better performance all round.

I appreciate the insight; it's nice to finally hear from someone with real experience. At the moment I'm experimenting with an LSI 9311 and 4x 15k 450GB SAS-2 drives in RAID 10 to review performance as an OS volume, and so far it's going pretty well. But as I read more about the stability of RAID 6, the more it makes sense to scrap RAID volumes for OS storage in favor of a larger, more advanced drive cluster for long-term data storage... besides, an OS only needs a small SSD to run on, and that can easily be backed up to a decent-sized drive array for a quick restore if the single SSD does fail. My plan is to assemble a RAID 6 array with an LSI 9361-8i using 8x 6TB drives in the following format: 6 drives in active RAID 6, with the space of 4 of them usable as 24TB, and the last 2 drives as hot spares so drive failures can be dealt with quickly and automatically, potentially reducing the apparently long rebuild times of RAID 6.
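As a quick sanity check of that layout's capacity math, here is a small Python sketch (the drive count and sizes come from the plan above; the raid6_usable_tb helper is just for illustration, and the controller will also reserve a little per disk for RAID metadata, as comes up later in the thread):

def raid6_usable_tb(active_drives: int, drive_tb: float) -> float:
    """RAID 6 usable capacity: two drives' worth of space goes to parity."""
    if active_drives < 4:
        raise ValueError("RAID 6 needs at least 4 drives")
    return (active_drives - 2) * drive_tb

total_drives, hot_spares, drive_tb = 8, 2, 6.0
active = total_drives - hot_spares
usable = raid6_usable_tb(active, drive_tb)

# Advertised (decimal) TB vs what the OS will report in TiB.
print(f"{active} active drives in RAID 6 -> {usable:.0f} TB usable "
      f"(~{usable * 10**12 / 2**40:.1f} TiB as the OS reports it), "
      f"{hot_spares} hot spares on standby")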

 

I'm sure the answer to my next question is fairly obvious, but despite the conflicting claims about how such drive arrays behave, does the config I describe stand to gain significant sequential performance over RAID 10, or is it just extra redundancy like everyone says?

 

Also, since you sound like you have a lot of experience with RAID arrays, tell me more about the known behavior of HBAs and RAID cards when they rebuild arrays: in any RAID 1, 5, 6, or 10 array with hot-spare drives, what does the host controller do when the failed drive is finally physically replaced, and what happens to the hot spare(s)? Will the hot spare(s) still be in use, or will the data be moved back to the correct drive as the array was configured, leaving the spare drive(s) blank and idle, ready for use again?

 

And what happens if a hot spare that's in use fails, but another is available? Will it be used immediately by default?


2 hours ago, Phas3L0ck said:

At the moment I'm experimenting with an LSI 9311 and 4x 15k 450GB SAS-2 drives in RAID 10 to review performance as an OS volume, and so far it's going pretty well. But as I read more about the stability of RAID 6, the more it makes sense to scrap RAID volumes for OS storage in favor of a larger, more advanced drive cluster for long-term data storage... besides, an OS only needs a small SSD to run on, and that can easily be backed up to a decent-sized drive array for a quick restore if the single SSD does fail.

All our servers at work use hardware RAID 1 mirrors of SSDs for the OS; SSD failures are very uncommon. Before we switched to SSD mirrors we used 10k RPM SAS. The 15k RPM failure rate is much higher than 10k, which doesn't make them a good choice for an OS volume or for 2-disk arrays.

 

I'm always in favor of dedicated storage/disks for OS and data; management-wise it's just easier.

 

2 hours ago, Phas3L0ck said:

My plan is to assemble a RAID 6 array with an LSI 9361-8i using 8x 6TB drives in the following format: 6 drives in active RAID 6, with the space of 4 of them usable as 24TB, and the last 2 drives as hot spares so drive failures can be dealt with quickly and automatically, potentially reducing the apparently long rebuild times of RAID 6.

Sounds like a good plan to me. If I were tight on space I would drop down to a single hot spare, but unless you actually need the space there's no need to do that. I also tend to favor more, smaller disks when I can because of rebuild times, plus you get the added bonus of better performance.

 

2 hours ago, Phas3L0ck said:

I'm sure the answer to my next question is fairly obvious, but despite the conflicting claims about how such drive arrays behave, does the config I describe stand to gain significant sequential performance over RAID 10, or is it just extra redundancy like everyone says?

A bit of a hard one, simply because of how good LSI RAID cards are with their cache modules: you'd stand to gain 200MB/s at most. Realistically both configurations are going to perform similarly because the write-back cache will do all the work, so the biggest gains are really only in sequential reads. You lose a bit of performance in small-block write I/O at higher queue depths.

 

@scottyseng you still got those benchmark results from when you switched from RAID 10 to RAID 6?

 

For something like ZFS or Storage Spaces the performance gain of RAID 10 over RAID 6 is much larger, Storage Spaces especially so, in all performance areas.

 

Personally I don't like to talk up the extra redundancy that RAID 10 in theory gives, because in theory the array can be lost with as few as 2 disks failing. Cascade failures are also more likely to happen during a rebuild, and if the mirror partner disk fails during the rebuild then you're dead. That isn't all that unlikely either, since both disks were most likely put into the array at the same time and come from the same production batch. Before I allow a rebuild to start I run a backup first; that shouldn't take long if you're doing regular incremental backups every day. Safety first, right?

 

Basically, RAID 6 can survive any 2 disks failing, with data loss at the 3rd; RAID 10 can lose anywhere between 1 disk and half the disks in the array before data loss, with a minimum of 2 failures able to cause data loss. So RAID 6 has a minimum of 3 and RAID 10 a minimum of 2, which is why, unless there is a performance requirement for RAID 10, I always pick RAID 6. You should never be in a situation with 2 failed disks and no rebuild already running.
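To put rough numbers on that, here is a small Python enumeration; the 8-disk count and the fixed RAID 10 mirror pairing are assumptions, purely for illustration:

from itertools import combinations

# Which two-disk failures are fatal on an 8-disk array?
# RAID 6: any two failures are survivable (a third is needed for data loss).
# RAID 10 (stripe of 4 mirror pairs): losing both disks of one pair is fatal.

disks = range(8)
mirror_pairs = [(0, 1), (2, 3), (4, 5), (6, 7)]   # assumed pairing

def raid10_fatal(failed):
    return any(a in failed and b in failed for a, b in mirror_pairs)

two_disk_failures = list(combinations(disks, 2))
fatal = [f for f in two_disk_failures if raid10_fatal(set(f))]

print(f"{len(two_disk_failures)} possible 2-disk failure combinations")
print(f"RAID 10: {len(fatal)} fatal ({len(fatal) / len(two_disk_failures):.0%})")
print("RAID 6 : 0 fatal (any 2 failures survived, data loss needs a 3rd)")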

 

2 hours ago, Phas3L0ck said:

Also, since you sound like you have a lot of experience with RAID arrays, tell me more about the known behavior of HBAs and RAID cards when they rebuild arrays: in any RAID 1, 5, 6, or 10 array with hot-spare drives, what does the host controller do when the failed drive is finally physically replaced, and what happens to the hot spare(s)? Will the hot spare(s) still be in use, or will the data be moved back to the correct drive as the array was configured, leaving the spare drive(s) blank and idle, ready for use again?

HBAs don't really do anything other than provide disk ports, so any disks configured in an array are handled at the OS layer, e.g. ZFS. You'd have to check for the specific one, because how those software options handle rebuilds varies between them. Good ones actually distribute data across the disks differently to traditional hardware RAID, which allows a parallel rebuild of the data from all the other disks in the array and makes it much faster and less stressful on any single disk.

(Similar to this, though not exactly, but it does explain the concept of how, if you change the way data is placed on disks, rebuilds can be faster and span multiple disks: https://www.ibm.com/support/knowledgecenter/STHGUJ_7.7.1/com.ibm.storwize.tb5.771.doc/svc_distributedRAID.html)

 

For hardware RAID, a mirror rebuild is always taken from the partner disk, and nested arrays don't change this. So for RAID 10, which is just a stripe of mirrors, each rebuild is isolated to its mirror drive group; multiple failures across multiple drive groups are each their own rebuild, independent of the other drive groups in the stripe. Parity arrays go through the process of recalculating the data bits from the parity bits spread across the disks in the array to recreate the data that should be on the failed disk, which means all write operations during the rebuild go to the single new disk being rebuilt (slooooow).
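The parity-rebuild idea is easiest to see with single (RAID 5-style) XOR parity; this toy Python sketch leaves out the second Reed-Solomon syndrome that RAID 6 adds, so treat it as a concept demo only:

# A lost chunk is the XOR of everything that survives (parity included).
# A real rebuild streams this calculation across the whole disk, which is
# why every rebuild write lands on the one replacement disk and is slow.

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"disk0data", b"disk1data", b"disk2data", b"disk3data"]
parity = xor_blocks(data)                 # written whenever the stripe is written

lost = 2                                  # pretend disk 2 failed
survivors = [d for i, d in enumerate(data) if i != lost] + [parity]
rebuilt = xor_blocks(survivors)

assert rebuilt == data[lost]
print("Rebuilt contents of failed disk:", rebuilt)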

 

2 hours ago, Phas3L0ck said:

And what happens if a hot spare that's in use fails, but another is available? Will it be used immediately by default?

Once a hot spare is used to replace a failed disk, it is no longer marked as a hot spare and becomes a member disk of the drive group the failed disk came from. If it were then to fail, the RAID card will just look for any available hot spare, make it a member of the drive group and start a rebuild. If there are no hot spares left, the array stays in a degraded state until you manually do something about it. The array is also in a degraded state during a rebuild.

 

If the disk that fails is currently a hot spare, it's just marked as failed and is no longer a hot spare, so the number of hot spares drops by 1.


9 minutes ago, leadeater said:

you still got those benchmark results from when you switched from RAID 10 to RAID 6?

[benchmark screenshot]

 

Eight 4TB WD Re (7,200 RPM) drives in RAID 6.

 

Very soon I'm about to get four 8TB drives (maybe six) in RAID 6.

 

Ah, I have the previous-gen LSI 9260-8i. No issues at all, except expect the lifespan of the CacheVault battery to be around 2-3 years.

 

I have no idea how you guys keep your sanity with the 10k and 15k drives...haha.

 

...Oh, I just realized the original post was from October.


@leadeater Oh, I just realized you were asking for a comparison between RAID 6 and RAID 10... Sadly I deleted those results, since it's been a long time since I had RAID 10. I do remember a RAID 10 of four 4TB WD Reds getting 350 MB/s sequential and a RAID 6 of the same four drives getting 450 MB/s. RAID 10 did have better random I/O, but realistically an SSD would be ideal for OS work, as mentioned above.

 

I also have the result of my six 4TB WD Reds in RAID 6 (if that helps any):
[benchmark screenshot]

 

And for kicks, I ran my other RAID 6 array (10x 4TB WD Re SAS), but it's only got 200 GB free at 99% usage... I have to say that onboard cache does miracles:

[benchmark screenshot]

 

Oh, you can also dramatically reduce rebuild / RAID array initialization time by setting the background task utilization from the default of 5% to 100%. It pissed me off because when I first made my RAID 10 array, the initialization took 7 days running 24/7, and on the last day the power went out and I had to start over... I found that setting, changed it to 100%, and boom, a 7-hour initialization. Do note that putting it at 100% will bog down the speed of the arrays on the card, since the controller is focusing more on rebuilding / initialization (not that I've seen much slowdown myself personally).

@Phas3L0ck
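For what it's worth, those two data points are roughly consistent with simple linear scaling of the background task utilization; a back-of-the-envelope check in Python, as a ballpark only:

# scottyseng's initialization times, scaled linearly with the task rate.
hours_at_5_percent = 7 * 24                       # ~7 days reported at the default 5%
estimate_at_100 = hours_at_5_percent * (5 / 100)  # ~8.4 h, close to the ~7 h observed
print(f"{hours_at_5_percent} h at 5% -> ~{estimate_at_100:.1f} h expected at 100%")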


12 minutes ago, scottyseng said:

And for kicks, I ran my other RAID 6 array (10x 4TB WD Re SAS), but it's only got 200 GB free at 99% usage... I have to say that onboard cache does miracles:

[benchmark screenshot]

Oh, you can also dramatically reduce rebuild / RAID array initialization time by setting the background task utilization from the default of 5% to 100%. It pissed me off because when I first made my RAID 10 array, the initialization took 7 days running 24/7, and on the last day the power went out and I had to start over... I found that setting, changed it to 100%, and boom, a 7-hour initialization. Do note that putting it at 100% will bog down the speed of the arrays on the card, since the controller is focusing more on rebuilding / initialization (not that I've seen much slowdown myself personally).

@Phas3L0ck

Wait, we're still talking hardware RAID, right? What "background task utilization" setting is there? Is it in the LSI configuration software?

 

On a side note, has anyone else noticed strange capacity shortfalls in their new RAID arrays?

For example, my 4x 450GB RAID 10 array on an LSI 9311 should be 900GB, but it's short by 2GB and the volume stands at 898GB; the card seems to have taken 1GB of usable space away from each disk... (they showed full capacity outside of RAID). Does this apply to RAID 6 as well, or is it specific to RAID 10?


Just now, Phas3L0ck said:

Wait, we're still talking hardware RAID, right? What "background task utilization" setting is there? Is it in the LSI configuration software?

On a side note, has anyone else noticed strange capacity shortfalls in their new RAID arrays?

For example, my 4x 450GB RAID 10 array on an LSI 9311 should be 900GB, but it's short by 2GB and the volume stands at 898GB; the card seems to have taken 1GB of usable space away from each disk... (they showed full capacity outside of RAID). Does this apply to RAID 6 as well, or is it specific to RAID 10?

Yes, hardware RAID. I have an LSI 9260-8i. Yep, it's in MegaRAID Storage Manager (which you should totally get if you don't have it already):

[MegaRAID Storage Manager screenshots]

 

As for the capacity difference, I haven't really paid attention to that, sadly. My 4TB drives have always shown as 3.6-ish TB, so I've just gotten used to that.


1 minute ago, scottyseng said:

Yes, hardware RAID. I have an LSI 9260-8i. Yep, it's in MegaRAID Storage Manager (which you should totally get if you don't have it already):

[MegaRAID Storage Manager screenshots]

As for the capacity difference, I haven't really paid attention to that, sadly. My 4TB drives have always shown as 3.6-ish TB, so I've just gotten used to that.

Thanks, this is helpful. People like you should get paid for answering highly technical questions on forums.

 

And I don't use the $h1tty, inaccurate binary measurement that Windows does, as reading capacity in binary is LONG obsolete! I prefer HexDec, ALWAYS. Hex measurement is a hell of a lot more accurate, providing the actual physical volume down to the last byte. (And speaking of which, what's up with networking and the whole bits-and-bytes confusion? Just use bytes already, am I right!?)


2 hours ago, Phas3L0ck said:

For example, my 4x 450GB RAID 10 array on an LSI 9311 should be 900GB, but it's short by 2GB and the volume stands at 898GB; the card seems to have taken 1GB of usable space away from each disk... (they showed full capacity outside of RAID). Does this apply to RAID 6 as well, or is it specific to RAID 10?

This is because there is a difference between GB and GiB: HDD specs are in GB while Windows and other applications use GiB. Basically base-10 vs base-2 math; HDDs are not the size you think they are. SSDs on the other hand are spec'd in GiB, or base 2, and match the OS, so a 250GB SSD is in fact actually 250GB in common PC speak.

https://www.linkedin.com/pulse/20140814132922-176099595-mb-vs-mib-gb-vs-gib

https://en.wikipedia.org/wiki/Gibibyte

 

Base 2 is the correct form for computers, as computers are at their most basic binary: a transistor is on or off, a bit is 0 or 1, and so on. Using base-10 math for computers really isn't correct at all; that's separate from whether you display values in hex or decimal.

 

HDDs are the only ones that use base 10, by the way; everyone else uses base 2. HDD manufacturers need to be slapped for their confusing stupidity.
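The arithmetic behind the discrepancy, as a quick Python sketch using drive sizes mentioned in this thread (the reported helper is just for illustration):

# Advertised (decimal) drive sizes vs what the OS reports in binary units.

def reported(size_bytes: int):
    """Return (GiB, TiB) for a drive advertised with a decimal byte count."""
    return size_bytes / 2**30, size_bytes / 2**40

for label, size in [("450 GB SAS drive", 450 * 10**9),
                    ("4 TB WD Re", 4 * 10**12)]:
    gib, tib = reported(size)
    print(f"{label}: {gib:,.1f} GiB ({tib:.3f} TiB) as the OS sees it")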


9 hours ago, leadeater said:

This is because there is a difference between GB and GiB: HDD specs are in GB while Windows and other applications use GiB. Basically base-10 vs base-2 math; HDDs are not the size you think they are. SSDs on the other hand are spec'd in GiB, or base 2, and match the OS, so a 250GB SSD is in fact actually 250GB in common PC speak.

https://www.linkedin.com/pulse/20140814132922-176099595-mb-vs-mib-gb-vs-gib

https://en.wikipedia.org/wiki/Gibibyte

Base 2 is the correct form for computers, as computers are at their most basic binary: a transistor is on or off, a bit is 0 or 1, and so on. Using base-10 math for computers really isn't correct at all; that's separate from whether you display values in hex or decimal.

HDDs are the only ones that use base 10, by the way; everyone else uses base 2. HDD manufacturers need to be slapped for their confusing stupidity.

Sorry friend, but you're EXTREMELY WRONG in this case. I only use hex measurements, NOT binary. My 450GB drives are in fact 450.001GB in actual byte size outside of RAID. Then the controller seems to take 1.05GB away from each drive, making the final volume exactly 897.9GB in pure byte form. And I don't mean to sound rude, because I am still asking, but you didn't answer my question yet.

 

And FYI, every single drive I've ever had always showed the entire amount of space it advertised in total byte count. So if people would drop binary and use hex like they should, they'd get the actual size. This is worse than people arguing over Fahrenheit and Celsius; Fahrenheit IS MORE ACCURATE!


3 hours ago, Phas3L0ck said:

Sorry friend, but you're EXTREMELY WRONG in this case. I only use hex measurements, NOT binary. My 450GB drives are in fact 450.001GB in actual byte size outside of RAID. Then the controller seems to take 1.05GB away from each drive, making the final volume exactly 897.9GB in pure byte form. And I don't mean to sound rude, because I am still asking, but you didn't answer my question yet.

And FYI, every single drive I've ever had always showed the entire amount of space it advertised in total byte count. So if people would drop binary and use hex like they should, they'd get the actual size. This is worse than people arguing over Fahrenheit and Celsius; Fahrenheit IS MORE ACCURATE!

To answer the question then: that's the RAID header placed on each of the disks; that's how the controller knows which disks are in which arrays and what the configuration is. Every disk carries the full array header information; the controller doesn't actually store the array configuration itself, because that would be a design flaw. If the controller were to fail, you'd have no way to recover the array.

https://support.siliconmechanics.com/portal/kb/articles/clearing-raid-metadata

 

As for the other part, please never use base-10 units; for computers it's just not correct to do so. You can display whatever you like in hex, but the base of the math is what leads to problems. Did you even read any of the links? It is 100% true that a 1TB HDD is using base-10 notation, i.e. 1,000,000,000,000 bytes meaning 1TB, but this is not the correct way to do it. That's why in @scottyseng's pictures his 4TB HDDs show as 3.639TB (TiB): literally everything else uses base-2 math other than HDDs. They stand alone as the one thing using base-10 units; the memory industry uses base 2, networking base 2, operating systems base 2.

 

[attached screenshot]

1KB of memory is 1024 bytes

1KB/s network throughput is 1024 bytes per second

1KB file is 1024 bytes

1KB of HDD advertised size is 1000 bytes.

 

Anyone using base-10 units for computers is just doing it wrong, because computers work in base 2; everything works in base 2 except HDDs. Using hex actually has nothing to do with it: IPv6 addresses are displayed in hex but are still binary underneath. Outside of computers, sure, base 10 is fine and is what we're used to, but never bring it into the computer industry.

 

P.S. You're confused about accuracy; Celsius and Fahrenheit are equal in accuracy, no difference in that regard. As to which I prefer: Celsius, of course, because I live in a metric country and have never used Fahrenheit, ever.


39 minutes ago, leadeater said:

To answer the question then: that's the RAID header placed on each of the disks; that's how the controller knows which disks are in which arrays and what the configuration is. Every disk carries the full array header information; the controller doesn't actually store the array configuration itself, because that would be a design flaw. If the controller were to fail, you'd have no way to recover the array.

https://support.siliconmechanics.com/portal/kb/articles/clearing-raid-metadata

Thank you, that helps. I was wondering how that worked when I built the array, but there's not much out there detailing how hardware RAID stores the information identifying which drive holds what and where. For the first time in ages, I now have all I need, at least on this subject.

