RAID 5 Opinions

Solved by Windows7ge

Hello everyone, I was hoping to get some input on my Plex server. Right now my 7TB HDD pulls double duty for my backups and my video collection. I was thinking of moving the server to a RAID 5 array. Yes, I have done my research into it; the reason I am even considering it is because I was planning on opting for four 2TB SSDs for the array (the server is about 2.89TB on disk). With SSDs there are no UREs, and some will die in a way that lets you clone the drive before a reboot bricks it. Does anyone else think this may work fine for my use case? I was also thinking RAID 50, but I only want to have 4 drives so the array will fit in my case and not need any extra SATA ports.

 

I am also thinking I should get drives from different manufacturers so they fail at different times, instead of all the Samsung 870s dying together.

 

In case it matters, it's an ASRock AB350 Pro4 motherboard.


What's controlling the RAID? Hardware? Software? What OS?

 

RAID5 is deprecated, and unless you have a 10Gbit network with multiple clients, SSDs are kind of overkill.


It would just be software (Windows 10). I thought about hardware, but if the card dies, my array most likely will die with it.

 

Wow, I never even connected that this would also be going over the network. I was thinking of the performance I'd get when using Premiere on the array, but with the network being a 500MB connection, that kills the entire SSD idea.

 

In that case, what RAID would be good for 4 HDDs? Plus some software to monitor them. They need to be redundant; I would not be happy about losing these videos. In fact, do you have a recommended HDD for this as well?


15 minutes ago, Windows7ge said:

RAID5 is deprecated

Not even close.



9 minutes ago, TERINATOR461997 said:

It would just be software (Windows 10). I thought about hardware, but if the card dies, my array most likely will die with it.

 

Wow, I never even connected that this would also be going over the network. I was thinking of the performance I'd get when using Premiere on the array, but with the network being a 500MB connection, that kills the entire SSD idea.

 

In that case, what RAID would be good for 4 HDDs? Plus some software to monitor them. They need to be redundant; I would not be happy about losing these videos. In fact, do you have a recommended HDD for this as well?

If you planned on using Windows Storage Spaces, don't use parity (RAID5/RAID6); the performance will be terrible. Windows' ability to handle parity efficiently is really bad. If this was your plan, you'd want to use RAID10. To make up the difference you could use 4TB HDDs, so you'd have 8TB usable with 4 drives.
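
To sanity-check the capacity trade-off above, here's a minimal sketch in Python (the `usable_tb` helper name is mine, not from the thread) of usable capacity for the common RAID levels with equal-size drives:

```python
# Minimal sketch: usable capacity for common RAID levels, assuming
# equal-size drives. The drive counts/sizes are the ones from this thread.

def usable_tb(level: str, drives: int, size_tb: float) -> float:
    """Usable capacity in TB for an array of `drives` disks of `size_tb` each."""
    if level == "RAID0":
        return drives * size_tb              # striping, no redundancy
    if level in ("RAID1", "RAID10"):
        return drives * size_tb / 2          # half the drives hold mirror copies
    if level == "RAID5":
        return (drives - 1) * size_tb        # one drive's worth of parity
    if level == "RAID6":
        return (drives - 2) * size_tb        # two drives' worth of parity
    raise ValueError(f"unknown level: {level}")

print(usable_tb("RAID10", 4, 4))  # 8.0 -> the 8TB usable mentioned above
print(usable_tb("RAID5", 4, 2))   # 6.0 -> the original 4x2TB SSD plan
```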

 

Seagate Ironwolf/Ironwolf Pro or WD Reds/Red Pro would be the route I'd go personally.

 

6 minutes ago, WereCatf said:

Not even close.

I'm not interested in starting a debate on something like this. We will have to agree to disagree.


30 minutes ago, TERINATOR461997 said:

With SSDs there are no UREs,

Umm, there are errors on SSDs.

 

I'd just get HDDs here; they're more than fast enough for video and much cheaper.

 

10 minutes ago, TERINATOR461997 said:

 

Wow, I never even connected that this would also be going over the network. I was thinking of the performance I'd get when using Premiere on the array, but with the network being a 500MB connection, that kills the entire SSD idea.

500MB connection? There aren't any links of that speed.

 

You're probably using a 1-gigabit home network.

 

10 minutes ago, TERINATOR461997 said:

In that case, what RAID would be good for 4 HDDs? Plus some software to monitor them. They need to be redundant; I would not be happy about losing these videos. In fact, do you have a recommended HDD for this as well?

Probably RAID 5 for 4 HDDs, maybe RAID 10.

 

I'd go Red/IronWolf, or get super cheap external HDDs.

 

Keep backups too.


4 minutes ago, Electronics Wizardy said:

Umm, there are errors on SSDs.

 

500MB connection? There aren't any links of that speed.

Can I get your opinion on this article then?

 

https://www.kjctech.net/why-raid-5-is-ok-on-ssd-drives/

 

 

 

500MB is the max I ever see on this PC; I don't like to say my connection is 1GB if I only get 489MB. Just like I hate the hype behind 5G: it's not going to be the savior for the internet that people are making it out to be.


8 minutes ago, TERINATOR461997 said:

Can I get your opinion on this article then?

 

https://www.kjctech.net/why-raid-5-is-ok-on-ssd-drives/

 

 

 

500MB is the max I ever see on this PC; I don't like to say my connection is 1GB if I only get 489MB. Just like I hate the hype behind 5G: it's not going to be the savior for the internet that people are making it out to be.

He's referring to the link speed of your NIC (Network Interface Card). Like I was saying, unless you have a 10Gbit NIC (~1.25GB/s theoretical), the SSDs are not really worth the cost if they're just serving a local 1Gbit (~115MB/s) network connection.

 

Are you thinking of an ISP-provided link? We're talking about speeds local to your private network.
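
For anyone following the units here, a quick sketch of the bits-vs-bytes conversion being discussed (network links are rated in bits per second, while file transfers show bytes per second; the `link_speed_mb_s` helper name is mine):

```python
# Rough conversion from a link's rated speed (Gbit/s) to its theoretical
# best-case transfer rate (MB/s). Real throughput lands a bit lower due to
# protocol overhead -- hence the ~115MB/s commonly seen on gigabit.

def link_speed_mb_s(gbit_per_s: float) -> float:
    return gbit_per_s * 1000 / 8  # Gbit -> Mbit, then 8 bits per byte

print(link_speed_mb_s(1))   # 125.0 MB/s theoretical for 1Gbit
print(link_speed_mb_s(10))  # 1250.0 MB/s theoretical for 10Gbit
```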


Just now, TERINATOR461997 said:

Just like I hate the hype behind 5G: it's not going to be the savior for the internet that people are making it out to be.

This isn't related to 5G at all here; this will all be on your local network.

 

1 minute ago, TERINATOR461997 said:

500MB is the max I ever see on this PC; I don't like to say my connection is 1GB if I only get 489MB.

How are you measuring this speed? Try using iperf.

 

1 minute ago, TERINATOR461997 said:

Can I get your opinion on this article then?

 

https://www.kjctech.net/why-raid-5-is-ok-on-ssd-drives/

 

Well, it's one random guy's blog post, so there's not really much behind it.

 

Parity RAID on SSDs has the issue of extra writes caused by the parity blocks, and if you're going SSDs you might as well go RAID 10 for the performance. It really depends on the use case, but for this use case just get HDDs.


Thank you everyone for the help. iperf gave a 973MB down reading; I'm willing to chalk that up to the board. I'll be using RAID 10 with four 4TB HDDs. Since I'm still inexperienced with RAID, I'll be going with the advice here. Thank you!

 

One other question though, how would I swap the drives for 6TBs when the time comes to replace/upgrade the drives?


3 minutes ago, TERINATOR461997 said:

One other question though, how would I swap the drives for 6TBs when the time comes to replace/upgrade the drives?

To my knowledge Storage Spaces allows mixing drive sizes, so if you can't have both arrays connected at once you should be able to disconnect a 4TB, insert a 6TB, rebuild the array, disconnect a 4TB, insert a 6TB, rebuild the array, rinse, repeat. Let's have @Electronics Wizardy make sure I'm not wrong about this though.

 

When you're done you may have to expand the volume then extend the partition (might have said that backwards...), but those are not too difficult to perform. Then the capacity you added will be usable.


Maybe I'm old school, but I would always opt for hardware RAID over software. There's nothing wrong with using RAID5. It's still very commonly used in the corporate world.


5 hours ago, Windows7ge said:

To my knowledge Storage Spaces allows the mixing of drive sizes so if you can't have both arrays connected at once you should be able to disconnect a 4TB, insert a 6TB rebuild the array, disconnect a 4TB, insert a 6TB rebuild the array, rinse repeat. Let's have @Electronics Wizardy make sure I'm not wrong about this though.

 

When you're done you may have to expand the volume then extend the partition (might have said that backwards...) but those are not too difficult to perform. Then the capacity you added will be usable.

Yeah, that's basically it: offline the drive, replace the drive, add the new drive, expand the volume.

 

For just videos I'd go with columns set to 1 so upgrading is easier.


6 hours ago, DeaconFrost said:

There's nothing wrong with using RAID5.  It's still very commonly used in the corporate world.

I can't say there's anything seriously wrong with using RAID5, but the reason I don't recommend it is the potential long-term ramifications. Short term it's fine, but in the long haul it could screw you. If you bought all the same drives at the same time (as most people do, with the exception of data centers and the like), let's say the RAID5 holds strong for 4, 5, 6+ years. No problems. All of a sudden, finally, a drive dies. You go to replace it and start rebuilding the array. Now you have multiple aged disks running at 100%, scrambling to rebuild the array. You are now in a very fragile situation, as you no longer have any redundancy. Guess what: one more disk just failed while rebuilding. Now you have to dish out for a data recovery service because you didn't use dual parity.

 

You could make the argument of preventative maintenance, but believe me when I say a lot of small and even medium-sized businesses don't do this; don't even ask about people who do this at home. RAID6 protects these people from themselves. That's why I don't recommend RAID5.
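
The rebuild-risk argument above can be put into rough numbers. This is only a back-of-the-envelope sketch using the commonly quoted 1-per-1e14-bits URE spec for consumer HDDs; real drives often do considerably better than the spec sheet:

```python
# Back-of-the-envelope: probability of hitting at least one unrecoverable
# read error (URE) while rebuilding a degraded RAID5, treating every bit
# read as an independent trial at the rated error rate. This is worst-case
# spec-sheet math, not a measured failure rate.

def p_ure_during_rebuild(read_tb: float, ure_per_bits: float = 1e14) -> float:
    bits_read = read_tb * 8e12  # TB -> bits (decimal units)
    return 1 - (1 - 1 / ure_per_bits) ** bits_read

# Rebuilding a 4-drive RAID5 of 4TB disks reads the 3 surviving drives:
print(f"{p_ure_during_rebuild(3 * 4):.0%}")  # ~62% at the rated spec
```

RAID6 changes the picture because a URE hit during rebuild is still recoverable from the second parity, which is the "protects these people from themselves" point above.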


Anyone running RAID5 in the enterprise has warranties on those drives and at least one spare. That's standard practice. I have 4 drives running in RAID5 in a Synology NAS at home, and even I keep a cold spare on the shelf.

 

As for any RAID... it's not a backup. It never was, and never should be treated that way. If you do treat it that way, you've already set yourself up to fail.


On 6/28/2019 at 4:27 AM, Windows7ge said:

To my knowledge Storage Spaces allows the mixing of drive sizes so if you can't have both arrays connected at once you should be able to disconnect a 4TB, insert a 6TB rebuild the array, disconnect a 4TB, insert a 6TB rebuild the array, rinse repeat. Let's have @Electronics Wizardy make sure I'm not wrong about this though.

 

When you're done you may have to expand the volume then extend the partition (might have said that backwards...) but those are not too difficult to perform. Then the capacity you added will be usable.

The procedure is to add the new disk to the pool, then select the disk you want to take out and Retire it; Storage Spaces won't let you Retire a disk without a replacement to copy the data to. Storage Spaces just does not allow configurations and changes that put the system into a degraded resiliency state; only hardware failures can cause that.

 

This can be a bit of a pain when all your ports are used up and you want to swap a disk, etc. You can, but it's a right pain and a bunch of PowerShell annoyance to do it.


On 6/28/2019 at 1:21 PM, Windows7ge said:

I can't say there's anything seriously wrong with using RAID5 but the reason I don't recommend it is because of the potential long term ramifications. Short term it's fine but in the long haul it could screw you. If you bought all the same drives all at the same time (as most people do with the exception of data centers and the like) let's say the RAID5 holds strong for 4, 5, 6+ years. No problems. All of a sudden, finally, a drive dies. You go to replace it and start rebuilding the array. Now you have multiple aged disks running at 100% scrambling to rebuild the array. You are now in a very fragile situation as you no longer have any redundancy. Guess what. One more disk just failed while rebuilding. Now you have to dish out for a data recovery service because you didn't use 2-bit parity.

 

You could make the argument of preventative maintenance but believe me when I say a lot of small and even medium sized business don't do this, don't even ask about people who do this at home. RAID6 protects these people from themselves. That's why I don't recommend RAID5.

The reality of why RAID5 is deemed by people to no longer be usable comes down to disk sizes and rebuild times. Back when disks were typically no bigger than 600GB SAS and 2TB SATA/NL-SAS, rebuild times were quick and failures weren't an issue. The problems only started once 4TB disks became widely used, but it's really at 6TB and larger where it's a problem.
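
A rough sketch of the scaling being described: rebuild time grows linearly with disk size, and on RAID5 the array has zero redundancy for that entire window. The ~100MB/s sustained rebuild rate below is an assumption for illustration; real rates depend on the controller and concurrent workload:

```python
# Estimate how long a rebuild leaves the array exposed, assuming the
# replacement disk can be written at a sustained rate (an assumption;
# background I/O usually makes rebuilds slower than this).

def rebuild_hours(disk_tb: float, rate_mb_s: float = 100.0) -> float:
    return disk_tb * 1e6 / rate_mb_s / 3600  # TB -> MB, then seconds -> hours

for size_tb in (2, 6, 16):
    print(f"{size_tb}TB disk: ~{rebuild_hours(size_tb):.0f} hours")
# 2TB ~6h, 6TB ~17h, 16TB ~44h at the assumed rate
```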

 

It's also not a good idea, no matter the disk size, once you go over 16 disks. I'd even say 8, but it depends; a new clean array of 8 should be RAID6. In saying that, there have been multiple 24 x 1TB disk arrays I've had to look after that ran for 8 years totally fine. Law of averages: you should be fine, but who likes to play that game?

 

SSDs are a much different story, though. With any parity RAID I'd be questioning the reasoning altogether; you might as well just throw away the SSDs because you're not getting the performance. It takes an extremely good RAID(-like) and Copy-on-Write file system implementation to not waste all of that performance.

 

SSDs and parity RAID is basically this:

lol


1 hour ago, leadeater said:

It's also not a good idea no matter the disk size when you go over 16 disks, I'd say 8 but it depends but if it were a new clean array then 8 should be RAID6. But in saying that there's been multiple 24 1TB disk arrays I've had to look after that ran for 8 years totally fine. Laws of averages, you should be fine but who likes to play that game ?.

Yeah, personally I do not like RAID5s larger than ~6 disks. With the number of large clients I look after that have on-premises servers in branch locations, on average I log about one ticket a week with HP to replace a disk at one of the locations. Thankfully these are RAID6s, so it's not really a huge issue as long as we stay on top of them.

 



9 minutes ago, Jarsky said:

Yeah personally I do not like RAID5's larger than ~6 disks. With the number of large clients I look after that have on-premises servers in branch locations, on average I log about 1 ticket a week with HP to replace a disk on one of the locations. Thankfully these are RAID6's so its not really a huge issue as long as we stay on top of them.

At least it's not a cheap RAID card dying (3ware, yuck). HP support is mostly excellent in NZ, though there is one company in my area that they use as a fallback if everyone else is busy; they have been barred by us for being totally useless. When you can't put the air baffle back into a DL360/DL380 (he was putting it in upside down and backwards) and can't figure it out, it's time for you to leave and I'll do it.

 

Edit:

The second and final incident was a CPU replacement for a DL580.

