
SSD RAID

Hugh54321
Go to solution Solved by Nick7,
20 minutes ago, Hugh54321 said:

I just want to know if there are issues using SSDs in RAID, because I don't see any RAID-branded SSDs like the WD Red line. I will not be using a RAID controller; I will be using Linux software RAID.

To answer simply: yes, you can use them in RAID without any issues.

Yes, any drive can be used in RAID. Even dissimilar drives can be RAIDed together, though that is generally inadvisable.
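Since the plan is Linux software RAID, the usual tool is mdadm. Here's a minimal sketch of a two-SSD mirror - the device names /dev/sdb and /dev/sdc and the mount point are placeholders for your actual disks:

```shell
# Build a two-disk RAID 1 (mirror) out of the SSDs; run as root.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Put a filesystem on the array and mount it.
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/array

# Persist the array definition so it assembles at boot.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Watch the initial sync progress.
cat /proc/mdstat
```

Note that whether the config file lives at /etc/mdadm/mdadm.conf or /etc/mdadm.conf varies by distribution.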

Solve your own audio issues  |  First Steps with RPi 3  |  Humidity & Condensation  |  Sleep & Hibernation  |  Overclocking RAM  |  Making Backups  |  Displays  |  4K / 8K / 16K / etc.  |  Do I need 80+ Platinum?

If you can read this you're using the wrong theme.  You can change it at the bottom.


Yeah, no point really.

If you want more speed, just buy one NVMe drive.

More than you'll ever need.

NEW PC build: Blank Heaven   minimalist white and black PC     Old S340 build log "White Heaven"        The "LIGHTCANON" flashlight build log        Project AntiRoll (prototype)        Custom speaker project


Ryzen 3950X | AMD Vega Frontier Edition | ASUS X570 Pro WS | Corsair Vengeance LPX 64GB | NZXT H500 | Seasonic Prime Fanless TX-700 | Custom loop | Coolermaster SK630 White | Logitech MX Master 2S | Samsung 980 Pro 1TB + 970 Pro 512GB | Samsung 58" 4k TV | Scarlett 2i4 | 2x AT2020

 


2 minutes ago, Enderman said:

Yeah, no point really.

If you want more speed, just buy one NVMe drive.

More than you'll ever need.

redundancy?


8 minutes ago, Hugh54321 said:

redundancy?

Better to use one drive normally and another drive for regular backups.

 

Redundancy doesn't help when it's the raid controller that screws up.


 


1 hour ago, Enderman said:

Redundancy doesn't help when it's the raid controller that screws up.

When a RAID controller blows up, you pick another one and you're good to go.


When a RAID controller blows up, you lose your data, and either pay a lot of money, or cry a lot, or both.


 


If you're doing RAID in 2019 without Unraid, you're doing it wrong lol

My rig:

CPU: Ryzen 5 3600 3.6Ghz, OC'ed to 4.2Ghz all core @ 1.25v + Corsair H60 120mm AIO

MB: Gigabyte B450 I Aorus Pro WiFi

RAM: Kingston Fury Beast RGB 32GB (2x16GB) 3600mhz CL16 (1-to-1 Infinity Fabric enabled)

GPU: Gigabyte RTX 2080 Super

*bought for $200 CAD off a friend who needed an RTX 3080, price was my reward.

CASE: InWin A1 Plus in White with included 600W Gold SFX PSU and included custom-length cables

DISPLAY: 3x 20" AOC 1080p 60hz 4ms ,  32" RCA 1080p/60hz TV mounted above, all on a single arm.

 

Storage: C : 1TB WD Blue NVMe      D : 2TB Barracuda      E: 240GB Kingston V300 (scratch drive)

NAS: 240GB Kingston A400 + 6x 10+ year old 700GB Barracuda drives in my old FX8350+8GB DDR3 system

 

Logitech G15 1st Gen + Logitech G602 Wireless

Steam Controller +  Elite Series 2 controller + Logitech G29 Racing Wheel + Wingman Extreme Digital 3D Flight Stick

Sennheiser HD 4.40 Headphones + Pixel Buds 2 + Logitech Z213 2.1 Speakers

 

My Girlfriends Weeb-Ass Rig:

Razer Blade Pro 17 2020

10th Gen i7 10875H 8c/16t @5.1ghz 

17.3" 1080p 300Hz 100% sRGB, factory calibrated, 6mm bezel

RTX 2070 Max-Q 8GB

512GB generic NVMe

16GB (2x8GB) DDR4 3200Mhz

Wireless-AX201 (802.11a/b/g/n/ac/ax), Bluetooth® 5.1, 2.5Gbit Ethernet

70.5 Whr Battery

Razer Huntsman Quartz, Razer Basilisk Quartz, Razer Kraken Quartz Kitty Headphones

*deep breath*

Razer Raptor 27" monitor, IT'S BEAUTIFUL.


There are a number of points to consider when deciding whether to build a RAID array on SSDs:

First, remember that all drives fail eventually. Even though SSDs are significantly more reliable than HDDs, there’s always a chance that they could fail. And if they do, you could be looking at major downtime and/or data loss.

Next, consider what sort of environment you’ll be using the SSDs in. Are you going to be using them for testing or development purposes? If so, RAID may be an unnecessary encumbrance that drives up costs. Will you be using the drive or drives as part of a production server system? If so, the redundancy and security associated with RAID is more important.

Third, are rebuild times a consideration? Can you afford the downtime if you encounter an error on one of the drives in your array? Different RAID levels come with different rebuild times. The increased reliability and speed of SSDs means a compromise may be available in many scenarios: for example, you could leverage the speed and reliability of SSDs to implement a RAID-5 or RAID-6 solution. Even though rebuild times are longer when an error occurs in a RAID-5 or -6 array, the reliability of SSDs means the odds of a second failure occurring during a rebuild are incredibly low.
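That "incredibly low" claim can be sanity-checked with back-of-the-envelope arithmetic. The figures below - a 2% annual failure rate per drive, a 10-hour rebuild window, five surviving drives - are assumptions for illustration, not measurements:

```shell
awk 'BEGIN {
  afr   = 0.02   # assumed annual failure rate per drive
  hours = 10     # assumed rebuild window, in hours
  disks = 5      # surviving drives that must hold on during the rebuild

  p_one = afr * hours / 8760        # chance one drive fails inside the window
  p_any = 1 - (1 - p_one) ^ disks   # chance any surviving drive fails
  printf "second-failure odds during rebuild: %.4f%%\n", 100 * p_any
}'
```

With these inputs the result is on the order of a hundredth of a percent; the short rebuild windows of SSDs are exactly what keeps this number small.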

 

 


1 hour ago, Enderman said:

When raid controller blows up you lose your data and either pay a lot of money or cry a lot or both.

You apparently never used RAID controllers.

 

  1. El-cheapo, motherboard one - aka fakeraid controller:
    • You can migrate disks to another PC, even another generation of ICH controller, and disks&RAID are correctly identified.
  2. LSI RAID controller (similar for other makes):
    • If you connect disks to another controller, you can IMPORT them, thus restoring Logical Volumes on them that were previously in use.

In both cases, all your data is there. No tears, no extra money spent.


8 minutes ago, Nick7 said:

You apparently never used RAID controllers.

 

  1. El-cheapo, motherboard one - aka fakeraid controller:
    • You can migrate disks to another PC, even another generation of ICH controller, and disks&RAID are correctly identified.
  2. LSI RAID controller (similar for other makes):
    • If you connect disks to another controller, you can IMPORT them, thus restoring Logical Volumes on them that were previously in use.

In both cases, all your data is there. No tears, no extra money spent.

That assumes it just dies with no fuss.  If there is a fault that causes it to start corrupting data, it's a different story.  I think that's what was being referenced.



20 minutes ago, Ryan_Vickers said:

That assumes it just dies with no fuss.  If there is a fault that causes it to start corrupting data, it's a different story.  I think that's what was being referenced.

The same can be said for anything - the OS, software (including unRAID), the motherboard itself, a corrupt RAM module, etc.

This especially goes for LSI or similar 'enterprise grade' controllers - if they were so fragile, would they be used in servers and data centers all around the world?

How often have you heard of a RAID card corrupting data, versus OS issues or applications corrupting data (not to mention user error)?

 


5 minutes ago, Nick7 said:

The same can be said for anything - the OS, software (including unRAID), the motherboard itself, a corrupt RAM module, etc.

This especially goes for LSI or similar 'enterprise grade' controllers - if they were so fragile, would they be used in servers and data centers all around the world?

How often have you heard of a RAID card corrupting data, versus OS issues or applications corrupting data (not to mention user error)?

All of this raises a good point actually, and brings it back to the original question of whether RAID is really the right choice in the first place.  All those other hardware components, software bugs, and (perhaps most of all) user error can all cause issues.  If you've got unlimited drives to make a fantastically reliable system, that's one thing, but if you can only have two drives, running a "main" drive and a backup drive is, I think, much better for reliability and protection against all these potential issues (and more) than running the two drives in RAID 1.



2 minutes ago, Ryan_Vickers said:

All of this raises a good point actually, and brings it back to the original question of whether RAID is really the right choice in the first place.  All those other hardware components, software bugs, and (perhaps most of all) user error can all cause issues.  If you've got unlimited drives to make a fantastically reliable system, that's one thing, but if you can only have two drives, running a "main" drive and a backup drive is, I think, much better for reliability and protection against all these potential issues (and more) than running the two drives in RAID 1.

Oh, I completely agree! If you have only two drives, the second one should be for backup, not redundancy!

Redundancy is not backup.

But this discussion was about redundancy, and my comment was specifically about the claim that RAID cards are somehow dangerous.


5 minutes ago, Nick7 said:

Oh, I completely agree! If you have only two drives, the second one should be for backup, not redundancy!

Redundancy is not backup.

But this discussion was about redundancy, and my comment was specifically about the claim that RAID cards are somehow dangerous.

With regard to that last part, would you say the situation changes at all if it was RAID 0 vs RAID 1?  In other words, would the recovery methods for the different types of controllers you mentioned still apply, or is that a very different situation?  I ask because, for me at least, this fear of controller issues stems mainly from seeing people's RAID 0 arrays die not because one of the drives failed but because the RAID system itself broke down, and then carrying that impression across to all RAID setups without considering whether it actually applies.



56 minutes ago, Ryan_Vickers said:

With regard to that last part, would you say the situation changes at all if it was RAID 0 vs RAID 1?  In other words, would the recovery methods for the different types of controllers you mentioned still apply, or is that a very different situation?  I ask because, for me at least, this fear of controller issues stems mainly from seeing people's RAID 0 arrays die not because one of the drives failed but because the RAID system itself broke down, and then carrying that impression across to all RAID setups without considering whether it actually applies.

The situation does not change with regard to how RAID controllers handle disks and Virtual Drives, regardless of whether it's RAID 0/1/5/6.

A RAID controller writes metadata on each disk recording which RAID group the disk belongs to, its Logical Volumes, and so on, so the Logical Volumes can be imported on another RAID card.

This is even true for fakeraid controllers - I tested it with ICHxR. Quite some time ago I had a RAID 0 of 3 disks with one SSD as an Intel caching device (Intel RST). When I bought a new motherboard, I re-attached my drives to it (the disks do not need to be attached in any particular order), and everything worked as before.

Proper RAID controllers create Logical Volumes (so there is more granularity), but as I said, their metadata is written on the disks themselves.
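Linux software RAID, which the OP plans to use, works the same way: mdadm writes a superblock on every member disk, so an array can be reassembled on a different machine. A sketch, where /dev/sdb stands in for any member disk:

```shell
# Dump the md superblock stored on a member disk: array UUID, level, this disk's role.
mdadm --examine /dev/sdb

# On the new machine, scan all disks and reassemble whatever arrays are found.
mdadm --assemble --scan

# Verify the array came back.
cat /proc/mdstat
```

As with hardware controllers, the disks do not need to be attached in any particular order; the superblocks carry enough information to reconstruct the array.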

The RAID 0 errors you mention sound more like user errors, or OS or software errors.

I did, though, have a RAID 0 fail many years ago - I think it was on Windows 2000, where a software RAID 0 failed in a way that it was no longer recognized. But that was 15+ years ago.

And lastly - when you run RAID 0, be aware that your chances of the array failing compound with every disk you add, since a single disk failure takes out the whole array.

RAID 0 is fine to use as scratch space or other temporary storage where performance is needed. Just never expect it to be 100% safe.
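The compounding is easy to see: a striped array survives only if every member survives, so the array's survival probability is the per-disk probability raised to the number of disks. A quick sketch, using an assumed 97% per-disk annual survival rate purely for illustration:

```shell
# RAID 0 survival = per-disk survival raised to the number of disks,
# because the loss of any single member loses the whole array.
awk 'BEGIN {
  p = 0.97   # assumed per-disk annual survival probability
  for (n = 1; n <= 6; n++)
    printf "%d disks: %.1f%% array survival\n", n, 100 * p ^ n
}'
```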


 

2 hours ago, Nick7 said:

Oh, I completely agree! If you have only two drives, the second one should be for backup, not redundancy!

Redundancy is not backup.

But this discussion was about redundancy, and my comment was specifically about the claim that RAID cards are somehow dangerous.

I have five-year-old WD Red drives. Pretty sure they will die soon. I have three newish SSDs (same size, different brands) that I am using as external drives. I only really need one external. The other two I can drop into the server without any hassle. I just want to know if there are issues using SSDs in RAID, because I don't see any RAID-branded SSDs like the WD Red line. I will not be using a RAID controller; I will be using Linux software RAID.



20 minutes ago, Hugh54321 said:

I just want to know if there are issues using SSDs in RAID, because I don't see any RAID-branded SSDs like the WD Red line. I will not be using a RAID controller; I will be using Linux software RAID.

To answer simply: yes, you can use them in RAID without any issues.


8 hours ago, Nick7 said:

You apparently never used RAID controllers.

 

  1. El-cheapo, motherboard one - aka fakeraid controller:
    • You can migrate disks to another PC, even another generation of ICH controller, and disks&RAID are correctly identified.
  2. LSI RAID controller (similar for other makes):
    • If you connect disks to another controller, you can IMPORT them, thus restoring Logical Volumes on them that were previously in use.

In both cases, all your data is there. No tears, no extra money spent.

 


 


13 hours ago, Enderman said:

'Whonnock server died'

So, this is your argument?

Watch the video, first few minutes.

The motherboard was spontaneously shutting off. He powers it on, then it goes off again. Repeatedly.

You have important data, yet you keep running faulty hardware.

In the end, the faulty motherboard caused the RAID card to fail too.

And you blame the RAID card for the (possible) loss of data?

 

This is completely user error. Something you do not do when you have sensitive data.

 

Second thing: no backup? Seriously? Your most important data, and no automatic (incremental) backups?

 

3rd thing: I suppose EMC, IBM, HP, NetApp, etc... are all dummies for using HW based RAID controllers in their storage systems?

 

You believe SDS (software-defined storage) is more reliable?

 

PS: For *any* kind of filesystem, RAID card, software RAID, etc... you *will* find failures. This includes even high-end storage systems. Backups. Backups. Backups.

