
Software vs Hardware RAID?

Solved by Jarsky

4 minutes ago, Jarsky said:

I think the limiting thing of some of those old ServerRAID cards is that they don't have any sort of JBOD/IT mode, which is what he's getting at with the 'RAID1' thing. 

On some of those cards you have to make each disk its own 'RAID1' for the card to display the disks as volumes to the OS. 

Yeah, that's what I do with older cards that can't flash to IT mode.

 

Edit:

Cross flashing should be possible; it's just another LSI OEM card with an LSI 1068E controller on it.

https://forums.unraid.net/topic/12114-lsi-controller-fw-updates-irit-modes/


14 minutes ago, JoranC123 said:

I have a BR10i RAID controller but it only supports RAID 0, 1, 1E

Are you building a NAS?

Is it going to run FreeNAS?

Because in that case you don't need a hardware RAID controller, as ZFS takes care of that stuff.
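
For instance, a minimal sketch of what ZFS does in place of a RAID card (the pool name 'tank' and the device paths are placeholders, not anything from this thread):

  # Single-parity RAID-Z1 across three whole disks, roughly the RAID 5 equivalent
  zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd
  # Show pool layout and health
  zpool status tank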


13 minutes ago, leadeater said:

What OS are you planning on running?

 

Linux: Do it all in software RAID

Windows: Storage Spaces or hardware RAID (with either, avoid RAID 5/6 without a BBU or journal SSD)

ESXi: Hardware RAID is required.

RHEL, it's already running atm

 

10 minutes ago, Jarsky said:

I think the limiting thing of some of those old ServerRAID cards is that they don't have any sort of JBOD/IT mode, which is what he's getting at with the 'RAID1' thing. 

On some of those cards you have to make each disk its own 'RAID1' for the card to display the disks as volumes to the OS. 

Exactly, that's what's happening, so I think I'm gonna use mdadm for RAID


3 minutes ago, JoranC123 said:

RHEL, it's already running atm

 

Exactly, that's what's happening, so I think I'm gonna use mdadm for RAID

 

I'd use mdadm as well. It's mature and robust; I've never really had problems with MD. 

You can install Webmin and use the 'Linux RAID' and permissions configuration in there to build and configure your RAID and access. 
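
If you'd rather do it from the shell, a rough sketch of the equivalent mdadm steps (the device names, RAID level and array name below are placeholders, adjust for your disks):

  # Build a RAID 5 array from three empty disks (placeholders: sdb, sdc, sdd)
  sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
  # Watch the initial resync progress
  cat /proc/mdstat
  # Put a filesystem on the array
  sudo mkfs.ext4 /dev/md0
  # Record the array so it assembles at boot (RHEL keeps this in /etc/mdadm.conf)
  sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf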


8 hours ago, leadeater said:

Yeah, that's what I do with older cards that can't flash to IT mode.

 

Edit:

Cross flashing should be possible; it's just another LSI OEM card with an LSI 1068E controller on it.

https://forums.unraid.net/topic/12114-lsi-controller-fw-updates-irit-modes/

What would flashing do?

 

Give me RAID 5? Or direct disk access?


2 hours ago, leadeater said:

Probably not for mdadm but if you were going to use ZFS then I would say yes, just because ZFS actually needs proper direct disk access and hardware health monitoring.

I'll be using mdadm with Webmin then. Btw, one question though: what's RAID 0?


49 minutes ago, JoranC123 said:

I'll be using mdadm with Webmin then. Btw, one question though: what's RAID 0?

RAID 0 is a stripe with no redundancy. Some RAID cards will only allow single-disk arrays to be RAID 0 and some want them to be RAID 1; for a single-disk array there is no difference, so it doesn't matter for what you are doing.
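
For illustration, a two-disk stripe in mdadm terms (hypothetical devices, not part of your setup): capacity and throughput roughly add up, but there is no redundancy at all.

  # Stripe two disks together: ~2x capacity, but losing either disk loses the whole array
  sudo mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sde /dev/sdf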


16 hours ago, leadeater said:

Probably not for mdadm but if you were going to use ZFS then I would say yes, just because ZFS actually needs proper direct disk access and hardware health monitoring.

Actually, ZFS doesn't need direct disk access as such.

ZFS does not check SMART or similar. ZFS knows what to do if a disk fails (as does any RAID array), but additionally it can also catch 'silent errors' - like when a checksum mismatch occurs. But for all of that, ZFS does not need direct disk access.
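
For example (the pool name 'tank' is just a placeholder), this is how ZFS surfaces those checksum errors itself, with no raw disk access required:

  # Re-read all data in the pool and verify it against its checksums
  zpool scrub tank
  # Report per-device read/write/checksum error counters and any affected files
  zpool status -v tank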

Is it cleaner to have direct disk access? Yes.

Is there any significant problem if each disk is used as one logical volume within the RAID controller? No.

 

What you will miss is reading SMART from FreeNAS or similar, but for ZFS, no issues.

 

What you do want to avoid, though, is using RAID on the HW RAID controller itself, as ZFS handles errors better.


28 minutes ago, Nick7 said:

Is there any significant problem if each disk is used as one logical volume within the RAID controller? No.

 

What you will miss is reading SMART from FreeNAS or similar, but for ZFS, no issues.

 

What you do want to avoid, though, is using RAID on the HW RAID controller itself, as ZFS handles errors better.

True, ZFS itself does not use it, but the problem with single-disk RAIDs is that they remove hardware health information from the OS natively, so when you get a predictive failure alert it doesn't necessarily come through to the OS layer and warn you. This applies to any system implementing ZFS, not just FreeNAS, and these systems often have more disks than, say, an mdadm deployment would, and can have 3 controllers in them.
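
As a rough illustration of the difference (the device path and the megaraid,N addressing below are just examples; the exact -d type and whether it works at all depends on the controller and driver):

  # Disk on a plain HBA / IT-mode controller: SMART is visible directly
  sudo smartctl -a /dev/sda
  # Disk hidden behind a single-disk RAID volume: you have to ask the controller driver
  # for a specific physical slot, and not every card or driver supports this
  sudo smartctl -a -d megaraid,0 /dev/sda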

 

The more you abstract the hardware from the system handling the data, the more you increase the risk of underlying hardware faults going undetected, and so increase the risk of system failure/data loss.

 

That's why I say it matters for ZFS, because it is impacted by this even if the risk is low, so I wouldn't use anything other than an HBA, and those are extremely easy to come by cheaply.


That was my point too - ZFS does not differentiate between a physical disk, a logical volume on a RAID controller, or something else.

But for monitoring purposes and predictive actions by the OS and/or other software, having the physical disk exposed can help.

In small environments with just a few hard drives this is less of an issue; in larger environments it can be helpful.

 

I also agree that having as few extra layers as possible is the way to go, but depending on what you have for a 'home lab' or just testing, just go with what you have, as it'll simply work.


3 hours ago, leadeater said:

True, ZFS itself does not use it, but the problem with single-disk RAIDs is that they remove hardware health information from the OS natively, so when you get a predictive failure alert it doesn't necessarily come through to the OS layer and warn you. This applies to any system implementing ZFS, not just FreeNAS, and these systems often have more disks than, say, an mdadm deployment would, and can have 3 controllers in them.

 

The more you abstract the hardware from the system handling the data, the more you increase the risk of underlying hardware faults going undetected, and so increase the risk of system failure/data loss.

 

That's why I say it matters for ZFS, because it is impacted by this even if the risk is low, so I wouldn't use anything other than an HBA, and those are extremely easy to come by cheaply.

So is it difficult to flash?

 

And will it give me a higher maximum disk capacity?


18 hours ago, JoranC123 said:

So is it difficult to flash?

Usually simple; the older the card, the more difficult it gets, as the firmware is harder to come by.

 

18 hours ago, JoranC123 said:

And will it give me a higher maximum disk capacity?

No, that's a limit of the SAS controller chip on the card. Anything SAS1 (3Gb/s) is limited to 2TB; with newer SAS2 and higher there's no real limit you'll hit, as it's way higher.
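
For reference, that ~2TB figure is what you'd expect from the 32-bit LBA addressing commonly attributed to that SAS1 generation (a quick sanity check, purely illustrative):

  # 2^32 addressable sectors x 512 bytes per sector
  echo $(( 2**32 * 512 ))   # 2199023255552 bytes, i.e. 2 TiB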

Edited by leadeater

13 hours ago, leadeater said:

Usually simple; the older the card, the more difficult it gets, as the firmware is harder to come by.

 

No, that's a limit of the SAS controller chip on the card. Anything SAS1 (3Gb/s) is limited to 3TB; with newer SAS2 and higher there's no real limit you'll hit, as it's way higher.

I read somewhere that it supports up to 2TB, but you say 3TB? 

