Software vs. BIOS RAID..... Fight!

So my friend sold me a Dell PowerEdge T710 server yesterday for a hell of a price. I've been doing some testing with it, and I was wondering what other people's thoughts were on the subject; since I haven't really seen anyone else post something like this, I figured I would. I plan to use this server as a media server and a NAS that will be lightly used. I currently use an old HP computer to host my NAS/media server and it's time for more power!!! The T710 I have is currently configured with 8 x 73 GB 2.5" HDDs running at 7.5k RPM, dual 6c/12t processors, and 48 GB of RAM. So it's a ton of horsepower, and even though it's older hardware it's about 40% faster than my current setup. I will be swapping out the old drives for new 1.2 TB 10k RPM HDDs soon.

 

Dell servers have RAID control built into the BIOS, and most NAS operating systems have software RAID. I configured both identically with RAID 5. When I set up the drive array in the BIOS and ran some tests, I noticed that the read and write speeds were garbage compared to my OS's software RAID. I always heard people say never to use software RAID because it sucks and uses a lot of CPU processing power to handle the array, but now I don't know which one I should use. I used OpenMediaVault (OMV) as my OS. It's a super lightweight and easy-to-manage Linux OS; it's so lightweight a Raspberry Pi can run a media server and some apps on it no problem. In my testing, the BIOS RAID would give me a max of 320 MB/s reads and around 100 MB/s writes, where OMV would give me double or triple that with maybe only 4% extra CPU usage. I plan to back the server up to a tape drive for long-term backups, just in case something actually happens. But I'm thinking of ditching the on-board RAID controller for the software RAID built into OMV, and since the server CPU only maxed out at 10% under load, I think it may just be the better option for my use case.
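For anyone who wants to reproduce numbers like these, here's a rough sketch of a sequential throughput test in Python. The file path and size are placeholders I made up; use a test file bigger than your RAM (or drop the page cache first) so the read number isn't just cache speed.

```python
import os
import time

TEST_FILE = "/srv/testfile"         # placeholder path on the array under test
CHUNK = 1024 * 1024                 # write/read 1 MiB at a time
TOTAL = 4 * 1024 * 1024 * 1024      # 4 GiB; should exceed RAM to defeat the page cache

def bench_write() -> float:
    buf = os.urandom(CHUNK)
    start = time.monotonic()
    fd = os.open(TEST_FILE, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    written = 0
    while written < TOTAL:
        written += os.write(fd, buf)
    os.fsync(fd)                    # make sure the data actually hit the disks
    os.close(fd)
    return TOTAL / (time.monotonic() - start) / 1e6

def bench_read() -> float:
    # For an honest read number, drop caches first (as root):
    #   sync; echo 3 > /proc/sys/vm/drop_caches
    start = time.monotonic()
    fd = os.open(TEST_FILE, os.O_RDONLY)
    while os.read(fd, CHUNK):
        pass
    os.close(fd)
    return TOTAL / (time.monotonic() - start) / 1e6

if __name__ == "__main__":
    print(f"write: {bench_write():.0f} MB/s")
    print(f"read:  {bench_read():.0f} MB/s")
```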

 

OK, so here are my questions for you.

What are your thoughts on this? Would you go with software or BIOS RAID in this instance?

Are there any negative long-term effects of using software RAID vs BIOS RAID?

Should I not even bother with RAID 5?

Am I just being redundant? (Sorry, couldn't resist the dad joke.)


A few years ago when I built my NAS, I also had to decide on software vs hardware RAID... Back then I found that software RAID was abysmal when running under Windows, and you had to use hardware RAID to get any kind of acceptable array working... When running your server under Linux the picture flips a bit, since software RAID was pretty good even back then... I guess that's the reason you see RAID handled in software in many of the different NAS OSes today.

Depending on the load your RAID will need to handle, the extra power of hardware RAID can be useful.

 

One of the key topics, in my opinion, when discussing software vs hardware RAID is failure. If the hardware RAID component fails, you might need to get a new RAID card/motherboard to access the array again. With software RAID you are not as dependent on your hardware. An example would be a Synology NAS: when it fails, you can connect all the drives to a Linux box and access the data, since Synology handles RAID in software. If you take unRAID, which isn't "real" RAID, you can access each file living on the different drives one at a time.
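To illustrate the recovery story (assuming the array was built with Linux md, which is what OMV, Synology, and most NAS distros use under the hood), reassembling transplanted drives on a rescue box looks roughly like this; the device and mount-point names are just examples:

```python
import subprocess

def run(*args):
    subprocess.run(args, check=True)

# mdadm reads the RAID metadata off the member disks themselves, so it
# doesn't care that this is a different machine or controller. This scans
# all drives for md superblocks and reassembles any arrays it finds.
run("mdadm", "--assemble", "--scan")

# Inspect the metadata on one member (example device) to see the array
# level, member count, and state.
run("mdadm", "--examine", "/dev/sdb")

# Once assembled, the array shows up as /dev/md0 (or similar) and can be
# mounted read-only to copy the data off. /mnt/rescue must already exist.
run("mount", "-o", "ro", "/dev/md0", "/mnt/rescue")
```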

 

Personally, I made the choice for software RAID so I would have better odds of recovery when disaster strikes me one day. I have plans to move from a software RAID 5 array to unRAID this summer. This is to further limit the data loss if two drives should fail: then I would only lose part of the data instead of all of it.
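To make that "part of the data" point concrete: RAID 5 stripes every file across all drives and protects each stripe with XOR parity, so losing one drive is recoverable but losing two destroys every striped file, whereas unRAID keeps whole files on single drives. A toy Python illustration of the parity math:

```python
from functools import reduce

# Three "data drives" each hold one block of a stripe (equal lengths).
d1, d2, d3 = b"media files!", b"more media!!", b"even more!!!"

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR byte strings together position by position."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

# RAID 5 stores the XOR of the data blocks as parity on another drive.
parity = xor_blocks(d1, d2, d3)

# Lose any ONE drive and XORing the survivors rebuilds it exactly...
assert xor_blocks(d1, d3, parity) == d2

# ...but lose TWO drives and the stripe is gone for good, taking every
# striped file with it. unRAID's file-per-drive layout instead loses
# only the files that lived on the dead drives.
print("single-drive reconstruction works:", xor_blocks(d1, d3, parity))
```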

But it all depends on your use case.

 

TL;DR

I would pick software RAID, and preferably unRAID (not a real RAID array),

if you do not need the speed of RAID 5 (I think you need a 10 Gb NIC to make use of that speed over the network).
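(Back-of-envelope math behind that NIC claim, ignoring protocol overhead:)

```python
# Line rate in Mbit/s divided by 8 gives MB/s of payload, at best.
for nic_mbit in (1_000, 10_000):          # gigabit vs 10GbE
    print(f"{nic_mbit:>6} Mbit/s link ~= {nic_mbit / 8:.0f} MB/s")

# A gigabit NIC tops out around 125 MB/s, which is slower than even a
# modest RAID 5 array, so without 10GbE the network hides the array's speed.
```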


What RAID card/HBA do you have? There is no option for mobo RAID on the R710.

 

Is the RAID battery good? If it isn't, it will kill performance, especially parity RAID writes.

 

 


It depends on what you need.

Software is great for varied RAID alternatives such as ZFS. It can handle bigger storage drives without a limit (which a lot of RAID cards do have, in order to force the user to buy a compatible drive or an "upgrade" that is just the same card with different firmware installed on it; very common on Xserve servers, because Apple).

The downside is that if the OS fucks up, the volume may get damaged.

Some hardware RAID cards can prevent this by checking the data coming through: if it's corrupted, it doesn't get through, and that saves the RAID volume from corrupted data.

That doesn't save it from a broken or failed RAID card, but it's better than nothing.

(DON'T COMBINE THESE METHODS, THIS WILL FUCK UP THE DRIVES EVEN WORSE!!!!!!!!)

 

Some even employ a hybrid solution: an HBA card manages the drives, checks the data as it comes through, and keeps the data safe, while the software keeps the data managed and organizes it into a pre-defined configuration.

This is the best of both worlds. If the OS fails, the HBA can stop corrupted data from reaching the drives and corrupting them, and since the HBA doesn't meddle with the data and only checks it, the drives have a very low chance of being corrupted by an HBA failure. Most enterprise drives have this feature on board as well, just in case something goes wrong with the HBA or RAID card (it sometimes doesn't work; it depends on the situation and the drives, and it's meant as a last resort only).
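The data-checking idea boils down to per-block checksums, which is also what ZFS does in software. A minimal Python sketch of the concept (not any particular card's firmware):

```python
import hashlib
import os

def write_block(data: bytes) -> tuple[bytes, bytes]:
    """Store a block together with a checksum of its contents."""
    return data, hashlib.sha256(data).digest()

def read_block(data: bytes, checksum: bytes) -> bytes:
    """Refuse to hand back a block whose contents no longer match."""
    if hashlib.sha256(data).digest() != checksum:
        raise IOError("block failed checksum: corrupted in flight or at rest")
    return data

block, csum = write_block(os.urandom(4096))
read_block(block, csum)                          # intact block passes

try:
    read_block(block[:-1] + b"\x00", csum)       # one corrupted byte
except IOError as err:
    print("caught before it reached the volume:", err)
```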

 

HBAs aren't cheap; it's best if you go with a software solution, since you are just storing media files that you are streaming to end clients.

also

DON'T.

FORGET.

TO.

BACK UP.

YOUR.

DATA!

 

This is the most common rookie mistake and could well be the death of this sub-forum. I just copy my files to an external HDD, and something as simple as that is better than nothing! Please, for the sake of your torrented media, back it up! (That's if you torrent; I sure don't.... :D)


 


The RAID isn't built into the BIOS on Dell servers; you boot into the RAID management at the pre-boot menu. It just looks similar to the BIOS.

Dell servers have proper hardware RAID. There's no reason you couldn't use this; they're essentially rebranded LSI RAID cards.

BIOS RAID (fake RAID) should only ever really be used for OS drives. Always pick software RAID over Intel RST for data arrays.

 

Up to you in this case whether you want to go hardware or software RAID, but don't use both. If you go software, then make sure your controller card is in passthrough mode, or swap the RAID card out for a standard HBA.

 

For media, many people just use OMV, Rockstor, or unRAID because they're quite flexible, and after hardware failures (excluding drives) you can move the disks to another system to get it back up and running. It's a lower investment than hardware RAID cards as well.


 


18 hours ago, cj2tech91 said:

8 x 73 GB 2.5" HDDs running at 7.5k RPM

8 HDDs on an upgraded RAID card with a functioning write-back cache should be getting you over 800 MB/s read and write; anything less means it's the very basic default RAID card, or there is some other issue, like the cache being disabled because of a failed RAID card battery.
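That 800 MB/s is just spindle math. A rough estimate (the per-drive rate is my assumption; adjust it for the actual disks):

```python
drives = 8
per_drive_mbs = 115    # assumed sequential MB/s per spindle -- a guess, not a spec

# RAID 5 reads stripe across every member; writes lose one drive's worth
# of throughput to parity (idealised, with a working write-back cache).
read_est = drives * per_drive_mbs
write_est = (drives - 1) * per_drive_mbs

print(f"RAID 5 read estimate:  ~{read_est} MB/s")    # ~920 MB/s
print(f"RAID 5 write estimate: ~{write_est} MB/s")   # ~805 MB/s
```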


23 hours ago, Electronics Wizardy said:

What RAID card/HBA do you have? There is no option for mobo RAID on the R710.

 

Is the RAID battery good? If it isn't, it will kill performance, especially parity RAID writes.

 

 

There is no RAID card installed, so I'm guessing whatever is built into the motherboard is acting as the RAID controller.

 



11 hours ago, Salv8 (sam) said:

It depends on what you need. [...] DON'T. FORGET. TO. BACK UP. YOUR. DATA!

Thanks for your input; see below. As for backing up, luckily the server comes with a tape drive that works. I will be doing weekly backups with the tape drive, and I have an off-site backup server that I will be syncing this server to. OMV will be taking nightly snapshots of the OS and backing them up to the listed locations. Redundancy is a harsh and cruel lesson to learn; luckily, I already learned it back when it didn't really matter too much.
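For the off-site half of that, the sketch below is roughly what I have in mind: rsync over SSH from a nightly cron job. The paths and host are placeholders.

```python
import subprocess

SRC = "/srv/media/"                           # hypothetical local share
DST = "backup@offsite.example:/srv/media/"    # hypothetical off-site box

# -a preserves permissions/ownership/times, -H keeps hard links, and
# --delete mirrors removals so the copy tracks the source. After the
# first full run, rsync only transfers what changed, so nightly syncs
# stay cheap.
subprocess.run(["rsync", "-aH", "--delete", SRC, DST], check=True)
```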

11 hours ago, Jarsky said:

The RAID isn't built into the BIOS on Dell servers; you boot into the RAID management at the pre-boot menu. [...]

Thanks for the input. See below

8 hours ago, leadeater said:

8 HDDs on an upgraded RAID card with a functioning write-back cache should be getting you over 800 MB/s read and write [...]

There is no RAID card; the RAID controller is built into the motherboard, I guess.

 

 

 

Conclusion: 

Thanks everyone for commenting on this and sharing your knowledge. I think I am gonna stick with the software RAID, since the server is already overkill for what it will be doing. The hit in performance with software RAID will be negligible versus the versatility and manageability I'll get with a software configuration. This is a cheap and easy project that is just recycling older dormant hardware, and if the server does fail I can easily restore any and all data. Nothing on it will be "mission critical."

Currently there is an array of only 8 x 73 GB HDDs. I'll be setting up a trial for the next month or two to see how the server performs before I fully commit and buy higher-performing, higher-capacity drives. At the end of this trial I will be simulating a drive failure, corrupted data, an OS failure, and a power failure, and I'll see whether the recovery process is worth the cost of using software RAID.
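For the simulated drive failure, Linux md lets you stage one on purpose; a sketch with example device names (check /proc/mdstat for the real ones):

```python
import subprocess

ARRAY, MEMBER = "/dev/md0", "/dev/sdc"   # example names only

def mdadm(*args):
    subprocess.run(["mdadm", *args], check=True)

# Mark one member failed, pull it from the array, then re-add it and
# let the array rebuild from parity -- the same path a real dead drive
# and its replacement would take.
mdadm(ARRAY, "--fail", MEMBER)
mdadm(ARRAY, "--remove", MEMBER)
mdadm(ARRAY, "--add", MEMBER)

# Rebuild progress is visible while the array resyncs.
print(open("/proc/mdstat").read())
```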


9 minutes ago, cj2tech91 said:

There is no RAID card; the RAID controller is built into the motherboard, I guess.

There will be one, kind of. It'll be one of these; the BIOS/RAID setup should tell you which it is.

 

Quote

PERC H200 (6Gb/s)
PERC H700 (6Gb/s) with 512MB battery-backed cache, or 512MB/1GB non-volatile battery-backed cache
SAS 6/iR
PERC 6/i with 256MB battery-backed cache
PERC S100 (software based)
PERC S300 (software based)

 


On 7/13/2019 at 9:28 AM, MaZeeT said:

A few years ago when I built my NAS, I also had to decide on software vs hardware RAID... [...] I would pick software RAID, and preferably unRAID (not a real RAID array), if you do not need the speed of RAID 5.

I will be using OpenMediaVault. I'm not a big fan of unRAID; it's not worth the time and money, in my opinion. OMV is a simpler option that does similar things: open source, based on Debian, super lightweight, and easy to use. OMV has an expandable RAID setup along with ZFS options. Thanks for your input; see the comment above for my conclusion.


3 minutes ago, cj2tech91 said:

I'm not a big fan of unRAID

Neither am I, and if you're going for a paid option there are better ones out there. There are also free community editions of many of the commercial paid options, along with the genuinely free/open ones that exist.


5 minutes ago, leadeater said:

There will be one, kind of. It'll be one of these; the BIOS/RAID setup should tell you which it is.

 

 

I can't really look at it for a little while, but if memory serves, I'm pretty sure it said something about a PERC H200 in the BIOS. And from what I've seen on other forums, that's the card these commonly shipped with, and it mostly went unused lol


2 minutes ago, cj2tech91 said:

I can't really look at it for a little while, but if memory serves, I'm pretty sure it said something about a PERC H200 in the BIOS. [...]

Yeah, I'd always upgrade to the H700 when actually buying the server. Some places that sell used Dell servers take the H700s out and sell them separately to make just that little bit extra; can't blame them.


8 minutes ago, leadeater said:

Neither am I, and if you're going for a paid option there are better ones out there. [...]

This, and also the overhead is crazy on older systems for unRAID.

2 minutes ago, leadeater said:

Yeah, I'd always upgrade to the H700 when actually buying the server. [...]

Yeah, I'm pretty sure I'm just gonna stick with software; this is kind of a budget build. I've only really spent $150 so far, on five new 2 TB drives that will be here soon. I will expand the drive pool on an as-needed basis after that, which will probably be a long time since I currently only have about 4 TB to store on it. Maybe when I upgrade to a newer server a couple of years from now I'll give RAID cards another chance for this application, but for now I'm just trying to finish this budget media server build.


On 7/14/2019 at 11:13 AM, cj2tech91 said:

I can't really look at it for a little while, but if memory serves, I'm pretty sure it said something about a PERC H200 in the BIOS. [...]

The H200 is the basic entry-level RAID card from Dell. It kind of sucks (it's good for RAID 1, though). As @leadeater mentioned, the H700 is the full daddy upgrade with a BBU (Battery Backup Unit) and RAM cache.

 

My Dell T410 uses the H200 with some drives in RAID 1 as a VM datastore for ESXi; it works well enough, though I've slowly been migrating over to an IBM x3650 M4.


 


So I guess there is no pass-through mode for that RAID card, huh? I would have to either do RAID 5 on the card or make every disk a virtual disk.


18 minutes ago, cj2tech91 said:

So I guess there is no pass-through mode for that RAID card, huh? I would have to either do RAID 5 on the card or make every disk a virtual disk.

 

The H200? If it's a discrete HBA card (in a standard PCIe slot), then you can flash it to the LSI 9211-8i P20 IT firmware.

If it's an integrated card (in the HBA slot), then you shouldn't flash it. I can't remember the caveats, but I don't think it will work in that slot if you flash it. You should get another controller in that case.


 


9 hours ago, Jarsky said:

 

The H200? If it's a discrete HBA card (in a standard PCIe slot), then you can flash it to the LSI 9211-8i P20 IT firmware. [...]

It's integrated.... fuck


2 hours ago, cj2tech91 said:

It's integrated.... fuck

I think in that case you can flash it to IT mode with the 6GBSAS.FW from Dell, but I've never tried that config; my Dell servers have H700s and PERC 6s in them.

Apparently the integrated card is the same as the dedicated one, but you'd have to put it in one of the PCIe slots to run it on the LSI firmware, and it doesn't have the right mounting bracket or cable length to do that.


 

