
RAID 5 workaround

Solved by Electronics Wizardy

I know that the answer to this might be very simple, but I want to know if there are ways around this. Is it possible to use RAID 5 with unRAID OS in a theoretical NAS build if the motherboard only officially supports RAID 0, 1, and 10?

Edit: Doing the RAID without a dedicated card.


unRAID uses software RAID, so the RAID levels on the motherboard don't matter (and don't use motherboard RAID anyway, it sucks).

 

unRAID doesn't use normal RAID; it has its own custom RAID solution with parity on a single disk, kinda like RAID 4, but pretty customized.
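(Just to illustrate the idea, not unRAID's actual code: with a dedicated parity disk, the parity is the XOR of the data disks, so any one failed disk can be recomputed from the others. A rough Python sketch:)

```python
# Toy sketch of a dedicated parity disk (RAID 4 style), not unRAID's real code.
def build_parity(data_disks):
    """Parity block = XOR of the corresponding bytes on every data disk."""
    parity = bytearray(len(data_disks[0]))
    for disk in data_disks:
        for i, byte in enumerate(disk):
            parity[i] ^= byte
    return bytes(parity)

# Three data "disks" plus one parity disk.
disks = [b"\x01\x02\x03", b"\x10\x20\x30", b"\x0a\x0b\x0c"]
parity = build_parity(disks)

# If any single data disk dies, XOR-ing the survivors with parity restores it.
lost = disks[1]
assert build_parity([disks[0], disks[2], parity]) == lost
```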

 

If you want traditional RAID, don't run unRAID; there are many other NAS OSes that will do traditional RAID well.

 

What hardware are you using?


Unraid does not do RAID in any traditional manner. All of your drives will be in AHCI mode (not RAID), and you would set up the data drives and the parity drives in unRAID.

Also for arrays of any decent size I would recommend RAID 6 or dual parity.


unRAID is software RAID, not hardware RAID, and software RAID is the better fit in use cases like this.

"Anger, which, far sweeter than trickling drops of honey, rises in the bosom of a man like smoke."


Quote

unRAID uses software RAID, so the RAID levels on the motherboard don't matter

 

Does Windows do the same? Just wondering. And then, what's the point of motherboard RAID?

 

29 minutes ago, Electronics Wizardy said:

If you want traditional RAID, don't run unRAID,

I don't care what kind of RAID it is as long as my data is safe. I just found that with unRAID I could add drives later on, which is good for a home build.

 

29 minutes ago, Electronics Wizardy said:

What hardware are you using? 

This is by no means a final list as I wouldn't be building it anytime soon.

 

PCPartPicker part list / Price breakdown by merchant

CPU: AMD - Ryzen 3 1200 3.1 GHz Quad-Core Processor  ($89.99 @ Newegg)
Motherboard: MSI - A320M PRO-VH PLUS Micro ATX AM4 Motherboard  ($61.99 @ Walmart)
Memory: G.Skill - NT Series 4 GB (1 x 4 GB) DDR4-2133 Memory  ($25.98 @ Newegg)
Case: Fractal Design - Node 804 MicroATX Mid Tower Case  ($75.98 @ Newegg)
Power Supply: SeaSonic - 520 W 80+ Bronze Certified Fully-Modular ATX Power Supply  ($34.99 @ Newegg)
Other: unRaid ($59.00)
Total: $347.93
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2019-03-09 22:10 EST-0500


1 minute ago, karmanyaahm said:

Does Windows do the same? Just wondering. And then, what's the point of motherboard RAID?

Windows also has Storage Spaces, which works as RAID and lets you add drives, use multiple RAID levels, and do lots of other things. Motherboard RAID is only there so you can boot Windows from a RAID array.

 

2 minutes ago, karmanyaahm said:

This is by no means a final list as I wouldn't be building it anytime soon.

I'd get a 2200G so you have an iGPU to use. Or get an Athlon or Celeron if you just want a NAS; you really don't need much GPU power.

 

I'd get 8GB of RAM if you can; RAM works as a disk cache and just makes everything faster.

 

 


10 minutes ago, Electronics Wizardy said:

I'd get a 2200G so you have an iGPU to use. Or get an Athlon or Celeron if you just want a NAS; you really don't need much GPU power.

The 2200G is a good idea. An Athlon or Celeron won't work since I might use Plex with 1-2 devices (no 4K).

 

And I'll look into the RAM.

 

Thanks for the help!


Long post here.  Few parts.

 

Software RAID is a dirty word. Almost a swear word. However, solutions like ZFS and unRAID have resolved almost all of the issues, and for most of us they are better than dedicated RAID cards for non-enterprise work, aka prosumer stuff. ZFS is extremely resilient and very fault tolerant. However, ZFS is designed to use all of the system's resources to support a server role. I've looked at unRAID and played with it at small scale, and I like its easier implementation of VM features and SSD accelerators. Though it's not enough to get me to jump from FreeNAS, which has been 100% rock solid once I got past the initial learning/teething issues.

 

Windows software RAID, aka Storage Spaces, is terrible by comparison. Hardware-based RAID is better. Storage Spaces has some problems and I wouldn't trust my data to it at all, even if all I was doing was a mirrored configuration. I'd rather run two drives separately with a Robocopy scheduled task to sync the drives.

 

RAID 5 in the traditional sense has some issues. ZFS and unRAID have fixed these issues. Hardware RAID 5 has a problem: let's say you have a 5-drive array. If a drive fails, when you replace it you need to rebuild that fifth drive. This puts major stress on the remaining four drives, as they need to reconstruct the fifth drive's data. If you are working with smaller drives, it's not a big deal. But as you get into 4, 6, 8, and god forbid 10TB drives, the stress on the remaining drives means it's going to take a LONG time to rebuild that data. That's a lot of heat buildup and thrashing, so the chance of a second drive failure increases.
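To make the rebuild cost concrete, here's a minimal Python sketch (not any controller's or unRAID's actual implementation) of single-parity reconstruction: every stripe of the replacement disk is recomputed by reading the matching block from every surviving drive and XOR-ing them, so the whole array gets read end to end.

```python
# Minimal sketch of a single-parity (RAID 5 style) rebuild, for illustration only.
import io
from functools import reduce

def xor_blocks(blocks):
    """XOR same-sized blocks together, byte by byte."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def rebuild_failed_disk(surviving_disks, num_stripes, block_size):
    """Recompute a lost disk from open handles to the healthy drives."""
    rebuilt = bytearray()
    for _ in range(num_stripes):
        # One read per surviving disk per stripe: bigger drives mean this loop
        # runs far longer, which is the heat/thrashing window described above.
        blocks = [disk.read(block_size) for disk in surviving_disks]
        rebuilt += xor_blocks(blocks)
    return bytes(rebuilt)

# Tiny in-memory demo: parity = d0 ^ d1 ^ d2, so (d0, d1, parity) rebuilds d2.
d0, d1, d2 = b"\x01\x02", b"\x04\x08", b"\x10\x20"
parity = xor_blocks([d0, d1, d2])
survivors = [io.BytesIO(x) for x in (d0, d1, parity)]
assert rebuild_failed_disk(survivors, num_stripes=1, block_size=2) == d2
```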

 

During the rebuild process, if a second drive fails, that's it, kaput, your data is GONE. ZFS, the underlying file system of FreeNAS, can't save you from this either, but it is not a total loss; you can recover some of the data, just not all. So as you get into larger disks it's better to build with RAID 6. I built a 40TB RAID-Z1 volume, and as sometimes happens, I had a single drive fail while under manufacturer warranty. While I was replacing the drive I had no redundancy, and it was an uncomfortable feeling. Since I didn't want to wait two weeks for an RMA, I bought a new drive locally. After I replaced it and the array rebuilt, I backed up all of the data, wiped the array, and rebuilt it as RAID-Z2. I feel MUCH better now knowing that if I have a failure I can still lose ONE more drive.

 

If you are dealing with smaller 2TB-3TB drives, the RAID 5 vs. RAID 6 argument affects you less.

 

 

 

If you are going to run a Plex VM, I suggest upping the processor to something with more oomph. I ran a FreeNAS / Plex VM setup based on an AMD A10-7850K. That CPU was about 25% slower than the 2200G you listed. It was capable of about three 1080p transcodes at once, but where it struggled hardcore was the nightly analyze task, where Plex analyzes your media for chapter and seek thumbnails. It was also sluggish to respond when spooling up a second or third transcode. It just didn't have the ability to handle multiple threads well.

 

When I added TV seasons it would chug for hours and hours to process it all. God help it if I ripped an entire box set containing multiple seasons. It chugged for days.

 

I think you'll be initially happy with the 2200G, but later find it doesn't give you the horsepower you will eventually be looking for. I can see you outgrowing it quickly. In the end, going with a better processor will save you money, as the system will have a longer shelf life as your media collection grows and your demands increase.

 

I bought the 4790K when I upgraded from the 7850K build for FreeNAS, and it has been rock solid and super responsive. It's been 5 years and it's only now reaching the end of its life because it can't handle two 4K HDR transcodes at once. The next system I build will be capable of handling 4-5 4K HDR transcodes simultaneously, which means for standard 1080p content it will basically sit at idle doing nothing. For a mid-range CPU I really like the Ryzen 5 2600; I built one for my nephew and it would make an excellent budget CPU for a media server.

 

 

 



21 hours ago, Thirdgen89GTA said:

Windows software RAID, aka Storage Spaces, is terrible by comparison. Hardware-based RAID is better. Storage Spaces has some problems and I wouldn't trust my data to it at all, even if all I was doing was a mirrored configuration. I'd rather run two drives separately with a Robocopy scheduled task to sync the drives.

What don't you like about Storage Spaces? It's pretty powerful and fast, and is pretty darn good at keeping data safe.

 

The GUI sucks in Windows, and there are a lot of things you need to know to get it working right, but it's not bad.

 

 


On 3/10/2019 at 5:27 PM, Thirdgen89GTA said:

Windows software RAID, aka Storage Spaces, is terrible by comparison. Hardware-based RAID is better. Storage Spaces has some problems and I wouldn't trust my data to it at all, even if all I was doing was a mirrored configuration. I'd rather run two drives separately with a Robocopy scheduled task to sync the drives.

Windows software RAID still actually exists and is configured through Disk Management; it's completely legacy now and, yeah, was always crap. Storage Spaces, on the other hand, is very good and very resilient, and it can be imported across different systems and OS installs, as all the configuration information about the pool is stored on every disk. A properly configured, very high-end server running Storage Spaces is capable of delivering 45 GB/s at 2MB block size or 1 million+ IOPS at 4KB block size.

 

Hardware RAID hits rather big limitations when moving into all-SSD arrays, which is where software implementations like ZFS, Storage Spaces, Btrfs, etc. take over as the better choices. Actually good hardware RAID controllers with flash cache modules are rather expensive too, and you can spend that money on more SSDs for more capacity and performance.

 

It comes down to a needs/requirements assessment, because no matter how technically good one solution is, it might not be the best choice due to other more important factors.

 

I personally still prefer hardware RAID in a lot of cases; it performs vastly better than ZFS and Storage Spaces where parity configurations are concerned on systems with limited resources. ZFS is great as a dedicated storage server, but for home use and home labs that's quite a bit of hardware to dedicate, when you can spend a similar amount or much less on a good hardware RAID controller for the same performance while being able to use any OS/hypervisor on the system and run many VMs and services off that server.

