
FreeNAS Hard Drive Configuration. Mainly software setup.

Solved by brwainer.

Hello everybody.

I really need some of your expertise, maybe even the mighty server guy - @LinusTech.

So here's the deal. A few months ago I bought a Synology DS216se, a budget (and honestly very bad) NAS. So now I'm wondering about building a new one, DIY. Here's a list of the parts I chose:

  • NZXT Tempest 210 - Case
  • Xigmatek 600W - PSU, Chinese 
  • 2 x Unbranded 4GB RAM
  • 3/4 x WD Red
  • Zalman FX70 - Fanless cooler
  • Intel Celeron G1840

Please feel free to recommend other hardware, but my real question comes here:

I want to store lots of movies, TV series, backups, and other files. What I really want is to have these in a non-redundant setup. What would be really cool is if the drives were seen as one big hard drive by the user, and if one of the drives fails, the data on that drive is lost but the others are untouched. This should be completely transparent to the user.

Hope you can help. Thanks,

Simon :)



I'm no expert, but from what I've found, the rebuild time (when a drive dies) in a RAID 5/6 is a lot slower than in something like a RAID 10 config.

That's why I chose to configure my drives as RAID 10 in FreeNAS.

 

May I also suggest using ECC RAM? It'll bump up your cost because you'll have to buy a supporting motherboard, but I think it'll be worth it.

EDIT: If you've already bought the hardware, then don't worry about it.



@hoppa I think RAID 5 would be a good bet, or perhaps ZFS1. The problem is, I lose about 4TB of space, but I want it all.



2 minutes ago, simsedestroyer69 said:

@hoppa I think RAID 5 would be a good bet, or perhaps ZFS1. The problem is, I lose about 4TB of space, but I want it all.

I know, it's unfortunate. I have 16TB (4x4TB WD Red) and I can only use 8TB.

Also, FreeNAS requires a fair bit of RAM, so make sure you have 1GB of RAM for every TB of storage. I use 8GB for 16TB (8TB usable) and it seems fine, but I think 16GB would be the recommendation for my setup.



What you are describing is a JBOD, which basically links all the hard drives one after another as one big hard drive, without actually striping data across them (which would be RAID 0). This sounds great in theory, but there are two reasons why I would strongly urge you not to use this setup:

1. If the drive holding the data for the filesystem (NTFS, ext4, etc.) dies, then yes, the data on the rest of the drives is intact, but you've lost the table saying "File ABCD.mp4 is bits 432798457325983 through 21937284932754890375" - which means that to access your data, you'd have to run a data recovery tool on the remaining drives. A filesystem that got around this issue would have to mirror its file table across each drive within the JBOD, and I'm not aware of any file system that does this.

2. If a file gets fragmented across multiple drives (since to the OS, or to clients of a file share, it's really just one big drive), then a drive failure would remove portions of the file. Depending on how much data is lost, that pretty much means you lose the entire file.

If you really want to have all of the space, you'll have to use the drives separately, each with its own file system. But what you could do is mount one drive inside another. For example, normally you might have this setup (using Windows drive letters, but it works the same on *NIX and BSD systems):
D:\
E:\
F:\
G:\
If you instead use folder mount points, you can have this:
D:\
D:\drive2\
D:\drive3\
D:\drive4\
and if you create a share on drive D and manage the permissions properly, a user would be able to access the other drives through a single share. If one of the child drives dies, its mount point just disappears (probably leaving behind an empty folder). If the D drive dies, all you lose is its data and the pointers to the other drives, and you can remount the remaining drives at another location.
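If it helps, here's roughly what setting that up looks like. I'm making up the volume GUID, device names, and paths purely as an illustration:

On Windows, you mount a volume into an empty NTFS folder, either through Disk Management or with mountvol (run mountvol with no arguments to list the real volume GUIDs):

mountvol D:\drive2 \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\

On *NIX/BSD, the equivalent is just a mount point under the first drive, e.g. an fstab line like:

/dev/sdb1  /mnt/share/drive2  ext4  defaults  0  2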



8 minutes ago, simsedestroyer69 said:

@hoppa I think RAID 5 would be a good bet, or perhaps ZFS1. The problem is, I lose about 4TB of space, but I want it all.

I am unaware of anything called "ZFS1". If you mean RAID 1, that's not normally what it's called. If you mean RAID-Z1, that's the ZFS implementation of RAID 5; it has some improvements over the usual type of RAID 5, at the same cost of a single parity disk.



@brwainer Extremely good points. I might go ahead with 4 x WD Red 4TB in ZFS-1, and 16GB of RAM. Why might one need ECC RAM? How big of a data loss can it cause?



Oh crap, just realized you wanted this in a non-redundant setup haha



Just now, Hoppa said:

Oh crap, just realized you wanted this in a non-redundant setup haha

It's okay. It's a really unusual and not recommended setup, I realize.



Since we're on the topic. Could you do 4x 4TB HDDs and 1x 2TB in the same RAID?



2 minutes ago, simsedestroyer69 said:

@brwainer Extremely good points. I might go ahead with 4 x WD Red 4TB in ZFS-1, and 16GB of RAM. Why might one need ECC RAM? How big of a data loss can it cause?

Again, I'm not aware of ZFS-1; I think you mean RAID-Z1. So here are two things to think about:
If one of your drives dies, and all of your drives have been through the same abuse for the same length of time, the chances are much higher than normal that another drive is going to die very soon (there's real math behind this, but I don't have a source on hand). Doing a parity rebuild puts a huge amount of stress on the remaining drives, since every bit on every drive has to be read, and it is not uncommon (look up the horror stories) for a second drive to die during a RAID rebuild. If you only have RAID 5 / RAID-Z1, you're dead; you have no data at all. This is why RAID 6 and RAID-Z2 were invented. Alternatively, instead of rebuilding an array with a dead drive, some people advocate leaving the array in a degraded state and immediately copying all data to a fresh set of drives. This is about the same amount of stress on the drives, but then you don't have to worry about the remaining drives dying any day now. With 4 drives, the best overall solution is RAID 10, unless you are willing to risk losing all your data in a few years' time when the drives all reach their life expectancy at about the same time.

As for ECC: for RAID 0, 1, and 10 it isn't really needed, but for RAID 5/6/Z1/Z2/etc. it is essential. Your parity calculations are done in RAM, and if a bit gets flipped there, it won't cause immediate issues - but later on, when you do a rebuild, enough mixed-up parity bits can really screw up your rebuilt array.
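To make that concrete, here's a toy Python sketch (all byte values made up, not real RAID code) of how one flipped bit in RAM silently poisons a later rebuild:

# one byte from each of three data drives (made-up values)
blocks = [0b10110100, 0b01101001, 0b11100010]
parity = blocks[0] ^ blocks[1] ^ blocks[2]   # parity computed in RAM

flipped = parity ^ 0b00010000                # one bit flips before the write - nothing complains

# months later, drive 0 dies and the rebuild XORs parity with the survivors:
rebuilt = flipped ^ blocks[1] ^ blocks[2]
print(f"original {blocks[0]:08b}  rebuilt {rebuilt:08b}")   # off by exactly that one bit

Nothing fails at write time; the bad bit only shows up when that parity is actually used.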



10 minutes ago, simsedestroyer69 said:

Since we're on the topic. Could you do 4x 4TB HDDs and 1x 2TB in the same RAID?

How well this works depends on the system. You can definitely have those drives in the same pool, in ZFS or Windows Storage Spaces, and then you can make a 5-drive RAID out of the first 2TB of each drive and a 4-drive RAID out of the other 2TB on the 4TB drives. In Windows Storage Spaces you can actually use all the capacity of the drives in a single virtual disk; I'm not sure if FreeNAS can do that. Traditional RAID would only be able to use as much of each drive as the smallest drive.
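For what it's worth, here's a rough sketch of how that could look in ZFS, assuming each 4TB drive is split into two 2TB partitions first (the pool, device, and partition names here are hypothetical, and the FreeNAS GUI may not expose this):

zpool create tank raidz1 ada0 ada1p1 ada2p1 ada3p1 ada4p1
zpool add tank raidz1 ada1p2 ada2p2 ada3p2 ada4p2

That's one pool striped across two RAID-Z1 vdevs, about 14TB usable out of 18TB raw. zpool may warn about the mismatched vdev layout (use -f to override if so), so treat this as a sketch, not a recommendation.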



1 minute ago, simsedestroyer69 said:

It's okay. It's a really unusual and not recommended setup, I realize.

Yeah, I believe there's a way to combine HDDs in FreeNAS without striping.

I'll have to look more into it though.

1 minute ago, simsedestroyer69 said:

Since we're on the topic. Could you do 4x 4TB HDDs and 1x 2TB in the same RAID?

So you want to combine 4x4TB and a 2TB to show up as one drive, with no redundancy?

Just like @brwainer said, that'd be a JBOD. You won't be able to RAID 0 them, because each 4TB disk would act as a 2TB disk (the size of the smallest disk in the array), so you'd only get 5 x 2TB = 10TB.

Using ZFS in FreeNAS you can combine a bunch of different-sized disks and it will pool them (but there will be redundancy, so only half your space will be usable).



4 minutes ago, Hoppa said:

Yeah, I believe there's a way to combine HDDs in FreeNAS without striping.

I'll have to look more into it though.

So you want to combine 4x4TB and a 2TB to show up as one drive, with no redundancy?

Just like @brwainer said, that'd be a JBOD. You won't be able to RAID 0 them, because each 4TB disk would act as a 2TB disk (the size of the smallest disk in the array), so you'd only get 5 x 2TB = 10TB.

Using ZFS in FreeNAS you can combine a bunch of different-sized disks and it will pool them (but there will be redundancy, so only half your space will be usable).

He didn't just ask for no redundancy; he asked that if a single drive died, the files on the remaining drives would be okay. So that's another reason it can't be RAID 0.



Very interesting stuff. It seems unRAID has transparent JBOD, so to me it would look like one hard drive, but it's actually multiple different ones: unRAID



2 minutes ago, simsedestroyer69 said:

Very interesting stuff. It seems unRAID has transparent JBOD, so to me it would look like one hard drive, but it's actually multiple different ones: unRAID

Yes, but unless they certify that they've gotten past the issues I wrote about above (the file system not being replicated on each drive, files being able to fragment across drives), it isn't worth using.



1 minute ago, brwainer said:

Yes, but unless they certify that they've gotten past the issues I wrote about above (the file system not being replicated on each drive, files being able to fragment across drives), it isn't worth using.

Very true



2 minutes ago, simsedestroyer69 said:

Very true

I never really read up on unRAID before, but looking at the page you linked, it isn't a true JBOD, because it still does some parity calculation. It looks like the only real difference between their solution and RAID 5 is that all the parity bits are on a single drive, whereas normal RAID 5 round-robins the parity bits across the drives for better throughput.



Quote

when a drive fails the parity compares the binary data on all the drives

This section of the unRAID page linked above makes me think they really do spread the parity information across the drives.



Ignore my previous posts about unRAID; I understand what they are doing now. A normal RAID 5 stripes an incoming file across all the storage disks and computes the parity of the whole "row" of blocks. unRAID stores the file on just one disk, and computes the parity of the whole "row", but that row contains blocks from multiple files instead of just one file (this is simplified: a block doesn't have to hold just one file, and a file doesn't always fit in a single block).

As far as the original request goes, for non-redundant storage that keeps the data on the remaining drives in the event that one drive dies, this doesn't exactly fit the bill (because it IS redundant, although each file is stored on only one drive). But in terms of the most efficient way to use 4x4TB drives and 1x2TB drive, this might work the best. If I understand properly, you still have to sacrifice a 4TB drive as the parity, but that would give you 14TB out of 18TB. I'm not sure what the best you could do with FreeNAS would be - you might be able to match 14/18 using two separate RAID-Z1s made out of 4x2TB and 5x2TB slices.
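If you want to see the parity idea in miniature, here's a toy Python sketch (the blocks are made up, and this ignores unRAID's actual on-disk format):

# one "row": a block from each of three data drives, each holding a different file's data
row = [b"movieA.m", b"movieB.m", b"backupC."]

# the dedicated parity drive stores the byte-wise XOR of the row
parity = bytes(a ^ b ^ c for a, b, c in zip(*row))

# if drive 1 dies, XORing the parity with the survivors rebuilds its block exactly
rebuilt = bytes(a ^ b ^ c for a, b, c in zip(row[0], row[2], parity))
assert rebuilt == row[1]

And note the flip side for your use case: because each file lives on a single disk, losing more drives than the parity can cover only costs you the files that lived on the dead drives, not the whole array.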



I think I edited my previous post after you chose it as the best answer. I just made it more explanatory.



And BTW, since unRAID is still doing parity, I would still recommend ECC RAM. Sorry for the double (triple) post, but I want to make sure you see this!



Okay, sure thing :)


