Opinions for Ubuntu drive redundancy

Good Afternoon Everyone;

 

I have a server/NAS I'm about to set up. It currently has 2x 3 TiB drives and 1x 6 TiB drive installed (nothing on them yet, though).

 

What would be the best filesystem to use for redundancy? Actual backups of all high-value data are performed via CrashPlan and an external drive. This box would most likely be used for drive-image clones from my main PC.

 

Choices I've looked at are:

-BTRFS with RAID 1

-BTRFS single (no redundancy)

-ZFS on the 2x 3 TiB drives, with the 6 TiB as a semi-external backup

-RAID 1 of the 2x 3 TiB drives

 

Any thoughts on or experiences with btrfs?

 

Thank you in advance =)


I don't know much about file systems, but as for RAID, I'd recommend hardware RAID 10.

 

It uses the same amount of storage as RAID 5, but also has the speed of RAID 1 along with the redundancy of RAID 5. In short, RAID 10 is the overall best for this type of thing.


43 minutes ago, rewird said:

What would be the best filesystem to use for redundancy?

BTRFS with RAID 1, or ZFS with a 6 TB backup: use the redundancy for 100% uptime

or

BTRFS RAID 0 for the 2x 3 TB with a 6 TB backup: use striping for more storage and speed if you do not require 100% uptime
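For what it's worth, either layout is a single mkfs.btrfs invocation. A minimal sketch, assuming the two drives show up as /dev/sdb and /dev/sdc (check yours with lsblk first; these commands destroy whatever is on the drives):

```shell
# Redundant option: mirror both data and metadata across the two 3 TiB drives
sudo mkfs.btrfs -L nas -d raid1 -m raid1 /dev/sdb /dev/sdc

# Non-redundant option: stripe data for capacity/speed, keep metadata mirrored
# sudo mkfs.btrfs -L nas -d raid0 -m raid1 /dev/sdb /dev/sdc

# Mount via either member device; btrfs assembles the whole array itself
sudo mkdir -p /mnt/nas
sudo mount /dev/sdb /mnt/nas
```

The -d and -m flags set the data and metadata profiles independently, which is how btrfs expresses its "RAID" levels.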

 

I use BTRFS because I use Linux and ZFS is not included in the kernel. ZFS is available for Linux, but I won't use it in case an upgrade breaks it.

             ☼

ψ ︿_____︿_ψ_   


25 minutes ago, ArduinoBen said:

I don't know much about file systems, but as for RAID, I'd recommend hardware RAID 10.

 

It uses the same amount of storage as RAID 5, but also has the speed of RAID 1 along with the redundancy of RAID 5. In short, RAID 10 is the overall best for this type of thing.

Wrong. RAID 10 needs four drives of the same size, and the OP does not have that setup. The capacity is not the same as RAID 5: RAID 10's usable capacity is 50% of the total, while RAID 5's is the total capacity minus one drive. Redundancy is also different for RAID 10: in theory two drives can fail, but if both are from the same RAID 1 pair, you are screwed.

And RAID 10 is essentially two sets of RAID 1 disks in a RAID 0 stripe.
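To put numbers on it, here is how the usable capacities work out for a hypothetical set of four equal 3 TiB drives (sizes in TiB):

```shell
# Usable capacity for n equal drives of s TiB each (hypothetical 4x 3 TiB set)
n=4; s=3
raid10=$(( n * s / 2 ))    # RAID 10: mirrored pairs, only 50% usable
raid5=$(( (n - 1) * s ))   # RAID 5: one drive's worth lost to parity
echo "RAID 10: ${raid10} TiB usable"   # 6 TiB
echo "RAID 5:  ${raid5} TiB usable"    # 9 TiB
```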


31 minutes ago, SCHISCHKA said:

BTRFS with RAID 1, or ZFS with a 6 TB backup: use the redundancy for 100% uptime

or

BTRFS RAID 0 for the 2x 3 TB with a 6 TB backup: use striping for more storage and speed if you do not require 100% uptime

 

I use BTRFS because I use Linux and ZFS is not included in the kernel. ZFS is available for Linux, but I won't use it in case an upgrade breaks it.

Just to clarify, that's BTRFS RAID 1 or 0 on the 2x 3 TiB drives, with BTRFS single on the 6 TiB? And regarding uptime, you mean 24/7 as opposed to, say, 9-5, yeah?

 

Apologies, I'm new to Ubuntu and all the file systems, haha.

 

Niksa is correct re: the RAID config; however, btrfs is still stuck on RAID 5/6 as far as I'm aware, so 5/6 and 10 are out of the question.

 

So, BTRFS RAID 1 with both 3 TiB drives and leave the 6 TiB for backup, if going for redundancy. Does uptime affect drive reliability that much? If it were only on from, say, 5am to 1am, would RAID 0 be okay?


Just now, rewird said:

Just to clarify, that's BTRFS RAID 1 or 0 on the 2x 3 TiB drives, with BTRFS single on the 6 TiB? And regarding uptime, you mean 24/7 as opposed to, say, 9-5, yeah?

By uptime I meant: can/do you need to swap out a dead disk without powering off the appliance? Only if you cannot afford to have it off at any time should you spend money on a redundant drive. I don't use any redundant RAID levels; I have different sets of RAID 0 drives that rotate, so if a drive dies I have the other set to rescue from.

4 minutes ago, rewird said:

If it were only on from, say, 5am to 1am, would RAID 0 be okay?

That isn't much. Good-quality drives last ages; I have six-year-old WD enterprise RE drives that have outlasted the WD Blue I bought.



Use ZFS. BTRFS is too new, in my opinion, to be used for anything other than testing or non-important data.
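For anyone taking this route, the ZFS equivalent of the mirrored setup is a simple two-disk pool. A sketch assuming placeholder device names and a pool called tank (again, this wipes the drives):

```shell
# Ubuntu 16.04+ ships the ZFS userland tools in the archive
sudo apt install zfsutils-linux

# Create a mirrored pool from the 2x 3 TiB drives (device names are placeholders)
sudo zpool create tank mirror /dev/sdb /dev/sdc

# Check pool health and the redundancy layout
sudo zpool status tank
```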

System/Server Administrator - Networking - Storage - Virtualization - Scripting - Applications


Thanks to everyone for the help.

 

Managed to get Ubuntu Server up with the 2x 3 TiB drives in the default profile (RAID 0 data + RAID 1 metadata); the 6 TiB is in single mode (will get another 6 TiB later on for RAID 1).

 

The following was really straightforward and had zero hiccups; all drives mounted and are accessible from Samba too.

 

For the summary or those in a similar situation:

Ubuntu Server 16.10

Samba

BTRFS

2x 3 TiB in RAID 0 + RAID 1: total drive space ~5.5 TiB

6 TiB in single mode: total drive space ~5.5 TiB

2 TiB single (non-NAS drive, unimportant data/movies etc.): ~1.8 TiB

120 GB M.2 NVMe drive for the boot OS and phoenix/web VR applications

 

Future plans are an SSD cache for the system to test and play with, and another 6 TiB drive to run RAID 1 across 2x 6 TiB.
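If it helps anyone following along, the current profiles can be inspected at any time, and the planned 2x 6 TiB RAID 1 can be done in place later without reformatting. A sketch with placeholder device names and mount points:

```shell
# Show per-profile (single/RAID 0/RAID 1) allocation and real usable space
sudo btrfs filesystem usage /mnt/nas

# Later: add the second 6 TiB drive to the single-mode filesystem, then
# convert its data and metadata to RAID 1 in place
sudo btrfs device add /dev/sde /mnt/backup
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/backup
```

The balance runs online, so the backup share stays mounted while it converts.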

 

Thanks =)

