
Combine disks with MDADM and LVM

As ZFS does not quite fit my requirements, I looked into mdadm + LVM. On paper it all seems to work out, but I would like some external advice/confirmation.

The situation :

  • currently running a 3x4TB RAID5 array with mdadm, filled to 5.5TB and forecast to grow shortly, which puts pressure on me to extend.
  • extra disks available: 1x1TB, 1x2TB, 2x3TB
  • the NAS (Ubuntu 16.10, CPU: i3-2120T, 16GB RAM) is used in a home environment and our computing needs are limited: mostly enough space to have all media in one place for the Plex server (running on the same machine), and just enough power to manage that plus the couple of shared files beside it. CrashPlan is also running.
  • 4 SATA ports on the motherboard (used up by the 3x4TB array and the OS SSD), with a PCIe x4 to 4x SATA 3.0 card on its way to add more ports.

The plan :

  • (4TB + 4TB + 4TB) RAID5 => /dev/md0 (already in place)
  • (1TB + 2TB) RAID0 => /dev/md1 (the 2TB disk would be split into two 1TB partitions to allow the RAID0)
  • (3TB + 3TB + /dev/md1) RAID5 => /dev/md2

Then

  • create an LVM pool on /dev/md2
  • copy all data from /dev/md0 to /dev/md2 (I can move a couple hundred GB to the laptop or an external HDD if /dev/md2 is too small)
  • wipe /dev/md0 and turn it into a new physical volume
  • add /dev/md0 to the LVM volume group
  • enjoy my 14TB LVM + RAID5 array, give or take some overhead (a rough command sketch follows below)
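
To make it concrete, here is roughly how I picture the commands; device names like /dev/sdb and the vg_storage/lv_media names are placeholders, and this is untested, so please correct me where I am wrong:

    # RAID0 from the 1TB disk and the two 1TB partitions carved out of the 2TB disk
    mdadm --create /dev/md1 --level=0 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdc2

    # second RAID5 from the two 3TB disks plus /dev/md1 as the third member
    mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sdd /dev/sde /dev/md1

    # LVM on top of /dev/md2, then copy the data over
    pvcreate /dev/md2
    vgcreate vg_storage /dev/md2
    lvcreate -l 100%FREE -n lv_media vg_storage
    mkfs.ext4 /dev/vg_storage/lv_media

    # once the data is moved: wipe /dev/md0 and fold it into the pool
    wipefs -a /dev/md0
    pvcreate /dev/md0
    vgextend vg_storage /dev/md0
    lvextend -l +100%FREE /dev/vg_storage/lv_media
    resize2fs /dev/vg_storage/lv_media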

Now come the questions:

  1. Is this possible at all?
  2. What are the risks of such an implementation? As I see it, both /dev/md0 and /dev/md2 are one-disk fault tolerant, and /dev/md1 is just a cheap way to avoid buying a new disk, so I do not see much increased risk compared with the existing situation (except that I now have two arrays instead of one).
  3. Does the migration process need any specific care?
  4. Performance-wise, what is the expected overhead, if any, compared to my existing 3x4TB RAID5 (mdadm) setup?
  5. How difficult would it be to replace /dev/md1 with a physical 3TB disk down the road? (See my rough sketch after this list.)
  6. How portable is this storage pool? I will surely replace the motherboard+CPU+RAM sooner or later when I go for a 4K TV :) Will it be possible to move all these disks to a new system?
  7. Concerning expansion: if I just add a 4TB drive to /dev/md0, will LVM handle this properly and give me the extra 4TB in the pool? My plan is rather to either replace one of the arrays with one made of bigger disks, or to add another 3-disk array to the pool (physical space will then be the concern). Again, see the sketch below.
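
For questions 5 and 7, my untested understanding is that it would look something like this (again, /dev/sdX and vg_storage/lv_media are placeholders):

    # Q5: swap /dev/md1 for a real 3TB disk inside /dev/md2
    mdadm /dev/md2 --add /dev/sdf      # sdf = the hypothetical new 3TB disk, added as spare
    mdadm /dev/md2 --fail /dev/md1     # array rebuilds onto the new disk
    mdadm /dev/md2 --remove /dev/md1   # afterwards md1 can be stopped and its disks reused

    # Q7: grow /dev/md0 to 4 devices, then tell LVM about the new space
    mdadm /dev/md0 --add /dev/sdg
    mdadm --grow /dev/md0 --raid-devices=4   # long reshape
    pvresize /dev/md0
    lvextend -l +100%FREE /dev/vg_storage/lv_media
    resize2fs /dev/vg_storage/lv_media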

 

Thanks in advance!


For your use, I'd stay away from mdadm.

 

I'd look into something like SnapRAID or Btrfs.

 

SnapRAID would just store the data on the disks, with the biggest drive holding parity, and UnionFS would make all of the drives appear as one file system.
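
A minimal snapraid.conf sketch, just to show the shape of it (the paths are examples, adjust to your mounts):

    # /etc/snapraid.conf
    parity /mnt/parity1/snapraid.parity    # biggest drive holds parity

    content /var/snapraid.content          # keep content files on more than one disk
    content /mnt/disk1/snapraid.content

    disk d1 /mnt/disk1
    disk d2 /mnt/disk2
    disk d3 /mnt/disk3

Then you run "snapraid sync" after adding files, and "snapraid scrub" now and then to check for silent errors.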


Btrfs lets you use RAID with mismatched drives. RAID 1/10/0 work fine; RAID 5/6 have known issues, so I'd make sure you have a backup if you're using those, but it should work. Here is a space calculator: http://carfax.org.uk/btrfs-usage/
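
For example, pooling mismatched disks with the safe RAID1 profile would look like this (device names are examples):

    # data and metadata mirrored across three mismatched disks
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd
    mount /dev/sdb /mnt/pool

    # add another disk later and rebalance across all members
    btrfs device add /dev/sde /mnt/pool
    btrfs balance start /mnt/pool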


As for the setup you want to build: you can do it, but it gets messy if anything goes wrong, and performance will be bad (a RAID built on two partitions of the same disk performs terribly).


Thanks for the feedback. I was expecting the performance of the two-partition setup to be lower, though I wonder what "crap" means in practice. As I said, my needs are rather modest. Could you give more insight?

I already looked into UnionFS; it bothers me a bit that it does not provide redundancy (at least, I did not see that it does), and the same goes for mhddfs.

Btrfs having issues with RAID5/6 does not make me feel confident about storing my terabytes of data on it.

The last suggestion, SnapRAID, is a nice one; I did not know it, thanks for the pointer. I especially like that it is built with media servers as the main target :)

Thanks for that


13 hours ago, Memes11 said:

I already looked into UnionFS; it bothers me a bit that it does not provide redundancy (at least, I did not see that it does), and the same goes for mhddfs.

You mix it with SnapRAID for redundancy.
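
For the pooling layer, mergerfs is a commonly used alternative to UnionFS/mhddfs; a single fstab line like this (paths hypothetical) presents the data disks as one mount, while SnapRAID handles parity underneath:

    /mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/pool  fuse.mergerfs  defaults,allow_other  0 0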

 

As for performance with partitions: I just tested RAID0 with two partitions on one drive and got around 30 MB/s write, compared to the drive's 80 MB/s maximum. This will be slow.
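
If you want to reproduce the test, it was roughly this (partition names are examples; your numbers will differ):

    # stripe two partitions that live on the same drive
    mdadm --create /dev/md9 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdb2
    mkfs.ext4 /dev/md9
    mount /dev/md9 /mnt/test

    # sequential write test; every stripe forces the head to seek between the two partitions
    dd if=/dev/zero of=/mnt/test/bench bs=1M count=2048 oflag=direct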


IMO... don't create a RAID0 of two partitions that are on the same physical disk.

