I want to create a Raid 01 and I am running into some trouble with the setup defaulting to a Raid 10 layout.

Here is the disk layout output from my SSH session:

```
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
NAME          SIZE FSTYPE            TYPE  MOUNTPOINT
loop0          91M squashfs          loop  /snap/core/6350
loop1          91M squashfs          loop  /snap/core/6405
sda         931.5G                   disk
└─sda1      931.5G linux_raid_member part
  └─md0       2.7T linux_raid_member raid0
    └─md1     2.7T                   raid1
sdb           2.7T linux_raid_member disk
└─md1         2.7T                   raid1
sdc         931.5G                   disk
└─sdc1      931.5G linux_raid_member part
  └─md0       2.7T linux_raid_member raid0
    └─md1     2.7T                   raid1
sdd         931.5G                   disk
└─sdd1      931.5G linux_raid_member part
  └─md0       2.7T linux_raid_member raid0
    └─md1     2.7T                   raid1
sde         149.1G                   disk
├─sde1          1M                   part
└─sde2        149G ext4              part  /
```

As you can see here, it is defaulting to using my sdb as the primary device rather than the md0 array, i.e. Raid 10. Does anyone know how to configure it the other way around? Oh, if it helps, I am using mdadm to create the arrays.


Is there any particular reason you want to use raid 01 over raid 10? I actually had them mixed up myself, but some quick searching shows that raid 10 is generally accepted to be superior to raid 01.

 

Probably as a result of that, it looks like you have to manually build a raid 01 array (i.e. create two raid 0 arrays, and then build a raid 1 array out of them), whereas mdadm will let you just build a raid 10 array all at once.
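For the manual route, a rough sketch of the two-step build with mdadm might look like this. The device names are taken from the lsblk output earlier in the thread, so double-check them against your own layout before running anything:

```shell
# Step 1: stripe the three 1 TB partitions into a single raid 0 array
mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sda1 /dev/sdc1 /dev/sdd1

# Step 2: mirror the resulting ~2.7T array against the single 3 TB drive
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/md0 /dev/sdb
```

The one-shot raid 10 equivalent would be something like `mdadm --create /dev/md0 --level=10 --raid-devices=4 ...`, but that stripes mirrored pairs rather than mirroring a stripe, which is exactly the difference being discussed here.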

 

 


It's because I have three 1 TB drives that I want to stripe, and then mirror to a 3 TB drive. mdadm seems to default to raid 10 even if I first create the raid 0 out of the three 1 TB drives and then create a raid 1 of md0 and sdb. Would you know another utility that would allow me to do a 0+1?


I think raid 5 might be better for what you are trying to do. With the setup you are describing, the software is likely going to choke somewhere trying to buffer all of the data that needs to be written to the single drive after the changes have been made on the 3-drive array, assuming it's even nice enough to buffer it for you. The goal of raid 1 is to keep both disks (physical or virtual) in unison at all times, so there's a good chance most software will just wait for the changes to hit the single drive before doing anything else, which will effectively limit the 3-drive raid 0 array to single-drive speeds. At least for writes.

 

Raid 5 spreads a single disk's worth of parity across all drives in the array. So any single drive can die and you don't lose any data. It would also make adding drives to the array in the future much simpler.
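To make that concrete, here is what the raid 5 route looks like with mdadm, including the later expansion. This is a sketch assuming four whole, equal-size drives at the example device names, not the poster's actual mixed layout:

```shell
# Four-drive raid 5: one drive's worth of capacity goes to parity,
# spread across all members, so any single drive can fail safely
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Adding a fifth drive later is a two-step add + grow
mdadm --add /dev/md0 /dev/sdf
mdadm --grow /dev/md0 --raid-devices=5
```

The grow operation reshapes the array in place, which can take many hours on large drives, but the array stays usable while it runs.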

 

I also feel inclined to post the typical "Raid isn't a backup" bit here. If your config somehow gets mangled, it's possible to lose an entire array's worth of data. At least with software raid you don't need to worry about your HW raid controller failing. Alas, very few of us actually have proper data safety procedures...


What if I instead set the three up in raid 0 and then use the fourth by itself as a backup? I just don't want that 4th drive to lose 2 TB of space if I set it up in a raid 5 configuration.


3 of your disks (2.7TB each) will come out to 8.1TB in raid 0. So your original idea...

Oh, I just realized you can -not- do raid 1 lopsided because both arrays need to be the same size. Duh. Sorry, I was up late.

So scrap that whole lopsided raid 01 idea; it doesn't work. Now, back to a raid 0 + backup:

 

3 of your drives in raid 0 come out to 8.1TB. Also keep in mind you only have 2.7TB of space on that 4th drive to back up data from your "I hate my data but I want it fast" raid 0 array.

 

With four drives in raid 5, the total array size will also be 8.1TB. Raid 5 won't be quite as fast as raid 10, but it will be more expandable in the future. If your goal is just bulk storage, you'd probably be fine with raid 5. If you plan on running a big database or something of the sort, then raid 10 might be better.

 

 


3 drives are 1 TB each.

1 drive is 3 TB.

So if I raid 5 them, I would lose 2 TB on the 3 TB drive.

But if I was able to use 01, I could read and write to the three 1 TB drives, and then mirror the three to the single 3 TB drive. But it looks like mdadm does not support this. Seems that I can settle for just backing up the striped array to the single drive that matches the array in capacity. So I will go with this instead. It just takes a bit more effort to automate.


9 hours ago, soulreaper11207 said:

3 drives are 1 TB each.

1 drive is 3 TB.

Ah, my mistake. I overlooked that.

9 hours ago, soulreaper11207 said:

But if I was able to use 01, I could read and write to the three 1 TB drives, and then mirror the three to the single 3 TB drive. But it looks like mdadm does not support this. Seems that I can settle for just backing up the striped array to the single drive that matches the array in capacity. So I will go with this instead. It just takes a bit more effort to automate.

It should work, but it will also be a bit of a hack. Having hacks managing your data isn't something I personally like, but the decision is ultimately up to you.

I'd highly recommend picking up a 6 or 8 TB external and doing manual/automated backups to that, and then throwing the preexisting drives you have in JBOD. Huge externals are relatively cheap, and you could even put the external in a closet hooked up to a Pi or something.

