
How to create a RAID-Z2 and add a drive later (TrueNAS SCALE)

Hi, 

I bought 5x 18TB drives, and I plan to set them up in RAID-Z2 on my TrueNAS SCALE system.

Unfortunately, one of the drives was damaged in shipping. The drive is detected by the system; however, when I try to create and format a pool, the operation fails.

I am short on time, and I need to have the system up and running in 2 days, so I can't wait for the RMA.

I know that RAID-Z2 can survive up to two drive failures, so I was wondering if there is a way to set up a five-drive RAID-Z2 pool with only four drives, then add the replacement drive once it arrives. I thought about creating the pool with all five drives and then pulling one, except I don't have an extra 18TB drive lying around to stand in for the damaged one.

Thoughts?


You can't add a drive to a vdev after the fact, and adding a lone drive to a pool as its own vdev is an incredibly bad idea because if that one drive dies you'll lose the entire pool. (I believe arbitrarily adding drives is a feature in development, but to my knowledge it's not part of any production version of ZFS yet.)

 

You have a few options:

 

1. Build a four-drive vdev, copy your files onto it, and RMA your bad drive. When the replacement arrives, lifeboat all the data onto another drive, erase the pool, and rebuild the pool with a single five-drive RAIDz2 vdev.

 

2. Build a three-drive RAIDz1 vdev, RMA your drive, and then add its replacement plus the remaining drive you already have to the pool as a mirrored vdev. You'll be able to lose one drive out of each vdev before you're at risk of data loss (basically RAID 50). Rough commands are sketched after this list.

 

3. If you can afford a sixth drive, do plan #2 but at the end build a second RAIDz1 vdev out of the three remaining drives. 
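If you go the #2 route, the rough shell equivalent looks something like this (pool and disk names are placeholders; on TrueNAS SCALE you'd normally do the same thing through the Storage UI rather than raw zpool commands):

  # today: create the pool with a three-drive RAIDz1 vdev
  zpool create tank raidz1 sda sdb sdc

  # later, when the RMA replacement shows up:
  # stripe a two-drive mirror vdev into the existing pool
  zpool add tank mirror sdd sde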



8 minutes ago, Needfuldoer said:

You can't add a drive to a vdev after the fact, and adding a lone drive to a pool as its own vdev is an incredibly bad idea because if that one drive dies you'll lose the entire pool.

I know the risk of losing one drive, but the chances of a new drive failing during its first month of use are very slim. Also, it would make the later setup as simple as plugging in the extra drive.

 

  1. I don't have a lifeboat drive big enough to migrate the data to.
  2. I can't afford a 6th drive; they are hella expensive, and it wouldn't fit in the case anyway.

 


23 minutes ago, Night Watcher said:

I know the risk of losing one drive, but chances of a new drive failing during the first month of use are very slim.

What about the second month? Or the third? Or fourth? Drives fail either early in their life, or after many, many hours of service. It's a bathtub curve.

 

A pool with a single-drive vdev may as well be a stripe with no redundancy at all, in my opinion. A pool is only as resilient as its weakest vdev.

 

23 minutes ago, Night Watcher said:

I can't afford a 6th drive; they are hella expensive, and it wouldn't fit in the case anyway.

Then I think option 2 is your best bet: build a three-drive RAIDz1 now and add the other two drives (the one you don't deploy right away and the RMA replacement) to the pool as a mirrored vdev later. You'll lose the same number of drives to parity as you would with a five-drive RAIDz2, and you'll only lose data if you lose two drives from the same vdev at the same time.

 

Either that or run Unraid instead, which lets you add drives to an array one at a time. (I think it's Linux kernel RAID under the hood instead of ZFS.)



8 minutes ago, Night Watcher said:

I know the risk of losing one drive, but chances of a new drive failing during the first month of use are very slim.

I’d argue the opposite: early failures are probably more common than later failures. Make sure you ALWAYS burn drives in before deploying them into use; that’s where you sort out infant mortality. Run a long SMART test, then run badblocks on the drives (an 18TB drive will probably take 5 days to run through a full badblocks pass, maybe longer).
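For the SMART part, something like this works from a shell (the device name is a placeholder, swap in your actual disk):

  # start a long (extended) SMART self-test; non-destructive, runs in the background
  smartctl -t long /dev/sdX

  # once it's done, check the self-test log and overall health
  smartctl -a /dev/sdX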

 

In your case, the only real option is to build it as a Z1 with a single drive of redundancy. There is no way to add a drive to a vdev after the fact; RAIDz expansion is in beta in mainline ZFS, but I’d anticipate it not landing in TrueNAS SCALE until 2025.
 

If you add a single drive later on as its own vdev in the same pool, you effectively no longer have any redundancy, which is why @Needfuldoer correctly recommended not adding a lone drive. If that one drive dies, all data in the entire pool is lost.



Thank you for letting me know about burn-in tests, I will perform those ASAP!

I just learned that I can use a lower-capacity drive as the 5th drive; the pool's capacity will be limited by its smallest drive, but when I replace that drive later the pool will be restored to full capacity. That works perfectly for me.
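If I'm reading the docs right, the eventual swap should be something like this (pool and device names are placeholders, and the SCALE UI exposes the same replace operation):

  # let the pool grow once every disk in the vdev is full size
  zpool set autoexpand=on tank

  # swap the undersized stand-in for the 18TB replacement; a resilver follows
  zpool replace tank sdX sdY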


14 minutes ago, Night Watcher said:

Thank you for letting me know about burn-in tests, I will perform those ASAP!

I just learned that I can use a lower-capacity drive as the 5th drive; the pool's capacity will be limited by its smallest drive, but when I replace that drive later the pool will be restored to full capacity. That works perfectly for me.

If you are doing the burn-in with TrueNAS, what I do is run badblocks with a block size of 4096 and then a long SMART test.
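In other words, something like this per drive (the write-mode badblocks test is destructive, so only run it before the drive holds any data; the device name is a placeholder):

  # destructive write-mode test with 4096-byte blocks, progress shown; this wipes the drive
  badblocks -b 4096 -ws /dev/sdX

  # then kick off the long SMART self-test
  smartctl -t long /dev/sdX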

