
Rebuild time for 108TB (6x18TB) Synology NAS in RAID6 after drive failure

Sam I Am Not

Is there any way to calculate an estimated rebuild time for large RAID arrays when a drive fails? I found a rebuild calculator at memtest dot com, but it only has checkboxes up to 5TB and I don't know what you'd enter in the "Rebuild Time" field. The default "Rebuild Time" is 10MB/s, which seems insanely low.

 

I was planning to set up a NAS in RAID 6 (6x18TB HDDs plus 2x2TB NVMe cache drives) in a Synology DS1621xs+, but now I'm getting cold feet after learning about rebuild times. I've found people posting rebuild times online, but they vary greatly: some aren't too bad per TB, while others are alarmingly slow.

 

For example, one person said a 3TB drive failed in their 4x3TB RAID 5 array and the rebuild took 33 days. At that rate of 11 days per TB, an 18TB drive would take 198 days, which is not an option for me.
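(For anyone checking my math, that's just straight-line scaling of the numbers that person reported, not a prediction:)

    reported_days, reported_tb = 33, 3          # the 3TB rebuild that reportedly took 33 days
    days_per_tb = reported_days / reported_tb   # = 11 days per TB
    print(days_per_tb * 18)                     # 198.0 days for an 18TB drive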

 

TL;DR: I need a SAFE mass storage solution for video footage that won't take several months to rebuild if I encounter a drive failure. Right now I've got all my projects spread across 5 external HDDs (1x20TB, 1x12TB, 1x10TB, and 2x8TB) that are all in RAID0. I've been pushing my luck (knocks on wood) without a drive failure for years, but I know it's only a matter of time before 1 of the 10 drives inside those 5 striped externals fails and I lose an entire drive's worth of data.

 

Any help/advice is greatly appreciated!


1 minute ago, Sam I Am Not said:

For example, one person said a 3TB drive failed in their 4x3TB RAID 5 array and the rebuild took 33 days. At that rate of 11 days per TB, an 18TB drive would take 198 days, which is not an option for me.

I have never used Synology, but in TrueNAS I can rebuild a 4TB drive in under a day. I don't remember the exact time, but the last time I had to swap a drive I believe it had finished resilvering before I woke up the next morning. The only reason I can think of for it taking 33 days is if the drives were SMR and/or the NAS itself was just garbage tier. You can completely rewrite a 10TB WD Red in under a day, for example, but an SMR drive would take way, way longer.

Rig: i7 13700k - - Asus Z790-P Wifi - - RTX 4080 - - 4x16GB 6000MHz - - Samsung 990 Pro 2TB NVMe Boot + Main Programs - - Assorted SATA SSD's for Photo Work - - Corsair RM850x - - Sound BlasterX EA-5 - - Corsair XC8 JTC Edition - - Corsair GPU Full Cover GPU Block - - XT45 X-Flow 420 + UT60 280 rads - - EK XRES RGB PWM - - Fractal Define S2 - - Acer Predator X34 -- Logitech G502 - - Logitech G710+ - - Logitech Z5500 - - LTT Deskpad

 

Headphones/amp/dac: Schiit Lyr 3 - - Fostex TR-X00 - - Sennheiser HD 6xx

 

Homelab/ Media Server: Proxmox VE host - - 512 NVMe Samsung 980 RAID Z1 for VM's/Proxmox boot - - Xeon E5 2660 V4 - - Supermicro X10SRF-i - - 128 GB ECC 2133 - - 10x4 TB WD Red RAID Z2 - - Corsair 750D - - Corsair RM650i - - Dell H310 6Gbps SAS HBA - - Intel RES2SC240 SAS Expander - - TrueNAS + many other VM’s

 

iPhone 14 Pro - 2018 MacBook Air


There are a lot of factors, but I'd guess it can rebuild at 100MB/s or more, so 18TB at 100MB/s works out to about 2 days, which would be my maximum reasonable estimate. But this really depends on the amount of I/O going on at the same time and many other factors. I wouldn't worry about rebuild times here.
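A quick sketch of that napkin math, assuming the rebuild simply has to rewrite the full capacity of the replacement drive at a roughly constant rate (the speeds below are just the figures mentioned in this thread, not measurements):

    def rebuild_days(capacity_tb: float, speed_mb_s: float) -> float:
        """Days to rewrite a whole drive at a constant rate (decimal units: 1TB = 1,000,000MB)."""
        return capacity_tb * 1_000_000 / speed_mb_s / 86_400  # 86,400 seconds per day

    print(rebuild_days(18, 100))   # ~2.1 days at 100MB/s
    print(rebuild_days(18, 10))    # ~20.8 days at the calculator's 10MB/s default
    print(rebuild_days(3, 1.05))   # ~33 days, the effective rate behind that horror story

So the 33-day story works out to an effective rate of roughly 1MB/s, which says more about that particular setup than about RAID rebuilds in general.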


5 minutes ago, LIGISTX said:

The only reason I can think of for it taking 33 days is if the drives were SMR

Are there still SMR drives being sold today? Sounds like they're something to avoid. I Googled "18TB IronWolf Pro SMR" and couldn't find anything saying that they're SMR. Those are the drives I'm going with; there's a good deal on them right now at B&H Photo Video.


21 minutes ago, Sam I Am Not said:

5 external HDDs (1x20TB, 1x12TB, 1x10TB, and 2x8TB) that are all in RAID0

my dude.... 😲

 

I've personally never had to deal with rebuilding an array, but I'd highly suggest a backup solution in addition to your redundant array.

That way, even IF your array takes multiple days to rebuild, you can still continue working on your stuff.

 

Fortunately, that is incredibly easy with Synology. So please do yourself a favor and get a second NAS to back up your primary NAS onto.

It doesn't have to be as fancy with RAID 6 and all, just something for the worst-case scenario.

Gaming HTPC:

R5 5600X - Cryorig C7 - Asus ROG B350-i - EVGA RTX2060KO - 16gb G.Skill Ripjaws V 3333mhz - Corsair SF450 - 500gb 960 EVO - LianLi TU100B


Desktop PC:
R9 3900X - Peerless Assassin 120 SE - Asus Prime X570 Pro - Powercolor 7900XT - 32gb LPX 3200mhz - Corsair SF750 Platinum - 1TB WD SN850X - CoolerMaster NR200 White - Gigabyte M27Q-SA - Corsair K70 Rapidfire - Logitech MX518 Legendary - HyperXCloud Alpha wireless


Boss-NAS [Build Log]:
R5 2400G - Noctua NH-D14 - Asus Prime X370-Pro - 16gb G.Skill Aegis 3000mhz - Seasonic Focus Platinum 550W - Fractal Design R5 - 
250gb 970 Evo (OS) - 2x500gb 860 Evo (Raid0) - 6x4TB WD Red (RaidZ2)

Synology-NAS:
DS920+
2x4TB Ironwolf - 1x18TB Seagate Exos X20

 

Audio Gear:

Hifiman HE-400i - Kennerton Magister - Beyerdynamic DT880 250Ohm - AKG K7XX - Fostex TH-X00 - O2 Amp/DAC Combo - 
Klipsch RP280F - Klipsch RP160M - Klipsch RP440C - Yamaha RX-V479

 

Reviews and Stuff:

GTX 780 DCU2 // 8600GTS // Hifiman HE-400i // Kennerton Magister
Folding all the Proteins! // Boincerino

Useful Links:
Do you need an AMP/DAC? // Recommended Audio Gear // PSU Tier List 


16 minutes ago, Electronics Wizardy said:

There are a lot of factors, but I'd guess it can rebuild at 100MB/s or more, so 18TB at 100MB/s works out to about 2 days, which would be my maximum reasonable estimate. But this really depends on the amount of I/O going on at the same time and many other factors. I wouldn't worry about rebuild times here.

Two days is nothing. Up to a month would be the longest rebuild time I'd find acceptable. By I/O, you mean the drive being in use while it's rebuilding? I'm guessing the rebuild is faster if it's doing nothing but the rebuild? If so, my I/O would be next to nothing, if anything. My work is done 100% on NVMe storage and then archived to HDD storage after the job is wrapped.

 

Any idea if more RAM or the NVMe cache speeds up rebuild times at all? The RAM upgrades with Synology are insanely overpriced ($700 for 32GB) but if more RAM makes a big difference then I might stomach the upgrade cost.


1 minute ago, Sam I Am Not said:

By I/O, you mean the drive being in use while it's rebuilding?

Yup. The rebuild normally runs at low priority so regular performance isn't affected while it's running. You can often change this priority.
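Under the hood, DSM builds its arrays on Linux md, so if you can get a shell on the box the standard md tunables should be there. A rough sketch for checking them (this assumes the stock Linux md paths; whether your DSM version exposes SSH and these exact files is something to verify yourself):

    from pathlib import Path

    def read(path: str) -> str:
        """Return the contents of a /proc or /sys file, or 'n/a' if it isn't there."""
        p = Path(path)
        return p.read_text().strip() if p.exists() else "n/a"

    # Current resync/rebuild progress for all md arrays
    print(read("/proc/mdstat"))

    # md rebuild rate limits in KB/s per disk; raising speed_limit_min (as root)
    # prioritizes the rebuild over regular I/O
    print("min:", read("/proc/sys/dev/raid/speed_limit_min"))
    print("max:", read("/proc/sys/dev/raid/speed_limit_max"))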

 

2 minutes ago, Sam I Am Not said:

Any idea if more RAM or the NVMe cache speeds up rebuild times at all?

They shouldn't affect rebuild times; the rebuild is basically limited only by the speed of the disks. It's going to be all disk-I/O limited here.

 

4 minutes ago, Sam I Am Not said:

The RAM upgrades with Synology are insanely overpriced ($700 for 32GB) but if more RAM makes a big difference then I might stomach the upgrade cost.

You can put generic RAM in Synology units, so I'd just buy third-party RAM for much cheaper if you need it. But adding RAM shouldn't affect rebuild times.

 

If you want to save a bit, I'd skip those NVMe drives if the array is just used to archive your files, as the cache drives shouldn't affect performance.


2 minutes ago, FloRolf said:

my dude.... 😲

You don't know the half of it. I just spent two days going through my 29 hard drives (a mix of external and internal) and making a huge document of the capacities and contents of each so I can load them all onto a single NAS. Out of 29 hard drives, mostly dating from 2003 to now, only one was dead. And surprisingly, an old 160GB external FireWire drive from 1998 (the size of a large dictionary) mounted and still worked. I was sure that one wouldn't even power up.

 

But yeah, I definitely need redundancy in my life.


18 minutes ago, Electronics Wizardy said:

If you want to save a bit, I'd skip those NVMe drives if the array is just used to archive your files, as the cache drives shouldn't affect performance.

I regularly do single footage dumps in excess of 1TB, and my understanding was that the NVMe cache would speed up those transfers since the NVMe has a write speed of around 3500MB/s, albeit limited to around 1000MB/s or whatever the limit of the 10GbE connection is. I thought the file transfers would quickly write everything to the NVMe cache, and the HDDs could copy from that over time while they play catch-up with their slower transfer speeds.


13 minutes ago, Sam I Am Not said:

Up to a month would be the longest rebuild time I'd find acceptable.

You wouldn't want this at all… a second drive can easily fail during that amount of time, putting you in a really bad spot. 



17 minutes ago, Sam I Am Not said:

I regularly do single footage dumps in excess of 1TB, and my understanding was that the NVMe cache would speed up those transfers since the NVMe has a write speed of around 3500MB/s, albeit limited to around 1000MB/s or whatever the limit of the 10GbE connection is. I thought the file transfers would quickly write everything to the NVMe cache, and the HDDs could copy from that over time while they play catch-up with their slower transfer speeds.

I'm not sure how well the cache works for copying large files, as it seems to be designed for random I/O. And I'd guess the HDD array is going to get pretty close to saturating 10GbE with just the disks. I'd probably try it with no cache first.

