
TrueNAS HDD Recommendations

I know iXsystems recommends WD's Red Plus line of NAS HDDs for use with TrueNAS... but at the same time I've seen others use and recommend Seagate's IronWolf Pro line of NAS HDDs.

 

Which one should someone building a new NAS go with? I've used mainly WD for my past builds and am leaning toward the Red Pro line for this new build, but I'm open to having my mind changed.


11 minutes ago, t3ch_n1nj4 said:

I know iXsystems recommends WD's Red Plus line of NAS HDDs for use with TrueNAS... but at the same time I've seen others use and recommend Seagate's IronWolf Pro line of NAS HDDs.

 

Which one should someone building a new NAS go with? I've used mainly WD for my past builds and am leaning toward the Red Pro line for this new build, but I'm open to having my mind changed.

Red non-Pros are what I and many others use. IronWolfs are fine as well. Hell, many people have used WD Greens… I wouldn't do that personally, as Greens and other non-NAS drives have been known to drop out of arrays randomly.


I've only used WD red/white drives for my NAS boxes.  If the data isn't critical you can get away with non-NAS specific drives as well.

 

It all comes down to your level of risk.

43 minutes ago, t3ch_n1nj4 said:

I know iXsystems recommends WD's Red Plus line of NAS HDDs for use with TrueNAS... but at the same time I've seen others use and recommend Seagate's IronWolf Pro line of NAS HDDs.

 

Which one should someone building a new NAS go with? I've used mainly WD for my past builds and am leaning toward the Red Pro line for this new build, but I'm open to having my mind changed.

How many drives are you planning to use? What configuration? (Mirrored, RAIDZ1, RAIDZ2, etc) What kind of case will they be in?

 

TrueNAS can use literally any drive that's compatible with your system hardware. My main array in TrueNAS uses 6x 3TB Toshiba drives and they've been nearly flawless (I think I've had one failure in about 10 years). My secondary array uses 2x 4TB WD Reds (not the Pros). I just bought four more of the WD Reds, which I'll be putting to use soon.

 

Red Pros just have tighter QA and a slightly better mean time between failures (MTBF). They're better drives, but the difference might not be significant for you.

 

IronWolf and IronWolf Pro drives are essentially equivalent to their WD counterparts (Red Plus and Red Pro), so there's no problem substituting Reds for IronWolfs.


33 minutes ago, Velcade said:

I've only used WD red/white drives for my NAS boxes.  If the data isn't critical you can get away with non-NAS specific drives as well.

 

It all comes down to your level of risk.

Backups are far more important than the specific model of HDD you buy. If the data is important, make sure there are backups (and RAID does not count as a backup).

 

Increasing the fault tolerance can also help (using RAIDZ2 instead of RAIDZ1 for example).


35 minutes ago, LIGISTX said:

Red non-Pros are what I and many others use. IronWolfs are fine as well. Hell, many people have used WD Greens… I wouldn't do that personally, as Greens and other non-NAS drives have been known to drop out of arrays randomly.

This is less of a problem with TrueNAS than it is with hardware RAID controllers.

 

The reason that happens comes down to TLER - Time Limited Error Recovery. Consumer drives don't have it: when they hit a read or write error, they'll spend a very long time trying to fix/recover it on their own. NAS and enterprise drives have TLER (or an equivalent), so they give up after a few seconds and report the error, letting the RAID controller do its thing.

 

When a drive with a long error-recovery timeout sits in a RAID array, it can get dropped from the array, because the controller just assumes it's a dead drive. Greens had this issue a lot back in the day, and there was a utility you could use to adjust the TLER timeout on some WD drives - no idea if that's still a thing.
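
For what it's worth, the modern equivalent of that old utility is the drive's SCT Error Recovery Control (SCT ERC) setting, which smartmontools can read and, on drives that support it, set. A minimal sketch, assuming smartmontools is installed and treating /dev/sda as a placeholder path:

```python
# Sketch only: query (and optionally cap) a drive's SCT Error Recovery
# Control timeout - the modern equivalent of the old TLER tweak.
# Assumes smartmontools is installed and the drive supports SCT ERC;
# "/dev/sda" is a placeholder device path. Needs root.
import subprocess

def read_scterc(device: str) -> str:
    """Show the current SCT ERC read/write timeouts (units of 100 ms)."""
    out = subprocess.run(["smartctl", "-l", "scterc", device],
                         capture_output=True, text=True, check=True)
    return out.stdout

def set_scterc(device: str, deciseconds: int = 70) -> None:
    """Cap error recovery at e.g. 7.0 s so the drive reports the error
    instead of stalling the pool. On many drives this does not persist
    across power cycles, so people rerun it from a boot script."""
    subprocess.run(["smartctl", "-l", f"scterc,{deciseconds},{deciseconds}", device],
                   check=True)

if __name__ == "__main__":
    print(read_scterc("/dev/sda"))
```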


5 hours ago, dalekphalm said:

This is less of a problem with TrueNAS than it is with hardware RAID controllers.

 

The reason that happens comes down to TLER - Time Limited Error Recovery. Consumer drives don't have it: when they hit a read or write error, they'll spend a very long time trying to fix/recover it on their own. NAS and enterprise drives have TLER (or an equivalent), so they give up after a few seconds and report the error, letting the RAID controller do its thing.

 

When a drive with a long error-recovery timeout sits in a RAID array, it can get dropped from the array, because the controller just assumes it's a dead drive. Greens had this issue a lot back in the day, and there was a utility you could use to adjust the TLER timeout on some WD drives - no idea if that's still a thing.

Many folks used that utility on Greens to help them stay in ZFS arrays, but I'm also not sure if it still exists or is kept updated.

Greens can still get dropped from a ZFS pool much like they would from a RAID controller, from my understanding, but it's less common than it was with a hardware controller.


8 hours ago, dalekphalm said:

How many drives are you planning to use? What configuration? (Mirrored, RAIDZ1, RAIDZ2, etc) What kind of case will they be in?

 

TrueNAS can use literally any drive that's compatible with your system hardware. My main array in TrueNAS uses 6x 3TB Toshiba drives and they've been nearly flawless (I think I've had one failure in about 10 years). My secondary array uses 2x 4TB WD Reds (not the Pros). I just bought four more of the WD Reds, which I'll be putting to use soon.

 

Red Pros just have tighter QA and a slightly better mean time between failures (MTBF). They're better drives, but the difference might not be significant for you.

 

IronWolf and IronWolf Pro drives are essentially equivalent to their WD counterparts (Red Plus and Red Pro), so there's no problem substituting Reds for IronWolfs.

I appreciate your response, and those are great questions to ask.

 

The NAS I'm building is going to start out as just a 2-HDD build set up in a RAID1 (mirror) configuration. The case is yet to be determined, as I may opt to go the eco-friendly route and reuse an old prebuilt system to host the server.

 

I'm currently leaning towards some 10TB WD Red Pro HDDs, as they're on sale and priced to be budget-friendly. However, WD also has their 14TB Red Pro HDDs on sale, currently $264 off MSRP... so I may go with those, seeing as the total price difference between 2x 10TB and 2x 14TB HDDs is currently only $157.50.

 

I know the IronWolf HDDs are priced higher than WD's Red Pro pricing, but if there's really no remarkable difference to justify spending more on IronWolf HDDs, I may as well stick to the plan, go with Red Pros, and save a few dollars.
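
If it helps frame the upsize decision, the marginal math on that premium works out as below (a quick sketch; cost_per_tb is just a placeholder helper for plugging in the actual sale prices):

```python
# The $157.50 premium for the 14TB pair over the 10TB pair buys
# 2 x 4 TB = 8 TB of extra raw capacity.
premium = 157.50
extra_tb = 2 * (14 - 10)
print(f"${premium / extra_tb:.2f} per extra TB")  # -> $19.69 per extra TB

def cost_per_tb(price: float, tb: int) -> float:
    """Straight $/TB for one drive; plug in the actual sale prices."""
    return price / tb
```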


8 hours ago, dalekphalm said:

Backups are far more important than the specific model of HDD you buy. If the data is important, make sure there are backups (and RAID does not count as a backup).

 

Increasing the fault tolerance can also help (using RAIDZ2 instead of RAIDZ1 for example).

If I were to go with a RAIDZ2 configuration, could that work with just two matching HDDs?


1 hour ago, t3ch_n1nj4 said:

If I were to go with a RAIDZ2 configuration, could that work with just two matching HDDs?

No - RAIDZ2 needs at least 4 drives (and really starts to make sense at 5+); with 2 drives your only option is a mirror. The classic optimal Z2 pool is 6 drives, but it works fine with 5.

I also have a list of grievances with this thread:

  1. Do NOT use WD Greens for ZFS. They tend to fall over within weeks of putting real data on 'em, as will any consumer-grade SMR drive. Scrubs and resilvers just don't agree with SMR.
  2. Both Seagate and WD now label which NAS drives are CMR (which is what you want for ZFS) on the product listing.
  3. WD Red Plus models are currently the best TB/$ ratio for NAS drives in Canada, with the 4TB (WD40EFZX) at $95 and the 6TB (WD60EFZX) at $135 on Amazon (quick $/TB math below).

Happy hoarding!
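
To put the TB/$ point in numbers, a trivial sketch using the two prices quoted above (CAD, capacities in decimal TB as marketed):

```python
# $/TB for the Red Plus prices quoted above (CAD, Amazon at the time of
# posting; capacities are decimal TB as marketed).
drives = {
    "WD Red Plus 4TB (WD40EFZX)": (95.0, 4),
    "WD Red Plus 6TB (WD60EFZX)": (135.0, 6),
}
for name, (price, tb) in drives.items():
    print(f"{name}: ${price / tb:.2f}/TB")
# WD Red Plus 4TB (WD40EFZX): $23.75/TB
# WD Red Plus 6TB (WD60EFZX): $22.50/TB
```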


1 hour ago, dbx10 said:

No - RAIDZ2 needs at least 4 drives (and really starts to make sense at 5+); with 2 drives your only option is a mirror. The classic optimal Z2 pool is 6 drives, but it works fine with 5.

I also have a list of grievances with this thread:

  1. Do NOT use WD Greens for ZFS. They tend to fall over within weeks of putting real data on 'em, as will any consumer-grade SMR drive. Scrubs and resilvers just don't agree with SMR.
  2. Both Seagate and WD now label which NAS drives are CMR (which is what you want for ZFS) on the product listing.
  3. WD Red Plus models are currently the best TB/$ ratio for NAS drives in Canada, with the 4TB (WD40EFZX) at $95 and the 6TB (WD60EFZX) at $135 on Amazon.

Happy hoarding!

RAID1 it is for now then!


19 minutes ago, t3ch_n1nj4 said:

RAID1 it is for now then!

I would read up a lot on the TrueNAS forums before you pull the trigger on anything... ZFS is a phenomenal file system, but fully understanding what you're getting into is key. It isn't economical to scale ZFS out after the fact - you can't just add more drives to your mirror later, so if you need more space you'll need to build another vdev, which has its own redundancy needs. It's important to properly scope out the build before starting, because changes down the road get more difficult and more expensive as you have to build redundancy into each vdev independently. I run 10x 4TB drives in RAIDZ2 specifically for this reason: I only lose 20% of my total vdev size to redundancy, but if you run, say, a 5-drive Z2, you lose 2/5ths of your space to redundancy.

 

Basically, it's just a math equation: how much are downtime and data loss worth to you, what percentage of redundancy do you require, what are your future scale-out plans storage-wise, and what do drives cost? For example, the reason everyone says "RAID 5/Z1 is dead" is that a single drive of redundancy just isn't really enough these days with drives as large as they are. You can end up losing a drive during a rebuild (I have had this happen...), and if you only have a single drive of redundancy you are SOL. Less redundancy also means ZFS has less information to rebuild any corrupted data.

 

But this is all determined by your needs and your future plans. If you know 10TB of space will not be enough, it may be worth figuring out how to add more storage up front so you don't have to "waste" more drives than you need on redundancy across the entire pool.
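
A rough sketch of the redundancy arithmetic being described here - raw capacity only, ignoring ZFS metadata, padding, and the fill-level guideline that comes up later in the thread:

```python
# Raw-space arithmetic only: ignores ZFS metadata, padding, and the
# fill-level guideline discussed later in the thread.
def usable_tb(drives: int, tb_per_drive: float, parity: int) -> float:
    """parity = 1 for RAIDZ1, 2 for RAIDZ2; a 2-way mirror works out
    the same as drives=2, parity=1."""
    return (drives - parity) * tb_per_drive

layouts = [
    ("2x 10TB mirror", 2, 10, 1),
    ("5x 4TB RAIDZ2", 5, 4, 2),
    ("10x 4TB RAIDZ2", 10, 4, 2),
]
for name, n, tb, p in layouts:
    raw, use = n * tb, usable_tb(n, tb, p)
    print(f"{name}: {use:.0f} TB usable of {raw:.0f} TB raw "
          f"({100 * (1 - use / raw):.0f}% lost to redundancy)")
```

The last two rows reproduce the 20% and 2/5ths figures above.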


54 minutes ago, LIGISTX said:

I would read up a lot on the TrueNAS forums before you pull the trigger on anything... ZFS is a phenomenal file system, but fully understanding what you're getting into is key. It isn't economical to scale ZFS out after the fact - you can't just add more drives to your mirror later, so if you need more space you'll need to build another vdev, which has its own redundancy needs. It's important to properly scope out the build before starting, because changes down the road get more difficult and more expensive as you have to build redundancy into each vdev independently. I run 10x 4TB drives in RAIDZ2 specifically for this reason: I only lose 20% of my total vdev size to redundancy, but if you run, say, a 5-drive Z2, you lose 2/5ths of your space to redundancy.

 

Basically, it's just a math equation: how much are downtime and data loss worth to you, what percentage of redundancy do you require, what are your future scale-out plans storage-wise, and what do drives cost? For example, the reason everyone says "RAID 5/Z1 is dead" is that a single drive of redundancy just isn't really enough these days with drives as large as they are. You can end up losing a drive during a rebuild (I have had this happen...), and if you only have a single drive of redundancy you are SOL. Less redundancy also means ZFS has less information to rebuild any corrupted data.

 

But this is all determined by your needs and your future plans. If you know 10TB of space will not be enough, it may be worth figuring out how to add more storage up front so you don't have to "waste" more drives than you need on redundancy across the entire pool.

I did read up on how ZFS vdevs can't (yet, at least) be expanded after the fact. And even with the talk about upcoming expansion capabilities, issues such as wasted drive space still come at a price.

 

I'm still doing some further reading, but what would you suggest as a more suitable option than RAID1 that still works in a mirrored manner? My aim is redundancy: knowing that if a drive goes down, the data remains intact on the mirrored drive.


2 hours ago, t3ch_n1nj4 said:

I did read up on how ZFS vdevs can't (yet, at least) be expanded after the fact. And even with the talk about upcoming expansion capabilities, issues such as wasted drive space still come at a price.

This feature has been under code review since Q2 2021; it's nowhere near production-ready, but it's coming!

edit: source


2 hours ago, t3ch_n1nj4 said:

I'm still doing some further reading, but what would you suggest as a more suitable option than RAID1 that still works in a mirrored manner? My aim is redundancy: knowing that if a drive goes down, the data remains intact on the mirrored drive.

Try to figure out how much space you will actually need... and build your server to that spec now, instead of trying to add more later. If you know you'll need ~20 TB within the next year or two, don't build out 10 now... build out 30 now. The upfront cost is higher (obviously), but it gives you room to grow without needing to keep adding mirrored pairs. If you buy five 10 TB drives now, you can do RAIDZ2 and get 30 TB of space (it's not really 30, since a "10 TB" drive is really closer to 9.1 TiB, and you don't want to run ZFS pools past ~85% capacity, so maybe buy 6 drives lol).

 

This is all part of it, though. You can add more later; it just costs more. But if that is what your budget allows and the only way you can get into the server now is with two 10 TB drives, thus is life and there is nothing wrong with that. BUT if you can do more drives up front, it may be worthwhile.
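
To make the "not really 30" aside concrete, here's the decimal-TB vs. binary-TiB conversion plus the ~85% fill guideline as a quick sketch (round numbers only):

```python
# Marketed decimal TB vs. binary TiB, plus the ~85% fill guideline.
TB, TIB = 10**12, 2**40

def comfortable_tib(drives: int, tb_per_drive: int, parity: int,
                    fill: float = 0.85) -> float:
    """Usable space in TiB after parity and the fill guideline."""
    return (drives - parity) * tb_per_drive * TB / TIB * fill

print(f"{10 * TB / TIB:.2f} TiB per '10 TB' drive")               # ~9.09 TiB
print(f"{comfortable_tib(5, 10, 2):.1f} TiB comfortably usable")  # ~23.2 TiB for 5x 10TB Z2
```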


5 hours ago, LIGISTX said:

Try to figure out how much space you will actually need... and build your server to that spec now, instead of trying to add more later. If you know you'll need ~20 TB within the next year or two, don't build out 10 now... build out 30 now. The upfront cost is higher (obviously), but it gives you room to grow without needing to keep adding mirrored pairs. If you buy five 10 TB drives now, you can do RAIDZ2 and get 30 TB of space (it's not really 30, since a "10 TB" drive is really closer to 9.1 TiB, and you don't want to run ZFS pools past ~85% capacity, so maybe buy 6 drives lol).

 

This is all part of it, though. You can add more later; it just costs more. But if that is what your budget allows and the only way you can get into the server now is with two 10 TB drives, thus is life and there is nothing wrong with that. BUT if you can do more drives up front, it may be worthwhile.

This will make me consider things more thoroughly before putting in my HDD order, and I appreciate your advice and comments about the best practices to keep in mind.

 

I know that right now, across my current cloud storage and local removable media, I'm sitting at between 2.5TB and 3TB of stored data. I did some rough calculations and determined that 10TB would be sufficient for my needs for at least the next 6 to 8 years. However, with the current deal on 14TB WD Red Pro HDDs, I was considering paying the roughly $160 difference over a pair of 10TB Red Pros. With 14TB drives, I'll definitely have more than enough capacity for my intended use.

So, if I were to go with a three-drive RAID-Z1 and one of the drives were to fail, would the stored data still be accessible after replacing the failed drive, without concerns of significant/catastrophic data loss?


27 minutes ago, t3ch_n1nj4 said:


So, if I were to go with a three-drive RAID-Z1 and one of the drives were to fail, would the stored data still be accessible after replacing the failed drive, without concerns of significant/catastrophic data loss?

Yes, but it's a risk going with only one drive of redundancy when using drives this large. If a drive fails, you need to rebuild (resilver) after replacing the failed drive. During the rebuild the other drives are hammered quite hard, and it's not uncommon to have a second drive fail.
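
The oft-cited (and much-debated) back-of-envelope for that rebuild risk uses the unrecoverable-read-error rate from drive spec sheets. A sketch under those assumptions - real-world URE rates are usually better, and ZFS only resilvers allocated data, so treat it as a pessimistic bound:

```python
# Pessimistic back-of-envelope: chance of hitting at least one
# unrecoverable read error (URE) while resilvering, using the
# 1-per-1e14-bits figure from consumer drive spec sheets. Real drives
# usually do better, and ZFS only reads allocated blocks.
import math

def p_ure_during_rebuild(bytes_read: float, ure_per_bit: float = 1e-14) -> float:
    return 1.0 - math.exp(-bytes_read * 8 * ure_per_bit)

# 3x 10TB RAIDZ1 with one drive lost: the rebuild reads the two survivors.
print(f"{p_ure_during_rebuild(2 * 10e12):.0%}")  # ~80% by this model
```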


48 minutes ago, Blue4130 said:

Yes, but it's a risk going with only one drive of redundancy when using drives this large. If a drive fails, you need to rebuild (resilver) after replacing the failed drive. During the rebuild the other drives are hammered quite hard, and it's not uncommon to have a second drive fail.

Good to know these things. If I were to change my plans from RAID1 to a RAID-Z1 array, I'd possibly go with 3x 10TB or 4x 8TB HDDs instead of the 2 HDDs I was planning on.

 

If I were to go with 4x 8TB HDDs, would that be better as a RAID-Z2 array, or should I keep it RAID-Z1?


14 minutes ago, t3ch_n1nj4 said:

Good to know these things. If I were to change my plans from RAID1 to a RAID-Z1 array, I'd possibly go with 3x 10TB or 4x 8TB HDDs instead of the 2 HDDs I was planning on.

 

If I were to go with 4x 8TB HDDs, would that be better as a RAID-Z2 array, or should I keep it RAID-Z1?

I came across a RAID-Z calculator, and it answered my question about which array type to use for a 4-HDD vdev.

Now, to possibly get a fifth HDD and go with a RAID-Z2 array. Hmm...


13 minutes ago, t3ch_n1nj4 said:

I came across a RAID-Z calculator, and it answered my question about which array type to use for a 4-HDD vdev.

Now, to possibly get a fifth HDD and go with a RAID-Z2 array. Hmm...

Z2 is a much safer idea. Z1 has too much probability of data loss for me… but that's obviously a cost vs risk thing you have to decide for yourself.


19 hours ago, t3ch_n1nj4 said:

I appreciate your response, and those are great questions to ask.

 

The NAS I'm building is going to start out as just a 2-HDD build set up in a RAID1 (mirror) configuration. The case is yet to be determined, as I may opt to go the eco-friendly route and reuse an old prebuilt system to host the server.

RAID1 (mirror) is totally fine - but as noted elsewhere, you cannot convert RAID1 into RAIDZ1 or RAIDZ2 after the fact, so there is no direct and easy expansion (there is a way, but it involves creating a new duplicate array and spanning the two arrays).

19 hours ago, t3ch_n1nj4 said:

I'm currently leaning towards some 10TB WD Red Pro HDDs, as they're on sale and priced to be budget-friendly. However, WD also has their 14TB Red Pro HDDs on sale, currently $264 off MSRP... so I may go with those, seeing as the total price difference between 2x 10TB and 2x 14TB HDDs is currently only $157.50.

 

I know the IronWolf HDDs are priced higher than WD's Red Pro pricing, but if there's really no remarkable difference to justify spending more on IronWolf HDDs, I may as well stick to the plan, go with Red Pros, and save a few dollars.

All good options. Seagate vs WD is 100% personal preference and whichever one happens to be a better deal.


On 3/8/2022 at 10:50 AM, dalekphalm said:

Backups are far more important than the specific model of HDD you buy. If the data is important, make sure there are backups (and RAID does not count as a backup).

 

Increasing the fault tolerance can also help (using RAIDZ2 instead of RAIDZ1 for example).

Yeah... this. There's really no statistical evidence that one fruit-loop color of WD (or any other HDD brand) is more reliable than another, aside from some stats from Backblaze. I stick to HGST.

 

"SATA enterprise" mostly means it has a longer warranty... and you pay for it. The term "enterprise SATA drive" is a bit of an oxymoron anyway.

 

Keep your NAS on a UPS to guard against hard power shutdowns, watch SMART reports periodically, and do backups. 
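
On the "watch SMART reports periodically" point: TrueNAS has scheduled SMART tests and alerts built in, and smartmontools ships smartd for exactly this, but as an illustrative sketch (device paths are placeholders) a periodic check can be as simple as:

```python
# Minimal periodic health check, assuming smartmontools is installed.
# Device paths are placeholders; smartd or TrueNAS's scheduled SMART
# tests do this properly in practice.
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb"]  # placeholders

def smart_ok(device: str) -> bool:
    """True if `smartctl -H` reports the overall health test as PASSED."""
    out = subprocess.run(["smartctl", "-H", device],
                         capture_output=True, text=True)
    return "PASSED" in out.stdout

for dev in DEVICES:
    print(f"{dev}: {'OK' if smart_ok(dev) else 'CHECK ME'}")
```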

 

 


On 3/8/2022 at 4:48 PM, dalekphalm said:

Red Pros just have tighter QA and a slightly better mean time between failures (MTBF)

And all of them are CMR, which cannot be said for the regular Red drives...
https://documents.westerndigital.com/content/dam/doc-library/en_us/assets/public/western-digital/product/internal-drives/wd-red-hdd/product-brief-western-digital-wd-red-hdd.pdf


18 hours ago, dalekphalm said:

RAID1 (mirror) is totally fine - but as noted elsewhere, you cannot convert RAID1 into RAIDZ1 or RAIDZ2 after the fact, so there is no direct and easy expansion (there is a way, but it involves creating a new duplicate array and spanning the two arrays).

And in spanning the two RAID1 arrays, would it end up as one larger volume, or two separate volumes?

Thanks to this thread, I've been reassessing my plans to include parity-based redundancy. I've always been okay with RAID1, but I do see how building a RAID-Z array for use with ZFS is the better route.


59 minutes ago, t3ch_n1nj4 said:

And in spanning the two RAID1 arrays, would it end up as one larger volume, or two separate volumes?

Spanning two RAID1 arrays would produce one larger volume - effectively a RAID10 (striped mirrors) layout.

59 minutes ago, t3ch_n1nj4 said:

Thanks to this thread, I've been reassessing my plans to include parity-based redundancy. I've always been okay with RAID1, but I do see how building a RAID-Z array for use with ZFS is the better route.

RAIDZ vs RAID1 (mirrors) comes down to how many drives and how much fault tolerance you want.

 

A mirror with 2 drives is fine if that suits your needs.

 

Eventually there will be a method of (easily) expanding a RAIDZ1 or RAIDZ2 array - it's still in the works, but it's been in development for a while. Until then, any ZFS array is going to be a pain to grow.

