Mixing drives in RAID

Troika

I know it's not good practice to mix different drives in RAID arrays, but I'm having trouble finding more HGST Deskstar NAS 4TB drives for a reasonable price. I'm guessing those aren't really being produced anymore, considering the ones I found new on Amazon sell for well over $200 USD, and I bought two drives for $120 each about two years ago.

What I'd like to do is add two drives and create a RAID 5 or RAID 10 array so I have some redundancy in case something happens to one of the drives, and so I have a single accessible partition rather than multiple. The two I found for reasonable prices are the Seagate IronWolf 4TB (Model: ST4000VNA08) and the WD Red 4TB (Model: WDBMMA0040HNC-NRSN). I know Seagate has had a checkered reliability history with their drives, and I previously didn't consider using them for something like this, but I know the IronWolfs are higher-end products than the typical Barracuda drive. The price difference between the two drives is $5, so I'll probably end up buying two of whichever is more reliable on average.

The thing I think will cause issues is that my HGST drives are 7200 RPM while the Seagates are 5900 RPM and the WDs are 5400 RPM. Is there a better alternative I can pair with my existing drives, or should I just bite the bullet and buy four all-new drives?

Mixing drives of different brands or batches is actually a practice done in the enterprise to minimize the likelihood of catastrophic failure. So long as the disks are the same capacity & RPM, you have nothing to worry about in that regard.

 

The reason not to use slower drives is that they drag down the pool's overall performance. A RAID0 of an HDD and an SSD would only roughly double the performance of the HDD, because the HDD is the slowest disk in the array, so match RPMs if it can be helped. Matching cache sizes (64MB/128MB/etc.) isn't as big a deal, depending on your application.
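To put rough numbers on that, here's a back-of-the-envelope sketch in Python. The per-drive MB/s figures are just assumed illustrative values, not measurements of any particular model:

# Rough model: a striped (RAID0) array moves sequential data at roughly
# member_count * speed_of_the_slowest_member.
def stripe_throughput(member_speeds_mb_s):
    """Very rough estimate of RAID0 sequential throughput in MB/s."""
    return len(member_speeds_mb_s) * min(member_speeds_mb_s)

hdd = 180   # assumed ~7200 RPM HDD sequential speed, MB/s
ssd = 550   # assumed SATA SSD sequential speed, MB/s

print(stripe_throughput([hdd, hdd]))  # ~360 MB/s: two matched HDDs
print(stripe_throughput([hdd, ssd]))  # ~360 MB/s: the SSD is wasted, the HDD sets the limit

Same idea applies to mixing 7200 RPM and 5400 RPM drives: the slower members set the pace for the whole stripe.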

 

IMO, if it's only a few disks and you're on a budget, consider shucking drives like WD Elements. If you want NAS/server-rated drives, I can vouch for the Seagate IronWolfs.

15 minutes ago, Windows7ge said:

Mixing drives of different brands or batches is actually a practice done in the enterprise

Must be a private sector thing. I've never seen this in the gov't.


32 minutes ago, Radium_Angel said:

Must be a private sector thing. I've never seen this in the gov't.

¯\_(ツ)_/¯ I'm working off of what I've been told; I don't have first-hand experience here. If I could remember the sources where I was told this, I'd link them.

On 7/24/2020 at 4:53 PM, Windows7ge said:

IMO, if it's only a few disks and you're on a budget, consider shucking drives like WD Elements. If you want NAS/server-rated drives, I can vouch for the Seagate IronWolfs.

WD Elements are external drives, aren't they? I dunno if I'd trust something like that in a RAID array for my backups.

 

I'd like to use IronWolfs, but they're expensive for four of them, and mixing them with my Deskstar NAS drives would make the whole array slower. It's kinda slow over the network already; I don't want to make it slower. I'll see if I can find some reliable alternatives that are also 4TB but run at 7200 RPM.

On 7/24/2020 at 4:53 PM, Windows7ge said:

The reason not to use slower drives is that they drag down the pool's overall performance. A RAID0 of an HDD and an SSD would only roughly double the performance of the HDD, because the HDD is the slowest disk in the array, so match RPMs if it can be helped. Matching cache sizes (64MB/128MB/etc.) isn't as big a deal, depending on your application.

So if I threw an SSD into the array, basically using it as a caching device, it would make the whole array faster? Does the size and speed of the SSD matter in this case or is there a particular way you have to configure an array to properly use an SSD cache?

6 minutes ago, Troika said:

WD Elements are external drives, aren't they? I dunno if I'd trust something like that in a RAID array for my backups.

 

I'd like to use IronWolfs, but they're expensive for four of them, and mixing them with my Deskstar NAS drives would make the whole array slower. It's kinda slow over the network already; I don't want to make it slower. I'll see if I can find some reliable alternatives that are also 4TB but run at 7200 RPM.

It's not a terrible idea for small NAS backups if you're on a budget. If you only have one copy of your data you may want more reliable drives, but if you keep a regular eye on both your main and backup storage, it doesn't matter as much if a disk fails in a backup server, so long as the disk is replaced.

 

If I'm not mistaken, the IronWolfs I've bought were 7200 RPM. I'm not sure which ones you're looking at, but you should be able to get them in 4TB capacities as well.

 

9 minutes ago, Troika said:

So if I threw an SSD into the array, basically using it as a caching device, it would make the whole array faster? Does the size and speed of the SSD matter in this case or is there a particular way you have to configure an array to properly use an SSD cache?

This is dependent on what you're using to form the array. What are you using?

11 hours ago, Windows7ge said:

It's not a terrible idea for small NAS backups if you're on a budget. If you only have one copy of your data you may want more reliable drives, but if you keep a regular eye on both your main and backup storage, it doesn't matter as much if a disk fails in a backup server, so long as the disk is replaced.

 

If I'm not mistaken, the IronWolfs I've bought were 7200 RPM. I'm not sure which ones you're looking at, but you should be able to get them in 4TB capacities as well.

 

This is dependent on what you're using to form the array. What are you using?

I'll have to check; the one I was looking at might have been a lower-end model.

 

As for what I'm using, I was just planning on using the motherboard's built-in RAID controller, which is the AMD SB950 southbridge found on my Asus 990FX Sabertooth R2.0. It's probably super overkill for just a NAS, but I can have it do other things and have Windows and Linux installed on a small SSD to manage the NAS and do some maintenance and drive checking with some tools I have. I do plan on building my own router at some point, probably using pfSense, so it'd be convenient for the NAS drives to exist in the same system as the router. I'm definitely open to suggestions; I've never built my own NAS before, so I'm not sure if hardware RAID is the best option.

Write speed isn't something I care too much about for this, but maximizing read speed would definitely be a nice plus. Data redundancy is the main thing: if a drive dies, I want to be able to swap it out easily. My best options would be RAID 5 or RAID 10, both of which are supported by the SB950 southbridge. With four drives at 4TB each, RAID 5 gives me 12TB while RAID 10 gives me only 8TB and a small bump to write speed. Both give boosts to read speed, but RAID 5 uses one drive's worth of capacity for parity and error checking, at least from what I read. Both offer protection against one drive failure, so the difference is 4TB more capacity with parity and error checking versus a small bump in write speed. RAID 5 definitely sounds more attractive. The board also has a total of six SATA ports managed by that controller; there are two more SATA ports controlled by an ASMedia controller, but they don't support hardware RAID.
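Just to sanity-check those capacity numbers, here's the quick arithmetic as a rough Python sketch (it only uses the drive count and size above):

# Usable capacity for 4 x 4TB drives under the two layouts being considered.
drives = 4
size_tb = 4

raid5_usable = (drives - 1) * size_tb    # one drive's worth of capacity goes to parity
raid10_usable = (drives // 2) * size_tb  # half the drives hold mirror copies

print(f"RAID 5:  {raid5_usable} TB usable")   # 12 TB
print(f"RAID 10: {raid10_usable} TB usable")  # 8 TB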

There's quite a bit to go over here.

  1. Don't use motherboard RAID. If you want to use hardware-based RAID, get a proper RAID card. Also, motherboard RAID, at least on an old chipset like that, doesn't support cache disks, though you might be able to set that up in software (your OS).
  2. For your pool I would recommend using ZFS or UNRAID instead of hardware RAID. ZFS doesn't support cache disks that will speed up the pool in that way; instead it uses RAM as a write cache. UNRAID doesn't actually RAID the drives, it keeps full files on each disk, but it does support SSD caching so you can get quick write performance. The OS costs money, though.
  3. With RAID5 you can lose up to 1 disk per array. With RAID10 you can lose up to 2, not 1, since you need a minimum of 4 disks and data is striped then mirrored across them. Note, though, that you can't lose any 2 of the 4: if you lose both disks holding the same set of data, all data is lost (see the sketch just after this list).
  4. You might like to check out a hypervisor OS; look up Proxmox. I do have to say I don't recommend virtualizing pfSense. When the server goes down for maintenance or a problem (which WILL happen), so will your router, and your whole house will lose Internet. pfSense is one of those appliances that really deserves a dedicated box.
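To illustrate point 3, here's a quick sketch that enumerates every two-disk failure in a four-disk RAID10, assuming a layout where disks 0 and 1 mirror each other and disks 2 and 3 mirror each other:

from itertools import combinations

# Assumed layout: two mirrored pairs, striped together.
mirrors = [{0, 1}, {2, 3}]

def survives(failed_disks):
    # The array survives as long as every mirror still has at least one healthy disk.
    return all(pair - failed_disks for pair in mirrors)

for combo in combinations(range(4), 2):
    failed = set(combo)
    print(sorted(failed), "survives" if survives(failed) else "ARRAY LOST")

Four of the six possible two-disk failures are survivable; the two that aren't are exactly the ones where both halves of the same mirror die.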
1. That makes sense. 990FX boards are seriously dated and SSDs were only just starting to become mainstream at that time.

 

2. I've heard of both through the Craft Computing Discord server I'm in, so I'll probably look into getting more info that way. I know Jeff has posted a few videos about both, and a different NAS OS too.

 

3. So RAID 10 would offer more redundancy than RAID 5, but only as long as no more than one drive per mirrored pair fails. If both drives in one pair fail, I'm SOL.

 

4. That makes sense. I can probably set up a small box, like an Intel NUC, for the router. I can get more info from Jeff's videos, since he's made videos on pfSense and Proxmox.

  1. The only motherboard-supported caching I'm familiar with would be Optane, and you need a much newer and (I believe) specifically Intel motherboard for it.
  2. UNRAID is more user-friendly and easier to expand existing pools with, the only real con being that the OS isn't free. ZFS is an enterprise-grade file system with great features and data integrity as its top priority, but expanding a pool is a bit of a pain and generally requires more money up front.
  3. Yes. IMO RAID5/6 is your better option if you only need a gigabit connection to the NAS. RAID10 has other pros though, such as higher IOPS, which is great for hypervisors running many simultaneous OSes.
  4. pfSense for a simple setup doesn't need much power, so that'll work fine. The NUC will need at least 2 Ethernet ports though: one for the LAN and one for the WAN (although technically, with VLANs and a managed switch, you could get away with just one).
4 hours ago, Windows7ge said:
  1. The only motherboard-supported caching I'm familiar with would be Optane, and you need a much newer and (I believe) specifically Intel motherboard for it.
  2. UNRAID is more user-friendly and easier to expand existing pools with, the only real con being that the OS isn't free. ZFS is an enterprise-grade file system with great features and data integrity as its top priority, but expanding a pool is a bit of a pain and generally requires more money up front.
  3. Yes. IMO RAID5/6 is your better option if you only need a gigabit connection to the NAS. RAID10 has other pros though, such as higher IOPS, which is great for hypervisors running many simultaneous OSes.
  4. pfSense for a simple setup doesn't need much power, so that'll work fine. The NUC will need at least 2 Ethernet ports though: one for the LAN and one for the WAN (although technically, with VLANs and a managed switch, you could get away with just one).

1. Yeah, 6th gen Intel or newer, if my memory is correct. Or an expensive LSI RAID controller out of a server or enterprise machine.

 

2. I'll look into both and see which is the most economical. Ease of expansion would certainly be a big plus.

 

3. Yeah, gigabit (about 125MB/s of raw throughput) is plenty for moving files or loading videos directly from the NAS. 2.5 gigabit would be nice, but I'd have to buy new switches and I don't really want to fork over for that.

 

4. If it doesn't have dual LAN, it probably has USB 3.0 or USB-C on it (at least newer NUCs do), so getting a USB/USB-C to gigabit LAN dongle isn't a big deal. Certainly more stable than Wi-Fi for the main connection. I'll be switching my current router, an Asus RT-AC87R, into access point mode to provide Wi-Fi for wireless devices. I have two five-port gigabit switches I'll be using to connect most of the devices in the apartment. Three ports would be perfect: one for each switch and one for the access point.

For ZFS you can check out FreeNAS; it's based on FreeBSD. ZFS has its perks, but yeah, adding 1 disk at a time to an existing pool isn't an option (or at least it's highly discouraged).

 

On a 1Gbit network you likely won't need a cache drive at all. HDDs will be fast enough to saturate gigabit read/write.
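Rough numbers behind that claim, as a quick sketch (the HDD figure is an assumed ballpark sequential speed, not a spec for any particular drive):

# Gigabit Ethernet tops out around 125 MB/s of raw line rate; real-world
# payload is a bit lower after protocol overhead.
gigabit_raw = 1000 / 8                    # MB/s of raw line rate
practical_payload = gigabit_raw * 0.94    # rough allowance for overhead (assumption)

typical_nas_hdd = 150                     # assumed sequential MB/s for a modern 4TB NAS drive

print(f"~{practical_payload:.0f} MB/s usable over gigabit")
print("A single HDD can already saturate it:", typical_nas_hdd > practical_payload)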

 

IIRC pfSense is based on FreeBSD, and NIC support is hit-or-miss. NICs made by Realtek basically won't work, and I wouldn't get your hopes up for USB NICs either. PCIe Intel-branded NICs would be the way to go.
