NAS + Server Storage Setup

I'm a newbie setting out to build a homelab server, which would also run FreeNAS in a VM. It would likely also run other VMs/apps (now or later) for software development, data analysis, home automation/IoT, an email server, a web server, etc.

I've been trying to figure out the optimal storage setup for this, and it's causing me much confusion. I think it's better to first define the storage setup I'm thinking about, and from there gauge what kind of storage I need and how many instances. I'll try to explain this below (I'm a newbie, so I might get terms and concepts wrong; bear with me). I'd appreciate your insights to help me make a qualified decision, so I can then buy the right parts.

I am thinking of 3 separate storage pools at the moment:

  1. A Server OS Pool: This will hold the server image (and possibly also the various VM images and other core files). I'm considering doing this with a mirrored pair of 120 GB Samsung 850 EVO solid-state drives (probably partitioned for the main server vs. VM images).
  2. A FreeNAS Pool: This will be the dedicated storage given to the FreeNAS VM (via PCI passthrough). I intend to get an HBA and connect the drives to FreeNAS through it. I've been reading bits and pieces on how to configure this (RAIDZ, RAIDZ2, mirrored vdevs, ...), and have little idea at the moment what this setup will look like. Right now I don't have much data to store on FreeNAS, but I see that growing over time, so I'd like a setup where I can easily add more storage down the road. To that end, I wonder if I should get an HBA with external SAS ports rather than internal ones, and set up this HDD pool on some external hot-swappable/pluggable storage (though I think this might be overkill at the moment). Eventually, if I move FreeNAS to a standalone bare-metal setup, it would be easy to have the external storage move with it.
  3. A Server Pool: This would be the storage for everything else running on my Linux server. Initially this would be a single storage setup, but as specific apps/VMs move from development/playground stages to production, I would likely need to create separate storage setups for them. I'm thinking of initially housing the HDDs for this inside my main case and setting them up as a RAID array (RAID5, RAID6, or RAID10 perhaps, but I need to do much more analysis here too). [Or could I perhaps store all of this within the FreeNAS storage, with the caveat that I must ensure FreeNAS is up and running before these are? If it's even possible for these apps to access the FreeNAS data.]
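To make the three pools concrete, here's a rough sketch of what creating them might look like. All pool names and device paths below are made-up examples, and it assumes ZFS on the host for pool 1 and Linux md RAID for pool 3 (neither of which is settled in this thread); it obviously needs real disks to run:

```shell
# 1. Server OS pool: mirrored pair of 120 GB SSDs (hypothetical devices)
zpool create ospool mirror /dev/disk/by-id/ssd-A /dev/disk/by-id/ssd-B

# 2. FreeNAS pool: created from INSIDE the FreeNAS VM, on the disks
#    attached to the passed-through HBA, e.g. six drives in RAIDZ2
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# 3. Server data pool: e.g. a Linux md RAID1 mirror to start with
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
```

The point of the sketch is just that the three pools live in different places: pool 2 exists inside the FreeNAS VM, while pools 1 and 3 belong to the host.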

Does this make some sense?
If the setup does, then I can venture into what HDDs I would need to buy for each.


If you're running FreeNAS or ZFS at all, get ECC RAM. It is very important for the ZFS filesystem. I am running FreeNAS as a VM on my ESXi box with PCI passthrough and two HBAs. You can either get an old enterprise HBA and flash it to IT mode, or there are some relatively cheap SATA HBAs on Amazon that play nicely with FreeNAS. That's what I've done, and I can send you the link to the HBAs I use if you're interested in going that route. If this system is going to be important to you, go with RAIDZ2. RAIDZ1 is okay, but rebuilding the array after one drive crashes puts the other drives under heavy load and can cause another drive to die; at that point you've lost two drives and your data is gone. With RAIDZ2 you can lose two drives and still be fine. It's a bit safer, but it's a tradeoff you can decide to make or not. Either way, you need to have backups. I'm not going to go on some rant about backups; make them however you see fit, just make sure you're aware that RAID is NOT a backup. As far as drives go, I only use WD drives. Seagate has been extremely unreliable for me over the years, and I've had good luck with WD. Not everyone has the same experience, but there's a reason so many people over on r/datahoarder love the WD Reds. WD Reds are my recommendation. Stay far away from the Seagate Archive drives unless you fully understand what they are designed for.

 

One more thing: your storage setup looks pretty solid, but if you want, you can use a share from FreeNAS to store VMs on. That way they're stored on a ZFS volume that has some redundancy. Depending on your budget, I would start small since you said you don't have much data to store yet. Maybe get like six 1TB or 2TB drives and put them in RAIDZ2. That would give either 4TB or 8TB of usable space right now, and you can upgrade to 4TB drives or higher later on. But in order to upgrade a RAIDZ2 pool, you have to increase the capacity of every drive in the pool, so you would have to replace all six drives with six higher-capacity ones. What you can do then is take the old drives and turn them into another FreeNAS box for backups of the more important data from the main server.
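To spell out the capacity math behind those numbers: RAIDZ2 spends two drives' worth of space on parity, so usable capacity is roughly (N - 2) x drive size, before ZFS overhead. A quick check of the 6-drive figures:

```shell
drives=6
parity=2   # RAIDZ2 reserves two drives' worth of space for parity

for size_tb in 1 2; do
  usable=$(( (drives - parity) * size_tb ))
  echo "6 x ${size_tb}TB in RAIDZ2 -> ~${usable}TB usable"
done
```

which matches the 4TB (6x1TB) and 8TB (6x2TB) figures above; real-world usable space will be a bit lower once ZFS metadata and recommended free-space headroom are accounted for.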


9 minutes ago, fanchazstic said:


Hey thanks - this is fantastic information for me to consider. Breaking the points down:

 

RAM - Indeed, it's ECC RAM all the way; I have ordered registered ECC RAM.

 

HBAs - I'd be happy to hear your suggestions. I'm thinking of getting a single HBA at the moment. This one caught my eye (cheap and seems well recommended): the LSI SAS9211-8i (8-port).

 

HDDs: I have been considering the WD Reds or the HGST Deskstar. What do you think of the latter?

My struggle at the moment is cost vs. number of drives. I'm planning to first set up the Linux server and then a little while later (days? weeks?) set up FreeNAS, so I can't use FreeNAS from the get-go to store VMs and other server data. My thinking has just been radically changed by reading this excellent post and presentation: Slideshow explaining VDev, zpool, ZIL and L2ARC for noobs! While I was earlier thinking of a zpool based on a vdev with 2x4TB HDDs, later adding new HDDs or a new vdev, I now see the value of having 6 HDDs from the start for FreeNAS. I also understand I'm going to have to spend a lot of time getting a proper system in place.

 

So my thoughts now are:

1. Set up basic storage for the server: 2x4TB (with redundancy via RAID1/mirroring).

2. Set up FreeNAS as a VM to learn, experiment, and play. Depending on my budget at the time, I can do this with 4-6 HDDs in RAIDZ2 (likely 6x2TB), adding new vdevs where needed, and back up a lot in case I make errors or the system crashes. At this stage I'd still keep the server and FreeNAS pools separate.

3. Once I'm confident with FreeNAS, move it into production with a 6x4TB or 6x6TB setup.

4. Once the production FreeNAS is stable, move my server data to FreeNAS, and then use those server HDDs for something else (if I'm not too old and feeble by then :|)
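For what it's worth, step 4 could be as simple as mounting a share exported by FreeNAS and rsyncing the data across. The IP address, export path, and mount point below are made-up examples, not anything from this thread:

```shell
# mount an NFS export from the FreeNAS VM (hypothetical address/paths)
mount -t nfs 192.168.1.50:/mnt/tank/serverdata /mnt/freenas

# copy everything, preserving permissions, hard links, ACLs, and
# extended attributes; add -n (dry run) on a first pass to preview
rsync -aHAX --progress /srv/data/ /mnt/freenas/
```

After a verified copy, the old server drives are free for reuse, which is what step 4 envisions.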

 

Sounds reasonable?

 

[And yes, I'm considering RAIDZ2 and separate backups too!]


1 hour ago, Tabby said:


HBA - I currently have two of these. Whatever you get, just make sure it plays nicely with FreeNAS. FreeNAS generally has pretty good hardware support, but it can be particularly picky about HBAs.

 

HDDs - I don't know much about the Deskstars and don't own any HGST drives in general, but I have heard good things about HGST failure rates. Just make sure that whatever you get is designed for use in a NAS/server with lots of vibration from other drives, and that it has TLER.
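As an aside, you can check whether a drive supports TLER (WD's name for SCT error recovery control) with smartctl; the device name below is just an example:

```shell
# query SCT error recovery control; NAS drives typically report
# a read/write timeout of around 7.0 seconds
smartctl -l scterc /dev/sda

# on drives that support it but ship with it disabled, set the
# read/write timeouts to 7 seconds (values are in tenths of a second)
smartctl -l scterc,70,70 /dev/sda
```

Note that on some desktop drives the setting doesn't survive a power cycle, which is one reason purpose-built NAS drives are recommended here.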

 

That really sounds like a solid plan in my opinion. I can't think of anything that should be done differently.

 

One last thing: get a UPS if you don't have one. This should go without saying, but for ZFS especially it's a must. Get one that can power the server long enough to shut down all the VMs, with sufficient time left over to shut down the host OS. I have a CyberPower 1000VA pure sine wave unit that powers ONLY my server and networking gear; my desktop is on its own separate UPS. Currently it estimates 17 minutes of runtime, which is way more than enough time to shut down the server properly. I like the CyberPower units because they're reasonably priced, replacement batteries are easy to find, and their PowerPanel Business Edition works with the consumer UPS models for shutting down ESXi (and it's free). But if you're just running Linux, I'm sure most UPS models are supported.
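On the Linux side, Network UPS Tools (NUT) is one common option for that shutdown automation. A crude sketch of a battery check might look like this; the UPS name `myups@localhost` and the 50% threshold are assumptions for illustration, not anything from this thread:

```shell
#!/bin/sh
# decide whether the battery level warrants a shutdown
should_shutdown() {
  [ "$1" -lt 50 ]
}

# upsc is NUT's query tool; fall back to 100 if the UPS isn't reachable
charge=$(upsc myups@localhost battery.charge 2>/dev/null || echo 100)

if should_shutdown "$charge"; then
  logger "UPS battery at ${charge}%, shutting down"
  shutdown -h now
fi
```

In practice you'd normally let NUT's upsmon daemon handle this via its config rather than polling with a script, but the sketch shows the moving parts: query battery state, compare against a threshold, trigger an orderly shutdown.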

