
Looking for help with a FreeNAS Build for File storage #ThePylonNAS

There has already been some progress over on L1Tech, but I thought I'd bring this project over here as well, since all of my major builds in the past have been documented here. Some of you may in fact already know about my existing HTPC / server build #ThePylon. Side note, how's everyone doing?

 

https://forum.level1techs.com/t/looking-for-help-with-a-freenas-build-for-file-storage/136371

 

 

 

Initial post:

 

I am switching away from using my HTPC as a file server and am looking to leave it as an HTPC and build another PC to be a file server. I was thinking about using FreeNAS and ZFS. It may also host a VM or a torrent client, but this is not a primary concern.

 

Right now my current storage is 4x 8TB drives in RAID5, giving me ~24TB of storage. The plan was to build the new system, buy 4 more 8TB drives, and get it set up. Transfer all the data from the one computer to the other, then pop the other 4 drives into the new system and expand its storage to all 8 drives.
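For reference, here's a rough sketch of how that two-stage plan could map onto ZFS on the command line (pool and device names are placeholders, and a RAIDZ vdev can't be grown one disk at a time, so the second set of drives would go in as a second vdev):

# Create the pool on the first four new 8TB drives (RAIDZ1 is the rough equivalent of RAID5)
zpool create tank raidz1 da0 da1 da2 da3

# ...copy everything over, wipe the old drives, then add them as a second RAIDZ1 vdev
zpool add tank raidz1 da4 da5 da6 da7

# Confirm the final layout
zpool status tank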

 

What I'm wondering is what hardware would be suitable for this. Something low power is probably preferred. Is it possible to use something like one of the Ryzen APUs for this task? Would 32GB of RAM be enough? I know I have heard 1GB of RAM per 1TB of storage, but I don't know how much wiggle room there is.

 

For the 8 drives I know I would need an expansion card of some sort, and that RAID cards are not preferred. I already have 2 of the red ASUS Aquantia 10G NICs available to link directly to other systems (one to the HTPC and one to my workstation).

 

Lastly, I know SSD caching is an option; would NVMe be best for this? I ask since I could get something small like a 128GB drive, but otherwise I have a number of 256GB SATA drives kicking around and could chuck a few of them in there on the chipset's SATA controller.

 

Would it be wise to have some sort of redundancy for the boot drive?

 

Just wanted to say thanks in advance for the help. I know it's a bombardment of questions. It's been a while since I have looked into a separate system for my server, since I tried to only have one system on 24/7. I should be able to have the HTPC off when I don't need it now, due to no longer needing to record TV.

 

If there is more info I can provide that you think would be helpful let me know.

 

 

Oh, for reference, the HTPC is a Sabertooth X79, i7-4820K, 32GB DDR3 (could upgrade to 64GB), and a GTX 1060 FTW2 ICX.


More to come in the morning / once I'm awake. 


4 hours ago, TheProfosist said:

I know I have heard 1GB of RAM per 1TB of storage, but I don't know how much wiggle room there is.

That's a long-lasting, ever-persistent myth. In reality that figure is a baseline recommendation for when you're using deduplication, which is usually not a good idea with ZFS anyway, from what I understand. In general, the amount of RAM you need depends only on the performance you require and on the type and amount of load you expect. More RAM is just more performance; you don't need to add huge amounts to get decent performance in a mostly single-user scenario.

 

Remember, if you have performance problems you can always buy more RAM; un-buying is a lot harder.

 

4 hours ago, TheProfosist said:

Lastly, I know SSD caching is an option; would NVMe be best for this? I ask since I could get something small like a 128GB drive, but otherwise I have a number of 256GB SATA drives kicking around and could chuck a few of them in there on the chipset's SATA controller.

Have a read of this before you go down the path of trying to use an SSD for caching in ZFS; it likely won't do what you think it will.

https://www.ixsystems.com/blog/o-slog-not-slog-best-configure-zfs-intent-log/

https://www.servethehome.com/what-is-the-zfs-zil-slog-and-what-makes-a-good-one/
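For context, the two kinds of "SSD caching" those articles cover end up looking roughly like this on the command line (pool and device names are placeholders; a SLOG only accelerates synchronous writes, and an L2ARC only helps once RAM is already well used):

# L2ARC (read cache): add an SSD as a cache device
zpool add tank cache nvd0

# SLOG (separate intent log for sync writes), ideally mirrored
zpool add tank log mirror ada4 ada5

# See how the cache and log devices are actually being used
zpool iostat -v tank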

 

4 hours ago, TheProfosist said:

For the 8 drives I know I would need an expansion card of some sort, and that RAID cards are not preferred

There are lots of IBM M1015 RAID cards pre-flashed to IT mode on eBay that are excellent options; anything based on the LSI 9211 chipset will work, which the IBM M1015 is. I think the Dell H200 is another option, but others can confirm that. I've only used the IBM cards or straight LSI ones myself.
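If a card turns up that isn't already in IT mode, the usual crossflash with LSI's sas2flash tool looks roughly like this; the exact firmware file names vary per card, so treat this as a sketch and follow a guide for your specific model:

# List detected SAS2008-based controllers
sas2flash -listall

# Wipe the existing IR/RAID firmware on controller 0
sas2flash -o -c 0 -e 6

# Flash the IT-mode firmware (and optionally the boot ROM)
sas2flash -o -c 0 -f 2118it.bin -b mptsas2.rom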


  • 2 months later...

Just thought I'd post an update on this. It's still in progress, but there was a huge to-do with the company I bought the original memory from (it wasn't compatible), so I have been dealing with a PayPal claim. I have now ordered the memory @mutation666 suggested from Newegg. I had to order twice because they limit orders to 2 sticks; the first pair is in now, and the second is on the way.

I did get the system booted into Windows 10 with some non-ECC Corsair RAM I had around, to test the other hardware and the 580.


On 1/5/2019 at 10:03 AM, leadeater said:

That's a long-lasting, ever-persistent myth. In reality that figure is a baseline recommendation for when you're using deduplication, which is usually not a good idea with ZFS anyway, from what I understand. In general, the amount of RAM you need depends only on the performance you require and on the type and amount of load you expect. More RAM is just more performance; you don't need to add huge amounts to get decent performance in a mostly single-user scenario.

 

Remember, if you have performance problems you can always buy more RAM; un-buying is a lot harder.

 

Have a read of this before you go down the path of trying to use an SSD for caching in ZFS; it likely won't do what you think it will.

https://www.ixsystems.com/blog/o-slog-not-slog-best-configure-zfs-intent-log/

https://www.servethehome.com/what-is-the-zfs-zil-slog-and-what-makes-a-good-one/

 

There are lots of IBM M1015 RAID cards pre-flashed to IT mode on eBay that are excellent options; anything based on the LSI 9211 chipset will work, which the IBM M1015 is. I think the Dell H200 is another option, but others can confirm that. I've only used the IBM cards or straight LSI ones myself.

Either way, the RAM thing is semi-solved: I have 64GB (4x of the Crucial CT16G4WFD8266) on the way. Prices really have plummeted since this project started.

 

I likely won't be doing caching, though; I'll have a separate SSD array for the VMs and other things. If I want to use it as a scratch disk, RAID0 would be ideal, but I'd need to look into a way to image it to the big array. Otherwise I might go RAID1 for a bit of redundancy.
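One hedged way to "image" a scratch pool onto the big array in ZFS is a snapshot plus zfs send/receive; the pool and dataset names here are made up for illustration:

# Snapshot the scratch dataset
zfs snapshot scratch/work@nightly

# Replicate it into a backup dataset on the big pool
zfs send scratch/work@nightly | zfs receive -F tank/backup/work

# Follow-up snapshots can be sent incrementally
zfs send -i scratch/work@nightly scratch/work@nightly2 | zfs receive tank/backup/work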

 

Ended up with a 9300-8i HBA off of eBay that I snagged for $100, which tested good.


  • 4 months later...

I know I'm dredging this up, but it's still a work in progress. I just found out that the Aquantia 10G cards are not supported in FreeNAS. Also, some have said that a different OS would be better since I'd like to possibly pass through the GPU and have multiple VMs. Would anyone have suggestions for this?

 

I've seen many use UNRAID, which seems to have a decent interface.


For GPU passthrough, UNRAID is going to be the easiest. Honestly, for anything VM related I'd recommend UNRAID; I have NEVER had any luck running VMs on FreeNAS over the last 5-6 years. Both OSes have their strengths and weaknesses, and this is why I run both. For my VMs I have a box set up running UNRAID; for all my storage I run FreeNAS. I even mount all of my FreeNAS data in my UNRAID VMs when needed. This also allows me to back up important data from FreeNAS to UNRAID to have things in two places.
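For what it's worth, a rough sketch of that workflow from inside a Linux VM, assuming the FreeNAS box exports the data over NFS (hostnames and paths are placeholders):

# Mount a FreeNAS NFS export inside the VM
mount -t nfs freenas.local:/mnt/tank/media /mnt/media

# Pull important data onto local (UNRAID) storage as a simple second copy
rsync -aHv /mnt/media/ /mnt/user/backups/freenas-media/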


Hey all,

 

This post definitely has a lot to digest so I'll take it one step at a time.

 

1 GB per TB for ZFS is in fact a myth; RAM sizing should always be backed up by your actual workload. There are workloads where that ratio happens to be true, but not all workloads are the same.

 

In most scenarios it is much preferable to get between 8 GB and 16 GB of RAM and add an L2ARC with an Optane 900p (you'll have to do some tuning, but we believe in you).

 

You only add more ARC (RAM) or L2ARC (Optane) when you notice FreeNAS is having to go out to the pool a lot.
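One way to check that before spending money (commands and output fields vary a bit between FreeNAS versions, so treat these as a sketch):

# Summary of ARC size, hit ratio, and L2ARC stats
arc_summary

# Watch how much read traffic is actually falling through to the disks
zpool iostat -v tank 5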

 

ZFS, for the most part, will in fact prefer more threads over single-core IPC. Network operations scale really well.

 

For 10 gigabit, your answers are Chelsio, Solarflare, and Intel. I have no idea why commodity vendors chose Aquantia; it probably has something to do with the price.

 

If you want compatibility with the Aquantia NIC and possibly better VM support, then Proxmox VE 6.0 is your solution. FreeNAS will be more stable and has a better ZFS implementation because it's based on FreeBSD, but Proxmox has what will probably amount to an equally good implementation for your use case.

 

 

As for the last comment from dtmcnamara:

17 hours ago, dtmcnamara said:

For GPU passthrough, UNRAID is going to be the easiest. Honestly, for anything VM related I'd recommend UNRAID; I have NEVER had any luck running VMs on FreeNAS over the last 5-6 years. Both OSes have their strengths and weaknesses, and this is why I run both. For my VMs I have a box set up running UNRAID; for all my storage I run FreeNAS. I even mount all of my FreeNAS data in my UNRAID VMs when needed. This also allows me to back up important data from FreeNAS to UNRAID to have things in two places.

 

The underlying virtualization technologies in FreeBSD and in Linux could not be farther apart; comparing them is like comparing apples and oranges. Also keep in mind that ZFS was stated as a clear requirement and that a hyper-converged solution was being described.

 

UnRAID has a lot of subtle limitations, like not supporting ZFS or iSCSI, and having what I would, based on my use cases, call poor NFS support. These can all be pretty critical when talking about a file server. I do appreciate your input on the subject and your lightly touching on hyper-converged systems vs. distributed systems.

 

If it's any help to you, Wendell loves Proxmox, and I personally think it's probably one of the best hypervisors: more recently it is able to compete with UnRAID's usability while still maintaining a lot of the enterprisey feature set that FreeNAS brings to the table. Seriously though, dump the Aquantia NIC and get a cheap Solarflare card. If you need 10GBASE-T and not SFP+, you can get a Solarflare SFP+ card and then get a 10GBASE-T transceiver. Solarflare, Chelsio, and Intel cards all support SR-IOV, which at some point you will discover is the magic to getting VMs to not have terrible IO. If you can't return your NIC for whatever reason, it is supported in the Linux kernel and will of course be available in Proxmox VE 6.0 (which is secretly just Debian Buster with some Proxmox flavoring).
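For what it's worth, on a Linux host like Proxmox, turning on SR-IOV virtual functions for a supported NIC looks roughly like this (the interface name and VF count are placeholders, and the IOMMU has to be enabled in the BIOS and on the kernel command line first):

# How many virtual functions does the NIC support?
cat /sys/class/net/enp1s0f0/device/sriov_totalvfs

# Create 4 VFs, which show up as separate PCI devices that can be passed to VMs
echo 4 > /sys/class/net/enp1s0f0/device/sriov_numvfs

# Confirm the new virtual functions are visible
lspci | grep -i ethernet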


Alright, so I already have 64GB of ECC memory in the system. I already have 2x 10G Aquantia cards, and the motherboard was chosen so I could use both at x4. Most of this system was made out of parts I already had kicking around or could trade for, etc. I can do an updated post with the exact hardware.

 

I have it temporarily built so I can do testing with software as I plan to heavily modify an Antec 900 to fit everything and have cable management. 

