bsodmike

FreeNAS Server Build


Posted · Original Poster (OP)

Hi all,

 

At the outset, I wanted to house this server in a proper rackmount case, rack, etc.  However, iStarUSA was unable to fulfil my order as they are refreshing their EX3M16 (well, EX) line, and B&H USA ended up cancelling a good part of my orders as a result.  While it was a setback in terms of the time put into planning components and matching 2U-compatible PSUs with the enclosure, I've had to take a more DIY approach.

 

This build consists of an Intel Xeon E5-2620 v4, an Asus X99-E WS/USB3.1 (SSI-CEB) mainboard, 32GB of Crucial RDIMM ECC DDR4 RAM, a Corsair AX860i PSU, and a Cooler Master Hyper 212 EVO cooler.

 

 

On the storage side, the first RAIDZ2 volume will run 8x WD Red 4TB NAS drives, for roughly 68% usable space - about 19.9TB of accessible capacity.  I spent a good while weighing cost vs. usable space with this handy calculator before deciding between 6 and 8 drives for the volume.  The WD Black drive pictured is for archival backups.
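For anyone doing the same 6-vs-8-drive maths, a rough first-pass estimator can be sketched as below. This is a simplification I'm using for illustration, not the calculator mentioned above: it ignores ZFS metadata, padding, and slop-space reservations, which is why the real-world figure (~19.9TB) comes out a few percent lower than what it prints.

```python
def raidz_usable_tib(drives: int, size_tb: float, parity: int = 2) -> float:
    """Rough usable capacity of a single RAIDZ vdev, in TiB.

    parity=2 models RAIDZ2 (two drives' worth of parity).
    Ignores ZFS metadata/padding overhead, so treat the result
    as an upper bound.
    """
    raw_bytes = (drives - parity) * size_tb * 1e12  # drives are sold in TB (10^12 bytes)
    return raw_bytes / 2**40                        # convert to TiB (2^40 bytes)

for n in (6, 8):
    usable = raidz_usable_tib(n, 4.0)
    ratio = (n - 2) / n * 100
    print(f"{n}x 4TB RAIDZ2: ~{usable:.1f} TiB usable (~{ratio:.0f}% of raw)")
```

The jump from 6 to 8 drives buys ~7 TiB of extra usable space for the cost of two more drives, since the two-drive parity overhead is fixed.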

 

 

I plan to run the ZFS FreeNAS server alongside my 2x Synology RAID setup (40TB of WD Red Pros in total), and may look to reconfigure this down the road.  I will start by moving critical data onto ZFS before making the FreeNAS box the 'canonical' store, and will most likely run the two in parallel for the first couple of months before making any drastic decisions.
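Since both stores will run in parallel before the cutover, the main chore is verifying that the copies actually match. A minimal spot-check sketch, standard library only - the mount points in the usage note are hypothetical, and this is just one way to do it, not a prescribed tool:

```python
import hashlib
from pathlib import Path

def file_digest(path: Path, chunk: int = 1 << 20) -> str:
    """SHA-256 of a file, read in 1 MiB chunks so large files don't eat RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def mismatches(src_root: Path, dst_root: Path) -> list[str]:
    """Relative paths under src_root that are missing or differ in dst_root."""
    bad = []
    for src in src_root.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(src_root)
        dst = dst_root / rel
        if not dst.is_file() or file_digest(src) != file_digest(dst):
            bad.append(str(rel))
    return bad
```

Usage would be something like `mismatches(Path("/mnt/synology"), Path("/mnt/tank"))` (hypothetical mounts); an empty list means every source file exists on the destination with identical contents.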

Posted · Original Poster (OP)

This is what the initial build looks like - the GPU was thrown in just to access the UEFI, and will be replaced with a GTX 1050 for the long term.  This side of the Corsair Air 740 is pretty clean, with most of the bulk of the cabling in the rear compartment.

 

 

How am I going to mount the 8 drives, though?  Well, I'm working on a solution and exploring 3D-printing options at the moment.

Posted · Original Poster (OP)
8 hours ago, Mikensan said:

What case is that? Are there any 5 1/4" bays?

 

It's a Corsair Air 740.  It has 2x 5.25" bays at the bottom, but instead I'm looking to 3D print 2x4 stacked bays to fit in the same area.

 

I could also do the same and fit more drives over the PSU in the rear compartment if needed.

 

I find it rather shocking that there is a whole lot of wasted space in the Air 540/740 (I have both cases), simply due to poor design and a hell-bent focus on accommodating custom water-cooling loops.


Yeah, that is a really strange case, honestly. They make external enclosures as well, but they can be pricey; you could connect them either via eSATA or, for some expander cases, via SAS SFF cables.

Posted · Original Poster (OP)

@Mikensan yeah, I'll ultimately switch to a rackmount setup. For the time being, I've been test-fitting a run of 3D-printed spacers that let me stack the drives as needed.  This is my design:

 

[photo: 3D-printed drive spacer design]

 

 

Posted · Original Poster (OP)

Initial test fit from yesterday; I noticed a lack of rigidity in this configuration, so I'm getting a 'base plate' with grooves that essentially lets the drives slot in.  Will post progress pics later this week.

 

Oh, the new mounting plates will also have a better-ventilated design.

 

[photo: initial test fit of the stacked drives]


You could use the backplate as a template to cut holes in a sheet of metal. If it's thin enough, even drill bits meant for wood will get through it. Very cool man, keep us (at least me) posted.

 

There are screw holes on the bottom of the drives too, and maybe you could create L-brackets for a shelf?


Looks good thus far, and thanks for sharing!  I'm currently finalizing my parts list and deciding between FreeNAS and unRAID, as well as which CPU/motherboard combo to use, so your build piqued my curiosity.  Hope you don't mind a few questions...

 

What's your ultimate goal for the system? 

Planning on running any Jails, VMs, and or Plex?

Besides the drives, why did you choose the parts you did?

Why did you choose FreeNAS?

 

Thanks in advance!

Posted · Original Poster (OP)
On 24/07/2017 at 9:06 PM, Mikensan said:

You could use the backplate as a template to cut holes in a sheet of metal. If it's thin enough, even drill bits meant for wood will get through it. Very cool man, keep us (at least me) posted.

 

There are screw holes on the bottom of the drives too, and maybe you could create L-brackets for a shelf?

Aye, some good ideas there mate.

 

I need to add some rubberised isolation to the drives, but this was a first attempt at getting 4x drives in. I used my iPhone to take a snap at the back to ensure nothing was touching the mainboard or any of the headers. There's about a 1.5-2cm gap, so it's pretty safe.

 

[photo: rear clearance behind the drive stack]

 

Once I got the rest of the drives in:

 

[photo: all drives mounted]

Posted · Original Poster (OP)

For the 4x drives on the left, I realised they needed further rigidity and therefore had another part fabricated; think of it as a 'tray' with spacers.  The tray and grooves allow the HDDs to be slid into and out of the case as needed.

 

[photos: fabricated tray with spacers]

Posted · Original Poster (OP)
On 02/08/2017 at 0:29 AM, luisvaz said:

Looks good thus far, and thanks for sharing!  I'm currently finalizing my parts list and deciding between FreeNAS and unRAID, as well as which CPU/motherboard combo to use, so your build piqued my curiosity.  Hope you don't mind a few questions...

 

What's your ultimate goal for the system? 

Planning on running any Jails, VMs, and or Plex?

Besides the drives, why did you choose the parts you did?

Why did you choose FreeNAS?

 

Thanks in advance!

Hi Luis,

 

Well, I wanted to stick with X99 as it is a mature platform.  Early on I also decided ECC was a requirement, which led me to a Xeon.

 

From there, I came across the Asus X99 WS boards - they are on the more expensive side, sure, but they are a cross between the 'gamer/overclocker' aesthetic and server-grade components.  I like the LED error-code readout, the hardware start/reset switches, having a Thunderbolt header, etc.  These are 'additions' a cheaper Supermicro board would generally lack.

 

The rear I/O on the Asus X99 WS board is great too, including a BIOS flashback/reset switch, and the Asus UEFI is fantastic - so that settled my choice there.  I chose one of the cheaper E5 Xeons, with enough cores to run at least 1-2 VMs if needed.

 

I went with Crucial 32GB RDIMM RAM; Crucial makes quality RAM, so no issues there.  For the power supply I went with a Corsair AX860i, the second unit I own; I also run an HX1000i and an HX750i.

 

In my other Synology units I'm running WD Red Pros that have served me well over the last 2+ years; here I went with WD Reds as they were cheaper.

 

One point to note: I went with the Asus X99 WS/USB3.1 board over the Asus X99 WS/10G board; the latter has dual 10G NICs (both Intel X550-grade), but you lose the Thunderbolt header.  I wanted the option of having the FreeNAS box in another room and running a Corning optical Thunderbolt cable the way Linus did.

 

I can buy Intel X520 cards for 10G Ethernet at around $200 per card; an additional cost, sure, but at least I have the option of Thunderbolt (in the future).

 

That said, it is rather annoying that Asus doesn't have a WS board with IPMI + USB3.1 + Thunderbolt + dual X550 10G.  Come on, Asus!


Thanks for the reply, and I understand some of the hardware decisions (the cost factor of X99 over others), but why FreeNAS over unRAID?  Just trying to understand why you went with FreeNAS.  Also curious about the Thunderbolt cable: both OSes are managed via a browser, so other than initial setup and UEFI/BIOS updates, what's the need for Thunderbolt, and again, the added expense?

On 21. 8. 2017 at 5:04 PM, bsodmike said:

One point to note: I went with the Asus X99 WS/USB3.1 board over the Asus X99 WS/10G board; the latter has dual 10G NICs (both Intel X550-grade), but you lose the Thunderbolt header.  I wanted the option of having the FreeNAS box in another room and running a Corning optical Thunderbolt cable the way Linus did.

 

I can buy Intel X520 cards for 10G Ethernet at around $200 per card; an additional cost, sure, but at least I have the option of Thunderbolt (in the future).

 

That said, it is rather annoying that Asus doesn't have a WS board with IPMI + USB3.1 + Thunderbolt + dual X550 10G.  Come on, Asus!

You are right! :-) I bought an X99-E WS about half a year ago and I'm very happy with it. I had decided to go with X99 but couldn't decide between the X99-E WS and the X99-E-10G WS. I was very excited when the 10G version was introduced, but then disappointed: a complete lack of Thunderbolt (even the onboard Type-C is just USB 3.1), fewer ports on the rear I/O, and finally Asus (and their distributors) told me (in January 2017) that the product wasn't destined to be available in the Czech Republic (the centre of Europe...). When it finally was available around May 2017, I already had my X99-E WS, and the X99-E-10G WS cost more than buying an extra 10G card from eBay (I already have a couple of them).

Posted · Original Poster (OP)
On 23/08/2017 at 7:57 PM, luisvaz said:

Thanks for the reply, and I understand some of the hardware decisions (the cost factor of X99 over others), but why FreeNAS over unRAID?  Just trying to understand why you went with FreeNAS.  Also curious about the Thunderbolt cable: both OSes are managed via a browser, so other than initial setup and UEFI/BIOS updates, what's the need for Thunderbolt, and again, the added expense?

Hi mate - well, I was keen on ZFS and that was the 'priority'.  As far as operating systems go, FreeBSD has a track record of being solid and is considered to have a more robust network stack; it's Unix.  Had there been a Linux alternative, I may have gone with that.  For my professional work I mainly rely on Debian for production environments.

 

I didn't spend too much time on unRAID; it certainly is there as an option.  I do plan to try my hand at setting up an ESXi lab at home and virtualising other aspects of my lab stuff.  I'd do it mostly out of curiosity, as it's typically cheaper to just spin up AWS instances for any real need rather than paying ESXi licensing costs.

 

In any case, running VMs wasn't an initial priority, but I did pick the CPU/mobo to ensure they would support it.

 

Thunderbolt - well, VNC is nice and running headless is great (99% of my systems are headless over SSH), but if you want a physical terminal many metres away, Thunderbolt is a real option.  Why? Because PCs/servers generate heat and noise.  I can move them to my server-rack space downstairs and still have a near latency-free console over Thunderbolt if I ever want it.

 

Do I need it? No.  But I have the option.  True, it also lets me manage UEFI stuff; alternatives would have been either (i) an IPMI chip on the board or (ii) an IP-based KVM solution.

Posted · Original Poster (OP)
On 24/08/2017 at 4:04 AM, iJarda said:

You are right! :-) I bought an X99-E WS about half a year ago and I'm very happy with it. I had decided to go with X99 but couldn't decide between the X99-E WS and the X99-E-10G WS. I was very excited when the 10G version was introduced, but then disappointed: a complete lack of Thunderbolt (even the onboard Type-C is just USB 3.1), fewer ports on the rear I/O, and finally Asus (and their distributors) told me (in January 2017) that the product wasn't destined to be available in the Czech Republic (the centre of Europe...). When it finally was available around May 2017, I already had my X99-E WS, and the X99-E-10G WS cost more than buying an extra 10G card from eBay (I already have a couple of them).

Right - well, consider whether you can order from B&H USA; that's where I get all my stuff.  I went with the X99-E WS as well; it was cheaper, and I plan to buy Intel X520/550 10G cards and drop them in when a 10G switch is added to the network.  ~$200/card from Newegg ain't too bad.

Posted · Original Poster (OP)
Just now, bsodmike said:

Right - well, consider whether you can order from B&H USA; that's where I get all my stuff.  I went with the X99-E WS as well; it was cheaper, and I plan to buy Intel X520/550 10G cards and drop them in when a 10G switch is added to the network.  ~$200/card from Newegg ain't too bad.

That said, I wish Asus made a WS board with IPMI + all the USB3.1 and dual-10G NICs.

 

Seriously, their approach to making product lines doesn't make much sense at times.  And to make matters worse, there's no compatible IPMI solution I could find that works with the X99-E WS/USB3.1.

6 hours ago, bsodmike said:

Right - well, consider whether you can order from B&H USA; that's where I get all my stuff.  I went with the X99-E WS as well; it was cheaper, and I plan to buy Intel X520/550 10G cards and drop them in when a 10G switch is added to the network.  ~$200/card from Newegg ain't too bad.

That is one way, but here in the Czech Republic we have a 2-year warranty (given by law), and some products (including the X99-E WS) come with a 3-year warranty... and shipping costs would be astronomical. (I want to buy an Open Compute server through eBay, but they ship only to the US and Canada, so I contacted them; a $500 shipping quote for a $350 product isn't great, and they refuse to send it via eBay Global Shipping, which would be much cheaper.)

On 8/26/2017 at 1:06 AM, bsodmike said:

Hi mate - well, I was keen on ZFS and that was the 'priority'.  As far as operating systems go, FreeBSD has a track record of being solid and is considered to have a more robust network stack; it's Unix.  Had there been a Linux alternative, I may have gone with that.  For my professional work I mainly rely on Debian for production environments.

 

I didn't spend too much time on unRAID; it certainly is there as an option.  I do plan to try my hand at setting up an ESXi lab at home and virtualising other aspects of my lab stuff.  I'd do it mostly out of curiosity, as it's typically cheaper to just spin up AWS instances for any real need rather than paying ESXi licensing costs.

 

In any case, running VMs wasn't an initial priority, but I did pick the CPU/mobo to ensure they would support it.

 

Thunderbolt - well, VNC is nice and running headless is great (99% of my systems are headless over SSH), but if you want a physical terminal many metres away, Thunderbolt is a real option.  Why? Because PCs/servers generate heat and noise.  I can move them to my server-rack space downstairs and still have a near latency-free console over Thunderbolt if I ever want it.

 

Do I need it? No.  But I have the option.  True, it also lets me manage UEFI stuff; alternatives would have been either (i) an IPMI chip on the board or (ii) an IP-based KVM solution.

 

So your main goal was storage and obviously ZFS was your preference... gotcha.  Totally understand on the options, and agreed, it makes perfect sense to have it available in the event you wish to take advantage of it.

 

By any chance, have you seen this drive cage?  It mounts onto 120mm fan holes and lets you keep the fan.

 

http://www.caselabs-store.com/double-wide-magnum-standard-hdd-cage/

 

 

Posted · Original Poster (OP)
15 hours ago, luisvaz said:

 

So your main goal was storage and obviously ZFS was your preference... gotcha.  Totally understand on the options, and agreed, it makes perfect sense to have it available in the event you wish to take advantage of it.

 

By any chance, have you seen this drive cage?  It mounts onto 120mm fan holes and lets you keep the fan.

 

http://www.caselabs-store.com/double-wide-magnum-standard-hdd-cage/

 

 

That's pretty awesome; this could have also worked http://www.caselabs-store.com/hdd-cage-expansion-kit/

 

Unfortunately, I didn't want to spend more time re-ordering stuff, so I went the 3D-printing route.  Sure, the metal expansion cages are cheaper, but once I factor in shipping it works out to more or less the same as the overall 3D-printing cost.

