First Time NAS/Homelab Build - Looking for input on parts list and best ways to buy used server hardware

Hey folks, I recently jumped into the homelab/NAS scene and believe I've made decent progress on a potential NAS build, but my head is still spinning with all the options and variations in this world, so I wanted to get input on where I am to help me dial in my choices and expectations.


For the OS, I'm planning to go with TrueNAS SCALE, and the build is tuned with that in mind, though I have recently been considering Unraid as potentially a better fit.


Constraints:

  • Power: Main constraint; I want this as low as feasible without jumping to SSDs (no power-hungry servers, basically)
  • Noise: minimal but not critical (the rack is in a corner of the basement, so it can make some noise, but preferably not so much that it is annoyingly loud from the other side of the basement. Some sound dampening could be added, but a quieter base is best)
  • Budget: $1000-$4000 (not budget constrained, want to build something that works well and reliably for the next 5-10 years, but not wildly or unnecessarily expensive)
  • Size: 2U-4U (have a massive rack to fill, so a 4U might be preferable from a noise perspective)

Use cases:

  • NAS file backup
  • Private Cloud server (i.e. Wireguard for remote access to NAS)
  • Music server (Samba, maybe Plexamp or something similar)
  • Home Assistant
  • Pi-hole
  • Optional: Minecraft server, Plex/Jellyfin Media Server, VPN/Proxy

Current hardware selection (can definitely change):

  • CPU: Intel Xeon E series (E-2100, E-2200, E-2300) - Input needed! More info below
  • Motherboard: An appropriate Supermicro board for the CPU (X11, X12)
  • RAM: 64 GB ECC DDR4
  • Boot drive: 120 GB M.2 NVMe SSD (probably a Samsung EVO or something similar)
  • HDDs: 6-8 Seagate Exos drives, 14-18 TB depending on cost
  • Chassis: A Supermicro chassis (of the 2U flavor, probably) or an inexpensive 4U chassis like the Rosewill RSV-L4500U; recommendations here are very much appreciated!
  • NIC: Some 10 GbE NIC; I still have to research whether Chelsio or Intel is better with TrueNAS SCALE

 

The CPU has been the hardest choice, as the options and generations are vast and sprawling. At first I was considering the Intel Atom C3000 line, but since Minecraft is heavily single-threaded, I moved toward the newer Intel Xeon E series. AMD was certainly overlooked in my process, as my main sources (ServeTheHome and the TrueNAS Community Hardware Guide) focused mostly on Intel platforms. If there is an AMD platform that would be killer for my application here, I'd very much be open to it (I've heard some rumbling about ASRock Rack X570 platforms, but I've just managed to get my mind wrapped around Intel, so I haven't checked them out yet).

 

This is where my consumer PC building naivety shows; I assumed that once I parted out the build it would be somewhat straightforward to buy the parts on Newegg, but I see that's not really the case. In folks' experience, is it better to buy a used pre-built server or to buy the used parts individually in this market? I'd personally like to buy the motherboard and HDDs new for reliability and get the CPU used, but I'm not sure whether that's a viable strategy.


Thanks in advance for any advice and input!

2 hours ago, Quantum Noisemaker said:


So... there is a lot to unpack here, and the answer I would recommend may seem a little scary at first as it's a bit more involved, but you seem like the type to be OK with that, so here we go.

 

TrueNAS SCALE, while it supports VMs reasonably well since it's based on Debian and KVM, is at heart a storage appliance. It really likes and is really good at that, and it's, well, not quite as good at anything else. The SCALE builds are meant to provide more in the way of virtualization, but... why do you want to go with TrueNAS for storage? Likely because it's REALLY good at doing storage. Why would you want to go with TrueNAS for your hypervisor? Likely because you're unsure of what other options exist.

 

Let me present you with Proxmox. It's also Debian-based, but it is a hypervisor, and only a hypervisor (sort of... it's Debian, so it can be whatever you want; people set up ZFS arrays directly in Proxmox and share them via SMB, which is viable, BUT you would be doing everything via the CLI, and that is IMO more work than it's worth when you can just virtualize TrueNAS). And so we arrive at the option: build a Proxmox host and virtualize everything you need, including TrueNAS, under it.
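To give a feel for how little ceremony that involves, here's a minimal sketch of creating the TrueNAS VM from the Proxmox VE shell. Everything here is a placeholder (VM ID, RAM/core counts, storage name, ISO filename), and the same thing can be done entirely in the web GUI:

```
# Hypothetical sketch - create a TrueNAS SCALE VM on a Proxmox VE host.
# VM ID 100, "local-lvm" storage, and the ISO name are placeholders.
qm create 100 \
  --name truenas \
  --memory 16384 \
  --cores 4 \
  --cpu host \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci \
  --scsi0 local-lvm:32 \
  --ide2 local:iso/TrueNAS-SCALE.iso,media=cdrom \
  --boot "order=scsi0;ide2"
qm start 100
```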

 

This would mean you can run whatever you want on this machine as a VM: Windows, Debian, other flavors of Linux (or Unix), and it will all be running under Proxmox, which is a free-to-use, commercial-grade hypervisor (much like TrueNAS is the storage appliance, this is the hypervisor equivalent). VMware ESXi does exist as well (I previously used that), but it isn't free, and the free version has *many* limitations. XCP-ng also exists, and that could be worth looking into, but I decided on Proxmox (made the switch from ESXi a few months ago) and have never been happier.

 

Under this paradigm, you run everything next to TrueNAS instead of under it. I have Plex, Home Assistant, a bunch of Ubuntu VMs (some with Docker containers inside them), Windows, game servers, a UniFi controller, you name it, and they are all very happy and humming along. In this setup, TrueNAS operates only as the storage appliance, and all the VMs connect to its data via SMB or NFS, just like you would from your PC or laptop... but it's all virtual, and virtual networking is extremely fast. The virtual NIC (VirtIO), from my understanding, essentially just passes memory from VM to VM, so while it shows up as 10 gigabit when you do a right-click > Properties in a Windows VM, it can operate more at the speed of the CPU. Basically, you don't need to worry about networking being your limitation; your TrueNAS array will 100% be the limiting factor. So say you have an Ubuntu VM running Plex, it has a network location mounted at, say, /mnt/allmymoviesandstuff, you point Plex at that, and boom, done, movies are now in Plex (a sketch of that mount is below).
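As a rough sketch, assuming an NFS share on the TrueNAS VM, mounting it inside the Plex VM could look something like this (the NAS IP and dataset path are placeholders; an SMB mount via cifs-utils works much the same way):

```
# Hypothetical sketch - mount a TrueNAS NFS export inside an Ubuntu VM.
# 192.168.1.50 and /mnt/tank/media are placeholders for the NAS IP and dataset.
sudo apt install -y nfs-common
sudo mkdir -p /mnt/allmymoviesandstuff
echo "192.168.1.50:/mnt/tank/media  /mnt/allmymoviesandstuff  nfs  defaults,_netdev  0  0" \
  | sudo tee -a /etc/fstab
sudo mount -a
# Then point Plex's library at /mnt/allmymoviesandstuff
```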

 

Regardless of which route you take, you will want a large SSD boot drive. You don't want to run VMs off the ZFS array; that would be horribly slow. If you go with Proxmox, you will want to use the boot SSD to also store the VMs' disks, which is standard practice. If you use TrueNAS, the same would apply (I think; I have not used SCALE yet, but I can't imagine they wouldn't let you install VMs to the boot media). I have a 512 GB NVMe SSD for my Proxmox boot, and I have all my VMs running on that so they are nice and fast and snappy. I then use Proxmox's free backup OS, Proxmox Backup Server (also virtualized under Proxmox VE (virtual environment; the actual hypervisor I have been referring to this entire time is officially named Proxmox VE)), which is a bit of VMception, as I am backing up my Proxmox VE VMs via a VM under Proxmox... which is NFS-mounted to a TrueNAS share which is also... a VM... under Proxmox. lol. But this is "fine" for homelab use. My backups are more of an "ah shit, I goofed that VM up, let me go load a backup real quick" thing vs. a catastrophe-recovery enterprise solution. That said, if things ever went really sideways, I (and you) would be able to just boot TrueNAS on bare metal (it doesn't know it's virtualized; well, technically it does, but it doesn't "care"), and once it's running bare metal, you could install Proxmox Backup Server on another machine, connect via NFS, and pull the VMs off. It would be a pain, but again, this is homelab, not 99.9999%-uptime enterprise land.
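For reference, a one-off backup of a VM from the Proxmox VE shell is just a vzdump call; "pbs" below is a placeholder for whatever you name the Proxmox Backup Server datastore, and scheduled jobs are normally configured in the web GUI instead:

```
# Hypothetical sketch - back up VM 100 to a backup storage named "pbs".
vzdump 100 --storage pbs --mode snapshot
```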

 

So, with all of that, hopefully you have enough info to go and try to learn what is best for you. There is a lot to unpack here, but it is worth looking into now, as it's hard to pivot once you have things set up. Also of note, if you do want to virtualize TrueNAS, you need a SAS HBA; they are ~40 bucks used on eBay (make sure they are flashed to IT mode; if not, you can do it yourself, though it can be a pain), plus some SAS-to-SATA cables for 20 bucks. This way, you can pass the PCIe HBA through Proxmox to TrueNAS so TrueNAS has direct bare-metal access to the drives (this is required for ZFS to function correctly), and this is what would allow you, if ever needed, to boot TrueNAS bare metal in the future (a passthrough sketch is below). Just plug the drives into said HBA, boot TrueNAS on the machine, load your most recent backup of your TrueNAS config (it's just a small config file; keep encrypted backups handy, they are only a few MB each), and it will function 100% as it did while it was virtualized, assuming the ZFS array is not damaged either physically or logically.
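A minimal sketch of that passthrough step, assuming IOMMU is already enabled in the BIOS and kernel, and using a made-up PCI address and VM ID:

```
# Hypothetical sketch - find the HBA's PCI address, then hand it to the TrueNAS VM.
lspci | grep -i -E "lsi|sas"     # note the address, e.g. 01:00.0 (placeholder)
qm set 100 --hostpci0 0000:01:00.0
qm stop 100 && qm start 100      # a full stop/start is needed to pick up the device
```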

 

Link to an LSI HBA (this is what you want), already flashed to IT mode (this makes it not work as a RAID card, which is what you want; you DON'T want to put a RAID card between ZFS and the drives, you want it to act only as an HBA, or host bus adapter), and it comes with some SAS-to-SATA cables: https://www.ebay.com/itm/194910024856

 

Happy homelabbing!

Rig: i7 13700k - - Asus Z790-P Wifi - - RTX 4080 - - 4x16GB 6000MHz - - Samsung 990 Pro 2TB NVMe Boot + Main Programs - - Assorted SATA SSD's for Photo Work - - Corsair RM850x - - Sound BlasterX EA-5 - - Corsair XC8 JTC Edition - - Corsair GPU Full Cover GPU Block - - XT45 X-Flow 420 + UT60 280 rads - - EK XRES RGB PWM - - Fractal Define S2 - - Acer Predator X34 -- Logitech G502 - - Logitech G710+ - - Logitech Z5500 - - LTT Deskpad

 

Headphones/amp/dac: Schiit Lyr 3 - - Fostex TR-X00 - - Sennheiser HD 6xx

 

Homelab/ Media Server: Proxmox VE host - - 512 NVMe Samsung 980 RAID Z1 for VM's/Proxmox boot - - Xeon E5-2660 v4 - - Supermicro X10SRF-i - - 128 GB ECC 2133 - - 10x4 TB WD Red RAID Z2 - - Corsair 750D - - Corsair RM650i - - Dell H310 6Gbps SAS HBA - - Intel RES2SC240 SAS Expander - - TrueNAS + many other VM's

 

iPhone 14 Pro - 2018 MacBook Air

2 hours ago, LIGISTX said:


Oh man, don't start tempting me! To try to keep my original post somewhat brief (lol), I didn't include any details as to why I ended up leaning towards TrueNAS SCALE: I'm building this system to install in my dad's house, and it will be used primarily by him. I'm going to be using it onsite and remotely, but it is ultimately a build for him to use. He is technologically literate and pretty good with computers (he still remembers some of his Unix CLI commands from working with old-school Oracle databases), but he isn't a pro networker/IT specialist. That's why I ultimately shied away from Proxmox and other virtualized solutions; I wanted it to be something he'd be somewhat comfortable using and managing without having to phone me every time, and adding in all those virtual layers of complexity would not help with that task.

 

That's why I ultimately settled on running TrueNAS SCALE bare metal. It's not super obtuse, it has a nice GUI, and it also has the ability to run Docker containers, with Debian Linux being under the hood. I'm trying my best to strike the right balance between some real homelabbing experience for me and a solid, understandable NAS system for my dad, giving the latter precedence of course. Ultimately this system will be a NAS first, with a few Docker containers for the tasks mentioned above and a few more for experimentation and fun. A "get your feet wet" system, if you will.

 

When I build my own dedicated homelab server, I'll most likely be going with Proxmox. It's nice to know that TrueNAS runs super well on it; I was a bit spooked about running a virtualized NAS, but seeing how easy it is to fall back to bare metal if something goes wrong gives me a lot of confidence to try it someday. Thanks for illuminating all of that (and saved for reference)!

 

Now for the stupid question of the day:

"and some SAS to SATA cables for 20 bucks"

I was thinking of buying the Exos drives with a SAS interface instead of SATA to match whatever SAS card or Supermicro board I get (hopefully I won't need a card with only 6-8 drives). In that case I'd just need SAS-to-SAS cables, right? Or am I making an amateur mistake here in buying SAS-interface drives?

11 minutes ago, Quantum Noisemaker said:


Gotcha, those are good reasons...

 

For the drives, I would just get SATA drives unless for some reason you can get SAS ones for a cheaper price. SATA is much more ubiquitous and can be plugged into anything. SATA drives are a commodity item, are relatively inexpensive, and will be perfectly fine for your needs. All SAS ports (as far as I know) can be broken out into SATA, so whatever you go with, you should be able to break the SAS port out into multiple SATA ports, but it would be a good idea to specifically confirm that with whatever mobo/HBA you end up using.

 

To bring this back to answering your specific questions: you likely don't need to go too balls deep to get good performance. My previous homelab ran on an i3-6100 (4 threads...) and 28 GB of ECC. That system supported a 10x4 TB Z2 array on TrueNAS (which was given 16 GB of RAM) and was totally happy saturating a gigabit network, plus Windows LTSC, Home Assistant, a Plex server, and a few other Ubuntu VMs doing various tasks (mostly remedial things, but still), all under ESXi. Granted, you will have more storage, BUT realistically you don't need as much RAM as you think you need, and you almost certainly don't need as much CPU as you think you need; my i3 averaged 20% usage and was able to do multiple 1080p-to-720p transcodes at once without any of the VMs becoming unhappy. This was built on an HPE server/workstation mobo that did officially accept i3s (I got the CPU, case, PSU I will never use, and mobo all from HPE on sale for 250 bucks brand new about 6 years ago; what a great deal that was).

 

All of that to say, you can likely spend a good chunk less on the mobo and CPU side of things. I have gone the used-server-parts route; there are MANY options for this and it will save you some money. Supermicro boards last a very long time. My buddy just upgraded to the same parts I have (we both upgraded at the same time to end up with matching systems), and he was previously running dual-socket LGA 1366 Xeons... those are like 13 years old at this point, with horrible cooling since he pulled the stock fans out and didn't properly address ducting (things got really hot, I am honestly shocked nothing died), and it kept on chugging. There is certainly nothing wrong with buying new hardware, but if you can find some used stuff that fits the bill, it may be worth considering, if for no other reason than to help reduce/reuse/recycle. You can find registered 2133 ECC RAM for pretty cheap; I think I got my sticks on eBay for 30 bucks a stick, 16 GB each? The CPU was 75 bucks IIRC, and the mobo was more, at ~230. It is a few generations old, but it's still a solid performer and more than I need; in hindsight I should have gone with a lower-core-count option just to save a bit on electricity.

 

Hope this helps... 


18 hours ago, LIGISTX said:


Great, that helps clear that up. I saw that SATA and SAS were electrically compatible, so it's nice to know I can just go with SATA drives for simplicity and flexibility.


Super helpful! Thanks for the details. Admittedly, when I started doing the research to build the NAS, I just looked at the newest stuff and focused on wrapping my mind around that (i3s, Atoms, Xeons, Ryzens, Threadrippers, lol). Looking at used parts adds a whole new dimension (e.g. trying to figure out Intel's Xeon naming scheme and cross-compare processors), so I put the used market aside initially, but it looks like I should definitely look into it more. Any tips for shopping for used servers?

 

I suppose my biggest initial concern about buying a used server was that I know they are known for being fairly loud and power hungry. That being said, I imagine I could take the guts out of a used server and place them into the Rosewill RSV-L4500U 4U case I mentioned before. Good to know that the Supermicro boards last a long time, as I wasn't sure if that was the case.

 

Also, is there any benefit to newer hardware for power usage? My understanding is that though TDPs across generations are similar, newer hardware can chug through compute tasks quicker and thus return to idle sooner, and is therefore more power efficient (sounds a bit speculative though). Does newer hardware have better idle power draw numbers, or is it about the same there as well?

 

In your opinion, what would be the best new CPU for what I'm doing here, and what older chip might be a better bang-for-the-buck value? (This would help me focus my decision making and board hunting, as I'm still a bit all over the place.)

 

2 hours ago, Quantum Noisemaker said:


There are a few ways to go scout used parts. You can find a fully built system, but that will cost more, and they are hard to find right now. What I did, and what many do, is just get a mobo, a CPU, and RAM separately and put them in a desktop case. Servers are loud because of their form factor and density, but even then you can quiet them down. My buddy has a Supermicro 4U with all 24 bays full of drives and the same mobo, CPU, and RAM as me. We 3D-printed a "wall" where the stock 80mm fans would go that supports 120mm fans, removed the loud-AF redundant PSUs in favor of a standard ATX PSU which is not loud at all, put on a Noctua heatsink made for 3U chassis with a fan (most servers don't have fans on the heatsink; they rely on the 10,000 RPM 80mm fans to move enough air to cool the heatsinks), and added Noctua 80mm fans to the rear in place of the loud OEM 80s.
 

As far as what parts to get, that is more difficult. I first determined a new-enough platform that would be powerful and somewhat efficient, which was LGA 2011, and I went with the newer version of that (LGA 2011-3). Then from there I searched eBay for boards with all the features I wanted (lots of PCIe) and then CPUs that are compatible. Looking back I didn't need 28 threads, but... oh well, lol. I used to run a bare-metal pfSense box next to my homelab, so getting this CPU at least let me *very easily* virtualize that as well, which overall came out to about a wash in total power draw. If I did it again I'd likely find a lower-wattage chip; I think they have 20-thread versions that use a little less power. But I did build this with 8+ years of headroom in mind. I can double my RAM if needed, and I can't see myself needing 28 threads for a very long time.

