What should I build/which OS should I use?

Hey folks!

Moving into a new place, which means a server rebuild!

Here's what I do, roughly in order of priority

  • Home media streaming and file storage (15 TB at this point)

  • Downloading

  • In-network file storage and sharing with Windows computers

  • Self-hosted cloud

  • Home automation/metrics

  • UniFi controller

  • Backup

I'm pretty comfortable with Linux CLI and Docker, and with the smarthomebeginner.com guides. But also, should I just go with Windows this time to keep it simple with respect to storing and sharing files within the house?

Also curious, if I do go with Ubuntu, what should I do about RAID? Hardware? Software? ZFS? I feel pretty neutral about these things.

I know I want to do a rack-mounted server thing, with lots of hard drives, and also I have a Quadro card in my current server for transcoding and such.


I'm pretty happy with Unraid for my NAS so far. It runs 19 TB of storage and covers my Plex server, Nextcloud instance, random archival storage/backup, and a database for my weather station, all through Docker containers.

 

Don't know if it supports home automation and UniFi controller though.

Crystal: CPU: i7 7700K | Motherboard: Asus ROG Strix Z270F | RAM: GSkill 16 GB@3200MHz | GPU: Nvidia GTX 1080 Ti FE | Case: Corsair Crystal 570X (black) | PSU: EVGA Supernova G2 1000W | Monitor: Asus VG248QE 24"

Laptop: Dell XPS 13 9370 | CPU: i5 10510U | RAM: 16 GB

Server: CPU: i5 4690k | RAM: 16 GB | Case: Corsair Graphite 760T White | Storage: 19 TB


If you want an easy-to-use WebUI then TrueNAS Core (or TrueNAS SCALE if you want native Docker support instead of FreeBSD Jails) is a good choice of operating system. You could also opt for a standard FreeBSD install, or a Linux distribution that works well as a server, such as Debian, RHEL (or its derivatives), etc... and do everything from the command line if you are comfortable with that. Ubuntu Server has ZFS built in, unlike most other Linux distributions, which can be an advantage in some cases.

 

I would avoid hardware RAID unless you have a specific use case that requires it. Most applications nowadays benefit from the simplicity, flexibility and portability of software RAID. ZFS is a really good choice - it combines what would usually have been several different storage layers managed by separate tools into one unified stack, making things simple, reliable and fast. Make sure the controller you connect your hard drives to is a regular SATA/SAS controller or HBA (not a RAID controller) so that ZFS can communicate with the disks directly. Onboard motherboard SATA controllers are fine.
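To make that concrete, here is a minimal sketch of creating a pool directly on HBA-attached disks; the pool name (`tank`), the RAID-Z2 layout, and the `ata-DISK*` device IDs are placeholders, and the commands need root, ZFS installed, and real disks:

```shell
# Create a RAID-Z2 pool from six disks. Reference disks by /dev/disk/by-id/
# (stable names) rather than /dev/sdX, which can reorder between boots.
# ashift=12 aligns the pool to 4K physical sectors, typical for modern drives.
zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
  /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6

# A dataset for media: lz4 compression is cheap, and a 1M recordsize
# suits large sequential files.
zfs create -o compression=lz4 -o recordsize=1M tank/media

# Confirm every disk shows as ONLINE
zpool status tank
```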

 

I have run Ubuntu Server, TrueNAS Core, and FreeBSD on my servers at various times. All have worked great, some better than others for certain tasks. You can always take each option for a spin inside a VM and pretend it's your server so that you can decide which option works best for you. You can literally create a "mini" virtual version of your server with several small virtual disks and set up all your services such as Plex, UniFi controllers, etc... in that environment; whichever option makes the most sense will become obvious after trying them all.

Workstation:

Intel Core i7 6700K | AMD Radeon R9 390X | 16 GB RAM

Mobile Workstation:

MacBook Pro 15" (2017) | Intel Core i7 7820HQ | AMD Radeon Pro 560 | 16 GB RAM


I'm a big proponent of proxmox as a hypervisor and virtualizing under that. 
 

I run truenas under proxmox: multiple Ubuntu VMs (one of which hosts docker, one hosts Plex), home assistant, a windows VM... the world is your oyster with virtualization. And I also highly recommend ZFS, but you need new drives you can dedicate to a new vdev; you can't just use drives with data on them, and adding storage space after the fact isn't super simple. ZFS is meant to be deployed in its end state from the start. I would give the truenas forums a good read and see if that is right for you. If not, unraid is a great option. 

Rig: i7 13700k - - Asus Z790-P Wifi - - RTX 4080 - - 4x16GB 6000MHz - - Samsung 990 Pro 2TB NVMe Boot + Main Programs - - Assorted SATA SSD's for Photo Work - - Corsair RM850x - - Sound BlasterX EA-5 - - Corsair XC8 JTC Edition - - Corsair GPU Full Cover GPU Block - - XT45 X-Flow 420 + UT60 280 rads - - EK XRES RGB PWM - - Fractal Define S2 - - Acer Predator X34 -- Logitech G502 - - Logitech G710+ - - Logitech Z5500 - - LTT Deskpad

 

Headphones/amp/dac: Schiit Lyr 3 - - Fostex TR-X00 - - Sennheiser HD 6xx

 

Homelab/ Media Server: Proxmox VE host - - 512 NVMe Samsung 980 RAID Z1 for VM's/Proxmox boot - - Xeon e5 2660 V4- - Supermicro X10SRF-i - - 128 GB ECC 2133 - - 10x4 TB WD Red RAID Z2 - - Corsair 750D - - Corsair RM650i - - Dell H310 6Gbps SAS HBA - - Intel RES2SC240 SAS Expander - - TrueNAS + many other VMs

 

iPhone 14 Pro - 2018 MacBook Air


4 hours ago, rarifiedbovine said:

Hey folks!

Moving into a new place, which means a server rebuild!

Here's what I do, roughly in order of priority

  • Home media streaming and file storage (15 TB at this point)

  • Downloading

  • In-network file storage and sharing with Windows computers

  • Self-hosted cloud

  • Home automation/metrics

  • UniFi controller

  • Backup

I'm pretty comfortable with Linux CLI and Docker, and with the smarthomebeginner.com guides. But also, should I just go with Windows this time to keep it simple with respect to storing and sharing files within the house?

Also curious, if I do go with Ubuntu, what should I do about RAID? Hardware? Software? ZFS? I feel pretty neutral about these things.

I know I want to do a rack-mounted server thing, with lots of hard drives, and also I have a Quadro card in my current server for transcoding and such.

What were you using in your old house? What was wrong with it? Why do you feel the need to rebuild?

 

I'd start with these three questions and go from there.


On 6/3/2022 at 3:08 AM, tikker said:

I'm pretty happy with Unraid for my NAS so far. It runs 19 TB of storage and covers my Plex server, Nextcloud instance, random archival storage/backup, and a database for my weather station, all through Docker containers.

 

Don't know if it supports home automation and UniFi controller though.

How good is it at being available through Windows Explorer as a network drive?


On 6/3/2022 at 3:14 AM, Husky said:

If you want an easy-to-use WebUI then TrueNAS Core (or TrueNAS SCALE if you want native Docker support instead of FreeBSD Jails) is a good choice of operating system. You could also opt for a standard FreeBSD install, or a Linux distribution that works well as a server, such as Debian, RHEL (or its derivatives), etc... and do everything from the command line if you are comfortable with that. Ubuntu Server has ZFS built in, unlike most other Linux distributions, which can be an advantage in some cases.

 

I would avoid hardware RAID unless you have a specific use case that requires it. Most applications nowadays benefit from the simplicity, flexibility and portability of software RAID. ZFS is a really good choice - it combines what would usually have been several different storage layers managed by separate tools into one unified stack, making things simple, reliable and fast. Make sure the controller you connect your hard drives to is a regular SATA/SAS controller or HBA (not a RAID controller) so that ZFS can communicate with the disks directly. Onboard motherboard SATA controllers are fine.

 

I have run Ubuntu Server, TrueNAS Core, and FreeBSD on my servers at various times. All have worked great, some better than others for certain tasks. You can always take each option for a spin inside a VM and pretend it's your server so that you can decide which option works best for you. You can literally create a "mini" virtual version of your server with several small virtual disks and set up all your services such as Plex, UniFi controllers, etc... in that environment; whichever option makes the most sense will become obvious after trying them all.

Do you follow the 1 GB of RAM per TB of storage rule for ZFS? Any tips on how to make sure that "ZFS can communicate with the disks directly"? Any more context on why you prefer TrueNAS Core (and, consequently, FreeBSD jails) to TrueNAS SCALE? Are any better than others at avoiding the bane of my existence--permissions when dealing with Windows machines?


On 6/3/2022 at 6:55 AM, Blue4130 said:

What were you using in your old house? What was wrong with it? Why do you feel the need to rebuild?

 

I'd start with these three questions and go from there.

I was using Debian with Docker containers for the apps, mergerfs, and SnapRAID. The main things I don't like are:

  • I lack the CLI skills to really go under the hood with SnapRAID and make sure the jobs I cron'd are actually backing things up and that drives aren't failing
  • CLI is just a bit too fiddly
  • I get occasional permissions issues where one docker container fails to play nicely with another (e.g., the downloader can't correctly sort files into folders according to rules established by the PVR; my music management software can't rename files until I go and re-777 the folder it is targeting), and permissions issues where something running on my Windows machine, like Windows Explorer or some other software, can't see or play with files in the media share.
  • I occasionally find that it is way too slow when I'm trying to work on files en masse, but that may be a hardware issue
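One common way to tame the container-vs-container permission fights described above is to run every container as the same user and group that own the media tree. A hedged compose sketch, assuming linuxserver.io-style images (PUID/PGID is their convention, not a general Docker feature; the IDs, image tags, and paths are placeholders):

```yaml
services:
  sabnzbd:
    image: lscr.io/linuxserver/sabnzbd
    environment:
      - PUID=1000   # user that owns /mnt/media
      - PGID=1000   # shared "media" group
    volumes:
      - /mnt/media:/data
  sonarr:
    image: lscr.io/linuxserver/sonarr
    environment:
      - PUID=1000   # same IDs, so files made by one app are writable by the other
      - PGID=1000
    volumes:
      - /mnt/media:/data
```

With every app writing as the same UID/GID, the downloader's files come out already owned by the group the PVR and music manager expect, so the re-777 step stops being necessary.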

1 hour ago, rarifiedbovine said:

How good is it at being available through Windows Explorer as a network drive?

Pretty straightforward. Unraid lets you make the shares available as SMB shares. You set the read/write permissions and then simply point Windows at it over SMB.



8 hours ago, rarifiedbovine said:

Do you follow the 1 GB of RAM per TB of storage rule for ZFS? Any tips on how to make sure that "ZFS can communicate with the disks directly"? Any more context on why you prefer TrueNAS Core (and, consequently, FreeBSD jails) to TrueNAS SCALE? Are any better than others at avoiding the bane of my existence--permissions when dealing with Windows machines?

The RAM rule is not quite accurate and is greatly overblown. You can run ZFS on a system with 128 MB of RAM if you want to and it will work. However, the more RAM you have, the more RAM ZFS can use as cache, which can speed things up. For example, for a home user, 4 GB of RAM would be enough for a few TB of storage. Remember that this RAM is not reserved; the OS will still be able to use this memory for other software as needed, as it is simply cache and can be reclaimed at any time.
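If you ever do want to bound the cache on a plain Linux install, the ARC exposes a tunable for its maximum size. A sketch (the 4 GiB value is just an example; TrueNAS manages this through its own tunables UI instead):

```
# /etc/modprobe.d/zfs.conf - cap the ZFS ARC at 4 GiB (4 * 1024^3 bytes).
# Applied at module load; needs a reboot or module reload to take effect.
options zfs zfs_arc_max=4294967296
```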

 

You can get ZFS to communicate with the disks directly by simply making sure that the disks are connected to a standard "dumb" SATA/SAS controller or HBA, and NOT a RAID controller. In addition, make sure that the SATA/SAS controller or HBA is in regular SATA AHCI (or SAS SCSI) mode (NOT RAID, Intel Matrix, ESRT2, vROC, etc...) and that's it. The SATA ports built into motherboards work perfectly fine for this as long as RAID mode is disabled and AHCI is enabled in the UEFI/BIOS.

 

I prefer TrueNAS Core simply because it's more mature than TrueNAS SCALE, which is a recent development. Apparently SCALE is stable and ready for use, but I'm cautious, especially when it comes to server stuff. I prefer FreeBSD jails personally because I just feel more comfortable managing them due to how simple and straightforward they are. Docker is more popular though, and most modern server apps have a Docker container ready to go, which is often not the case with FreeBSD jails. TrueNAS makes this easier though, since it has "plugins", which is a fancy name for "jails that are already set up and ready to go in a few clicks". I would honestly just take a few OSes for a test drive in a VM and then make your decision. Some of the recommendations by other forum members are also good options, such as Unraid or Proxmox.

 

As for permissions when dealing with Windows machines, it's really not too bad to get right once you've got your head wrapped around how UNIX permissions and POSIX ACLs work. An easy way would be to create a group (example: media), add everyone that needs access to that group (your own user, daemon users such as plex, etc...), chown all files to be owned by that group, chmod something like 2770 (rwxrws---), and set ACLs to force all files to be owned by that same group. There are also settings in Samba (the SMB file sharing server) to force all files to have certain permissions and owners, which is quite handy. TrueNAS has a Web UI for these operations, so it shouldn't be too difficult.
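A minimal runnable sketch of the group-plus-setgid part, on a scratch directory (the real share path and the `media` group are stand-ins, since creating groups and chowning to them needs root):

```shell
# Scratch directory standing in for the media share
mkdir -p /tmp/media-demo

# 2770 = setgid bit + rwx for owner and group, nothing for others.
# The setgid bit (the leading 2) makes files created inside inherit
# the directory's group rather than the creating user's primary group.
chmod 2770 /tmp/media-demo

# A new file picks up the directory's group automatically
touch /tmp/media-demo/movie.mkv

stat -c '%a' /tmp/media-demo            # prints 2770
stat -c '%G' /tmp/media-demo/movie.mkv  # same group as the directory
```

On the real share you would first `chgrp -R media` the whole tree; Samba's `force group` and `create mask` options then keep files created over SMB in line with the same scheme.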



On 6/2/2022 at 11:44 AM, rarifiedbovine said:

 

I'm pretty comfortable with Linux CLI and Docker, and with the smarthomebeginner.com guides. But also, should I just go with Windows this time to keep it simple with respect to storing and sharing files within the house?

Also curious, if I do go with Ubuntu, what should I do about RAID? Hardware? Software? ZFS? I feel pretty neutral about these things.

I know I want to do a rack-mounted server thing, with lots of hard drives, and also I have a Quadro card in my current server for transcoding and such.

If you want to build up your own NAS instead of just installing a NAS distro, then Ubuntu is nice for that. Add LXD containers (along with KVM virtualization) and you can run anything using the lxc command set - even a NAS distro or HAOS (the Home Assistant distro). LXD couples really well with ZFS for things like snapshots, so I run Ubuntu Server minimal as the host OS and then use LXD to run Plex, Syncthing, Pi-hole and HAOS all as containers. I recently set up a fanless mini-pc as a DIY router and I was able to move the Pi-hole and HAOS containers over in a few minutes. 
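For a flavor of the day-to-day workflow that paragraph describes, a hedged sketch (assumes LXD is initialized with a ZFS storage pool; the container, pool, and remote names are made up):

```shell
# Launch an Ubuntu container for Plex on the ZFS-backed storage pool
lxc launch ubuntu:22.04 plex --storage tank

# ZFS makes snapshots cheap: snapshot before an upgrade, roll back if needed
lxc snapshot plex pre-upgrade
lxc restore plex pre-upgrade

# Later, copy a container to another LXD host ("router" is a configured remote)
lxc copy pihole router:pihole
```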

This is not the simple path, but if you like this kind of OS/network tweaking, I think it is the most fun path. Some people like to make their own furniture, so why not a media server, NAS, and even a router? 


On 6/3/2022 at 3:40 AM, LIGISTX said:

I'm a big proponent of proxmox as a hypervisor and virtualizing under that. 
 

I run truenas under proxmox: multiple Ubuntu VMs (one of which hosts docker, one hosts Plex), home assistant, a windows VM... the world is your oyster with virtualization. And I also highly recommend ZFS, but you need new drives you can dedicate to a new vdev; you can't just use drives with data on them, and adding storage space after the fact isn't super simple. ZFS is meant to be deployed in its end state from the start. I would give the truenas forums a good read and see if that is right for you. If not, unraid is a great option. 

Why use truenas under proxmox plus all those VMs instead of, for example, using truenas as the hypervisor and running apps within FreeBSD jails/docker containers (in truenas scale)?


5 hours ago, rarifiedbovine said:

Why use truenas under proxmox plus all those VMs instead of, for example, using truenas as the hypervisor and running apps within FreeBSD jails/docker containers (in truenas scale)?

Scale actually is Debian, core is FreeBSD. FreeBSD’s “containers” and “hypervisor” are pretty garbage, so definitely don’t use core for this. Scale however does use KVM, same as proxmox…. But…..

 

Proxmox is to virtualization as TrueNAS is to storage. Basically, truenas scale is being developed to have virtualization and containerization built in and natively easy to use, but it is first and foremost a storage appliance. It wants to be a storage box.

 

Proxmox is first and foremost a nice simple gui to use the KVM hypervisor. Proxmox can also be a NAS… it supports ZFS, and being that it is Debian, you can rather easily create SMB shares and use it as a NAS. 
 

But the point is, use each for what they are good at. You could do both of these things at the same time under Ubuntu as well (as it's Debian-based, there are plenty of guides online to set up Ubuntu as a NAS and as a hypervisor), but that would take a lot of work that the proxmox team and truenas team have already perfected. 
 

Plus, proxmox gives you a lot more flexibility, so as you grow as a homelabber you can expand and have basically endless options, whereas truenas "likes to be a storage appliance" and will become limiting the more advanced you get.



13 hours ago, LIGISTX said:

Scale actually is Debian, core is FreeBSD. FreeBSD’s “containers” and “hypervisor” are pretty garbage, so definitely don’t use core for this. Scale however does use KVM, same as proxmox…. But…..

 

Proxmox is to virtualization as TrueNAS is to storage. Basically, truenas scale is being developed to have virtualization and containerization built in and natively easy to use, but it is first and foremost a storage appliance. It wants to be a storage box.

 

Proxmox is first and foremost a nice simple gui to use the KVM hypervisor. Proxmox can also be a NAS… it supports ZFS, and being that it is Debian, you can rather easily create SMB shares and use it as a NAS. 
 

But the point is, use each for what they are good at. You could do both of these things at the same time under Ubuntu as well (as it's Debian-based, there are plenty of guides online to set up Ubuntu as a NAS and as a hypervisor), but that would take a lot of work that the proxmox team and truenas team have already perfected. 
 

Plus, proxmox gives you a lot more flexibility, so as you grow as a homelabber you can expand and have basically endless options, whereas truenas "likes to be a storage appliance" and will become limiting the more advanced you get.

Gotcha. And I assume that there are no issues with, e.g., setting up your *arr apps in one VM, and your storage appliance in truenas on another, and having the former use/communicate with the latter for its data storage?


23 minutes ago, rarifiedbovine said:

Gotcha. And I assume that there are no issues with, e.g., setting up your *arr apps in one VM, and your storage appliance in truenas on another, and having the former use/communicate with the latter for its data storage?

Definitely not a problem. 
 

Think of any enterprise infrastructure... all data is stored on some server, and clients, whether they be Windows PCs with people using them, other physical servers reaching out over the network for data to crunch on, or VMs on some big boy server... storage is almost always held within an appliance, and the data is accessed over some sort of network protocol. Sometimes you do have directly attached storage, when the latency of retrieving the data is more important than the costs associated with doing it this way, but that is for the Googles or Amazons of the world, or machine learning, etc. 

 

For us normal people, and even relatively large enterprises, storage on a "storage appliance" is how it's handled, and things are shared over the network - whether it be SMB over our standard gigabit network, or Fibre Channel, or some crazy new PCIe network storage stuff that exists, it's still going over the network. For your situation, you will run truenas in a VM (you will need an HBA to pass through to the VM so you can actually access the hard drives in truenas, but an HBA is like 30-40 bucks on eBay with cables), and you will likely run your 'arr's in an Ubuntu VM running docker and have them as docker containers. That Ubuntu VM will have network shares mounted from truenas, you point docker at those, and boom, you're good to go. 
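As a sketch of that mount inside the Ubuntu VM (server name, share, credentials path, and IDs are all placeholders; needs the cifs-utils package installed):

```
# /etc/fstab - mount the truenas SMB share where the docker containers expect it.
# uid/gid pin ownership to the user the containers run as; _netdev delays the
# mount until the network is up.
//truenas.lan/media  /mnt/media  cifs  credentials=/root/.smbcreds,uid=1000,gid=1000,_netdev  0  0
```

(NFS is also worth considering between two Linux VMs on the same host, since it sidesteps SMB's permission mapping.)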
 

This may sound sort of complex and may be things you don’t fully grasp yet, but that’s the fun of homelab… I didn’t get this a few years ago, yet here I am 🙂  

 

 

Even if a guide uses something that isn't networked and goes to a directly attached hard drive or something, getting a network share attached is no harder. You just mount an SMB share as something like /mnt/movies, and now when you go to that directory... it's just showing you what is in truenas. So the docker container will just see that shared data. It doesn't care where it lives. 



3 hours ago, LIGISTX said:

This may sound sort of complex and may be things you don’t fully grasp yet, but that’s the fun of homelab… I didn’t get this a few years ago, yet here I am 🙂  

Word, thank you. This is actually quite helpful. My bête noire in the last few iterations has been persistent permissions problems/resetting, so that's the thing I'm most scared of. Yet I'm really curious and excited to try.


6 hours ago, rarifiedbovine said:

Word, thank you. This is actually quite helpful. My bête noire in the last few iterations has been persistent permissions problems/resetting, so that's the thing I'm most scared of. Yet I'm really curious and excited to try.

Permissions problems with what? What have you tried already that is having issues?



22 hours ago, LIGISTX said:

This may sound sort of complex and may be things you don’t fully grasp yet, but that’s the fun of homelab… I didn’t get this a few years ago, yet here I am 🙂  

OK, let's talk hardware. 🙂

 

What should I be thinking about? Is there a helpful guide to which you can point me? The hardware recs I've found online haven't been helpful. What questions should I answer for myself to help with the hardware discussion?


5 minutes ago, rarifiedbovine said:

OK, let's talk hardware. 🙂

 

What should I be thinking about? Is there a helpful guide to which you can point me? The hardware recs I've found online haven't been helpful. What questions should I answer for myself to help with the hardware discussion?

What are you trying to do? I ran my homelab on an i3 6100 for YEARS; only a few months ago did I upgrade. And that supported multiple Ubuntu VMs, docker containers, truenas, a Windows LTSC VM, Plex on an Ubuntu VM... and the CPU never had much of an issue at all. RAM was starting to run low, thus I did a full overhaul; now running 64 GB, which is more than enough.



18 minutes ago, LIGISTX said:

What are you trying to do? I ran my homelab on an i3 6100 for YEARS; only a few months ago did I upgrade. And that supported multiple Ubuntu VMs, docker containers, truenas, a Windows LTSC VM, Plex on an Ubuntu VM... and the CPU never had much of an issue at all. RAM was starting to run low, thus I did a full overhaul; now running 64 GB, which is more than enough.

I'd like to do:

  • Rack-mounted everything, for the fun factor
  • ubuntu VMs (unless you recommend one or more of these be proxmox-hosted docker containers instead):
    • Ubiquiti controller
    • pihole
    • home assistant (I think)
    • *arrs
    • jellyfin with my quadro passed through for transcoding
    • nextcloud or something similar
    • "master" Kodi instance
  • TrueNAS 
    • would like to do around 80TB storage
  • some sort of cloud backup for the above
  • windows for some windows-only hosted apps

35 minutes ago, rarifiedbovine said:

I'd like to do:

  • Rack-mounted everything, for the fun factor
  • ubuntu VMs (unless you recommend one or more of these be proxmox-hosted docker containers instead):
    • Ubiquiti controller
    • pihole
    • home assistant (I think)
    • *arrs
    • jellyfin with my quadro passed through for transcoding
    • nextcloud or something similar
    • "master" Kodi instance
  • TrueNAS 
    • would like to do around 80TB storage
  • some sort of cloud backup for the above
  • windows for some windows-only hosted apps

I am a fan of used server gear; the homelab in my sig, besides drives, would cost about 600 bucks to put together, but it's not racked. Right now it's hard to find used Supermicro chassis at a good price… 

 

I run multiple Ubuntu VMs, one of which hosts docker containers, an LXC container that runs the UniFi controller, VMs for fr24, Plex under Ubuntu, Home Assistant OS, Windows LTSC, pfSense, truenas, and a few other random things, and I'm using under 42 GB of RAM; CPU utilization is usually next to nothing.  



1 hour ago, LIGISTX said:

I am a fan of used server gear; the homelab in my sig, besides drives, would cost about 600 bucks to put together, but it's not racked. Right now it's hard to find used Supermicro chassis at a good price… 

 

I run multiple Ubuntu VMs, one of which hosts docker containers, an LXC container that runs the UniFi controller, VMs for fr24, Plex under Ubuntu, Home Assistant OS, Windows LTSC, pfSense, truenas, and a few other random things, and I'm using under 42 GB of RAM; CPU utilization is usually next to nothing.  

Couple questions:

  • What's the heuristic for which apps to put into the Ubuntu VM for docker containers, and which to run on their own VMs? 
  • Why an LXC container for UniFi (instead of Ubuntu VM?)
  • What do you use Windows LTSC for?
  • Looks like you have two RAID controllers in the system? I assume you're not using hardware raid though, right? How are they allocated?
  • If not too personal...why run fr24?

11 minutes ago, rarifiedbovine said:

Couple questions:

  • What's the heuristic for which apps to put into the Ubuntu VM for docker containers, and which to run on their own VMs? 
  • Why an LXC container for UniFi (instead of Ubuntu VM?)
  • What do you use Windows LTSC for?
  • Looks like you have two RAID controllers in the system? I assume you're not using hardware raid though, right? How are they allocated?
  • If not too personal...why run fr24?

There is no real "correct" way to do anything, so what goes in what VM vs docker vs LXC is really subjective and depends on the application. Some things will not be super happy in LXC, since I believe LXCs are "part of the host OS" (in this case proxmox) but live in their own namespace; some applications (I don't really have a list of which do and which don't) don't like this very much. LXCs, since they are containers running the same kernel as the host, from my understanding use fewer resources, so that is a plus of using them.

 

I run UniFi in one because it just works; there's no reason to dedicate an entire separate OS to a simple network hardware controller.
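For anyone curious what that looks like, here is a minimal sketch of creating an Ubuntu LXC container on Proxmox to hold the UniFi controller. The VMID, template name, memory size, and storage IDs are all placeholders; adjust them for your own host.

```shell
# Refresh the template catalog and pull an Ubuntu container template
# (exact template names vary by Proxmox version)
pveam update
pveam download local ubuntu-22.04-standard_22.04-1_amd64.tar.zst

# Create an unprivileged container for the UniFi controller
# (VMID 110, 2 GB RAM, DHCP on the default bridge -- all placeholders)
pct create 110 local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst \
  --hostname unifi \
  --memory 2048 \
  --unprivileged 1 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --storage local-lvm

pct start 110
# Then install the UniFi Network application inside the container,
# following Ubiquiti's Debian/Ubuntu install instructions.
```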

LTSC is a cut-down version of Windows 10. I used to use it for Veeam backup when I ran ESXi, but now that Proxmox has its own backup solution that runs as its own VM, I just use that. It's always nice to have a Windows 10 VM around, though I don't really use it for anything at the moment.

 

You NEVER want to use hardware RAID with ZFS. And by that I mean you don't even want the RAID card acting as a RAID card: you need to flash it to IT mode so it does nothing besides present the drives to TrueNAS. ZFS needs bare-metal access to the drives to work correctly, because RAID cards quite literally lie to the host OS about drive state and health (they do this because they are managing the drives themselves), and ZFS is much better than hardware RAID for almost all applications.

So you need a RAID card flashed to IT mode. The Dell H310 I have is easily found on eBay for cheap, pre-flashed, or you can flash it yourself; you don't need that specific Dell card, it's just the Dell version of the LSI 9211-8i, and there are many equivalents. I only have a single one, plus a SAS expander to break out to more SATA ports. A single SAS port can be broken out to 4 SATA via a breakout cable, but a single SAS card can address MANY drives if you have a SAS expander: a 24-bay 4U Supermicro chassis will typically run off a single SAS card, and the backplane the drives hot-swap into is a SAS expander. Since I run a standard case, I just got a PCIe-powered expander (it doesn't actually transfer any data through PCIe) and run 2 SAS cables from the H310 to it, then break out the remaining 3 SAS ports to 4x SATA each. I could run just 1 SAS cable from the H310 to the expander and gain another 4x SATA, but I don't need that many ports, so I figured why not give it more SAS lanes; not that it likely matters anyway.
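As a sanity check after flashing, and assuming an LSI SAS2008-based card like the H310 plus Broadcom's sas2flash utility, you can confirm the controller is running IT firmware and that the OS sees the raw disks. These commands are illustrative; exact output formats vary by firmware and tool version.

```shell
# List LSI controllers and their firmware; an IT-mode card reports
# a Firmware Product ID marked "(IT)" rather than "(IR)"
sas2flash -listall

# The drives should appear as plain block devices with their real
# models and serials, not as a single RAID volume
lsblk -o NAME,SIZE,MODEL,SERIAL

# SMART data should be readable directly, which it generally is not
# behind a classic hardware RAID volume
smartctl -a /dev/sda
```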

 

If you host your own fr24 receiver, you get free access to the paid tier, so why not.

 

If I were to start over, I would likely just run Plex as a Docker container in my Ubuntu Docker VM. I have no need to transcode anything serious, so I just use the CPU for that; 99% of the time I direct play locally anyway, which uses almost no resources, but if you need to transcode you may have different requirements. I would use Docker for easier portability; Docker really is awesome. I used to run Pi-hole in a Docker container, but I just let pfSense handle all of that now. But don't virtualize your router until you are much more accustomed to this stuff. I waited about 3 years before I felt comfortable making that jump, and so far it has been pretty much fine, but I have a good bit of knowledge now, so it wasn't too bad. It took about 2 years of using pfSense just to feel good enough with it, and even more years of homelabbing before I felt comfortable actually virtualizing pfSense.
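As a sketch of that kind of setup, here is roughly what running Plex via the community linuxserver.io image looks like. The PUID/PGID values, timezone, and host paths are placeholders; host networking is the usual shortcut for Plex's local discovery.

```shell
# PUID/PGID: the UID/GID that own your media files; TZ: your timezone.
# /srv/plex/config and /mnt/media/* are example host paths.
docker run -d \
  --name plex \
  --net=host \
  -e PUID=1000 -e PGID=1000 -e TZ=America/Los_Angeles \
  -v /srv/plex/config:/config \
  -v /mnt/media/tv:/tv \
  -v /mnt/media/movies:/movies \
  --restart unless-stopped \
  lscr.io/linuxserver/plex:latest
```

Because the container's state lives in the mounted `/config` directory, moving the whole Plex install to another host is mostly a matter of copying that directory and re-running the same command, which is the portability argument above.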

 

I actually started out with my i3 6100 running bare-metal FreeNAS. That lasted maybe 2 years, at which point I went to ESXi with FreeNAS virtualized plus 1 or 2 Ubuntu VMs. After a few years of that I outgrew the free tier of ESXi and was sort of over that ecosystem, so I swapped to Proxmox for its freeware nature and the fact that it's based on Debian and not proprietary; it's been a great switch, which happened when I moved from the i3 to the Xeon a few months ago. For reference, the i3 did support ECC, so I ran 28 GB of ECC RAM for YEARS without much issue. But it was time for a jump in performance, and I found some good deals and made the switch. 28 threads has proven to be way overkill, but #yolo I guess. The screenshot is literally current system load; the CPU was definitely overkill, and honestly it's just producing more heat than needed and wasting energy lol. If I were to do it over, I might have opted for something like 20 threads tops, although I would like to start doing some IP camera stuff, and there are plugins for Home Assistant that do facial recognition and the like, which DOES take some CPU horsepower.

 

With all of this, I run multiple subnets, all via VLANs managed by pfSense. It has been a great learning experience and I have enjoyed it a lot. 

 

[Screenshot: current system load]

Rig: i7 13700k - - Asus Z790-P Wifi - - RTX 4080 - - 4x16GB 6000MHz - - Samsung 990 Pro 2TB NVMe Boot + Main Programs - - Assorted SATA SSD's for Photo Work - - Corsair RM850x - - Sound BlasterX EA-5 - - Corsair XC8 JTC Edition - - Corsair GPU Full Cover GPU Block - - XT45 X-Flow 420 + UT60 280 rads - - EK XRES RGB PWM - - Fractal Define S2 - - Acer Predator X34 -- Logitech G502 - - Logitech G710+ - - Logitech Z5500 - - LTT Deskpad

 

Headphones/amp/dac: Schiit Lyr 3 - - Fostex TR-X00 - - Sennheiser HD 6xx

 

Homelab/ Media Server: Proxmox VE host - - 512 GB NVMe Samsung 980 RAID Z1 for VM's/Proxmox boot - - Xeon e5 2660 V4- - Supermicro X10SRF-i - - 128 GB ECC 2133 - - 10x4 TB WD Red RAID Z2 - - Corsair 750D - - Corsair RM650i - - Dell H310 6Gbps SAS HBA - - Intel RES2SC240 SAS Expander - - TrueNAS + many other VM’s

 

iPhone 14 Pro - 2018 MacBook Air


2 hours ago, LIGISTX said:

There is no real "correct" way to do anything, so what goes in what VM vs docker vs LXC is really subjective and depends on the application. […]

Wow, thanks...Very helpful!

 

So you run pfsense (rather than a unifi router) together with some other unifi hardware?

How does backup on proxmox work? Do you do any on-site physical or off-site backups as well?


4 hours ago, rarifiedbovine said:

Wow, thanks...Very helpful!

 

So you run pfsense (rather than a unifi router) together with some other unifi hardware?

How does backup on proxmox work? Do you do any on-site physical or off-site backups as well?

Yes: UniFi for switches and APs, pfSense as the firewall/router appliance. It's pretty standard to do this, since pfSense is so phenomenal at its job, and is free, and UniFi gear is affordable and pretty solid. 
 

Proxmox has an in-house backup solution called Proxmox Backup Server. I have an NFS share from TrueNAS mounted and I back up to there. It just snapshots the VMs; simple but effective. 
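For reference, the CLI equivalent of that setup might look roughly like the following; the server address, export path, storage ID, and VMID are placeholders.

```shell
# Register the TrueNAS NFS export as a Proxmox storage for backups
pvesm add nfs truenas-backup \
  --server 192.168.1.50 \
  --export /mnt/tank/proxmox-backups \
  --content backup

# Snapshot-mode backup of VM 101 to that storage
vzdump 101 --storage truenas-backup --mode snapshot --compress zstd
```

Scheduled backups of all VMs to the same storage can then be configured in the Datacenter > Backup section of the web UI.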
 

I back up critical data to Backblaze B2 via TrueNAS; cloud sync is integrated directly into TrueNAS. 


