
Homelab Tech Tips

kriegalex

Homelab architecture  

11 members have voted

  1. What do you prefer as a homelab architecture? (see text for context)

    • 1 "LTT sized" server to rule them all
      5
    • 1 storage focused NAS + cluster of light/small servers around (NUC, RPi, ...)
      3
    • 1 server/NAS per purpose with its purpose optimized local storage
      1
    • Other (answer)
      2


Hi,

I wanted to get your opinions in a homelab context. Imagine you have a few services you want to run: media-oriented services (Plex, Jellyfin, Sonarr, torrent, ...), IT-oriented ones (proxy, Git, cloud, VPN, ...), and some others. Let's say it's not just one Synology with a few Word documents and a few Plex movies (the simple case).

 

What would your strategy be? I don't think there is one right answer, btw:

 

  1. 1 "LTT sized" server to rule them all
    You build a bonkers server with loads of drives and loads of CPU cores, then decide on the fly where to allocate all that storage and processing power. I can imagine that server running Proxmox or Linux directly.
  2. 1 storage-focused NAS + cluster of servers/NAS around
    You build a storage-oriented NAS, then share the data where needed with other lighter/smaller servers: media, IT stuff, and so on. The various servers can be smaller, less power hungry, and less expensive.
  3. 1 server per purpose with its local storage
    Each server is a NAS with its own storage. A media server would have high-capacity HDDs, the one with IT services might have fast NVMe drives, and they don't share storage. Another NAS could be dedicated to backups.
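Option 2 hinges on sharing the NAS storage out to the satellite boxes, usually over NFS. A minimal config sketch, where the NAS address, dataset path, and subnet are all made-up placeholders:

```shell
# On the NAS -- /etc/exports (hypothetical dataset path and subnet):
#   /mnt/tank/media  192.168.1.0/24(rw,sync,no_subtree_check)
# Apply the export table with:  sudo exportfs -ra

# On each light client (NUC, RPi, ...) -- /etc/fstab:
#   192.168.1.10:/mnt/tank/media  /mnt/media  nfs  defaults,_netdev  0  0
# Then:  sudo mount /mnt/media
```

Services on the small boxes (Jellyfin, Sonarr, ...) then just point at /mnt/media as if it were local storage.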

Let me know what your strategy is for organizing your homelab!

 

 


Gaming: Windows 10 - Intel i7 9900K - Asus RTX 2080 Strix OC - GIGABYTE Z390 AORUS MASTER - O11 Dynamic


I'm all for the "I need xyz" approach: build for xyz, plus the ability to add or try things on it if it comes at almost no extra cost.

 

Keeping things purpose-built, organized, and in VMs makes it easy to upgrade, change, and modify without a huge risk of messing everything up.

 

I treat my setup as if I were doing it professionally.


1 hour ago, jaslion said:

I treat my setup as if I were doing it professionally.

Playing devil's advocate here: when your professionally treated setup only hosts services that run for 1 or 2 users, it becomes tempting to start aggregating everything into one server. Maybe that's where the issue comes from.

 

You may not want a dedicated server for Git and another dedicated server for the Git PostgreSQL DB (each optimized for its usage) when 3 users would log in 3x per month, working on ~10 projects at most (for example).

 

But then again, someone with money to throw at their homelab could do it. I guess it comes down to budget and time.

 



2 minutes ago, kriegalex said:

it becomes tempting to start aggregating everything into one server. Maybe the issue comes from here.

I run everything on 2 systems

 

1 weak af low-powered NAS and 1 "everything else" machine with VMs. It's all aggregated, and I basically have a virtual version of how I would do it physically.

 

It works fine and keeps things compact and efficient.

 

I do the same in a professional environment when possible.


7 minutes ago, jaslion said:

1 weak af low-powered NAS

So this one doesn't serve any data to the VM one? It only hosts basic files/pictures/videos?



Not sure how to categorize my hardware hoard... I've only got one server hosting software (and storing data) that I care about, but I've also got "real" servers to back it up...

 

"Production" System:

 

1x PowerEdge R730XD (primary NAS and storage server):

  • TrueNAS Scale
  • 2x Xeon E5-2637 v4
  • 256 GB DDR4 2400 (8x 32GB)
  • 12x 12 TB WD Red-equivalent shucked drives (one RAIDz3 vdev)
  • Broadcom 10GbE/1GbE Ethernet mezzanine card
  • Intel P3605 1.6 TB NVMe SSD (Plex database and DVR)
  • Intel P3605 1.6 TB NVMe SSD (assorted container applications)
  • Nvidia Quadro P2000 (Plex transcoding)

This might get replaced with a whitebox build sooner rather than later. It's loud, and it draws about 200 watts.

 

 

"Lab" systems:

 

2x PowerEdge R730s:

  • Proxmox
  • 2x Xeon E5-2697 v4 (18c 36t)
  • 512 GB DDR4 2133 Reg ECC (16x 32 GB)
  • Broadcom 10GbE/1GbE Ethernet mezzanine card
  • 2x 1000 W 80+ Platinum power supplies
  • 2x 120 GB Intel SATA SSD (boot mirror)
  • Sun/Oracle F80 400 GB (ZFS caching)

These are still a work in progress.

 

1x Precision Rack R7910:

  • Windows 10 Pro
  • 2x Xeon E5-2637 v4
  • 2x 1000 W 80+ Platinum power supplies
  • 256 GB DDR4 2400 (8x 32 GB)
  • Broadcom 10GbE/1GbE Ethernet mezzanine card
  • 2x RTX 3060 12 GB
  • 8x 200 GB HGST SAS SSDs

This one's for playing with machine learning software, mostly Stable Diffusion and Topaz Labs video enhance AI.

 

3x PowerEdge R620:

  • Proxmox
  • 2x Xeon E5-2680 v2
  • 2x 750 W 80+ power supplies
  • 256 GB DDR3 1600 (16x 16 GB)
  • Broadcom 10GbE/1GbE Ethernet mezzanine card
  • Mellanox ConnectX-3 40 Gb InfiniBand NIC
  • 2x 120 GB Intel SSD (boot mirror)

1x PowerEdge R720XD:

  • TrueNAS Scale
  • 2x 1000 W 80+ power supplies
  • 256 GB DDR3 1600 (16x 16 GB)
  • Broadcom 10GbE/1GbE Ethernet mezzanine card
  • Mellanox ConnectX-3 40 Gb dual-port InfiniBand card
  • 2x 120 GB Intel SSD (boot mirror)
  • 24x 600 GB 10K SAS HDDs (storage, 4x 6-drive RAIDz1 vdevs)
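A 4x 6-drive RAIDz1 layout like this one would be created along these lines; the sdX device names are placeholders, and the actual zpool command is shown commented out so nobody pastes it against real disks. The last line just estimates usable space:

```shell
# 24 drives as four 6-wide RAIDz1 vdevs. The create command would look
# roughly like this (sdX names are hypothetical placeholders):
#   zpool create tank \
#     raidz1 sda sdb sdc sdd sde sdf \
#     raidz1 sdg sdh sdi sdj sdk sdl \
#     raidz1 sdm sdn sdo sdp sdq sdr \
#     raidz1 sds sdt sdu sdv sdw sdx
# Each "raidz1" keyword starts a new vdev; ZFS stripes writes across all four.

# Rough usable space: 4 vdevs x (6 - 1 parity) data drives x 600 GB each
echo "$(( 4 * (6 - 1) * 600 )) GB usable, before ZFS overhead"
```

Each vdev tolerates one drive failure, which is why wide single-vdev RAIDz1 is usually avoided at this drive count.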

 

This is the "old" cluster, mostly cobbled together from cheap barebones and "parts or not working" systems. I should probably liquidate these.

 

This doesn't count the menagerie of random desktops, "retro" machines, and Cisco routers I've amassed over the last few years...

 

I sold my soul for ProSupport.


I think my go-to would be a 3-server setup. The same idea as jaslion's, but with a twist.

 

  1. A media server with the minimum specs needed to not choke when transcoding must happen. If money is available, maybe a P2000 GPU for that. The server is focused on high-TB storage. It may use unRAID, since hosted media is not critical and unRAID still keeps the data readable on the other HDDs in case of a failure.
  2. A smaller, more powerful server with faster storage, suited for a wide range of Docker containers/VMs. Most don't require as many GB/TB as a media server, which is a good thing since NVMe is more expensive than HDDs. In case you end up with 1 TB of baby videos on the Nextcloud container/VM, you can always offload that to the media server 🙂
  3. A very low-powered server with only basic storage for any 24/7 non-"critical" services that might needlessly wear down server 2's storage. This could be any service that involves sharing data almost 24/7 (P2P, decentralized nodes, ...).
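For point 1, getting the GPU into the media container is the fiddly part. A hedged sketch with Jellyfin under Docker (the image name and port are the official ones, but the host paths are made up, and `--gpus` requires the NVIDIA Container Toolkit on the host):

```shell
# Media container with Nvidia transcode offload -- a sketch, not gospel.
# Host paths below are hypothetical; adjust to your own pools.
docker run -d --name jellyfin \
  --gpus all \
  -v /mnt/pool/media:/media:ro \
  -v /srv/jellyfin/config:/config \
  -p 8096:8096 \
  --restart unless-stopped \
  jellyfin/jellyfin
```

Mounting the media read-only keeps a misbehaving container from touching the library; only the config volume needs write access.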



55 minutes ago, kriegalex said:

So this one doesn't serve any data to the VM one? It only hosts basic files/pictures/videos?

Oh, it does: it has a 10 Gb/s link, and some VMs use it as a mounted disk.


It really depends on your usage; there isn't a simple answer to the question. I have two servers running. The first is my "primary": it handles my storage/backups, Plex, Minecraft, Home Assistant, VPN, and a generic Windows VM. The other backs up the data on the first and runs a Windows VM to handle my security cameras and Ubiquiti controller (yes, Windows, I know... but Blue Iris only runs on Windows). While the first system has the resources to also run my cameras and networking, I view them as critical services and prefer to run them on a separate machine rather than pushing everything onto a single system and increasing the potential for an outage. Worst case, if one system goes down I can spin up the services on the other, and I have a second copy of my data if the worst were to happen to one of the systems.

