Design strategy for setting up a home server

Hi team,

 

As mentioned in my earlier post, I got a second-hand PC for $140 with the following config:

 

ASUS B150M-A/M.2

i5-6500

16 GB Crucial 2133 MHz RAM

SanDisk 240 GB SSD

Quadro K1200 GPU

 

I have ordered 3x 4 TB IronWolf drives, which I plan to set up in RAID 5 to get 8 TB of usable storage (one drive's worth of capacity goes to parity: 2 x 4 TB = 8 TB usable).

I have around 250 GB of photos to back up as of now, which I need to download from Google Photos and store on the NAS.

The purpose of this PC is to run 24x7 as a file server, media server, and photo viewer. I also plan to run CI/CD services on it, as I am picking up DevOps technologies to strengthen my resume (I plan to switch tracks from software test automation to DevOps).

Here is the design I am planning to follow:

 

1. Install Proxmox as the hypervisor on the PC.

2. Install a TrueNAS SCALE VM to set up the 3 drives in RAID 5 (RAIDZ1, in ZFS terms).

3. Install Debian (a lightweight OS) in a VM to host all the Docker containers.

4. Install Rancher, WireGuard, DuckDNS, PhotoPrism, and Jellyfin on the Debian VM (rough compose sketch below).
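To make step 4 concrete, here is a rough docker-compose sketch covering the non-Rancher services (Rancher itself would be installed separately as a Kubernetes management layer). The image tags, ports, host paths, and credentials below are placeholders/assumptions to adapt, not a tested config:

```yaml
# Rough sketch only - image tags, ports, paths, and secrets are placeholders.
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    ports:
      - "8096:8096"                 # web UI
    volumes:
      - /srv/jellyfin/config:/config
      - /srv/media:/media:ro        # media library, read-only
    restart: unless-stopped

  photoprism:
    image: photoprism/photoprism:latest
    ports:
      - "2342:2342"                 # web UI
    environment:
      PHOTOPRISM_ADMIN_PASSWORD: "change-me"   # placeholder
    volumes:
      - /srv/photos:/photoprism/originals
      - /srv/photoprism/storage:/photoprism/storage
    restart: unless-stopped

  wireguard:
    image: lscr.io/linuxserver/wireguard:latest
    cap_add:
      - NET_ADMIN                   # required for the VPN interface
    ports:
      - "51820:51820/udp"
    volumes:
      - /srv/wireguard:/config
    restart: unless-stopped

  duckdns:
    image: lscr.io/linuxserver/duckdns:latest
    environment:
      SUBDOMAINS: "your-subdomain"  # placeholder
      TOKEN: "your-duckdns-token"   # placeholder
    restart: unless-stopped
```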

 

Further, I want to explore setting up a GitLab CI pipeline on the server for learning purposes.
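For that GitLab CI part, a minimal .gitlab-ci.yml might look roughly like this sketch; the stage names and commands are placeholders, and it assumes a runner (e.g. a Docker-executor runner) is registered against the GitLab instance:

```yaml
# Minimal .gitlab-ci.yml sketch - stages and commands are placeholders.
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - echo "Compile or package the project here"

test-job:
  stage: test
  script:
    - echo "Run the test suite here"

deploy-job:
  stage: deploy
  script:
    - echo "Deploy to the target environment here"
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'   # only deploy from the main branch
```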

 

Any suggestions on this approach? Is there another recommended way I should proceed with configuring my homelab?


This build would work pretty well for all of your intended applications. I've noticed, however, that you only plan to deploy two VMs, and since most of these services can be deployed as Kubernetes/Docker apps, adding Proxmox may be of limited value. I would therefore suggest a bare-metal installation of TrueNAS SCALE (which is Debian-based), hosting all the services and any VMs there.

 

Also, you may want to do transcoding with Jellyfin and face recognition with PhotoPrism. In that case, keep the integrated GPU enabled in the BIOS settings and assign it to Jellyfin, with the Quadro assigned to the PhotoPrism container. 🤔
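In compose terms, that split could look roughly like the sketch below. It assumes Jellyfin picks up the iGPU via VAAPI (/dev/dri) and that the NVIDIA container toolkit is installed so the Quadro can be handed to PhotoPrism; whether PhotoPrism's TensorFlow build actually uses it is a separate question:

```yaml
# Sketch of the GPU split - assumes the NVIDIA container toolkit is installed on the host.
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    devices:
      - /dev/dri:/dev/dri          # iGPU for VAAPI/Quick Sync transcoding
    restart: unless-stopped

  photoprism:
    image: photoprism/photoprism:latest
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia       # hand the Quadro to this container
              count: 1
              capabilities: [gpu]
    restart: unless-stopped
```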


5 minutes ago, Bersella AI said:

This build would work pretty well for all of your intended applications. I've noticed, however, that you only plan to deploy two VMs, and since most of these services can be deployed as Kubernetes/Docker apps, adding Proxmox may be of limited value. I would therefore suggest a bare-metal installation of TrueNAS SCALE (which is Debian-based), hosting all the services and any VMs there.

 

Also, you may want to do transcoding with Jellyfin and face recognition with PhotoPrism. In that case, keep the integrated GPU enabled in the BIOS settings and assign it to Jellyfin, with the Quadro assigned to the PhotoPrism container. 🤔

Perfect, that makes sense. Thanks for the prompt reply.

I will leverage the iGPU for Jellyfin and the Quadro for face recognition; that should put all of my hardware to use.


Why are you buying a K-series Quadro? I think you are going to have a lot of trouble finding support for this hardware; NVIDIA stopped supporting it long ago. Are you sure it will work for you? I just purchased a P400 for Jellyfin and wouldn't go older than Pascal (P series)... even on Linux, older NVIDIA hardware is pretty dicey at this point.

 

I also don't think virtualization on a quad-core CPU of this era is realistic. You will get a single dual-core VM with 12 GB of RAM at most (I wouldn't provision anything beyond that given your specs), so what is the point? Instead, you could teach yourself Docker... Anyway, that is just my opinion.

 

If you want a development environment, buy a Raspberry Pi 3 or 4 for $50, or just run a virtual machine on your laptop. You should not be trying to run dev and prod on the same physical machine (which is what it sounds like to me). 😉


20 hours ago, TheDarkCanuck said:

Why are you buying a K-series Quadro? I think you are going to have a lot of trouble finding support for this hardware; NVIDIA stopped supporting it long ago. Are you sure it will work for you?

It was bundled with this build, as the OP said. More importantly, the K1200 is still supported by the latest Quadro drivers, because it is built on a Maxwell chip rather than the already-abandoned Kepler architecture. It remains unknown, though, whether PhotoPrism will actually make use of it.

 

20 hours ago, TheDarkCanuck said:

If you want a development environment, buy a Raspberry Pi 3 or 4 for $50.

I doubt any actual code development would happen on this build. The OP mentioned GitLab, a service for hosting source code and running CI, not something you develop on directly. Also, a DevOps person can develop tools on their own PC and then deploy or configure them on the server, and a Pi cannot run x86-compiled programs, which adds unnecessary hassle around cross-platform compilation.


2 hours ago, Bersella AI said:

More importantly, the K1200 is still supported by the latest Quadro drivers, because it is built on a Maxwell chip rather than the already-abandoned Kepler architecture. It remains unknown, though, whether PhotoPrism will actually make use of it.

Ah, it's one of those cards. My mistake. Still, it's not a very useful card by modern standards.

 

2 hours ago, Bersella AI said:

I doubt any actual code development would happen on this build. The OP mentioned GitLab, a service for hosting source code and running CI, not something you develop on directly. Also, a DevOps person can develop tools on their own PC and then deploy or configure them on the server, and a Pi cannot run x86-compiled programs, which adds unnecessary hassle around cross-platform compilation.

Are you the OP? 


19 hours ago, TheDarkCanuck said:

Are you the OP? 

I'm not the OP, but I'm aware of how important a smooth workflow is for DevOps people. I asked a friend who works in DevOps about her workflow, and in her office all the machines are x86. Reaching for a Pi and cross-platform compilation only really makes sense when there is a specific ARM server you have to target.


On 5/14/2024 at 9:35 AM, TheDarkCanuck said:

 

I also don't think virtualization on a quad-core CPU of this era is realistic.

It'll be more than fine. I ran my homelab on an i3-6100 and had plenty of VMs running.
 

My 6100 ran ESXi with VMs for TrueNAS, 3x Ubuntu Server, Windows LTSC, and Home Assistant, plus a handful of Docker containers under one of the Ubuntu VMs, all on 28 GB of RAM. I did eventually upgrade to my current homelab (in my signature), but that was more because I needed more PCIe lanes and more RAM; honestly, the 6100's CPU was never an issue.

Rig: i7 13700k - - Asus Z790-P Wifi - - RTX 4080 - - 4x16GB 6000MHz - - Samsung 990 Pro 2TB NVMe Boot + Main Programs - - Assorted SATA SSD's for Photo Work - - Corsair RM850x - - Sound BlasterX EA-5 - - Corsair XC8 JTC Edition - - Corsair GPU Full Cover GPU Block - - XT45 X-Flow 420 + UT60 280 rads - - EK XRES RGB PWM - - Fractal Define S2 - - Acer Predator X34 -- Logitech G502 - - Logitech G710+ - - Logitech Z5500 - - LTT Deskpad

 

Headphones/amp/dac: Schiit Lyr 3 - - Fostex TR-X00 - - Sennheiser HD 6xx

 

Homelab/Media Server: Proxmox VE host - - 512 GB Samsung 980 NVMe RAID Z1 for VMs/Proxmox boot - - Xeon E5-2660 v4 - - Supermicro X10SRF-i - - 128 GB ECC 2133 - - 10x 4 TB WD Red RAID Z2 - - Corsair 750D - - Corsair RM650i - - Dell H310 6Gbps SAS HBA - - Intel RES2SC240 SAS Expander - - TrueNAS + many other VMs

 

iPhone 14 Pro - 2018 MacBook Air


3 hours ago, LIGISTX said:

It'll be more than fine. I ran my homelab on an i3-6100 and had plenty of VMs running.
 

My 6100 ran ESXi with VMs for TrueNAS, 3x Ubuntu Server, Windows LTSC, and Home Assistant, plus a handful of Docker containers under one of the Ubuntu VMs, all on 28 GB of RAM. I did eventually upgrade to my current homelab (in my signature), but that was more because I needed more PCIe lanes and more RAM; honestly, the 6100's CPU was never an issue.

Curious how you provisioned CPU to the VMs in this instance? Having multiple guests allocated to the same two cores seems like it would only work in a single-tenant environment. Did it function at all under load? I would expect CPU ready wait times to be quite bad if more than two of those guests were doing something. Basically, it probably does work, but I would expect performance to be quite poor unless the use case is trivial.

 

I'm not thinking about homelab use here, because the OP is talking about running services (alongside some homelab stuff, which I already don't recommend; it is effectively dev and prod on the same machine). So my assumption is the OP will not be the only user hitting the services, and hence I suspect the VMs will be under-provisioned, particularly if CI/CD is running on the same host. That can end up being quite computationally expensive, depending on what he's learning (or how bad a mistake is made).
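If CI does end up on the same box, one mitigation is to cap the runner so jobs can't starve everything else; a rough sketch, assuming a Docker-based GitLab runner (the image and the limit values are just examples):

```yaml
# Sketch: cap a GitLab runner so CI jobs can't starve the other services.
services:
  gitlab-runner:
    image: gitlab/gitlab-runner:latest
    cpus: 2                         # at most 2 of the i5-6500's 4 cores
    mem_limit: 4g                   # leave RAM for NAS/media services
    volumes:
      - /srv/gitlab-runner/config:/etc/gitlab-runner
      - /var/run/docker.sock:/var/run/docker.sock   # runner launches job containers via Docker
    restart: unless-stopped
```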


2 hours ago, TheDarkCanuck said:

Curious how you provisioned CPU to the VMs in this instance? Having multiple guests allocated to the same two cores seems like it would only work in a single-tenant environment. Did it function at all under load? I would expect CPU ready wait times to be quite bad if more than two of those guests were doing something. Basically, it probably does work, but I would expect performance to be quite poor unless the use case is trivial.

 

I'm not thinking about homelab use here, because the OP is talking about running services (alongside some homelab stuff, which I already don't recommend; it is effectively dev and prod on the same machine). So my assumption is the OP will not be the only user hitting the services, and hence I suspect the VMs will be under-provisioned, particularly if CI/CD is running on the same host. That can end up being quite computationally expensive, depending on what he's learning (or how bad a mistake is made).

The machine never had any performance issues. I gave each VM either 1 or 2 threads, and the hypervisor was easily able to manage CPU load. I'm running relatively light services, so it isn't a big deal. The most "load" it ever saw was TrueNAS doing some virtual network reads or writes (faster than gigabit, but slower than 10 gig) and some Plex transcodes, but that workload never presented an issue for it.


