
Best way to set up a home server: Plex, VMs (Docker / Proxmox?)

Hello,

 

I currently have a home server with a Ryzen 9 5900X that runs Debian with a Plex server. It has 2 network shares mapped to my NAS, and I also installed the Unifi controller for an AP to extend my Wifi.

Problem is: since I installed it, I've also tried multiple things on it, like Docker or virtualization, so the install might not be as clean as it should be 😞

 

I'm about to change the hardware in my current PC, which runs a Ryzen 9 3900X; I'm going back to Intel with a 13700K.

So I was thinking of doing a clean install of my home server on my old hardware (R9 3900X) and then redoing it on the current server (R9 5900X), to limit downtime.

 

What I need is Plex, the Unifi controller, and I think a VM system so I can try a few things from time to time without installing them on the main OS.

I thought of putting Debian back (or Ubuntu?) and installing Plex on it, because I'd rather have the full power of the hardware and not a virtualised share of it.

And then, should I use KVM / Proxmox or go with Docker? I need a remotely accessible GUI for the VMs, so I can manage them when I'm not on site.

Apparently the Unifi controller can run in Docker, so I guess that's one point for Docker?

Also, Docker seems easier to set up with only one physical network card; I'd need the VMs to be on the same network so they can be accessed easily.

 

Sorry for my English, I'm French, and thanks for the help.


6 hours ago, Aika64 said:

What I need is Plex, the Unifi controller, and I think a VM system so I can try a few things from time to time without installing them on the main OS.

[...]

And then, should I use KVM / Proxmox or go with Docker? I need a remotely accessible GUI for the VMs, so I can manage them when I'm not on site.

I would use Proxmox as your hypervisor and then run everything in VMs under it. That's what I do, and it's great.

I have TrueNAS, multiple Ubuntus, Home Assistant, Windows, etc. all as VMs. In one of the Ubuntu server VMs I have Docker containers.

 

Also, you really don't need a 5900X for this… if all you're doing is Plex and a few Docker containers, you have way more hardware than you need. My homelab used to run on an i3 6100, with all of the same VMs I just listed. One of my Ubuntu VMs hosts Plex, and I had no problem doing all of this on that i3. I just needed more RAM, which necessitated a switch to my current Xeon build in my signature.

Rig: i7 13700k - - Asus Z790-P Wifi - - RTX 4080 - - 4x16GB 6000MHz - - Samsung 990 Pro 2TB NVMe Boot + Main Programs - - Assorted SATA SSD's for Photo Work - - Corsair RM850x - - Sound BlasterX EA-5 - - Corsair XC8 JTC Edition - - Corsair GPU Full Cover GPU Block - - XT45 X-Flow 420 + UT60 280 rads - - EK XRES RGB PWM - - Fractal Define S2 - - Acer Predator X34 -- Logitech G502 - - Logitech G710+ - - Logitech Z5500 - - LTT Deskpad

 

Headphones/amp/dac: Schiit Lyr 3 - - Fostex TR-X00 - - Sennheiser HD 6xx

 

Homelab/ Media Server: Proxmox VE host - - 512 NVMe Samsung 980 RAID Z1 for VM's/Proxmox boot - - Xeon e5 2660 V4- - Supermicro X10SRF-i - - 128 GB ECC 2133 - - 10x4 TB WD Red RAID Z2 - - Corsair 750D - - Corsair RM650i - - Dell H310 6Gbps SAS HBA - - Intel RES2SC240 SAS Expander - - TrueNAS + many other VM’s

 

iPhone 14 Pro - 2018 MacBook Air


I wanted to use Proxmox in the earlier days but didn't; I don't remember why ^^".

I have a 5900X because my Plex has multiple clients; some need transcoding, sometimes from 4K, or multiple streams at once. I'm at peace with it. Some nights I go up to 50% usage, so it's necessary. And the first server was my old i7-3770; I upgraded to a motherboard with 2.5 gigabit LAN on board, and at the time Ryzen had the perfect setup, cores and LAN 😛

 

I could go with Ubuntu and then Docker then? Stupid question: can I run an Ubuntu or Debian image inside a Docker container, or is it made more for "application virtualisation"?

Do you run Home Assistant in Docker, or as its own OS on Proxmox? Now that you mention it, I want to run HA as well.


53 minutes ago, Aika64 said:

I could go with Ubuntu and then Docker then? Stupid question: can I run an Ubuntu or Debian image inside a Docker container, or is it made more for "application virtualisation"?

Do you run Home Assistant in Docker, or as its own OS on Proxmox? Now that you mention it, I want to run HA as well.

I think maybe you are mixing and mashing some concepts here.

 

You would not run a full distro in a container (like Docker).

 

The hierarchy goes OS (Debian) -> Container platform (Docker) -> container (Plex)

 

You can run a bunch of stuff in Docker without a VM. A Docker container is a namespace within the kernel that separates it; it is not running in a virtual environment, it is just using the resources it needs from the parent kernel.
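A quick way to see that shared kernel for yourself (assuming Docker is already installed):

# uname -r inside a container prints the host's kernel version, because there is no guest kernel
docker run --rm alpine uname -r

# compare with the host itself
uname -r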

 

The benefit of running things in a container is that you get sufficient isolation, but you are still in the same effective environment. This means you can, for example, easily map directories between containers and do complex networking (this is good for Plex, since you may have other software that acts on your media library), as in the sketch below.

The other benefit is the ease of deployment: installing images is easy, and removing them is also easy.
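As a rough sketch of the directory-mapping idea (the host path and the second service are just examples, not a recommendation):

version: '3.3'
services:
    plex:
        image: lscr.io/linuxserver/plex:latest
        volumes:
            - '/mnt/media:/media'    # the same host folder...
    radarr:
        image: lscr.io/linuxserver/radarr:latest
        volumes:
            - '/mnt/media:/media'    # ...mapped into a second container that manages the library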

 

A VM will (almost) entirely isolate the software. This can be good for other reasons, but it adds overhead, as you have an entire kernel running in the VM doing all of the kernel work. You also lose some flexibility with a VM, since you end up sequestering disk space, CPU cores, and memory in a more static way.
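For example, on a Proxmox host that static allocation is set per VM, roughly like this (VM ID 100 is just an example):

# give VM 100 a fixed 4 cores and 4 GB of RAM (same as the web UI's Hardware tab)
qm set 100 --cores 4 --memory 4096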

 

 

With a good Docker setup you have well-segmented software whose interaction with other software you can control, as well as easy backups (the Docker image and the image's data) that you can easily redeploy to other systems.
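As a minimal sketch of that kind of backup (the container name and paths are just examples for a typical bind-mount setup):

# stop the container so its data is consistent, then archive the image and the data directory
docker stop plex
docker image save -o plex-image.tar lscr.io/linuxserver/plex:latest
tar czf plex-config.tar.gz /opt/plex/config
docker start plex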

 

With a VM you will end up copying the entire disk image if you need to redeploy, which can be a bit of a hassle.

 

That being said, Plex does not need many resources unless you are planning on doing a lot of transcoding, so you can easily run it in Docker or a VM.

 

 

 

 

 

If your question is answered, mark it so.  | It's probably just coil whine, and it is probably just fine |   LTT Movie Club!

Read the docs. If they don't exist, write them. | Professional Thread Derailer

Desktop: i7-8700K, RTX 2080, 16G 3200Mhz, EndeavourOS(host), win10 (VFIO), Fedora(VFIO)

Server: ryzen 9 5900x, GTX 970, 64G 3200Mhz, Unraid.

 


3 hours ago, Aika64 said:

[...]

I could go with Ubuntu and then Docker then? Stupid question: can I run an Ubuntu or Debian image inside a Docker container, or is it made more for "application virtualisation"?

Do you run Home Assistant in Docker, or as its own OS on Proxmox? Now that you mention it, I want to run HA as well.

As said above, they are good for different purposes.

 

I run Plex in a full-fat Ubuntu VM, but if I was to start from scratch I’d likely run it inside Docker on my “docker host” VM. I run HA as the full OS in a VM, and I have a few other full-on VMs as well. I also use some LXC containers, which are “sort of like Docker containers” under Proxmox - they also share the parent kernel similarly to Docker, it just removes an extra layer of virtualization compared to having Docker containers within a VM. But not all things work nicely in LXCs, and sometimes Docker containers are the simple answer, which is why I do have ~8 Docker containers running inside of my “docker host” Ubuntu server VM.
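For reference, creating an LXC container on Proxmox looks roughly like this (the IDs, template name and sizes are only examples; you'd download a template with pveam or the GUI first):

# unprivileged Ubuntu LXC with 2 cores, 2 GB RAM, an 8 GB root disk and DHCP on the default bridge
pct create 101 local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst \
    --hostname docker-host --cores 2 --memory 2048 \
    --rootfs local-lvm:8 --net0 name=eth0,bridge=vmbr0,ip=dhcp --unprivileged 1
pct start 101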
 

The reason I advise Proxmox is because of the flexibility. You can do more or less whatever you want, however you want. You can accomplish the same thing in Ubuntu (or Debian), as it can run KVM just like Proxmox, and you can use LXC containers in it as well, but… Proxmox is built to be a hypervisor appliance, so I use it since that’s what it’s good at. Then I use TrueNAS virtualized as my NAS appliance since that’s very good at being a NAS, and use Ubuntu server VMs to host applications and Docker containers, since… Ubuntu is pretty good at that.
 

There are practically an infinite number of ways to do this, but I’d like to think my logic is pretty sound in supporting the way I do it myself. Not to say other ways are less logically sound, but I’d think through your use case and pick what seems best for you.



15 hours ago, Takumidesh said:

You can run a bunch of stuff in Docker without a VM. A Docker container is a namespace within the kernel that separates it; it is not running in a virtual environment, it is just using the resources it needs from the parent kernel.

[...]

That being said, Plex does not need many resources unless you are planning on doing a lot of transcoding, so you can easily run it in Docker or a VM.

 

 

 

 

 

Alright, so if for instance Plex needs 4 or 6 threads at a specific time, then, if they're available, the container will use those cores, if I understand correctly. But if at idle it only needs 1 or 2, then it won't use more.

I think I get it, thanks!

13 hours ago, LIGISTX said:

As said above, they are good for different purposes.

[...]

There are practically an infinite number of ways to do this, but I’d like to think my logic is pretty sound in supporting the way I do it myself. Not to say other ways are less logically sound, but I’d think through your use case and pick what seems best for you.

I will try this and take a second look at Proxmox.

And install Ubuntu with Docker, as you did, and then a full HA image. And give it a shot.

Do I need some specific hardware, or did you just bridge the network card?

I believe the webGUI is accessible from the internet if the right ports are opened?


6 hours ago, Aika64 said:

Do I need some specific hardware, or did you just bridge the network card?

I believe the webGUI is accessible from the internet if the right ports are opened?

Bridge the network ports for… what? What exactly are you asking? If you are asking how the VMs get networking, then yes, the hypervisor handles that, and it does it via virtual NICs.
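For context, on a Proxmox host that's just a Linux bridge that the physical NIC and every virtual NIC plug into, so the VMs show up on the same LAN as the host. A typical /etc/network/interfaces entry (addresses and interface name are examples) looks like:

# all VMs get a virtual NIC attached to vmbr0 and appear on the same LAN as the host
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports enp3s0
        bridge-stp off
        bridge-fd 0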

 

Never expose webUIs to the internet… they are not designed to be public facing. This is a great way to get your system hacked. If you want to access things while you're away from your LAN, set up a VPN like WireGuard and tunnel into your network. This is much, much, MUCH more secure, and you will be able to use everything as if you were on your LAN. All network shares, webUIs, SSH, etc. will work as if you were at home.
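A minimal sketch of the WireGuard side, assuming you forward a single UDP port (51820 here) to the server; the keys and addresses are placeholders:

# /etc/wireguard/wg0.conf on the home server
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# your laptop/phone
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32

Bring it up with wg-quick up wg0, then point the client's AllowedIPs at your home LAN subnet so everything on the LAN is reachable through the tunnel.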



7 hours ago, Aika64 said:

Alright, so if for instance Plex needs 4 or 6 threads at a specific time, then, if they're available, the container will use those cores, if I understand correctly. But if at idle it only needs 1 or 2, then it won't use more.

I think I get it, thanks!

There are some nuances with containers, but for the most part yes.

 

You may need to do some fiddling to get a GPU to be seen in the container, but nothing too difficult.



8 hours ago, Aika64 said:

Alright, so if for instance Plex needs 4 or 6 threads at a specific time, then, if they're available, the container will use those cores, if I understand correctly. But if at idle it only needs 1 or 2, then it won't use more.

I think I get it, thanks!

To add to this, a VM would be similar. Containers can theoretically use 100% of the CPU (or vCPUs) that the host has at its disposal. A VM can only use up to the number of vCPUs assigned to it. But neither would use more than it needs for whatever it’s trying to do.
 

If you have an 8-thread CPU and assign a VM 4 threads, the most it can use is 100% of 4 threads, or…. 50% of the total CPU. But if it’s just sitting there doing nothing, it will use effectively nothing. I ran my homelab on a 4-thread CPU (i3 6100, 2 cores + HT) with 6+ VMs, multiple of them having 2 threads assigned to them. But they never ALL use 100% of their assigned maximum at the same time, so the total CPU load was rarely any higher than 25% even though I had over-allocated threads by a large margin. On top of that, any good hypervisor is going to dynamically shift and allocate resources as different VMs demand them, within the maximum set ranges.
 

Docker containers are different. They have the ability to use the full resources of their host, and I am not entirely sure what the Docker management service does as far as dynamically assigning resources - I am just not familiar with it enough, as I have never actually looked into it. None of my containers are resource hungry, so I never needed to care. I give my Ubuntu VM that handles my Docker containers 6 threads and 4 GB of RAM, and it’s plenty happy. I used to give it 2 threads and 2.7 GB of RAM and it was happy, but that was on my old i3 with 28 GB of RAM…. resources were preeeeety tight. Now that I have 28 threads to play with and 64 GB of RAM in my homelab, I beefed up all VMs a bit “because I could”. But the point is…. most containers can run on very little host resources, seeing as I used to run 6-8 containers on 2 threads and 2.7 GB of RAM, and that’s including the Ubuntu host running them…. This will vary drastically based on what your containers are and how busy/active they are. But the idea and terminology holds regardless of your setup 🙂



2 hours ago, LIGISTX said:

To add to this, a VM would be similar. Containers can theoretically use 100% of the CPU (or vCPUs) that the host has at its disposal. A VM can only use up to the number of vCPUs assigned to it. But neither would use more than it needs for whatever it’s trying to do.

[...]

There is a core difference though: a VM runs an entire kernel, so even if the resource usage is low it is still using more. This can be basically negligible with modern hardware and a lightweight OS, but if you don't set it up correctly you can end up with 'wasted' cycles.

 

It can also get complicated with CPU pinning, and passing through hardware (like a GPU) can become more involved.

 

A container uses the same kernel as the host (so the Linux kernel in most cases), though for all intents and purposes it has its own full distribution userspace (you will see a lot of containers running Alpine Linux because it is lightweight).

 

I think this picture can help understand the core difference.

[Image: diagram comparing the container stack (shared host kernel) with the VM stack (separate guest kernel per VM)]

The userspace of a distro can potentially be very small if you don't need the kernel.

For example, the standard container image for nginx is 142 MB, but the Alpine-based image is around 13 MB.
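You can check this yourself by pulling both tags and listing them:

docker pull nginx:latest
docker pull nginx:alpine
docker images nginx    # shows both tags with their sizes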

 

As should be obvious from this diagram, a container offers less isolation than a VM, but it makes a lot of things much easier. Typically the easiest way to do things with containers is containerized networking: for example, if you have a DB and an application, the application connects to the DB just like it normally would, over a socket.
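A small sketch of that pattern (the app image and credentials are made up): two services on the same Compose network reach each other by service name.

version: '3.3'
services:
    app:
        image: ghcr.io/example/my-app:latest    # hypothetical application image
        environment:
            - DB_HOST=db          # "db" resolves to the database container below
            - DB_PORT=5432
    db:
        image: postgres:15
        environment:
            - POSTGRES_PASSWORD=changeme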

 

 


 


Do a fresh install of Ubuntu 22.04.1 LTS

 

Remove the Ubuntu-packaged Docker and install Docker CE

 

sudo apt-get remove -y docker docker-engine docker.io containerd runc
sudo apt-get install -y ca-certificates curl gnupg lsb-release
sudo apt-get update && sudo apt-get upgrade

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --batch --yes --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/trusted.gpg.d/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin docker-compose


 

Give your user Docker permissions:

sudo usermod -a -G docker <username>

 

Log off and log back on

 

 

Optionally install Portainer to manage the containers through an easy WebUI

docker volume create portainer_data
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest

 

Then login via https://<hostname>:9443

 

 

 

For a Plex container

 

Use /dev/shm (shared memory) as the transcoding path. It is essentially a RAM disk on your host.

Then assign your GPU to the Docker container.

 

First install your GPU drivers and dependencies. If it's Nvidia, then something like below:

sudo apt-get install -y nvidia-headless-no-dkms-525 libnvidia-encode-525 nvidia-utils-525 nvidia-docker2

Then test the driver is working using:

nvidia-smi

 

Then create your Plex container. Here's a docker-compose.yml example for Nvidia:

version: '3.3'
services:
    plex:
        container_name: plex
        image: lscr.io/linuxserver/plex:latest
        network_mode: host
        environment:
            - PUID=1000
            - PGID=1000
            - UMASK_SET=022
            - NVIDIA_VISIBLE_DEVICES=all
        volumes:
            - '/opt/plex/config:/config'
            - '/dev/shm:/transcode'
            - '/mnt/media:/media'
        restart: unless-stopped

 

Then use docker-compose to bring it up

docker-compose up -d

 

 

For UniFi Controller

 

First create a network that you want UniFi to be on.

It's a good idea to create a separate network for general Docker containers (Plex and the like) and another for containers related to your network infrastructure.

Note: we did not create a bridge network for Plex because it's recommended to leave it on the host network.

 

The easiest way to create networks is in Portainer.

Something like this (specifying the name of the network, the driver, the subnet and the gateway):


[Image: Portainer "create network" form with the network name, bridge driver, subnet and gateway filled in]
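If you'd rather use the CLI, the equivalent (with example values) is:

docker network create --driver bridge \
    --subnet 172.20.0.0/24 --gateway 172.20.0.1 myNetwork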

 

 

Then create a docker-compose file like below:

version: "2.1"
services:
  unifi-controller:
    image: lscr.io/linuxserver/unifi-controller
    container_name: unifi-controller
    hostname: <hostname of your server>
    environment:
      - PUID=1000
      - PGID=1000
      - MEM_LIMIT=4096 #optional
      - MEM_STARTUP=2048 #optional
    volumes:
      - /opt/unifi-controller/config:/config
    ports:
      - 3478:3478/udp
      - 10001:10001/udp
      - 8080:8080
      - 8443:8443
      - 1900:1900/udp #optional
      - 8843:8843 #optional
      - 8880:8880 #optional
      - 6789:6789 #optional
      - 5514:5514/udp #optional
    restart: always
networks:
  default:
    external:
      name: myNetwork

 

Once you've saved your docker-compose.yml file, it's the same command to start it up:
 

docker-compose up -d

 

 

 

 

Once your containers are up, you can manage them easily through Portainer.

You can use something like Watchtower to upgrade them, and any other containers, automatically. It watches Docker Hub, and if the image is updated it will re-create your container automatically.
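Watchtower itself runs as a container; a minimal invocation (default settings) looks like:

docker run -d --name watchtower --restart=always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    containrrr/watchtower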

 

P.S. I typically put the docker-compose file in its respective /opt directory and run it from there; for Plex, for example, it would be /opt/plex/docker-compose.yml.


Desktop: Ryzen9 5950X | ASUS ROG Crosshair VIII Hero (Wifi) | EVGA RTX 3080Ti FTW3 | 32GB (2x16GB) Corsair Dominator Platinum RGB Pro 3600Mhz | EKWB EK-AIO 360D-RGB | EKWB EK-Vardar RGB Fans | 1TB Samsung 980 Pro, 4TB Samsung 980 Pro | Corsair 5000D Airflow | Corsair HX850 Platinum PSU | Asus ROG 42" OLED PG42UQ + LG 32" 32GK850G Monitor | Roccat Vulcan TKL Pro Keyboard | Logitech G Pro X Superlight  | MicroLab Solo 7C Speakers | Audio-Technica ATH-M50xBT2 LE Headphones | TC-Helicon GoXLR | Audio-Technica AT2035 | LTT Desk Mat | XBOX-X Controller | Windows 11 Pro

 


Server: Fractal Design Define R6 | Ryzen 3950x | ASRock X570 Taichi | EVGA GTX1070 FTW | 64GB (4x16GB) Corsair Vengeance LPX 3000Mhz | Corsair RM850v2 PSU | Fractal S36 Triple AIO | 12 x 8TB HGST Ultrastar He10 (WD Whitelabel) | 500GB Aorus Gen4 NVMe | 2 x 2TB Samsung 970 Evo Plus NVMe | LSI 9211-8i HBA

 


3 hours ago, LIGISTX said:

[...]

Docker containers are different. They have the ability to use the full resources of their host, and I am not entirely sure what the Docker management service does as far as dynamically assigning resources - I am just not familiar with it enough, as I have never actually looked into it.

[...]

 

That's only the default behaviour; you can configure Docker containers (and Kubernetes, etc.) with resource reservations and limits the same as a VM, just not in the 'bare bones' config. Most home users don't bother, since their containers are pretty light.

[Image: setting CPU/memory limits and reservations for a container in a GUI]

 

Or with Docker Compose it could be something like:

services:
  service:
    image: lscr.io/linuxserver/plex:latest
    deploy:
        resources:
            limits:
              cpus: 4
              memory: 4096M
            reservations:
              cpus: 2
              memory: 1024M

 

Or with plain Docker (docker run has no direct CPU reservation flag; --cpu-shares sets a relative weight instead):

docker run --cpus=4 --cpu-shares=512 --memory=4096M --memory-reservation=1024M lscr.io/linuxserver/plex:latest

 


