
One or Multiple VMs

Hello,

 

Currently, I have a NAS box running FreeNAS 10 (I know, I'm trying to migrate away from Corral, but that's another issue).  On this NAS, I have a couple of Docker containers like Emby, Subsonic, ownCloud, etc.  As it is not the most robust of servers in terms of hardware or software, I am looking to move the containers onto another server and keep the NAS purely as a NAS.  I think I can run most of the containers on Debian (either as containers or as standalone programs).  I also want to run pfSense on this box, so I was looking at running Proxmox to virtualize both Debian and pfSense.

 

Here is my question.  Would I be better off having one virtual Debian running all my apps, or one for each, as in a Debian for Emby, another Debian for Subsonic, and so on?  I feel that one Debian would be less resource intensive, but this is also my first attempt at something like Proxmox, so I don't really know.

 

Also, here is a list of my probable hardware for the Proxmox box.  It will definitely be running backups to the NAS, will probably have idle clones waiting on the Proxmox box, and will probably run the 4 SSDs in RAID 0, because why not.

https://pcpartpicker.com/list/q74zwV

 

Like I said, this is my first stab at virtual servers, so I am open to any advice on the hypervisor choice, the hardware, and definitely the VM configuration.

 

Thank you!


If you're using containers, you can run those as containers in Proxmox instead of in a VM. You can put Docker inside a container if you want that.
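
Roughly what that looks like on the Proxmox host, assuming a recent enough Proxmox to expose the container nesting feature and a Debian template already downloaded (the VMID, template file and storage names are just placeholders):

```
# Unprivileged Debian container with nesting enabled, so Docker can run inside it.
pct create 200 local:vztmpl/debian-9.0-standard_9.0-2_amd64.tar.gz \
    --hostname docker-apps --cores 2 --memory 2048 \
    --rootfs local-zfs:16 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp \
    --unprivileged 1 --features nesting=1,keyctl=1

pct start 200
pct exec 200 -- apt-get update
pct exec 200 -- apt-get install -y docker.io
```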

 

EDIT: For the build, there's no need for the network card, especially at that price.

 

Use the stock cooler

 

Don't use RAID 0; use mirrored vdevs in ZFS for storage.
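
For the four SSDs that means a stripe of two mirrors (the ZFS equivalent of RAID 10). A rough sketch, with a placeholder pool name and device paths:

```
# Two mirrored vdevs striped together: half the raw capacity, but one drive
# per mirror can fail without losing the pool.
zpool create -o ashift=12 tank \
    mirror /dev/disk/by-id/ata-SSD_A /dev/disk/by-id/ata-SSD_B \
    mirror /dev/disk/by-id/ata-SSD_C /dev/disk/by-id/ata-SSD_D
zpool status tank
```

The Proxmox installer will also build this layout for you if you pick ZFS RAID10 during installation.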

 

No need for a flash drive; install on the SSDs.

 

EDIT 2:

 

Why not get a used Dell R710? Much cheaper, faster, better supported, and more stable.


5 minutes ago, Electronics Wizardy said:

If you're using containers, you can run those as containers in Proxmox instead of in a VM. You can put Docker inside a container if you want that.

 

EDIT: For the build, there's no need for the network card, especially at that price.

 

Use the stock cooler

 

Don't use RAID 0; use mirrored vdevs in ZFS for storage.

 

No need for a flash drive; install on the SSDs.

Will I not need at least one more network card if I am using pfSense?

 

OK on the stock cooler and the storage.

 

So I can install the OS and the VMs/containers on the same drive?  I assume I will at least need to partition it.


5 minutes ago, newgeneral10 said:

Will I not need at least one more network card if I am using pfSense?

VLANs.
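
That is, trunk everything over the one NIC and let a VLAN-capable switch carry the tags. On the Proxmox side a VLAN-aware bridge is enough; a sketch of /etc/network/interfaces, using the syntax from a current Proxmox release and an example NIC name:

```
auto vmbr0
iface vmbr0 inet manual
    bridge-ports enp3s0        # the single physical NIC (example name)
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094         # pass tagged VLANs through to the guests
```

pfSense then gets virtual NICs on that bridge, one per VLAN tag (or a single trunked one), and does the WAN/LAN separation itself.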

 

6 minutes ago, newgeneral10 said:

So I can install the OS and the VMs/containers on the same drive?  I assume I will at least need to partition it.

Yep.

 

No partitions; use ZFS datasets. Much better.
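
For example, after letting the installer put Proxmox straight onto the SSDs with ZFS, you just carve the resulting pool into datasets and register them as storage (names here are placeholders):

```
zfs create rpool/vmdata
zfs create rpool/backup

# Register the dataset as VM/container storage:
pvesm add zfspool vmdata --pool rpool/vmdata --content images,rootdir
```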


Just now, Electronics Wizardy said:

VLANs.

 

Yep.

 

No partitions; use ZFS datasets. Much better.

shweet.  Thanks my dood


Theoretically you should use a single VM where possible, since the OS will try to optimize CPU time and it is best to do that at as high a level as possible.

In reality, I do not think it will make much of a difference if your VM host is any good. I have no personal experience with Proxmox, but I am sure you did your research, and if it is about as efficient as KVM it really shouldn't matter much.


1 minute ago, Electronics Wizardy said:

No partitions; use ZFS datasets. Much better.

I prefer btrfs; no one will need 128-bit pointers in the next 10-20 years, btrfs performs a little better, and the copy-on-write can be abused to do awesome things.


8 minutes ago, ChalkChalkson said:

I prefer btrfs; no one will need 128-bit pointers in the next 10-20 years, btrfs performs a little better, and the copy-on-write can be abused to do awesome things.

But btrfs is much less tested than ZFS. And with ZFS you can easily add drives to a pool, get caching built in, and have many other goodies that btrfs doesn't have.

 

Coming from a person who loves btrfs.


8 minutes ago, Electronics Wizardy said:

But btrfs is much less tested than ZFS.

That is completely true, but A: what counts as high reliability for a consumer is well below what the people writing a standard would consider acceptable, and B: jets are also less tested than props and we don't care at all :P

11 minutes ago, Electronics Wizardy said:

And with ZFS you can easily add drives to a pool, get caching built in, and have many other goodies that btrfs doesn't have.

Cache management is integrated in btrfs, and adding drives to a pool is where btrfs stands out so much! Especially if you achieve redundancy via copy-on-write, since that way you can add drives of different sizes :)
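
As a sketch of what I mean (device paths are examples): with the raid1 profile btrfs keeps two copies of every block, and the drives don't have to match in size:

```
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc   # two drives, sizes may differ
mount /dev/sdb /mnt/data

# Later: grow the filesystem with another, differently sized drive and rebalance.
btrfs device add /dev/sdd /mnt/data
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/data
```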

12 minutes ago, Electronics Wizardy said:

Coming from a person who loves btrfs.

Funny, while I personally use unRAID (and thus a somewhat disfigured version of btrfs), I think ZFS is the single coolest thing in the storage world... 128-bit pointers mean a system you set up today could be patched to accept hundreds of petabytes per square mm of the Earth's surface. Such a system could run a Matrioshka brain. The RAID-Z levels are also great in a professional environment, where speeds beyond 1 Gb/s matter.

But for consumers, btrfs is just more sensible at this time.


8 minutes ago, ChalkChalkson said:

Cache management is integrated in btrfs, and adding drives to a pool is where btrfs stands out so much! Especially if you achieve redundancy via copy-on-write, since that way you can add drives of different sizes :)

btrfs can't have a cache device.

 

btrfs doesn't support pooling.

 

8 minutes ago, ChalkChalkson said:

128-bit pointers mean a system you set up today could be patched to accept hundreds of petabytes per square mm of the Earth's surface.

And this filesystem will probably never see over 1 PB, so that's of no use. ZFS can make filesystems that are big enough for anything for probably the next 10 years.


1 minute ago, Electronics Wizardy said:

And this filesystem will probably never see over 1 PB, so that's of no use. ZFS can make filesystems that are big enough for anything for probably the next 10 years.

ZFS is the awesomely large one; as I said, hundreds of petabytes per square mm of the Earth's surface, whereas btrfs could be "filled" by stacking high-capacity drives in a large apartment.

2 minutes ago, Electronics Wizardy said:

btrfs can't have a cache device.

Pure btrfs might not, but there are plenty of implementations built around it that do.
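
bcache is one example of how that stack is usually built (just a sketch; the post above doesn't name a specific tool, and the device paths are examples):

```
# SSD caches the backing HDD at the block layer; btrfs goes on top of the
# resulting bcache device.
make-bcache -B /dev/sdb -C /dev/nvme0n1
mkfs.btrfs /dev/bcache0
mount /dev/bcache0 /mnt/data
```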

3 minutes ago, Electronics Wizardy said:

btrfs doesn't support pooling.

But plenty of equivalent options exist.


1 hour ago, ChalkChalkson said:

Theoretically you should use a single VM where possible, since the OS will try to optimize CPU time and it is best to do that at as high a level as possible.

More efficient kernels are better at this, but some OSes designed for this (like unRAID) don't manage the virtual OS's data themselves and instead give it restricted access to the hardware.

In RAM's case, the host OS sets aside, say, 2 GB of RAM for this example; it "partitions" that selection (I know that's not the right term) and tells the virtual OS that the system has 2 GB of RAM, when in reality it may have more.

The CPU is more complicated to explain, but put simply, the host tells the guest that the system only has these two cores available.
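
In Proxmox/KVM terms that is just the VM definition: the guest only ever sees what you hand it, whatever the host actually has. A hypothetical example (VMID, name and storage are placeholders):

```
qm create 100 --name debian-apps --ostype l26 \
    --cores 2 --memory 2048 \
    --net0 virtio,bridge=vmbr0 \
    --scsi0 local-zfs:32
qm start 100
```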

Virtualisation software works like this, but it has to share the machine with other programs while the host OS takes care of the resource allocation; if the host OS itself is the virtualisation software, it can allocate resources more efficiently.

But some software (Hyper-V is the only example I know of) can talk to the OS because it shares the same code; the OS lets the hypervisor allocate the resources itself instead of the OS doing it, which allows for more efficient resource allocation.



5 minutes ago, samiscool51 said:

The CPU is more complicated to explain, but put simply, the host tells the guest that the system only has these two cores available.

Virtualisation software works like this, but it has to share the machine with other programs while the host OS takes care of the resource allocation; if the host OS itself is the virtualisation software, it can allocate resources more efficiently.

Well, if you want to go really, really deep and look at the instruction level, you can make better predictions, but generally the following rule of thumb cuts it:

Running an OS takes resources, so you should try to run as few as possible.

 

Would you agree?


1 minute ago, ChalkChalkson said:

Running an OS takes resources, so you should try to run as few as possible.

Would you agree?

Not necessarily.

Yes, it takes resources, but it depends on how much.

Arch Linux (a popular DIY bare-bones Linux distro) requires a minimum of 128 MB of RAM, but that's just the base OS. Want to run a web server? There goes 512 MB. Want to run a GUI? There goes another 256 MB.

It depends on what the OS is doing and how many resources it needs to complete its tasks.

 

Running as few as possible is also recommended, but many virtualisation servers host lots of virtual OSes and don't have the luxury of running as few as possible; they may need to run many at once. An example of this is Azure: they host as many OSes on one system as possible, and once it can't handle any more, another server is set up to handle the rest.

 

Our main server hosts about 8 OSes (not including the host), and each system has at least 2 cores and at most 6 cores.

It depends on what you do with it, so running as few as possible may not fit many organizations' or people's needs.



2 minutes ago, samiscool51 said:

It depends on what you do with it, so running as few as possible may not fit many organizations' or people's needs.

That is pretty clear IMO, but if the question is whether to set up two Debian-based systems to host two simple servers, or just one, I think the organisational needs are pretty simple in that case. Sure, there are lots of reasons why you might want to split up OSes.

But in terms of straight performance, in most cases using fewer OSes rather than more is a little better.


22 minutes ago, ChalkChalkson said:

but if the question is whether to set up two Debian-based systems to host two simple servers, or just one,

Debian doesn't take many resources, but it could be trimmed further by removing the extra stuff and keeping only the essentials (no GUI; CLI is best for a Linux server as it doesn't need many resources).
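
For example, on a minimal Debian server you would skip the desktop task during install, avoid pulling in recommended packages, and check what it actually idles at (nginx here is just a stand-in package):

```
apt-get install --no-install-recommends nginx   # don't drag in optional extras
systemctl set-default multi-user.target         # boot to a plain console, no GUI
free -h                                         # see how little RAM it sits at
```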

If the server has the power to run two Debians, then it's cheaper to virtualise.

But if it can't, then two physical systems may be required for your workloads.

 

It's better to virtualise in many people's cases, as it's free and doesn't require two physical systems.

My server runs multiple virtual OSes.

It's an 8-core system (yes, it's overpowered, but I use it for work and such by virtualizing OSes to see whether an implementation would work or screw up our network), and it runs:

Windows Server 2016 (host OS): 8 cores / 16 threads, 16 GB RAM (with no virtual OSes running)

Arch Linux (main Linux system and main work server OS for non-Windows stuff): 2 cores, 4 GB RAM (runs alongside everything below except the virtual Windows Server 2016)

Ubuntu Server (secondary Linux system, used for testing new software for our Ubuntu servers): 2 cores, 4 GB RAM

Windows Server 2016 (virtual OS, used for our work servers that handle things like Windows users): 4 cores, 8 GB RAM (when it runs, all other virtual systems are turned off)

 

None of the virtual systems have hyper-threading enabled, because I'm lazy and can't be bothered enabling it.



3 hours ago, ChalkChalkson said:

That is pretty clear IMO, but if the question is whether to set up two Debian-based systems to host two simple servers, or just one, I think the organisational needs are pretty simple in that case. Sure, there are lots of reasons why you might want to split up OSes.

But in terms of straight performance, in most cases using fewer OSes rather than more is a little better.

 

If you have a newer CPU which supports second-level address translation (EPT/NPT), then the overhead is negligible.
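
A quick way to confirm the host is actually using it once the KVM modules are loaded (only the module for your CPU vendor will be present):

```
cat /sys/module/kvm_intel/parameters/ept   # Intel: Y (or 1) means EPT is in use
cat /sys/module/kvm_amd/parameters/npt     # AMD: 1 means NPT is in use
grep -c -E 'vmx|svm' /proc/cpuinfo         # non-zero: CPU has hardware virtualisation at all
```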

As for memory consumption: even a mainstream Linux install will idle at 120-150 MB of RAM, and Windows Server will run on 512 MB of RAM (GUI overhead).

Really, that is negligible when you only have a few VMs, given the trade-off that you can isolate your apps/uses and install the required toolsets in each VM to provide full compatibility for those apps.

Isolating them also makes configuration much simpler, and has the added benefit that if you do have an issue with an OS and need to blow it away, you only have one app to set up again. E.g. I just mucked up the SSH jumphost to my home network today; I rebuilt it from scratch and hardened it without having to touch my seedbox, development box, or webserver.



21 hours ago, Jarsky said:

 

If you have a newer CPU which supports second-level address translation (EPT/NPT), then the overhead is negligible.

As for memory consumption: even a mainstream Linux install will idle at 120-150 MB of RAM, and Windows Server will run on 512 MB of RAM (GUI overhead).

Really, that is negligible when you only have a few VMs, given the trade-off that you can isolate your apps/uses and install the required toolsets in each VM to provide full compatibility for those apps.

Isolating them also makes configuration much simpler, and has the added benefit that if you do have an issue with an OS and need to blow it away, you only have one app to set up again. E.g. I just mucked up the SSH jumphost to my home network today; I rebuilt it from scratch and hardened it without having to touch my seedbox, development box, or webserver.

I think you guys misunderstood me heavily... I think splitting up OSes is often the more sensible choice, but if you are completely indifferent to the conveniences of either option, it might be slightly better to run it all in one VM.


On 6/9/2017 at 1:46 PM, ChalkChalkson said:

Well, if you want to go really, really deep and look at the instruction level, you can make better predictions, but generally the following rule of thumb cuts it:

Running an OS takes resources, so you should try to run as few as possible.

 

Would you agree?

Kind of depends. I prefer not to deploy mixed-role servers, to make deployment and maintenance easier. It also spreads the risk out.

 

But generally speaking, yeah, don't go nuts deploying VMs/OSes for no good reason.


Definitely agree with splitting up VMs for easy configuration and risk spreading.

 

The first thing that pops into my mind when using one network card is that all network traffic goes through one port, which may slow things down.

Correct me if I'm wrong.


