
One Server to Rule Them All?

Biohack

What are some pros and cons, or the general consensus… one powerful server to run all your services, or a dedicated, less powerful ("just right") server for each service?


Normally one bigger server will be cheaper and more efficient, but it depends on the exact use. For example, I have an old 2011 server that's running about 20 VMs without an issue.

 

One big server also allows each service to burst to high levels of CPU if needed.

 

What services do you plan on using?


You typically want one powerful server, for the reasons @Electronics Wizardy mentioned. Then run the individual services in either a virtual machine or a Docker container to ensure they are isolated from one another (e.g. for security and to avoid dependency conflicts).
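A minimal sketch of the Docker route (a Compose file with two made-up services, just to show the isolation):

    # docker-compose.yml (hypothetical services, only to illustrate the idea)
    services:
      nextcloud:
        image: nextcloud:latest
        restart: unless-stopped
        ports:
          - "8080:80"            # each service gets its own port mapping...
        volumes:
          - nextcloud-data:/var/www/html
      pihole:
        image: pihole/pihole:latest
        restart: unless-stopped
        ports:
          - "8081:80"            # ...and its own filesystem and network namespace
    volumes:
      nextcloud-data:

Each container only sees its own dependencies, so upgrading one can't break the other.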



One big server is definitely easier for management and logging, and it's usually less expensive to boot. The con, obviously, is that if that server goes down, everything goes down. One bad hardware failure means everything is offline until it can get fixed. Configuring the server incorrectly may also affect any VMs, Docker containers, etc. that are hosted on it. So it really depends on whether you're comfortable having that one big server as a possible single point of failure.


Basically.... exactly what everyone else said. 

 

I even went one step further and virtualized my pfSense router on my single "big boi" homelab server... why run a separate appliance if you already have a server running that can easily be your router? (This has some downsides; a virtual router can be a PITA, but so far it's worked just fine for me, YMMV.)
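If anyone wants to try it: the usual trick is to give the firewall VM its own NIC. On Proxmox, to pick one hypervisor (the VM ID and PCI address below are made up), that's roughly:

    qm set 101 --hostpci0 0000:03:00.0        # pass a physical NIC straight through for WAN
    qm set 101 --net1 virtio,bridge=vmbr1     # or hand it a bridged virtual NIC for LAN

Passthrough keeps WAN traffic off the host's bridge, but it also means the host loses that NIC entirely.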



10 hours ago, Electronics Wizardy said:

What services do you plan on using?

I'd like to keep that private.

 

Thanks guys. Makes sense.


Should you virtualise services even if you're running a Linux server? After all, you can just create a user for each service, right?

 

And doesn't virtualising things limit each service's CPU utilisation spikes to whatever the VM is allocated, defeating the purpose of one powerful server?
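To be concrete about the first option, this is the kind of thing I mean (a sketch, assuming a systemd distro and a made-up service):

    # /etc/systemd/system/myapp.service  (hypothetical service, sketch only)
    [Unit]
    Description=Example service running as its own unprivileged user
    After=network.target

    [Service]
    User=myapp
    Group=myapp
    ExecStart=/opt/myapp/bin/myapp
    # extra sandboxing systemd can layer on top of the user split:
    ProtectSystem=strict
    PrivateTmp=true
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target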


I've done this cost analysis on a corporate scale, nearly had fist fights with other engineers over it, and had some pretty fun scream fests over at SpiceWorks, which is MSP-dominated.

 

I support a lot of small businesses that tend to run standalone servers that aren't virtual, because there's no business need. Sometimes 3 or 4 individual servers make more sense than virtualization, and I can argue the ROI with any CIO. Also, not everything runs happily in Azure or AWS, where a glitch on your ISP's side shuts your 24/7 business down.

 

Server consolidation makes sense the more guests you have. Same argument with VDI. As you scale up with more guest VMs you save money because you can cram more of them on a single box, especially if they are lightweight VMs or just utility stuff. You can stack those things on any modern processor, and you will typically run out of RAM long before you run out of CPU cycles. You can allocate resources to what a VM needs and shuffle resources to other VMs, etc.
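On a libvirt/KVM box, for example, shuffling resources around a running guest is a couple of commands (a sketch; the guest name is made up, and live changes only work up to the ceilings defined for that VM):

    virsh setvcpus guest01 4 --live   # grow the running guest to 4 vCPUs
    virsh setmem guest01 8G --live    # raise its memory allocation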

 

The pro-bare-metal argument is limited, but it has its case. Hardware turnover is the biggest advantage with smaller bare metal servers. With a bigger, beefier server it takes much longer to get your ROI out of it, and when you upgrade that box it's much more expensive. I can get a 12100-based desktop for $500 if I hit the outlet sites, shove Server 2019 on it with a DC-grade SSD, and be happy. Try to get that horsepower in a server package and you will be paying lots of cash. Let's say I have only one or two VMs that really need a CPU upgrade; if they are virtual, I have a complicated formula to solve. Bigger, beefier boxes are more expensive to upgrade. Smaller, cheaper boxes are easier to upgrade, and are cheaper to customize to the needs of a VM. I have a Blue Iris server that's in need of an upgrade. Why would I waste money carving a chunk out of a server stack and SAN when I can throw it on a cheap workstation with some 10TB spinners and do it for a fraction of the cost?

 

Yes, exporting VMs or pulling them from a Veeam backup is easier, when it comes to migrating VMs, than backing up and restoring bare metal. But... I can also take an SSD from a bare metal server that's blown a power supply, plug it into pretty much any equivalent workstation, boot it, and have the server up in 15 minutes.

 

Bigger, consolidated servers running many VMs also present a problem with power backup. With a single, small bare metal server I can plug in a cheap smart UPS and it will calmly shut down the server in the event of a power outage. With a multi-VM host it ain't so easy. You need a UPS with a web card in it, and you have to configure each VM to talk to the UPS when it's running on batteries. Sorry, but host-centric utilities to do this don't work in my book, be it VMware or Hyper-V. This is why I see even small VMware hosts in small businesses with comically large battery backups. Getting all those guest VMs to shut down nicely when your UPS is at 5% battery is still something that isn't turnkey. Yanking data stores away from running SQL or file servers because your battery backup ran out is also not cool.
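For what it's worth, on a plain KVM host the hand-rolled version looks something like this (a sketch, wired to whatever your UPS monitor calls on low battery, e.g. NUT's SHUTDOWNCMD); I'm not claiming it's turnkey:

    #!/bin/bash
    # Ask every running guest to shut down cleanly, then power off the host.
    for vm in $(virsh list --name); do
        virsh shutdown "$vm"
    done
    sleep 120          # give the guests time to finish
    shutdown -h now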

 

The fault tolerance argument also doesn't work in my book. If you have 4 hosts connected to a single SAN or data storage appliance, you've just moved your point of failure to storage. I've had 5x as many SAN failures as host failures, and these are $100k-plus storage networks: core switches tossing their configs because of a buggy firmware update, trespassed controllers getting a bug check, etc. Yet those 15-year-old Dell PowerEdge towers will run until the earth falls into the sun.

 

 


Thanks wseaton, you raised some interesting points I wasn't aware of.


On 8/27/2022 at 8:54 PM, Biohack said:

Should you virtualise services even if you're running a Linux server? After all, you can just create a user for each service, right?

 

And doesn't virtualising things limit each service's CPU utilisation spikes to whatever the VM is allocated, defeating the purpose of one powerful server?

If you have services with large CPU demands, dedicate more vCPUs to that VM… is this for home use or commercial use?

 

In a home setup, there is almost no chance this will be an issue. 
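On Proxmox, for example, that's a one-liner (the VM ID and numbers below are made up):

    qm set 103 --cores 8 --memory 16384   # give the busy VM 8 vCPUs and 16 GB of RAM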



19 hours ago, LIGISTX said:

In a home setup, there is almost no chance this will be an issue. 

Commercial, but small number of users.

19 hours ago, LIGISTX said:

If you have services with large CPU demands, dedicate more vCPUs to that VM… is this for home use or commercial use?

But I still can't help but wonder whether segregation on a per-user basis is more optimal than creating VMs? VMs seem to create more problems than they solve.


Been thinking about this and I think I figured it out: you use VMs because you can have just one physical server for everything, even if it requires unique OS instances or different OSes. I was just stuck because all my services can run on Linux, so I couldn't imagine needing VMs. But if you relax that assumption, VMs start to make sense.


2 hours ago, Biohack said:

Been thinking about this and I think I figured it out: you use VMs because you can have just one physical server for everything, even if it requires unique OS instances or different OSes.

That's the gist of it.

 

Virtual Machines allow for:

  • Using templates to automatically and easily deploy machines
  • Easily migrating servers between hardware (as they're mostly abstracted from the hardware layer)
  • Easily managing machine-level backups
  • Creating "snapshots" of a system before making a change, and rolling back instantly in the event of a problem (see the sketch after this list)
  • Dynamically allocating resources based on what is available on the host, and balancing resources between multiple servers performing various tasks
  • Creating secure virtual "apps" of virtual machines to isolate applications on their own network for security reasons, thanks to the security policies on the top-tier hypervisors
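For instance, taking a snapshot before a risky change and rolling back is a one-liner each way. On a Proxmox VE host, to pick one hypervisor (the VM ID and snapshot name here are made up), it would look roughly like:

    qm snapshot 100 pre-upgrade    # capture the VM's state before the risky change
    qm rollback 100 pre-upgrade    # instantly return to that state if things go wrong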

In larger environments which utilise external storage (SAN/NAS with iSCSI, etc.), it also gives you redundancy.

In the event of a compute host falling over, the VM can automatically be moved to another compute host. 
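Again using Proxmox as the example stack, flagging a VM for HA is a single command (a sketch; the VM ID is made up and it assumes a cluster with shared storage already exists):

    ha-manager add vm:100 --state started   # restart or relocate this VM on another node if its host dies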

 

And the list of advantages goes on.

 

 


On 8/28/2022 at 4:54 AM, LIGISTX said:

I even went one step further and virtualized my pfSense router on my single "big boi" homelab server... why run a separate appliance if you already have a server running that can easily be your router? (This has some downsides; a virtual router can be a PITA, but so far it's worked just fine for me, YMMV.)

Researching VM-based routers now. I didn't even know this was possible!


15 minutes ago, Biohack said:

Researching VM-based routers now. I didn't even know this was possible!

Anything can be made virtual… Putting your network appliances in VMs is typically a bad idea, but for homelab use it can make sense. For enterprise… it doesn't make sense.
 

VMs just provide you with segregation, and potentially with failover and HA. Obviously if the box goes down, everything goes down, but you could use VMs for such a purpose. The big one is simply splitting up services, so that if one thing takes a poo and downs its OS, or starts sucking back massive CPU cycles, that issue can only grow so large and the other VMs should remain functioning just fine.
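And on most hypervisors you can put a hard ceiling on how much CPU any single VM is allowed to chew through, so a runaway service can't drag the whole box down. On Proxmox that's roughly (hypothetical VM ID):

    qm set 105 --cpulimit 2   # this VM can never use more than 2 cores' worth of CPU time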



On 8/28/2022 at 2:00 PM, wseaton said:

Server consolidation makes sense the more guests you have. Same argument with VDI.

Leaving for personal future reference.

 

"A guest operating system is the operating system installed on either a virtual machine (VM) or a partitioned disk."

 

VDI - Virtual Desktop Infrastructure.


  • 1 month later...

Hi everyone,

I'm kinda new to these things, servers and all that, but I have a question regarding the hardware needed to run VDI for 40-some users (coding). Is there any advice you would give to starters like me?

