
What is Virtualization Used For in Servers?

So I found out that servers use virtualization, or at least big servers do. Why is this needed?


Virtualization allows multiple computers, called virtual machines (VMs), to run on the same physical server. This is done to save cost: the majority of servers do not fully utilize the hardware resources they have, so being able to run many on the same hardware is extremely cost effective.

 

This is a very basic explanation, but it is the core reason virtualization has become so popular. There are many other benefits, but it's best to get a good grasp of the core concept before looking into other areas of virtualization.

 

P.S. Don't trust LTT forum spell checker, it has no idea  :)


A lot of things. On my servers I use it to keep each of my game servers isolated from the others in case one gets compromised.

I know some users also offload all their networking stuff to VMs (like routers and firewalls).

HTID


1 minute ago, leadeater said:

Virtualization allows multiple computers, called virtual machines (VMs), to run on the same physical server. This is done to save cost: the majority of servers do not fully utilize the hardware resources they have, so being able to run many on the same hardware is extremely cost effective.

Does this allow all VMs to control the server? Or is each VM a computer that has access to data?


1 minute ago, Appleboy45 said:

Does this allow all VMs to control the server? Or is each VM a computer that has access to data?

VMs are isolated from each other and from the hardware. When you create a VM you assign it resources from what is available on the server; it is even possible to over-provision resources compared to what is actually available, which is very commonly done for CPU cores.

 

At work we run at about 4 to 1 for assigned virtual cores to real physical cores.
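
To put some rough numbers on what that ratio means, here's a quick back-of-the-envelope sketch; the host core count and per-VM vCPU sizes below are made-up examples, not our actual environment:

# Back-of-the-envelope vCPU:pCPU over-provisioning maths (example numbers only).

physical_cores = 24                                        # e.g. a dual 12-core host
vm_vcpus = [4, 4, 8, 2, 2, 8, 4, 16, 8, 8, 16, 16]         # vCPUs assigned to each VM

assigned_vcpus = sum(vm_vcpus)
ratio = assigned_vcpus / physical_cores

print(f"{assigned_vcpus} vCPUs assigned on {physical_cores} pCores -> {ratio:.1f}:1")
# With these made-up numbers: 96 vCPUs on 24 pCores -> 4.0:1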


23 hours ago, leadeater said:

<snip>

At work we run at about 4 to 1 for assigned virtual cores to real physical cores.

Jesus, that is very generous of your work, haha. We hammer our Dell Blades over at our work.

 

Then again I work at a school district.


2 hours ago, miguelr said:

Jesus, that is very generous of your work, haha. We hammer our Dell Blades over at our work.

 

Then again I work at a school district.

Yea, personally I think we should run a much higher ratio than we do, but that's not my decision. It's not like we couldn't do it either; we have 58 ESX hosts using dual E5-2690 v3s, so it's not like we are low on CPU horsepower :P. Resource pools could also be used to ensure performance for critical services, along with dedicated host clusters (which we already do). But we just stick to roughly 4:1 and buy more hosts to meet that.

 

We are very good at spending/burning huge amounts of money for no real good reason >.<


To save money. I know of a hospital that converted their data center of 250+ servers into a single 42U cabinet. I've fit over 600 VMs into a single 1U server; without virtualization I would have spent a fortune to host those 600 virtual servers and would have gone out of business really fast.

-KuJoe


Power benefits (more eco-friendly).

Server snapshot backups make restoring machines a breeze.

Low footprint to save space.

In some cases better redundancy (like the blade scenario), where the hardware is pooled.


On May 1, 2016 at 11:07 PM, leadeater said:

 

At work we run at about 4 to 1 for assigned virtual cores to real physical cores.

I don't know where you work, but that's some bad practice... 

ESXi SysAdmin

I have more cores/threads than you...and I use them all


10 hours ago, Sunshine1868 said:

I don't know where you work, but that's some bad practice... 

Most of our VMs are high compute load. What a service provider does with very high-density/scale Linux services isn't what we do, so that's the core reason it's low. We also have to take into account DR load for a failover, so in that situation it will be 8:1.
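
To show why the ratio roughly doubles in that situation, here's a rough sketch using the dual E5-2690 v3 hosts mentioned earlier (12 physical cores per socket); the assumption that a failover leaves only half the hosts carrying the same VMs is purely illustrative:

# Sketch of why a 4:1 ratio turns into 8:1 after a failover (illustrative numbers).

hosts = 58                     # ESX hosts in the cluster (from the post above)
cores_per_host = 2 * 12        # dual E5-2690 v3, 12 physical cores per socket
normal_ratio = 4               # target vCPU:pCPU ratio during normal operation

total_pcores = hosts * cores_per_host
assigned_vcpus = total_pcores * normal_ratio

# Assume (purely for the example) that a DR event leaves half the hosts
# carrying the same set of VMs.
surviving_pcores = total_pcores // 2
dr_ratio = assigned_vcpus / surviving_pcores

print(f"Normal: {assigned_vcpus} vCPUs on {total_pcores} pCores -> {normal_ratio}:1")
print(f"DR:     {assigned_vcpus} vCPUs on {surviving_pcores} pCores -> {dr_ratio:.0f}:1")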

 

Our hosts sit at a continuous minimum of about 25% CPU load, average around 30%-35%, and peak at around 50%-60%. While I know we could run more on each host even taking DR into account, I don't have much say in this, and at times not even the ITS department does; that's a whole different thing I can't be bothered getting into (blah, bureaucracy).

 

Also, at night during our backup window everything gets pushed very hard to finish in time, to the point that the VMs become noticeably slower. We pull around 4-5 GB/s off our primary NetApp filers and put significant load on our network, which is interconnected at 40Gbps.

 

We have to guarantee a minimum level of service at all times 24/7 or we will have 35,000 students and thousands of lecturers/professors lining up to murder us.

 

The vCPU to pCPU ratio isn't really the full picture, so it's actually a fairly bad metric to look at solely. Also, 4:1 is a pretty generic rule of thumb people have been bandying around for a long time as a good target to aim for; it's a little outdated in my opinion, but you see this number all over the place when researching the topic.

 

http://blogs.vmware.com/vsphere/2015/11/vcpu-to-pcpu-ratios-are-they-still-relevant.html


@leadeater if you're high compute load you'd get better performance by going 1:1; the time-slicing those procs are doing is probably cutting your performance significantly, especially if you're high compute. 

 

but I don't know your environment :P haha

ESXi SysAdmin

I have more cores/threads than you...and I use them all


I'd second the concern about over-allocating CPUs if all the VMs are going to be pegging the CPU at the same time. But if they only spike every so often while others are idle, then obviously you're good. I also don't know your environment, and if you don't have slowdowns or issues and your config works, you can't knock what's proven, I suppose. Just if you ever expand and run into issues, keep that in the back of your mind.

 

Also, somebody said 600 VMs on a 1U server... I'd call that insane. Even 600 internal DNS servers on a single machine seems bad.


On 5/1/2016 at 11:03 PM, Appleboy45 said:

Does this allow all VMs to control the server? Or is each VM a computer that has access to data?

You isolate VMs from each other and from the hardware of the host machine; each one uses only what you assign to it. You can even over-provision, which is pretty damn nice. So you could have a single physical server that hosts the web server, internal storage server, and DC, and have them all be isolated-ish from each other.


5 minutes ago, JoeyDM said:

You isolate VMs from each other and from the hardware of the host machine; each one uses only what you assign to it.

Meaning, each virtual machine is assigned (so the virtual machine thinks it has) a certain number of CPU sockets/cores, a certain amount of RAM, and a certain amount of storage... (plus any attached hardware).

6 minutes ago, JoeyDM said:

You can even over-provision, which is pretty damn nice.

In theory you could have 1TB of storage in the physical host, yet allocate more than 1TB to the machines (so they see more storage but aren't necessarily using it... this is called "thin provisioning" and it works with RAM as well). In general, over-provisioning is not great practice because once you hit the limit, the VMs won't know there is a cap.
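
A tiny sketch of what that storage over-allocation looks like (all numbers are made up): the VMs collectively see more space than the datastore actually holds, and the thing to keep an eye on is how much has really been written.

# Thin-provisioning sketch: what the VMs see vs what is actually written.
# All numbers are made up.

datastore_capacity_tb = 1.0

# (provisioned_tb, actually_written_tb) for each thin-provisioned virtual disk
vdisks = [(0.5, 0.10), (0.5, 0.20), (0.5, 0.05), (0.25, 0.10)]

provisioned = sum(p for p, _ in vdisks)
consumed = sum(w for _, w in vdisks)

print(f"Provisioned: {provisioned:.2f} TB on a {datastore_capacity_tb:.2f} TB datastore "
      f"({provisioned / datastore_capacity_tb:.0%} of capacity)")
print(f"Actually consumed: {consumed:.2f} TB")

if consumed > datastore_capacity_tb:
    print("Datastore is full - and the guests have no idea there was a cap.")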

 

Over-provisioning CPU usage IS possible, though it isn't ideal practice; the more "compute heavy" the application is, the more time-slicing the hypervisor has to do to share those resources among the VMs, effectively slowing down the machines (HUGELY diminishing returns on this).
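
Rough worst case: if every assigned vCPU wants to run at once, each one can only get a fraction of a core. A quick illustrative sketch (example numbers only):

# Worst-case CPU contention under over-commit (illustrative only).
# If every assigned vCPU demands CPU at the same moment, each one can get
# at most pCores / vCPUs of a core's worth of time.

physical_cores = 24
assigned_vcpus = 96            # a 4:1 over-commit

worst_case_share = physical_cores / assigned_vcpus
print(f"At {assigned_vcpus // physical_cores}:1 with every vCPU busy, "
      f"each vCPU gets at most {worst_case_share:.0%} of a core")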

9 minutes ago, JoeyDM said:

...and have them all be isolated-ish from each other.

Isolation is set by the host administrator. It is possible to run a VM completely standalone with no access to vNICs or pNICs, sitting alone in a room by itself with no friends, OR you can network all of your VMs together on a vSwitch that lets them communicate at bus speed. More advanced isolation techniques are in the works (VMware NSX), but that is too costly for any home user (if you think you have the money to buy it... you don't).

ESXi SysAdmin

I have more cores/threads than you...and I use them all


3 minutes ago, Sunshine1868 said:

 

Thank you for adding drastically more detail than I wanted to put the effort into.

 

I should clarify, I was referring to over provisioning the CPU. It isn't best-practice, but it is a possible benefit. It's pretty damn nice... In my home lab. Maybe not in a work environment. 


@JoeyDM no worries. 

Yeah, OPing the CPU is definitely a balancing act and definitely not a production practice. It is indeed a great thing in a home lab though!

ESXi SysAdmin

I have more cores/threads than you...and I use them all


15 hours ago, Sunshine1868 said:

@leadeater if you're high compute load you'd get better performance by going 1:1; the time-slicing those procs are doing is probably cutting your performance significantly, especially if you're high compute. 

 

but I don't know your environment :P haha

Sorry, I really meant higher compute, not high compute as in HPC. We have dedicated clusters for that stuff, run separately from the ITS department. Most of our stuff is just highly loaded from general usage due to the number of concurrent users on the network. We are primarily a Windows environment, so that's enough of a clue in itself :P. Things like SharePoint want stupid amounts of resources.

 

Running 1:1 just isn't a viable option cost-wise and isn't necessary. We also often don't have the power to say "no, you don't need 4, 6, 8 etc. cores" when someone asks for a VM; academic freedom, blah blah, just give them what they want. It's not that we haven't tried to win this fight, but we are here to provide services to a university, and when it comes to getting backing from the senior leadership team ITS will never win, which is extremely frustrating as you can imagine.


15 hours ago, Sunshine1868 said:

Over-provisioning CPU usage IS possible, though it isn't ideal practice; the more "compute heavy" the application is, the more time-slicing the hypervisor has to do to share those resources among the VMs, effectively slowing down the machines (HUGELY diminishing returns on this).

 

15 hours ago, JoeyDM said:

I should clarify, I was referring to over provisioning the CPU. It isn't best-practice, but it is a possible benefit. It's pretty damn nice... In my home lab. Maybe not in a work environment. 

 

Outside of the US, over-provisioning CPU cores is standard practice; there hasn't been a single place I have worked for or contracted into that hasn't done this for core network services such as AD, DHCP, DNS, file and print, etc.

 

Whenever we speak to VMware, Nutanix, Microsoft, etc., they all make the same comment: the US is still very reserved in its virtualization and a lot of places still have many dedicated physical servers. They even point out there is a significant difference between the East and West coasts in the same regard.

 

The APAC region was the first to really get on board with virtualization on a wide scale, and because of this we get a lot of support from the major players in the market; they use us a lot as a testing ground for new products and services. I remember way back (ages ago, for IT) at the launch seminar for VMware GSX Server how totally groundbreaking it was that you could run 4 to 10 VMs on a single server; those really are not 'the good old days' haha.

 

Over here there are 3 golden rules everyone tries to follow (quick sanity-check sketch below):

  • Only allocate the cores you actually need
  • Default to thin-provisioned storage
  • NEVER overcommit RAM
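
And a quick sanity-check sketch of those rules against a proposed VM; the thresholds and figures are purely illustrative, not how any particular tool actually does it:

# Illustrative sanity check of a proposed VM against the three rules above.
# All figures and thresholds are made up for the example.

host_free_ram_gb = 64            # RAM not yet committed to other VMs on the host

vm_request = {
    "vcpus": 8,
    "vcpus_workload_needs": 4,   # what the workload was actually sized at
    "ram_gb": 32,
    "disk_provisioning": "thick",
}

warnings = []
if vm_request["vcpus"] > vm_request["vcpus_workload_needs"]:
    warnings.append("Rule 1: asking for more cores than the workload needs")
if vm_request["disk_provisioning"] != "thin":
    warnings.append("Rule 2: default to thin-provisioned storage")
if vm_request["ram_gb"] > host_free_ram_gb:
    warnings.append("Rule 3: never overcommit RAM")

print("\n".join(warnings) if warnings else "Request looks fine")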

 

Edit: Of course, some of these differences are due to industry size between countries, etc. We may be considered very large in my country, but on a global scale we are in reality small. How and why we do things will not fit everywhere.


8 hours ago, leadeater said:

 

 

Outside of the US, over-provisioning CPU cores is standard practice; there hasn't been a single place I have worked for or contracted into that hasn't done this for core network services such as AD, DHCP, DNS, file and print, etc.

 

 

Something tells me this has something to do with shipping costs to NZ; I've never seen an environment where CPU over-provisioning is the practice. I guess that's the luxury of living/working in the US, where people don't mind buying the top-end Xeons?

ESXi SysAdmin

I have more cores/threads than you...and I use them all


9 hours ago, Sunshine1868 said:

Something tells me this has something to do with shipping costs to NZ; I've never seen an environment where CPU over-provisioning is the practice. I guess that's the luxury of living/working in the US, where people don't mind buying the top-end Xeons?

Yea, pricing here is higher so everything is done with the smallest resourcing possible. Also, since the majority of networks are so small, no single VM ever uses that much CPU, so over-provisioning is very attractive and doesn't get too complicated for ESX CPU time scheduling etc. We only have 4.5 million people here so nothing is truly big :P

