
General Virtualization Discussion Thread

10 minutes ago, 2FA said:

I couldn't get it working on Arch, which still has 19.30 in the AUR; I've seen a possible mention of it working with 19.50, but I'm not sure about that. It works on Windows now that Core22 is out along with the 20.xx drivers.

 

EDIT: Found this, so it's in beta support on Linux: https://foldingforum.org/viewtopic.php?t=31710&start=90

Awesome, thanks. I might start looking at GPUs now.


Hey all, thought I'd join the thread. Nothing to add to the current discussions yet; I currently run vSphere & vCenter Server for my home lab. Hopefully I can contribute and learn a thing or two along the way.


4 hours ago, DogKnight said:

Hey all, thought I'd join the thread. Nothing to add to the current discussions yet; I currently run vSphere & vCenter Server for my home lab. Hopefully I can contribute and learn a thing or two along the way.

Welcome to the thread. :D

 

unrelated:

I don't understand why people today keep buying socket 1366 servers when the second-hand market for LGA2011 has been really cheap for a number of years now. I'm debating whether to pick up a pair of E5-2690's for the VM server along with 8x 32GB DIMMs. That would free up the current 2670's and 128GB of UDIMM memory for other rigs, but I'm worried about hardware compatibility between the RAM, the motherboard, and the CPUs, since they have to be C2 stepping for hardware pass-through.

 

I also wouldn't mind picking up a second-hand bare-bones server just so I can give VMware software a go.


14 minutes ago, Windows7ge said:

Welcome to the thread. :D

 

unrelated:

I don't understand why people today keep buying socket 1366 servers when the second-hand market for LGA2011 has been really cheap for a number of years now. I'm debating whether to pick up a pair of E5-2690's for the VM server along with 8x 32GB DIMMs. That would free up the current 2670's and 128GB of UDIMM memory for other rigs, but I'm worried about hardware compatibility between the RAM, the motherboard, and the CPUs, since they have to be C2 stepping for hardware pass-through.

 

I also wouldn't mind picking up a second-hand bare-bones server just so I can give VMware software a go.

People just see the sticker price, but in reality 1366-based servers are more expensive overall because of power costs. I'm personally into low-powered equipment, since I don't have anything doing serious computation other than the very occasional media transcode, and it puts out a lot less heat.



3 hours ago, 2FA said:

People just see the sticker price, but in reality 1366-based servers are more expensive overall because of power costs. I'm personally into low-powered equipment, since I don't have anything doing serious computation other than the very occasional media transcode, and it puts out a lot less heat.

Since I got pulled into BOINC during the BOINC Pentathlon last May, I've just thrown all my extra CPU power at World Community Grid. I'm very interested in testing out a new bunkering technique I have in mind.


8 hours ago, Windows7ge said:

I don't understand why people today keep buying socket 1366 servers when the second-hand market for LGA2011 has been really cheap for a number of years now.

Well, some of us tightasses find ~$6–$20 cheaper than ~$50–$150.


41 minutes ago, leadeater said:

Well, some of us tightasses find ~$6–$20 cheaper than ~$50–$150.

If you run the equipment 24/7, chances are you'll save more money with newer hardware after about 3 years. That's only counting idle power usage at super-cheap electricity rates; the gap widens if you live somewhere with high electricity costs. It's like taking a 7-year car loan: sure, the payment is lower, but you pay a lot more in the end.
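
As a rough sketch of that break-even math (all the wattages and prices below are made-up placeholders, not measurements):

```python
# Back-of-the-envelope: old-but-cheap box vs newer-but-pricier box,
# counting idle draw only. Numbers are illustrative placeholders.
HOURS_PER_YEAR = 24 * 365

def annual_power_cost(idle_watts: float, usd_per_kwh: float) -> float:
    """Cost of running a box 24/7 at its idle draw for one year."""
    return idle_watts / 1000 * HOURS_PER_YEAR * usd_per_kwh

old_sticker, old_idle_w = 50, 150    # e.g. an LGA1366 box
new_sticker, new_idle_w = 150, 110   # e.g. an LGA2011 box
usd_per_kwh = 0.10                   # "super cheap" electricity

saving = annual_power_cost(old_idle_w, usd_per_kwh) - annual_power_cost(new_idle_w, usd_per_kwh)
years = (new_sticker - old_sticker) / saving
print(f"Newer box saves ${saving:.0f}/yr and pays for itself in {years:.1f} years")
# -> roughly $35/yr, break-even around the 3-year mark; a higher
#    power price or a bigger wattage gap shortens that considerably.
```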



2 minutes ago, leadeater said:

Well, some of us tightasses find ~$6–$20 cheaper than ~$50–$150.

The thing is, I could argue this is just the natural progression of technology. Years ago someone could have looked at you or me and said, "Why should I pay $50–$150 for 1366 when I can get 775 for a lot cheaper?" Back then 1366 at $50–$150 was the better deal and 2011 was well in excess of that; now 2011 is the one at $50–$150. See where I'm coming from?

 

I mean, look at this:

Spoiler: [screenshot of the E5-2690's launch-era ~$2,100 list price]

An 8C/16T CPU that boosts up to 3.8GHz (a modest boost clock for the time) cost around $2,100 back in 2012.

 

Now on eBay:

Spoiler: [screenshot of current eBay listings for the E5-2690]

$65–$80. Where else are you going to find an 8C/16T CPU with a 3.8GHz boost clock that can scale up to two sockets with up to 384GB of RAM per CPU, for $65–$80?

 

If you need cheap compute power at home for A LOT of virtualization, LGA2011 is where it's at. :D


3 hours ago, Windows7ge said:

The thing is, I could argue this is just the natural progression of technology. Years ago someone could have looked at you or me and said, "Why should I pay $50–$150 for 1366 when I can get 775 for a lot cheaper?" Back then 1366 at $50–$150 was the better deal and 2011 was well in excess of that; now 2011 is the one at $50–$150. See where I'm coming from?

Oh sure, it's not like I'm suggesting one is a better buy than the other, but when you're getting L5630's for $6 each with free shipping, it's actually a harder sell to spend $50 instead of $6 than it is to spend $100 instead of $50; it's a perception problem with $6 being so cheap. No real logic to it other than the pricing being so darn low.

 

On the other end of the scale, I bought an IBM x3500 M4 brand new and went with all the extra options, since if you're spending that much you might as well get the high-performance fan kit, upgraded PSUs, etc.


4 hours ago, 2FA said:

If you run the equipment 24/7, chances are you'll save more money with newer hardware after about 3 years. That's only counting idle power usage at super-cheap electricity rates; the gap widens if you live somewhere with high electricity costs. It's like taking a 7-year car loan: sure, the payment is lower, but you pay a lot more in the end.

My LGA1366 and LGA2011 systems actually draw nearly the same power; they shouldn't, but ESXi power management isn't exactly "good". Power here is NZD $0.2571/kWh, so not that cheap, which is also why I put solar in.

 

One of the factors for me is that a lot of the things I do require more than one server (a deployment restriction of the product itself), so I'd rather just buy the cheapest thing possible times three. Something costing me more in the long run isn't much of a problem, since I'm already stupid enough to be buying this stuff in the first place, and a lot of it, so I'll run out of rack space and rack power before TCO tips towards the newer thing being cheaper. I'll have to rip things out and upgrade anyway.


18 minutes ago, leadeater said:

Oh sure, it's not like I'm suggesting one is a better buy than the other, but when you're getting L5630's for $6 each with free shipping, it's actually a harder sell to spend $50 instead of $6 than it is to spend $100 instead of $50; it's a perception problem with $6 being so cheap. No real logic to it other than the pricing being so darn low.

It's for the people who want more than 4C/8T in a single package, though, with room to scale.


3 hours ago, Windows7ge said:

It's for the people who want more than 4C/8T in a single package, though, with room to scale.

You can get 6 cores on LGA1366, but it actually doesn't make a great deal of difference if you're not really using the CPU power computationally and just need a semi-decent number of cores and threads to run up VMs without hitting vCPU scheduling walls. Almost all my VMs do nothing, e.g. a Domain Controller. That's why I use L5630s and not E56XXs or X56XXs: low clocks and low power work fine for that.

 

It's the LGA2011 v2 parts that really start to pique my interest, since you're getting into that 10-core region, but those still need to halve in current value. The nice thing is you can buy into LGA2011 with a lower-end part, then if you need or want to, jump to an E5-2690 v2 without losing much on that first set of CPUs.

 

The majority of my stuff is idle/low utilization, but there's lots of it, and 8C/16T in a host is that nice point where you don't have to worry much about the number of VMs you put on it; for the most part things will keep functioning even at 1:8 pCPU:vCPU, which is typically not a good idea.
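
As a quick sanity check on what that ratio means in VM capacity (pure arithmetic; whether you count cores or threads as "pCPU" varies by shop, this counts threads):

```python
# vCPU headroom of a single 8C/16T host at various pCPU:vCPU ratios.
threads = 16
for ratio in (1, 2, 4, 8):
    print(f"1:{ratio} -> up to {threads * ratio} vCPUs on this host")
# 1:8 over 16 threads is 128 vCPUs; mostly-idle VMs can get away with
# it, but scheduling latency gets ugly if they all wake up at once.
```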


7 hours ago, leadeater said:

You can get 6 cores on LGA1366, but it actually doesn't make a great deal of difference if you're not really using the CPU power computationally and just need a semi-decent number of cores and threads to run up VMs without hitting vCPU scheduling walls. Almost all my VMs do nothing, e.g. a Domain Controller. That's why I use L5630s and not E56XXs or X56XXs: low clocks and low power work fine for that.

 

It's the LGA2011 v2 parts that really start to pique my interest, since you're getting into that 10-core region, but those still need to halve in current value. The nice thing is you can buy into LGA2011 with a lower-end part, then if you need or want to, jump to an E5-2690 v2 without losing much on that first set of CPUs.

 

The majority of my stuff is idle/low utilization, but there's lots of it, and 8C/16T in a host is that nice point where you don't have to worry much about the number of VMs you put on it; for the most part things will keep functioning even at 1:8 pCPU:vCPU, which is typically not a good idea.

Or you can do what I do and just throw all the extra CPU at WCG. :D I'm only pulling 1kW from the wall continuously, not counting the two rigs in the garage or my desktop.

 

I think I'm going to hold off on upgrading the dual LGA2011 server until the whole coronavirus thing blows over. I don't need to buy second-hand gear from a sick guy.


  • 1 month later...

Hi Guys,

 

I hope this is the right place to ask; I would really appreciate some advice, as I've never built a server before.

I am trying to spec out a virtual workstation server that will serve 50 concurrent users running CAD, FEA, and CFD workflows, as well as other engineering software.

This is for student use in a university environment.

 

Would anyone here be able to advise me on the following spec?

CPU: 2x AMD EPYC 7742 (64 cores / 128 threads each)

RAM: 512GB

GPU: 2x Quadro RTX 8000 (48GB each)

SSD: 2x 1.92TB

 

Do you think this hardware will be sufficient for 50 concurrent users? Is it perhaps overkill?

 

Am I understanding correctly that each user will basically have access to roughly 2 cores / 2 threads, 10GB of RAM, a minimum of 1GB of VRAM, and 60GB or so of SSD storage?

Is it necessary to have at least 1 core available per user?

 

Thanks in advance.


4 hours ago, Deanis said:

Am I understanding correctly that each user will basically have access to roughly 2 cores / 2 threads, 10GB of RAM, a minimum of 1GB of VRAM, and 60GB or so of SSD storage?

Is it necessary to have at least 1 core available per user?

What software are you going to use to provision the resources and provide access to the server? Generic virtualization doesn't allow sharing of a GPU; some products do, but it can be a licensed feature. I also believe it's a licensed Nvidia feature as well, though I'm not sure on that. The servers we buy for VDI come with everything needed included.

 

Sharing GPU resources is currently much more difficult than sharing CPU resources.

 

Do you have any existing relationships with IT service providers that you could work with to design a solution for your needs? That would be the safest thing to do. Or contact Nvidia directly.

 

4 hours ago, Deanis said:

Do you think this hardware will be sufficient for 50 concurrent users? Is it perhaps overkill?

Depending on the complexity of the projects, that's hard to answer. 50 user sessions on a single server is very high, especially for CAD and similarly demanding software. You might get a better user experience with two lower-spec servers running 25 users each; it'll cost more, but not a great deal, depending on the hardware configuration.


I'm not going to do a full write-up of what I'm trying because I should sleep, but meh.

I'm using Condor with Hyper-V to try to run CentOS for a VPN server.

I've yet to get it to install; then I get to do all the rest of the setup and firewall configs.

And I may get to clone any firewall work I do on my router, as we are likely moving from fiber to fixed wireless since the 300/300 plan is cheaper than the 100/100.



4 hours ago, Deanis said:

 

For CAD work, try 2–4 machines with dual 48–64 core EPYC parts; CAD likes 4–8 cores at high clocks.

You'd likely get better value from 4–5 MI25s or MI50s.

Three machines would mean 16–18 users per machine, and 96–128 cores means 5–6 cores / 10–12 threads per user, with some left over for control.

 

You're going to want to approach an SI or look for a system architect who can better guide you.



40 minutes ago, leadeater said:

What software are you going to use to provision the resources and provide access to the server?

Thanks for the response. We will be using Citrix for provisioning the VMs, and Nvidia offers licensing for GPU virtualization, so the software side should be sorted. I'm really just trying to get a feel for the hardware. I want to know as much as possible about what I'm getting into before reaching out to IT service providers, as I've been quoted solutions that just won't work before.

 

43 minutes ago, leadeater said:

Depending on the complexity of the projects, that's hard to answer. 50 user sessions on a single server is very high, especially for CAD and similarly demanding software. You might get a better user experience with two lower-spec servers running 25 users each; it'll cost more, but not a great deal, depending on the hardware configuration.

Yeah, I thought so too. I was actually looking at 3 systems at 50 users per system (I should have mentioned that in the original post). Am I right in saying that you can, for the most part, divide the available resources (number of cores, RAM, etc.) by the number of concurrent users to determine what each user would effectively get? Thanks for the advice thus far.

 

25 minutes ago, GDRRiley said:

You'd likely get better value from 4–5 MI25s or MI50s.

Thanks for the advice. I will look into these. 


4 minutes ago, Deanis said:

Yeah, I thought so too. I was actually looking at 3 systems at 50 users per system (I should have mentioned that in the original post). Am I right in saying that you can, for the most part, divide the available resources (number of cores, RAM, etc.) by the number of concurrent users to determine what each user would effectively get? Thanks for the advice thus far.

Yes you can; Nvidia's pretty good at that now. Nvidia uses profiles which divide up the GPU in fixed amounts. My information on all of this is rather old, though, so I don't want to say too much in case it's wrong or out of date. Citrix will do everything you need though, that I do know.

 

Have a read through this: https://docs.nvidia.com/grid/latest/grid-vgpu-user-guide/index.html
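
To make the "fixed amounts" idea concrete, here's a rough density sketch (the profile sizes below are illustrative; the actual supported profiles and their names are in the vGPU guide linked above):

```python
# How many vGPU sessions 2x 48GB Quadro RTX 8000s yield at different
# framebuffer profile sizes. Sizes here are examples, not a vendor list.
framebuffer_gb, gpus = 48, 2
for profile_gb in (2, 4, 8):
    per_gpu = framebuffer_gb // profile_gb
    print(f"{profile_gb}GB profile: {per_gpu} per GPU, {per_gpu * gpus} total")
# 2GB profiles give 48 sessions across both cards, just short of 50;
# 1GB profiles would cover it, while 4GB each needs more or bigger GPUs.
```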


6 minutes ago, leadeater said:

Yes you can; Nvidia's pretty good at that now. Nvidia uses profiles which divide up the GPU in fixed amounts. My information on all of this is rather old, though, so I don't want to say too much in case it's wrong or out of date. Citrix will do everything you need though, that I do know.

 

Have a read through this: https://docs.nvidia.com/grid/latest/grid-vgpu-user-guide/index.html

Thanks a lot, that confirms my initial thoughts. I will look through that link.

 

As far as CPU goes, am I right in saying that with two 64-core/128-thread CPUs in a system, each of the 50 users can have the equivalent of 5 vCPUs?


2 minutes ago, Deanis said:

As far as CPU goes, am I right in saying that with two 64-core/128-thread CPUs in a system, each of the 50 users can have the equivalent of 5 vCPUs?

I think that's roughly the spec we use for our video-editing VDI VMs; for those, RAM tends to be the limiting factor on the host. 50 per host still sounds like a lot to me, but I'm still living in the Intel world of 16–24 cores per CPU. You're targeting a 2:1 vCPU-to-pCPU ratio, which is nice and low.
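
Spelling out the arithmetic behind that ratio (hardware numbers from the thread; the 5 vCPUs per user is the figure asked about):

```python
# Per-user share and oversubscription for 2x EPYC 7742 serving 50 users.
sockets, cores_per_socket, smt = 2, 64, 2
users, vcpus_per_user = 50, 5

pcores = sockets * cores_per_socket      # 128 physical cores
pthreads = pcores * smt                  # 256 hardware threads
provisioned = users * vcpus_per_user     # 250 vCPUs handed out

print(f"{pthreads / users:.1f} threads per user if divided evenly")  # ~5.1
print(f"vCPU:pCPU = {provisioned / pcores:.2f}:1 against cores")     # ~1.95:1
```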

 

For me, I think the biggest factor will be how the software gets used: project size, whether they'll run simulations (FEA and CFD), and user expectations on how quickly those need to run.

 

Also watch out for storage I/O; is that 2x 1.92TB going to be NVMe or SATA class?


@Deanis If you want to discuss this here that's fine; it's on topic and we're happy to help how we can (though it's a little over my head :P). But you could have made your own post (topic) for a question like this, and more people would have seen it.


35 minutes ago, leadeater said:

I think that's roughly the spec we use for our video-editing VDI VMs; for those, RAM tends to be the limiting factor on the host. 50 per host still sounds like a lot to me, but I'm still living in the Intel world of 16–24 cores per CPU. You're targeting a 2:1 vCPU-to-pCPU ratio, which is nice and low.

Good to know.

 

35 minutes ago, leadeater said:

For me, I think the biggest factor will be how the software gets used: project size, whether they'll run simulations (FEA and CFD), and user expectations on how quickly those need to run.

At 50 users we don't need mind-blowing performance; it just needs to work. The nice thing with a server setup is the flexibility: we can also have fewer users with more performance per user if the simulations are more intensive.

 

36 minutes ago, leadeater said:

Also watch out for storage I/O; is that 2x 1.92TB going to be NVMe or SATA class?

Definitely NVMe; I'm not sure SATA would handle that load.


1 minute ago, Windows7ge said:

@Deanis If you want to discuss this here that's fine; it's on topic and we're happy to help how we can (though it's a little over my head :P). But you could have made your own post (topic) for a question like this, and more people would have seen it.

Thanks, I was debating posting here or making my own topic. I think I've got what I needed though. :)


1 minute ago, Deanis said:

Thanks, I was debating posting here or making my own topic. I think I've got what I needed though. :)

You got lucky that @leadeater was creeping in the Server & NAS sub-forum again and felt like looking at who commented on the General Virtualization Discussion Thread.

 

But it's good to know you got what you came for. If you need help again, feel free to ask. (Hopefully I'll have something I can contribute then.)

Link to comment
Share on other sites

Link to post
Share on other sites
