
Crazy idea for a VM workstation, help and advice would be appreciated

I go to school for Interactive Media (3D/2D animation, graphic design, and video editing), and my class uses extremely under-powered, outdated computers that can barely run the programs we use. Recently my prof bought 2 HP workstations for around $1,900 a pop, and he doesn't have the money to replace all the workstations... so my idea is to build a custom server running about 25 VMs at once, with each VM connecting to a student's desk through a Thunderbolt 2 or 3 passthrough and an Elgato dock or something similar (like Linus does in his house). Each server so far, in theory (I'm willing to change the specs), would be 2x AMD EPYC 760 CPUs, a 4 TB Western Digital Black HDD, 4x NVIDIA Quadro P6000s, and a 1 TB WD Blue SSD for the boot drive; I'm still undecided on the motherboard and RAM, plus some risers and Thunderbolt 2 or 3 expansion cards. I'm trying to make it cheaper than just buying 25 new workstations, so do you guys think this setup will work?
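
For context on what the "passthrough" part would involve on the host side, here's a minimal sketch of launching one student VM under Linux with KVM/QEMU and VFIO passthrough of a single Quadro. The PCI address, disk path, and resource sizes below are placeholder assumptions, not a tested config, and getting Thunderbolt out to a dock at each desk is a separate problem on top of this.

```python
# Sketch: launch one student VM with a physical GPU handed over via VFIO.
# Assumes the GPU at PCI address 0000:3b:00.0 is already bound to the
# vfio-pci driver and that the qcow2 disk image exists. All values are
# hypothetical placeholders.
import subprocess

GPU_PCI_ADDR = "0000:3b:00.0"            # hypothetical address of one Quadro P6000
DISK_IMAGE = "/vmstore/student01.qcow2"  # hypothetical per-student disk image

qemu_cmd = [
    "qemu-system-x86_64",
    "-enable-kvm",                 # use hardware virtualization
    "-cpu", "host",                # expose the host CPU model to the guest
    "-smp", "10",                  # ~5 cores / 10 threads per student
    "-m", "16G",                   # guest RAM
    "-drive", f"file={DISK_IMAGE},format=qcow2,if=virtio",
    "-device", f"vfio-pci,host={GPU_PCI_ADDR}",  # pass the whole GPU to this guest
]

subprocess.run(qemu_cmd, check=True)
```

Note that whole-GPU passthrough like this means one GPU per VM, so a 4-GPU box only serves 4 students at a time; sharing one GPU between several VMs needs something like NVIDIA's vGPU licensing instead.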


I personally would not build one machine with more than 15 VMs. You might be better off going with a few (about 3) lower core count machines. I haven't priced this out yet, but I would recommend pursuing this idea because fewer VMs per host is usually more stable. For the mobo, case, and PSU, try looking into some barebones systems from someone like Supermicro. Please let me know if this helps.


No, don't.

You would need a system like LTT used that fits 12 GPUs, and then you could use 2 boxes (I don't believe Supermicro has updated it to EPYC yet). Everyone could get 5 cores / 10 threads if you went with 32-core CPUs.
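
Rough math behind that core count, assuming dual 32-core EPYCs per box and about 12 students per box (the 12-GPU chassis split across 2 boxes):

```python
# Back-of-the-envelope core split, assuming dual 32-core EPYC CPUs per box
# and ~12 students per box (one GPU each in a 12-GPU chassis).
cores_per_cpu = 32
cpus_per_box = 2
students_per_box = 12

cores_per_student = (cores_per_cpu * cpus_per_box) // students_per_box
threads_per_student = cores_per_student * 2  # SMT gives 2 threads per core

print(f"{cores_per_student} cores / {threads_per_student} threads per student")
# -> 5 cores / 10 threads per student
```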



18 hours ago, Lenovus_ said:

...my idea is to build a custom server running about 25 VMs at once, each connecting through a Thunderbolt 2 or 3 passthrough and an Elgato dock... I'm trying to make it cheaper than just buying 25 new workstations, so do you guys think this setup will work?

Okay, you need to think this through and find out whether what you want to do is even possible.

 

First: you spec the server with 4x Quadro P6000s. Does that mean you intend each server to host 4 VMs, with each VM dedicated to a user and PCIe passthrough giving them direct access to a P6000?

 

Or are you intending to distribute the power of the P6000s over a number of users (e.g. all 25 users connect to one server, and each P6000 is shared between 6 or 7 VMs)?

 

If the former (a P6000 per user), that's goddamn stupid (no offense), because a P6000 costs something like $5,000 on its own, which is already far more than the $1,900 HP workstation.
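
Putting rough numbers on that, using the prices quoted in this thread rather than current street prices:

```python
# Rough cost comparison using the figures quoted in this thread
# ($5,000 per P6000, $1,900 per HP workstation). Not current pricing.
students = 25
p6000_price = 5_000
workstation_price = 1_900

gpu_only_cost = students * p6000_price           # GPUs alone, one per student
workstation_cost = students * workstation_price  # just buying 25 workstations

print(f"25x P6000:       ${gpu_only_cost:,}")     # $125,000
print(f"25x workstation: ${workstation_cost:,}")  # $47,500
```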

 

If the latter, can your media software even work in a scenario where GPU power is shared among multiple VMs?

 

Personally, I think these types of systems are "neat", but ultimately not good for "production use". It's one thing to host servers on VMs, especially since any business that can afford to do so is using clusters with failover protection anyway.

 

But in a classroom, if all 25 students rely on one server to host their VM workstations, that's one giant single point of failure.

 

Right now, if a single workstation goes down, one student loses a bit of work (and perhaps two have to share one station). But if the VM server goes down, you might as well cancel class, because no one is getting any work done.

