4 gamers 1 CPU and 1 renderer! Analyze my plan.

Hello forum!

 

This isn't really going to be a 4-gamers-1-CPU build but a 4-users-1-CPU build with a master. I was inspired by Linus's 2/7 Gamers 1 PC video series, and I am thinking of doing something like that for my company to save money. A lot of you are more experienced than me, so I would love your input on this topic and whether it will actually work.

 

unRAID, the brain of this build, will be used just like Linus used it!

The PC components are going to be an Intel Xeon E5-2630 v4 (10 cores / 20 threads) on an X99 LGA 2011-3 motherboard with 5 PCIe slots.

There are going to be 5 single-slot GTX 1050 GPUs (5 because the Xeon doesn't have an iGPU, so one is for boot).

64 GB of RAM, 5 x 250 GB SSDs, and 4 TB of HDDs.

An 800 W+ Corsair PSU with a decent case and cooling.

 

Setting up 4 virtual machines like Linus did in 2 Gamers 1 CPU shouldn't be a problem, and it will work just fine. What I want to know, though, is whether something like this is stable on the software side. When I restart the virtual machines, is all my data safe? I will probably RAID up the storage for redundancy.

 

What I want to know next, and my main question, is this: is there a way to create a master virtual machine? That is, a VM that wouldn't run at the same time as the others, but one I could switch to by powering off the 4 other virtual machines and booting into it with control of all 64 GB of RAM and all 10 cores for (CPU-based) rendering. It would use its own fifth 250 GB SSD for the OS and one of the GTX 1050s normally used by a VM that is powered off at the time. Can unRAID do that?
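
In my head the switch-over would look something like the sketch below. unRAID's VM manager is KVM/libvirt under the hood, so in principle it could be scripted with the libvirt Python bindings; the VM names here are made up, and I'm assuming the bindings are available on the host.

```python
import time
import libvirt  # libvirt Python bindings (assumed available on the unRAID host)

WORKERS = ["worker1", "worker2", "worker3", "worker4"]  # hypothetical VM names
MASTER = "render-master"                                # hypothetical master VM

conn = libvirt.open("qemu:///system")

# Ask each running worker VM to shut down gracefully (ACPI shutdown).
for name in WORKERS:
    dom = conn.lookupByName(name)
    if dom.isActive():
        dom.shutdown()

# Wait until every worker is actually off, so its GPU, cores, and RAM
# are free to be claimed by the master VM.
while any(conn.lookupByName(n).isActive() for n in WORKERS):
    time.sleep(5)

# Boot the master VM, defined with all 10 cores, all 64 GB of RAM,
# its own SSD, and one of the now-idle GTX 1050s passed through.
conn.lookupByName(MASTER).create()
conn.close()
```

This could run from a cron job in the evening, with a reverse script bringing the worker VMs back in the morning.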

 

Thank you for your input. Please feel free to share other ideas as well.

 

 

MAIN BUILD!


That's effectively 2 cores / 4 threads per VM (10 cores split four ways, less whatever the host itself needs). Not very optimal for gaming.

 

Not sure unRAID can create a master VM like that, but the console allows a fair bit of interaction with the VMs whether they're running or not.
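
For example, since unRAID's VM manager is libvirt/KVM underneath, a minimal sketch like this (libvirt Python bindings assumed) lists every defined VM and whether it's running:

```python
import libvirt

conn = libvirt.open("qemu:///system")
# listAllDomains() returns both running and shut-off (defined) VMs.
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "shut off"
    print(f"{dom.name():<20} {state}")
conn.close()
```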

Quote and/or tag people using @, otherwise they don't get notified of your response!

 

The HUMBLE Computer:

AMD Ryzen 7 3700X • Noctua NH-U12A • ASUS STRIX X570-F • Corsair Vengeance LPX 32GB (2x16GB) DDR4 3200MHz CL16 • GIGABYTE Nvidia GTX1080 G1 • FRACTAL DESIGN Define C w/ blue Meshify C front • Corsair RM750x (2018) • OS: Kingston KC2000 1TB GAMES: Intel 660p 1TB DATA: Seagate Desktop 2TB • Acer Predator X34P 34" 3440x1440p 120 Hz IPS curved Ultrawide • Corsair STRAFE RGB Cherry MX Brown • Logitech G502 HERO / Logitech MX Master 3

 

Notebook:  HP Spectre x360 13" late 2018

Core i7 8550U • 16GB DDR3 RAM • 512GB NVMe SSD • 13" 1920x1080p 120 Hz IPS touchscreen • dual Thunderbolt 3


2 minutes ago, vojta.pokorny said:

That's effectively 2 cores / 4 threads per VM (10 cores split four ways, less whatever the host itself needs). Not very optimal for gaming.

 

Not sure unRAID can create a master VM like that, but the console allows a fair bit of interaction with the VMs whether they're running or not.

I used "gaming" in the name as a reference to the Linus videos; the users are actually going to use it for office and Photoshop work. One user will have 8 threads for SolidWorks.

MAIN BUILD!


7 hours ago, MilitantCro said:

Anyone got any input? Has anyone tried something like this?

I have some input: don't try it. It's way more expensive in the long run, and just a pain in the ass to work with. You can get better performance from 5 used Nehalem-based workstations than from virtualizing it.

 

Here is a Dell T3500, which has a 6-core hyperthreaded CPU and should be able to fit that 1050 you are talking about: http://www.ebay.com/itm/Dell-Precision-T3500-Xeon-X5675-3-06GHz-Hex-Core-12GB-2TB-DVD-RW-Win10-Pro-CD-/381868594003

My native language is C++


11 hours ago, tt2468 said:

I have some input: don't try it. It's way more expensive in the long run, and just a pain in the ass to work with. You can get better performance from 5 used Nehalem-based workstations than from virtualizing it.

 

Here is a Dell T3500, which has a 6-core hyperthreaded CPU and should be able to fit that 1050 you are talking about: http://www.ebay.com/itm/Dell-Precision-T3500-Xeon-X5675-3-06GHz-Hex-Core-12GB-2TB-DVD-RW-Win10-Pro-CD-/381868594003

The main goal is to save money while upgrading our rendering machine. This way 4 employees can work during the day on that PC (these employees need PCs either way), and during the night the machine renders in KeyShot with all 10 cores.

 

The one you linked isn't really an upgrade and wouldn't even load our projects with only 12 GB of RAM.

MAIN BUILD!


5 minutes ago, MilitantCro said:

The main goal is to save money while upgrading our rendering machine. This way 4 employees can work during the day on that PC (these employees need PCs either way), and during the night the machine renders in KeyShot with all 10 cores.

 

The one you linked isn't really an upgrade and wouldn't even load our projects with only 12 GB of RAM.

RAM can easily be upgraded, and a used dual-CPU server can be found on eBay for cheap, really cheap.

 

Virtualizing the loads you are talking about is a bad idea, plus it puts a lot of risk into a single system.


3 minutes ago, leadeater said:

RAM can easily be upgraded, and a used dual-CPU server can be found on eBay for cheap, really cheap.

 

Virtualizing the loads you are talking about is a bad idea, plus it puts a lot of risk into a single system.

We aren't US-based and we can't buy used parts; all the components are funded by the government and the EU.

MAIN BUILD!


17 minutes ago, MilitantCro said:

We aren't US-based and we can't buy used parts; all the components are funded by the government and the EU.

Not that I'd advise doing this, but if you need that many GPUs in a single system it'll need to be dual-CPU because of the PCIe lanes required.

 

You would also need 10Gb networking if that many users are sharing a single NIC to reach a NAS/storage server. Storage I/O will be fine if you dedicate an SSD to each VM, which I think is your plan, and then back up to the HDD RAID.
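
As a sketch of what that backup could look like (paths are hypothetical; I'm assuming one mounted SSD per VM and a share on the HDD array):

```python
import subprocess

# Hypothetical mount points: one SSD per VM, plus a share on the HDD array.
VM_DISKS = ["/mnt/disks/vm1-ssd", "/mnt/disks/vm2-ssd",
            "/mnt/disks/vm3-ssd", "/mnt/disks/vm4-ssd"]
BACKUP_SHARE = "/mnt/user/backups"

for disk in VM_DISKS:
    # -a preserves ownership/permissions/timestamps, --delete mirrors
    # deletions, so each run leaves an exact copy on the array.
    subprocess.run(["rsync", "-a", "--delete", disk, BACKUP_SHARE], check=True)
```

Run that from cron after hours and you get a point-in-time copy on the parity-protected array, though RAID on its own is still not a backup.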

 

It's never the technical specifications that make this type of thing not work, it's all the small things: USB devices, day-to-day usage, where to put the system and how to cable it, diagnosing problems is harder, and system maintenance affects everyone.

 

My general advice when it comes to servers and networking is: don't do what Linus does. He's no expert, which he says himself; many of the issues he faces don't make it into the video, and he never runs the systems long enough to truly find the problems with them. A good example of when he has is the optical Thunderbolt hubs, which he had to stop using due to ongoing issues.


1 hour ago, leadeater said:

Not that I'd advise doing this, but if you need that many GPUs in a single system it'll need to be dual-CPU because of the PCIe lanes required.

 

You would also need 10Gb networking if that many users are sharing a single NIC to reach a NAS/storage server. Storage I/O will be fine if you dedicate an SSD to each VM, which I think is your plan, and then back up to the HDD RAID.

 

It's never the technical specifications that make this type of thing not work, it's all the small things: USB devices, day-to-day usage, where to put the system and how to cable it, diagnosing problems is harder, and system maintenance affects everyone.

 

My general advice when it comes to servers and networking is: don't do what Linus does. He's no expert, which he says himself; many of the issues he faces don't make it into the video, and he never runs the systems long enough to truly find the problems with them. A good example of when he has is the optical Thunderbolt hubs, which he had to stop using due to ongoing issues.

There are single-CPU boards with 5 PCIe lanes, and I need 5 PCIe lanes. It doesn't matter that they are x4, since the GPUs are only running the displays. Storage will be built into it, with 2 cores dedicated to running that server. Location also isn't a problem.

 

I don't know, I will try it out. If it doesn't work, I still have a 10-core rendering machine with a Titan in it, which I need anyway.

MAIN BUILD!


14 minutes ago, MilitantCro said:

There are single-CPU boards with 5 PCIe lanes, and I need 5 PCIe lanes. It doesn't matter that they are x4, since the GPUs are only running the displays. Storage will be built into it, with 2 cores dedicated to running that server. Location also isn't a problem.

 

I don't know, I will try it out. If it doesn't work, I still have a 10-core rendering machine with a Titan in it, which I need anyway.

You need to be very careful when picking a motherboard: it's not the slots that are the problem but the number of PCIe lanes and how they are assigned.

 

Read the manual carefully and make sure slots don't get disabled in certain combinations; this really happens. PCIe lane assignment isn't as dynamic as you might expect. It's not a matter of taking the 40 lanes from the CPU, subtracting the ones used for motherboard features, and dividing the rest evenly across the slots; that's not how motherboard makers do it, and it isn't even possible to do it that easily.
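
Once the system is together you can at least verify what each card actually negotiated. A rough sketch that shells out to lspci (run it as root; LnkCap is what the link can do, LnkSta is what it is actually doing):

```python
import subprocess

# Verbose PCI dump; device headers are unindented, capabilities indented.
out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

device = None
for line in out.splitlines():
    if line and not line[0].isspace():  # e.g. "01:00.0 VGA compatible controller: NVIDIA ..."
        device = line
    elif device and "NVIDIA" in device and ("LnkCap:" in line or "LnkSta:" in line):
        print(device.split()[0], line.strip())
```

If a slot you expected to run x4 reports `LnkSta: ... Width x1`, the manual's fine print about shared lanes is probably the reason.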

 

There are workstation motherboards with PLX chips that get around the issue of assigning PCIe lanes to slots, so this might be your best bet. It's expensive, though not excessively so: https://www.asus.com/nz/Motherboards/X99E_WS/specifications/


1 hour ago, MilitantCro said:

There are single-CPU boards with 5 PCIe lanes, and I need 5 PCIe lanes. It doesn't matter that they are x4, since the GPUs are only running the displays. Storage will be built into it, with 2 cores dedicated to running that server. Location also isn't a problem.

 

I don't know, I will try it out. If it doesn't work, I still have a 10-core rendering machine with a Titan in it, which I need anyway.

If the GPUs are for display only, why are you wasting money on GTX 1050s? Go to your local computer store (or preferred online retailer) and pick up the cheapest GPU they sell, e.g. a Radeon HD 5450 or a GT 710.

 

EDIT: I also hope you've considered a proper backup solution, especially since you say this is for a business. Remember, RAID does not equal backup.

For Sale: Meraki Bundle

 

iPhone Xr 128 GB Product Red - HP Spectre x360 13" (i5 - 8 GB RAM - 256 GB SSD) - HP ZBook 15v G5 15" (i7-8850H - 16 GB RAM - 512 GB SSD - NVIDIA Quadro P600)

 


5 minutes ago, dalekphalm said:

If the GPUs are for display only, why are you wasting money on GTX 1050s? Go to your local computer store (or preferred online retailer) and pick up the cheapest GPU they sell, e.g. a Radeon HD 5450 or a GT 710.

 

EDIT: I also hope you've considered a proper backup solution, especially since you say this is for a business. Remember, RAID does not equal backup.

They need 4K 60 Hz support, and I've already got the 1050s around.

MAIN BUILD!


If you already have the stuff there, why don't you just try it out and see which problems you run into?


12 minutes ago, MilitantCro said:

They need 4K 60 Hz support, and I've already got the 1050s around.

What hardware do you already have, and what will you need to purchase?

For Sale: Meraki Bundle

 

iPhone Xr 128 GB Product Red - HP Spectre x360 13" (i5 - 8 GB RAM - 256 GB SSD) - HP ZBook 15v G5 15" (i7-8850H - 16 GB RAM - 512 GB SSD - NVIDIA Quadro P600)

 


7 hours ago, MilitantCro said:

The main goal is to save money while upgrading our rendering machine. This way 4 employees can work during the day on that PC (these employees need PCs either way), and during the night the machine renders in KeyShot with all 10 cores.

 

The one you linked isn't really an upgrade and wouldn't even load our projects with only 12 GB of RAM.

And that's why you would usually have a boxy-looking thing in the corner crunching your exports so the editing PCs don't have to.

My native language is C++


On 2/24/2017 at 6:07 AM, MilitantCro said:

I used "gaming" in the name as a reference to the Linus videos; the users are actually going to use it for office and Photoshop work. One user will have 8 threads for SolidWorks.

If you're not playing games on the system, you might as well use MultiPoint on Windows Server; it's much better for this. Just install Server 2016 or 2012 R2 on it and then you can have as many users as you have video outputs and keyboards.

 

No reason to use VMs here.


2 hours ago, tt2468 said:

And that's why you would usually have a boxy-looking thing in the corner crunching your exports so the editing PCs don't have to.

I'm talking about KeyShot, not video editing.

MAIN BUILD!


23 hours ago, MilitantCro said:

There are single-CPU boards with 5 PCIe lanes, and I need 5 PCIe lanes. It doesn't matter that they are x4, since the GPUs are only running the displays. Storage will be built into it, with 2 cores dedicated to running that server. Location also isn't a problem.

 

I don't know, I will try it out. If it doesn't work, I still have a 10-core rendering machine with a Titan in it, which I need anyway.

Has anyone explained yet that PCIe lanes and slots are not the same thing?


Yeah, no, this idea is just bad. When Linus builds these machines they are "because it's cool" machines, but his builds are also stupid. You're basically building multiple computers, for more cost, in one box, and making it a single point of failure.

 

This magic box will be pretty low-powered for the end users, and if anything in it screws up, every user is incapable of working because it's one giant single point of failure.

 

From a business perspective it's just a bad idea. You'd get far more reliability using cheap prebuilt PCs, maybe even retired small-form-factor prebuilts recycled from another school or office (it seems you only want a dual core after all), and then building a dedicated render box with the largest chunk of your budget.

 

The cost of this thing just seems kinda redonk for what you're attempting to achieve. Five GTX 1050s? Looking at Newegg.com as a baseline, the absolute cheapest GTX 1050 is $99 USD, but prices go as high as $160 USD. And these are to make dual-core VMs running at only 2.2 GHz per core for "office work" and "Photoshop".

 

https://www.amazon.com/Compaq-8200-Factor-SP678UP-3-1GHz/dp/B00TRO5Q36

 

Meanwhile you can get this refurbished Compaq 8200 with a quad-core i5-2400 for $120 USD. And that's an entire freaking computer. We aren't even factoring in things like the SSDs you would have bought for each VM. And that's just what I found in a quick search at 1:40 AM; I bet with some time you could find a better deal, or negotiate a better price if you express interest in buying multiple units.
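
To put rough numbers on it: five 1050s at the $99 floor is 5 x $99 = $495 in display adapters alone, before the X99 board, the 10-core Xeon, 64 GB of RAM, and five SSDs. Four of these refurbs is 4 x $120 = $480, and that's four complete machines, leaving the whole GPU budget for a dedicated render box.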

 

I get that you think this will save money and that it will be awesome, but from a business perspective it's terrible. The setup is complicated and will likely need more extensive IT resources (which you seem to lack), and downtime is downtime for everyone; you stand to lose a good deal of money in wasted paid man-hours.

Don't get me wrong, VMs are GREAT in many enterprise situations with proper setup and support. But VMs as end-user workstations maintained by someone who "saw something in a YouTube video" is not great.


9 hours ago, AshleyAshes said:

Yeah, no, this idea is just bad. When Linus builds these machines they are "because it's cool" machines, but his builds are also stupid. You're basically building multiple computers, for more cost, in one box, and making it a single point of failure.

 

This magic box will be pretty low-powered for the end users, and if anything in it screws up, every user is incapable of working because it's one giant single point of failure.

 

From a business perspective it's just a bad idea. You'd get far more reliability using cheap prebuilt PCs, maybe even retired small-form-factor prebuilts recycled from another school or office (it seems you only want a dual core after all), and then building a dedicated render box with the largest chunk of your budget.

 

The cost of this thing just seems kinda redonk for what you're attempting to achieve. Five GTX 1050s? Looking at Newegg.com as a baseline, the absolute cheapest GTX 1050 is $99 USD, but prices go as high as $160 USD. And these are to make dual-core VMs running at only 2.2 GHz per core for "office work" and "Photoshop".

 

https://www.amazon.com/Compaq-8200-Factor-SP678UP-3-1GHz/dp/B00TRO5Q36

 

Meanwhile you can get this refurbished Compaq 8200 with a quad-core i5-2400 for $120 USD. And that's an entire freaking computer. We aren't even factoring in things like the SSDs you would have bought for each VM. And that's just what I found in a quick search at 1:40 AM; I bet with some time you could find a better deal, or negotiate a better price if you express interest in buying multiple units.

 

I get that you think this will save money and that it will be awesome, but from a business perspective it's terrible. The setup is complicated and will likely need more extensive IT resources (which you seem to lack), and downtime is downtime for everyone; you stand to lose a good deal of money in wasted paid man-hours.

Don't get me wrong, VMs are GREAT in many enterprise situations with proper setup and support. But VMs as end-user workstations maintained by someone who "saw something in a YouTube video" is not great.

Right, no professional would ever virtualize. ESXi and unRAID were just developed for the lulz. Google and this community can help answer plenty of questions. TL;DR: just skimmed.


30 minutes ago, MarcWolfe said:

Right, no professional would ever virtualize. ESXi and unRAID were just developed for the lulz. Google and this community can help answer plenty of questions. TL;DR: just skimmed.

Um, I certainly hope you aren't saying ESXi isn't useful? It has been the most-installed bare-metal hypervisor/OS for the last 10 years, by a huge margin over anything else.

 

VMware is a huge company that almost every SMB and enterprise relies on to deliver IT services to users. If we weren't virtualizing with VMware we wouldn't have the physical space for all the servers we require and have running; I'll give you a hint: it's well over 1,000.

 

unRAID certainly isn't the best platform out there, I'll agree with that, but ESXi... do a little market research before trying to write that off as a toy ;)


8 hours ago, leadeater said:

Um, I certainly hope you aren't saying ESXi isn't useful? It has been the most-installed bare-metal hypervisor/OS for the last 10 years, by a huge margin over anything else.

 

VMware is a huge company that almost every SMB and enterprise relies on to deliver IT services to users. If we weren't virtualizing with VMware we wouldn't have the physical space for all the servers we require and have running; I'll give you a hint: it's well over 1,000.

 

unRAID certainly isn't the best platform out there, I'll agree with that, but ESXi... do a little market research before trying to write that off as a toy ;)

It was sarcasm. I plan to use ESXi myself for... something, just because reasons.

 


32 minutes ago, MarcWolfe said:

It was sarcasm. I plan to use ESXi myself for... something, just because reasons.

 

Was hoping that was the case :)

