5 VMs Build Machine in a Rack

Little side question: why does a GT 1030 need an x8 link?

SLI would require an x8 link.

With 5 GPUs all at x8, I don't know of any Threadripper board that can handle it, but on Intel I know of several boards (like the ASUS WS X299 SAGE or ASUS X99-E WS 10G).

 


1 minute ago, tankhunter8192 said:

Little side question: why does a GT 1030 need an x8 link?

SLI would require an x8 link.

With 5 GPUs all at x8, I don't know of any Threadripper board that can handle it, but on Intel I know of several boards (like the ASUS WS X299 SAGE or ASUS X99-E WS 10G).

 

They don't need an x8 link. You can use an x1 link here (at a slightly slower speed).

 

 

SLI won't be used here, and the GT 1030 doesn't support it anyway.

 

Threadripper isn't great here due to poor IOMMU support.

 

Something like a GRID K1 would work well here, as it's a single card with 4 GPUs that can be split further in software if wanted.
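
If the host ends up being Linux/KVM and you want to sanity-check IOMMU grouping before buying (clean platforms put each GPU in its own group), here's a minimal sketch, assuming Python 3 on a kernel with the IOMMU enabled:

```python
# List each IOMMU group and the PCI devices inside it.
# Devices that share a group must be passed through together,
# which is where platforms with poor IOMMU support fall down.
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    devices = [d.name for d in (group / "devices").iterdir()]
    print(f"IOMMU group {group.name}: {', '.join(devices)}")
```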


43 minutes ago, tankhunter8192 said:

Little side question: why does a GT 1030 need an x8 link?

SLI would require an x8 link.

With 5 GPUs all at x8, I don't know of any Threadripper board that can handle it, but on Intel I know of several boards (like the ASUS WS X299 SAGE or ASUS X99-E WS 10G).

You can only use what the motherboard is actually configured to offer; most split lanes out to slots in x16, x8 and x4, and a slot's active link width changes depending on what is plugged in. Boards with PLX switches help a lot with this, though they cost more (still well within budget here); boards without PLX often have slots disabled when you populate others, even before the CPU's total lanes are fully used.

 

31 minutes ago, Electronics Wizardy said:

Something like a GRID K1 would work well here, as it's a single card with 4 GPUs that can be split further in software if wanted.

Would a GRID K1 fit in the budget with everything else required? I'm kind of not pushing for a used server since it's actually work related, so a new build would be rather tight with half the budget used by that GPU. I'm not used to thinking of things in USD though.


2 minutes ago, leadeater said:

Would a GRID K1 fit in the budget with everything else required? I'm kind of not pushing for a used server since it's actually work related, so a new build would be rather tight with half the budget used by that GPU. I'm not used to thinking of things in USD though.

K1s are old chips, so you can't get them new any more.

 

A lot of GT 1030s isn't a supported solution, so with GPU virtualization (even RemoteFX wants a pro-grade GPU) you're paying a good amount if you want a supported config here.

 

 


4 minutes ago, Electronics Wizardy said:

A lot of GT 1030s isn't a supported solution, so with GPU virtualization (even RemoteFX wants a pro-grade GPU) you're paying a good amount if you want a supported config here.

True, though unlike ESXi, RemoteFX will use any GPU with the correct DX version, so it's easier to go unsupported than it is on the ESXi route. With the good and the bad that brings.

 

Hell, I have to wonder if simply buying 5 AMD APU systems is an option here.
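
If you want to check what feature levels a GPU actually reports before going the RemoteFX route, here's a rough sketch that parses the text dump from Windows' dxdiag tool (the report path is arbitrary, and the wait loop is just a defensive assumption):

```python
# Dump dxdiag to a text report and pull out the Direct3D
# feature-level lines (RemoteFX wants a DX11-capable GPU).
import pathlib
import subprocess
import tempfile
import time

report = pathlib.Path(tempfile.gettempdir()) / "dxdiag_report.txt"
subprocess.run(["dxdiag", "/t", str(report)], check=True)

# Defensive: wait in case dxdiag finishes writing after it returns.
while not report.exists():
    time.sleep(1)

for line in report.read_text(errors="ignore").splitlines():
    if "Feature Levels" in line or "DDI Version" in line:
        print(line.strip())
```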


1 minute ago, leadeater said:

True, though unlike ESXi, RemoteFX will use any GPU with the correct DX version, so it's easier to go unsupported than it is on the ESXi route. With the good and the bad that brings.

Hell, I have to wonder if simply buying 5 AMD APU systems is an option here.

Yeah, I was thinking 5 NUCs might be a better solution here.

 

If you want a working box with a config from Dell or similar, you're thinking $10K+.


Wouldn't 5 APUs mean 5 computers? At that point, it gets more expensive because of the duplication of PSU, RAM, HDD, motherboard, case, etc., no?

 

I guess it could get cheaper if the APUs are cheap, but I need something fast on the CPU side.

Right now, the 1950X is $999, and it's pretty beastly. The build I'm looking at is 2,400 CAD, if sharing a single GPU works.

 

Can I get it cheaper with good performance by building 5 units?


2 hours ago, CradleGames said:

Wouldn't 5 APUs mean 5 computers? At that point, it gets more expensive because of the duplication of PSU, RAM, HDD, motherboard, case, etc., no?

I guess it could get cheaper if the APUs are cheap, but I need something fast on the CPU side.

Right now, the 1950X is $999, and it's pretty beastly. The build I'm looking at is 2,400 CAD, if sharing a single GPU works.

Can I get it cheaper with good performance by building 5 units?

Can we see the build parts?

 

The problem with VMs is that if you want it done right, you're paying a lot of money.

 

How will this connect to the other servers and systems at your company? Do you already have VM hosts? It might be time to add that as well.

 

 


2 hours ago, Electronics Wizardy said:

Can we see the build parts?

The problem with VMs is that if you want it done right, you're paying a lot of money.

How will this connect to the other servers and systems at your company? Do you already have VM hosts? It might be time to add that as well.

 

 

1950X

Gigabyte X399 Designare

Zotac GT 1030 (only one, if it can properly be shared)

4x 8GB DDR4-2666

Corsair HX750

Cooler Master MA621P

2x 970 EVO 250GB in RAID 0

 

And maybe an extra Ethernet card, in case the single port gets flooded by 5 VMs trying to use it all at once.

 

In a 4U chassis. 

 

I'm not sure I understand your question about connecting to other servers.

We have a server hosting an SVN depot. The build machine just needs to do a refresh off SVN once every few minutes. It doesn't need any kind of fancy interconnectivity.

 

And what do you mean by VM host? Isn't the idea to run Windows Server on that machine, with 5 VMs running Windows Enterprise?
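
For what it's worth, that refresh is simple enough to script; a minimal sketch, where the working-copy path, the poll interval, and the build script are all made-up placeholders:

```python
# Poll the SVN depot every few minutes and kick off a build
# whenever the working copy actually picks up new revisions.
import subprocess
import time

WORKING_COPY = r"C:\build\checkout"       # placeholder path
BUILD_SCRIPT = r"C:\build\run_build.bat"  # placeholder build script
POLL_SECONDS = 300

while True:
    result = subprocess.run(
        ["svn", "update", WORKING_COPY],
        capture_output=True, text=True,
    )
    # "svn update" prints "Updated to revision N." when something
    # changed and "At revision N." when the copy was already current.
    if "Updated to revision" in result.stdout:
        subprocess.run(["cmd", "/c", BUILD_SCRIPT], check=False)
    time.sleep(POLL_SECONDS)
```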


2 minutes ago, CradleGames said:

1950X

Gigabyte X399 Designare

Zotac GT 1030 (only one, if it can properly be shared)

4x 8GB DDR4-2666

Corsair HX750

Cooler Master MA621P

2x 970 EVO 250GB in RAID 0

 

And maybe an extra Ethernet card, in case the single port gets flooded by 5 VMs trying to use it all at once.

 

In a 4U chassis. 

 

I'm not sure I understand your question about connecting to other servers.

We have a server hosting an SVN depot. The build machine just needs to do a refresh off SVN once every few minutes. It doesn't need any kind of fancy interconnectivity.

 

And what do you mean by VM host? Isn't the idea to run Windows Server on that machine, with 5 VMs running Windows Enterprise?

So you don't already have a VM cluster.

 

For a rackmount server, I'd just get something like an R740 here: fully supported config, much better rackmount integration (iDRAC, and you can fit much more in them).

 

This should work (RAM is on the low side), but it's consumer hardware and DIY, so when you run into problems it's going to be a pain to work with.

 

Have you tested your program in Hyper-V yet? Do that.


3 minutes ago, Electronics Wizardy said:

So you don't already have a VM cluster.

For a rackmount server, I'd just get something like an R740 here: fully supported config, much better rackmount integration (iDRAC, and you can fit much more in them).

This should work (RAM is on the low side), but it's consumer hardware and DIY, so when you run into problems it's going to be a pain to work with.

Have you tested your program in Hyper-V yet? Do that.

I have no idea why I would need a VM cluster, since all the VMs are inside a single computer.

And even if I had 5 different computers... why would I need a cluster?


5 minutes ago, CradleGames said:

I have no idea why I would need a VM cluster, since all the VMs are inside a single computer.

And even if I had 5 different computers... why would I need a cluster?

No, I was asking if your work already has one. Lots of places already have a cluster for VMs.

 

 


29 minutes ago, Electronics Wizardy said:

No, I was asking if your work already has one. Lots of places already have a cluster for VMs.

 

 

We have a rack with our depot, share, etc. But nothing beyond that.

 

New company. :P


2 hours ago, CradleGames said:

We have a rack with our depot, share, etc. But nothing beyond that.

 

New company. :P

Then go try it on Hyper-V.

 

Talking about setting up virtualization: it's better to have it integrated with the rest of the IT infrastructure than as its own box.


1 hour ago, Electronics Wizardy said:

Then go try it on Hyper-V.

Talking about setting up virtualization: it's better to have it integrated with the rest of the IT infrastructure than as its own box.

No clue what "integrated with the rest" would even mean, or what advantage it would give.


18 minutes ago, CradleGames said:

No clue what "integrated with the rest" would even mean, or what advantage it would give.

Have you tested this in Hyper-V yet? Let's make sure it works in Hyper-V before continuing, and find out what type of GPU it needs.

 

Do you want support for this server? 


20 minutes ago, Electronics Wizardy said:

Have you tested this in Hyper-V yet? Let's make sure it works in Hyper-V before continuing, and find out what type of GPU it needs.

 

Do you want support for this server? 

Support? Support of what?


2 minutes ago, CradleGames said:

Support? Support of what?

Support for the server, so if something fails, someone comes out and fixes the system. With servers from companies like Dell, you get next-day or faster onsite warranties, whereas with DIY or used it can take a month or more to ship a part in and have it replaced.

 

How bad is downtime for you?


1 minute ago, Electronics Wizardy said:

Support for the server, so if something fails, someone comes out and fixes the system. With servers from companies like Dell, you get next-day or faster onsite warranties, whereas with DIY or used it can take a month or more to ship a part in and have it replaced.

 

How bad is downtime for you?

Bad, but I can fix stuff faster than Dell ever could. All the computers in our company have been hand-built.


1 minute ago, CradleGames said:

Bad, but I can fix stuff faster than Dell ever could. All the computers in our company have been hand-built.

You don't seem to get how fast Dell can fix things. They have up to 4-hour onsite support. You can't get a new motherboard in 4 hours.

 

Really, if you need uptime, get a supported solution from Dell, HPE, or Lenovo.


1 minute ago, Electronics Wizardy said:

You don't seem to get how fast Dell can fix things. They have up to 4-hour onsite support. You can't get a new motherboard in 4 hours.

Really, if you need uptime, get a supported solution from Dell, HPE, or Lenovo.

For the build machine, uptime isn't needed. 

But honestly... no, Dell doesn't have same-day onsite support in all cities. And I'm in Québec City.


5 minutes ago, CradleGames said:

For the build machine, uptime isn't needed. 

But honestly... no, Dell doesn't have same-day onsite support in all cities. And I'm in Québec City.

Have you tried Hyper-V yet? Do that.

 

Dell support will cover you there, but if you don't care about uptime, then there's no reason to worry about support.


2 hours ago, CradleGames said:

Dell doesn't have same-day onsite support in all cities. And I'm in Québec City.

Dell server support does offer this; Dell desktop support doesn't. I'm in a city in NZ that isn't that big and I can get 4-hour support from Dell, HPE, NetApp, etc., and I've had to use it. They'll tell you if they can't deliver the support level you request before committing to it and making you pay for it.

 

No matter what you go with though, you should actually test the setup on something you have with a single VM or two; you won't be able to return the equipment if it doesn't work.

 

12 hours ago, CradleGames said:

I guess it could get cheaper if the APUs are cheap, but I need something fast on the CPU side.

Right now, the 1950X is $999, and it's pretty beastly. The build I'm looking at is 2,400 CAD, if sharing a single GPU works.

Can I get it cheaper with good performance by building 5 units?

The actual CPU cores in an AMD APU are exactly the same as in Threadripper and other Ryzen desktop CPUs; that's actually what makes them so good.

 

If you go with the 16-core 1950X and divide that down by 5, that's only 3.2 effective cores per VM. 5 APUs with 4 cores each is 20 cores total, so it would be faster, and you would have a GPU more powerful than the GT 1030 in each system.
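
As a quick sketch of that math (assuming roughly equal per-core performance, since both parts use the same Zen cores):

```python
# Worked core-count comparison from the paragraph above, assuming
# equal per-core performance (both parts use the same Zen cores).
threadripper_cores, vms = 16, 5
print(threadripper_cores / vms)  # 3.2 effective cores per VM

apu_cores, boxes = 4, 5
print(apu_cores * boxes)         # 20 cores total across 5 APU boxes
```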

 

You would be doubling up on certain components, yes, but the APUs and their motherboards are cheap; the total price may be about the same all up, or slightly more, but it won't be by a lot.

 

It is worth considering and pricing at least, so you can know how things stack up.

 

Generally speaking, when it comes to games, graphics rendering and the like, VMs are not the best idea. While you can make it work, it often requires more effort than it is worth, or a much higher-end tailored system designed specifically for it, which means more than just the hardware but the software as well, i.e. VMware Horizon.


5 x

 

- 2200G

- ASRock A320M-DSG

- 2x 4GB DDR4-2666

- 128GB NVMe

- Supermicro SuperChassis CSE-512L-200B Black 1U (includes 200W PSU)

 

$2,400.

 

And I can use Dynatron A18s for cooling in a 1U unit. I've been using them in some mini-ITX builds, and they perform very well, if a bit noisy.

 

So $2,650 with the coolers.

 

Would the quad-core 2200G really perform as well as 3 cores of a 1950X?


29 minutes ago, CradleGames said:

Would the quad-core 2200G really perform as well as 3 cores of a 1950X?

Yeah, about the same. Exact same CPU cores; the only differences are clocks and no SMT on the 2200G.

 

Performance is probably going to be similar. With VMs, one VM could use the whole 1950X if needed, while with separate boxes they're limited to 4 cores max, even if the others are idle.

 

 

