
Best Config for ML + Virtualization + Gaming

Guys, I need help. I am planning to build a multipurpose system that I can use to train models, host them as a server (because I am away from home a lot), run multiple virtual machines, and do some part-time gaming. I was looking into:

  • Ryzen 7 2700X
  • ASUS ROG Hero VII
  • 32 GB RAM
  • 2 x 512 GB M.2 SSDs for storage
  • 2 TB HDD for storing junk collected over the years
  • 2 x GeForce RTX 2080 Ti GAMING X TRIO

I have a bundle of questions.

  1. Should I look at team blue (by that I mean Intel)?
  2. Should I wait for Ryzen 3rd gen?
  3. Would performance drop if the two graphics cards are connected with SLI?
  4. I have a dedicated IP; would I need an extra NIC for better performance?
  5. In a hypothetical situation, under what conditions would I ever need to overclock my CPU, GPU, or RAM?

1. Team Blue released the latest 9000 series, but for most people it is incredibly expensive for what you get.
2. Yes
3. Sometimes... most applications do not officially support SLI. You can hack some things together to get a little extra performance, but in some titles you will see performance degradation.
4. I do not believe so.
5. You would never need to overclock. The option is there if you WANT to do that. You can easily overclock CPU/GPU/RAM.


30 minutes ago, Vivek Jha said:

Would performance drop if the two graphics cards are connected with SLI?

CUDA applications ignore SLI. This is why I take every opportunity to say that Nvidia should publish CUDA as an open specification; then we could build graphics libraries on top of it and get some pretty awesome multi-GPU scaling that works on all platforms. (They should probably do the same thing with NVLink.)
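
As a quick illustration, CUDA enumerates each card as its own device whether or not an SLI bridge is installed. A minimal sketch, assuming TensorFlow 2.x with GPU support (any CUDA-aware framework will show the same thing):

```python
import tensorflow as tf

# Each physical GPU shows up as a separate device; SLI is irrelevant here,
# so a dual-2080 Ti box should list two entries.
for gpu in tf.config.list_physical_devices('GPU'):
    print(gpu.name, gpu.device_type)
```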

People hate on SLI for gaming, but I've had good luck with it. Since your primary concern is ML, dual GPUs can speed up CUDA applications like TensorFlow and MATLAB quite a bit on large data sets. CUDA scales across both heterogeneous and homogeneous GPU architectures, with or without SLI bridges.
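
If it helps, this is roughly what data-parallel training over both cards looks like in TensorFlow. It's only a sketch assuming TF 2.x and the Keras API; the tiny model and MNIST data are placeholders for whatever you actually train:

```python
import tensorflow as tf

# MirroredStrategy replicates the model onto every visible GPU and
# splits each batch across them; no SLI bridge is involved.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Variables must be created inside the strategy scope so they get mirrored.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

# Placeholder data so the sketch runs end to end; swap in your real dataset.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255.0

model.fit(x_train, y_train, batch_size=256, epochs=1)
```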
 

30 minutes ago, Vivek Jha said:

In a hypothetical situation, under what conditions would I ever need to overclock my CPU, GPU, or RAM?

Probably not for a while, since it's a brand-new build. That's especially true if science is your primary concern: you want long life and good stability. You might also consider ECC RAM.
 

 

