
Workstation / server planning to work with Nvidia Isaac robotics simulator

hitalio1

 

Budget (including currency): TBD

 

Country: Argentina / USA

 

Games, programs or workloads that it will be used for: 3-4 virtual machines to run an instance of Nvidia Isaac simulator each (listed requirements).

 

Other details:

- Buying from scratch

 

 

Hi all!

Kind of new to the forum, so I'm looking for guidance and ideas on a build we are planning to do at my job. I'm not looking for a fully specified system, but more for ideas and approaches I could take to tackle this. The project is in its very early planning stage.

 

The idea is to build a workstation / server that lets developers work remotely with Nvidia's Isaac robotics simulator (requirements linked above). What I had in mind was building on an AMD Threadripper or Epyc platform and creating 3-4 virtual machines for people to work on. I was planning on buying enough RAM, plus one storage disk and one Nvidia workstation GPU per VM. The exact model is yet to be defined, but ideally we would go with an A5000 or A6000, and if not, a high-end RTX card (3080 or better).

 

I should also say that I'm not an expert when it comes to servers or workstations, so any opinions or suggestions are more than welcome. Is the approach I'm thinking of too far off?

 

PS: We thought about cloud-based solutions and actually tested AWS, but found it wasn't pleasant to work with (at least for the way we intend to use it).


What is your reasoning behind a single system running multiple VMs?



58 minutes ago, brob said:

What is your reasoning behind a single system running multiple VMs?

Cost, convenience and ease of maintenance. Do you think it would be easier to build multiple simpler machines instead?


Multiple simple machines have a number of advantages. Downtime for maintenance and repair is much lower, since only one box needs to be down at any one time. Expansion and upgrades can be done in smaller increments.

 

I expect cost will be quite similar.

 



Hi,

 

Putting the "whether or not it is a good idea to run a single machine" argument aside, you should be able to run multiple Docker containers from a Linux host with separate ISAAC instances. Your developers then use one of the remote clients to connect to the instances from whatever hardware they want to use.
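To give a rough idea, something along these lines should get one instance up per container once the NVIDIA Container Toolkit is installed on the host (the image tag and environment variables here are from memory, so check the Isaac Sim container docs on NGC for the current ones):

# log in to NGC first (needs an API key from ngc.nvidia.com)
docker login nvcr.io

# pull the Isaac Sim container -- the tag is a placeholder
docker pull nvcr.io/nvidia/isaac-sim:2022.1.1

# start one instance with access to the GPUs; developers then connect
# to it from their own machines using one of the remote clients
docker run --name isaac-dev1 --gpus all --network=host \
  -e "ACCEPT_EULA=Y" \
  nvcr.io/nvidia/isaac-sim:2022.1.1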

 

Depending on what you want to simulate, a single 3080 might not even be enough for a single instance. Given your question and the mention of previous AWS attempts, you probably already know what hardware is required to run one instance, which should also tell you whether you can share a single GPU between multiple instances. One benefit (with other drawbacks, as @brob mentioned) of a single machine would be that you can spin up a varying number of Docker instances depending on need and manage the resources between them a little better than if they were physically separated.

 

Cheers


11 hours ago, brob said:

Multiple simple machines have a number of advantages. Downtime for maintenance and repair is much lower, since only one box needs to be down at any one time. Expansion and upgrades can be done in smaller increments.

 

I expect cost will be quite similar.

 

That makes a lot of sense. I'll need to study costs and decide based on that, thanks!

 

9 hours ago, Elrond said:

Hi,

 

Putting the "whether or not it is a good idea to run a single machine" argument aside, you should be able to run multiple Docker containers from a Linux host with separate ISAAC instances. Your developers then use one of the remote clients to connect to the instances from whatever hardware they want to use.

 

Depending on what you want to simulate, a single 3080 might not even be enough for a single instance. Given your question and the mention of previous AWS attempts, you probably already know what hardware is required to run one instance, which should also tell you whether you can share a single GPU between multiple instances. One benefit (with other drawbacks, as @brob mentioned) of a single machine would be that you can spin up a varying number of Docker instances depending on need and manage the resources between them a little better than if they were physically separated.

 

Cheers

Hi Lord Elrond! Thanks for the reply!

 

We actually use Docker containers as our development environment in general. That being said, I would need to check how connecting to remote Docker containers and working with graphical tools performs. I'd also need to look into what Docker offers for resource management when there are multiple GPUs: I know you can allocate CPU cores and RAM to containers individually, but I've never tried allocating resources when multiple GPUs are present.
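From what I've read so far, it looks like each container can be pinned to a specific GPU the same way you limit CPU and RAM, roughly like this (the device IDs, limits and image tag are just placeholders):

# container 1 gets the first GPU, 8 cores and 64 GB of RAM
docker run --name isaac-dev1 --gpus device=0 --cpus=8 --memory=64g \
  -e "ACCEPT_EULA=Y" nvcr.io/nvidia/isaac-sim:2022.1.1

# container 2 gets the second GPU with the same CPU/RAM limits
docker run --name isaac-dev2 --gpus device=1 --cpus=8 --memory=64g \
  -e "ACCEPT_EULA=Y" nvcr.io/nvidia/isaac-sim:2022.1.1

If that works as described, each container should only ever see its own GPU, so they shouldn't step on each other.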

 

Really helpful, I'll be sure to look into all of this. Thanks again for the replies!

 

