
Hi, I am trying to find information about multi-GPU systems where each GPU is dedicated to a different task.

 

I currently have an RTX 3060 Ti in my system and plan to upgrade to the 50 series soon.

I want to keep my current GPU and dedicate it strictly to machine learning.

 

I have the Gigabyte Z690 Gaming motherboard, so I know I have an additional GPU slot (PCIe 3.0 x4), and since I don't want the secondary GPU to do anything but machine learning and research, I am okay with that.

Everything I find talks about combining the two GPUs for the same task.


1 minute ago, Ido_sch said:

I want to keep my current GPU and dedicate it strictly to machine learning. [...]

Do you plan on doing machine learning while you are gaming?


If your software can talk to a GPU directly, it's just a matter of having the GPU present and telling the software to use that one.

 

If your software relies on the OS to direct its calls to "the GPU", you're probably SOL.

 

In essence, a GPU is just a device in the system; it's up to the software to support directing calls to a specific piece of hardware. In this sense it's not too dissimilar from how multiple storage drives work: it's up to the software to support not saving your stuff in "My Documents".
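For example, anything CUDA-aware (PyTorch here, just as a rough sketch, and assuming the ML card enumerates as the second device) lets you list the GPUs it can see and pick one explicitly:

import torch

# show which GPUs the framework can actually see
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))

# explicitly pick the second card instead of the default cuda:0
device = torch.device("cuda:1" if torch.cuda.device_count() > 1 else "cuda:0")
print("using", device)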


13 minutes ago, manikyath said:

If your software can talk to a GPU directly, it's just a matter of having the GPU present and telling the software to use that one. [...]

So basically, if I write a Python script to do some model training and there is a GPU present, I can tell it to use the GPU instead of my CPU?


On 4/26/2025 at 8:08 AM, Ido_sch said:

if there is a GPU present, I can tell it to use the GPU instead of my CPU?

Yes, you can just tell it to use "cuda:0" or "cuda:1" as your device.
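In PyTorch that looks roughly like this (a minimal sketch; the layer and batch sizes are just placeholders):

import torch

device = torch.device("cuda:1")              # e.g. the 3060 Ti as the second card
x = torch.randn(64, 128, device=device)      # dummy batch placed on that card
layer = torch.nn.Linear(128, 10).to(device)  # model weights live on the same card
out = layer(x)
print(out.device)                            # -> cuda:1

If you want to be sure the training process never touches the gaming card at all, you can also launch it with CUDA_VISIBLE_DEVICES=1 set, so only that card is visible to the process (it then shows up as cuda:0 inside it).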


