
Can I set the GPU priority in a dual-GPU system?

invaderSnarf

Just installed an Intel Arc A380 into my main rig to use as a dedicated encoder card and to take advantage of AV1 when OBS adds support for it, hopefully soon after this next beta.

 

I am running into a few issues. Every game I run now seems to default to the A380 rather than my primary GPU, a 2080 Ti. I noticed I was dropping frames in OBS because of this, so I went to Graphics settings in Windows and told it to use the 2080 Ti and not the A380, which is now apparently the default GPU (the A380 is set as "performance" and the 2080 Ti is apparently "power saver"). But every program I run still seems to be hitting the A380 instead of the 2080 Ti. The only reason I know this is from watching Task Manager: the A380 sits around 20-30% while the 2080 Ti is at maybe 5% no matter what I do.

 

Is there a way to tell Windows to prioritize my 2080 Ti and ignore the A380 unless I explicitly tell an app to use it?
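[Editor's aside, in case it helps anyone hitting the same wall: the per-app choices made on Windows' Graphics settings page are reportedly stored as string values under `HKCU\Software\Microsoft\DirectX\UserGpuPreferences`, so they can be scripted instead of clicked through one app at a time. A minimal sketch, assuming that key layout; the exe path below is a hypothetical placeholder.]

```python
# Sketch: write the same per-app GPU preference that the Windows
# "Graphics settings" page stores, so a game can be pinned to the
# 2080 Ti without clicking through the UI for every executable.
# ASSUMPTION: Windows keeps these choices as REG_SZ values under
# HKCU\Software\Microsoft\DirectX\UserGpuPreferences, one value per
# exe path, with data like "GpuPreference=2;"
# (0 = let Windows decide, 1 = power saving, 2 = high performance).

import sys

PREF_KEY = r"Software\Microsoft\DirectX\UserGpuPreferences"

def gpu_preference_value(pref: int) -> str:
    """Build the registry value data for a given preference level."""
    if pref not in (0, 1, 2):
        raise ValueError("pref must be 0 (default), 1 (power saving) or 2 (high performance)")
    return f"GpuPreference={pref};"

def pin_app_to_gpu(exe_path: str, pref: int = 2) -> None:
    """Persist the preference for one executable (Windows only)."""
    import winreg  # stdlib, but only available on Windows
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, PREF_KEY) as key:
        winreg.SetValueEx(key, exe_path, 0, winreg.REG_SZ, gpu_preference_value(pref))

if __name__ == "__main__" and sys.platform == "win32":
    # Hypothetical path -- substitute the real game executable.
    pin_app_to_gpu(r"C:\Games\SomeGame\game.exe", pref=2)
```

This is the same mechanism as the Graphics settings page, so it shares its limitation: some apps ignore the preference and pick an adapter themselves.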

 

Every monitor I have is connected to the 2080 Ti; nothing is plugged into the A380. The 2080 Ti is in the top PCIe slot, the A380 in the slot below it. I am running a Gigabyte X570 Aorus Ultra motherboard with the latest BIOS update, and both the Intel and Nvidia cards are up to date with their respective drivers.


Weird. I wonder if the Intel card is being recognized as an iGPU or something. The knuckle-walker workaround, since the thing isn't good for anything yet anyway, is just to pull the card and kick the problem down the road. There is probably an actual solution, though.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


I went into Device Manager and have it disabled at the moment. Having it enabled made OBS unable to show a preview when I told it to use NVENC; as soon as I disabled it and restarted, that cleared up. I have no idea why my PC thinks a card with half the performance is the "High Performance" card in the system, especially since it is not in the top PCIe slot. (Technically it's in lane 13, but the 2080 Ti shows as being in lane 10, so I would think lower would mean higher priority.)

 

It's a shame though; I use Vegas Pro for video editing and the Intel card plays much nicer with it than my 2080 Ti for playback, which is one of the reasons I bought the thing. Currently I have to disable it in Device Manager when I want to play and record any games, then go back in, enable it, and restart my machine when I want to edit video; then go back, disable it again, and restart when I want to render, as the 2080 Ti is, well, a 2080 Ti and still a nice little workhorse of a render card.
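[Editor's note: if the card does have to be toggled regularly, the Device Manager round trip can at least be scripted. A sketch, assuming `pnputil.exe` (shipped with Windows 10 2004 and later) accepts `/enable-device` and `/disable-device` with a device instance ID; the instance ID below is a placeholder to be replaced with the real one from Device Manager (Details tab, "Device instance path").]

```python
# Sketch of a one-shot toggle for the Arc, to replace the manual
# Device Manager enable/disable dance described above.
# ASSUMPTIONS: pnputil.exe supports /enable-device and
# /disable-device (Windows 10 2004+), and ARC_INSTANCE_ID is a
# placeholder -- copy the real value from Device Manager.

import subprocess
import sys

# Placeholder instance ID for the Arc A380 -- replace with your own.
ARC_INSTANCE_ID = r"PCI\VEN_8086&DEV_XXXX\..."

def toggle_command(instance_id: str, enable: bool) -> list[str]:
    """Build the pnputil command line without executing it."""
    action = "/enable-device" if enable else "/disable-device"
    return ["pnputil", action, instance_id]

def toggle_device(instance_id: str, enable: bool) -> None:
    """Run pnputil (Windows only; needs an elevated prompt)."""
    subprocess.run(toggle_command(instance_id, enable), check=True)

if __name__ == "__main__" and sys.platform == "win32":
    # e.g. `python toggle_arc.py --enable` before an editing session
    toggle_device(ARC_INSTANCE_ID, enable="--enable" in sys.argv)
```

A restart may still be needed afterwards, as described above; the script only saves the trip through Device Manager.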

 

I was able to run both at the same time and set games to use the 2080 Ti on a game-by-game basis, but that's useless when OBS refuses to actually capture anything while both cards are enabled.


1 hour ago, invaderSnarf said:

I went into Device Manager and have it disabled at the moment. Having it enabled made OBS unable to show a preview when I told it to use NVENC; as soon as I disabled it and restarted, that cleared up. I have no idea why my PC thinks a card with half the performance is the "High Performance" card in the system, especially since it is not in the top PCIe slot. (Technically it's in lane 13, but the 2080 Ti shows as being in lane 10, so I would think lower would mean higher priority.)

 

It's a shame though; I use Vegas Pro for video editing and the Intel card plays much nicer with it than my 2080 Ti for playback, which is one of the reasons I bought the thing. Currently I have to disable it in Device Manager when I want to play and record any games, then go back in, enable it, and restart my machine when I want to edit video; then go back, disable it again, and restart when I want to render, as the 2080 Ti is, well, a 2080 Ti and still a nice little workhorse of a render card.

 

I was able to run both at the same time and set games to use the 2080 Ti on a game-by-game basis, but that's useless when OBS refuses to actually capture anything while both cards are enabled.

So this is more of an OBS-specific issue. I take it there are no options for choosing the input source? Sounds like an OBS software problem.



It isn't just happening in OBS: I can no longer view video with VLC when the Arc GPU is enabled, and when I loaded a project in After Effects and in Premiere Pro, both refused to show me any preview (audio works just fine in all of these apps, so I am assuming it has something to do with how Windows chooses which GPU to use as a decoder?). The only app that seems to be rendering video is Vegas Pro, and I can only assume that is because I could specifically choose the Arc as the decoder.

 

I tried plugging a monitor into the Arc to see if that would fix any of the issues, but I still get a black screen when trying to play back local video. YouTube and other online video sources still worked, though, so I'm still at a complete loss as to what is happening.


1 hour ago, invaderSnarf said:

It isn't just happening in OBS: I can no longer view video with VLC when the Arc GPU is enabled, and when I loaded a project in After Effects and in Premiere Pro, both refused to show me any preview (audio works just fine in all of these apps, so I am assuming it has something to do with how Windows chooses which GPU to use as a decoder?). The only app that seems to be rendering video is Vegas Pro, and I can only assume that is because I could specifically choose the Arc as the decoder.

 

I tried plugging a monitor into the Arc to see if that would fix any of the issues, but I still get a black screen when trying to play back local video. YouTube and other online video sources still worked, though, so I'm still at a complete loss as to what is happening.

So various apps are defaulting to the Intel GPU over the Nvidia one. I wonder if slot number has anything to do with this? Is the Intel card in the #1 slot?



4 hours ago, Bombastinator said:

So various apps are defaulting to the Intel GPU over the Nvidia one. I wonder if slot number has anything to do with this? Is the Intel card in the #1 slot?

The Intel GPU is in the second slot. My motherboard is populated as follows:

 

PCIe x16 #1: RTX 2080 Ti

PCIe x1 #1: covered by the RTX 2080 Ti

PCIe x16 #2: Intel Arc A380

PCIe x1 #2: covered by the Intel Arc A380

PCIe x16 #3: Elgato 4K60 Pro MK.2

 

The picture is a rough diagram of where and what I have populating my PCIe slots.

 

mobo mockup.jpg


Also, one other thing: downgrading the Intel drivers fixes the issue, so it's definitely down to the GPU's drivers. However, OBS won't support Intel AV1 in the beta release without the updated drivers, so this is something that needs a solution.


On 12/3/2022 at 11:13 PM, ViridianArmy said:

Also, one other thing: downgrading the Intel drivers fixes the issue, so it's definitely down to the GPU's drivers. However, OBS won't support Intel AV1 in the beta release without the updated drivers, so this is something that needs a solution.

I've currently disabled the Arc GPU in Device Manager; I'm not in a position to use the OBS 29 beta, as I need the ASIO plugin support (which is barely working as is on OBS 28). Hopefully by the time OBS 29 comes out they will be supporting AV1, so I'll be able to re-enable the card and just set the GPU per program each time, as I don't see Intel making this a priority in their drivers. Ideally, Windows would let the user define which GPU is the power-saving option and which is the high-performance option.

 

But why would they let the user have that much control over their own system? /s


On 12/5/2022 at 10:04 AM, invaderSnarf said:

I've currently disabled the Arc GPU in Device Manager; I'm not in a position to use the OBS 29 beta, as I need the ASIO plugin support (which is barely working as is on OBS 28). Hopefully by the time OBS 29 comes out they will be supporting AV1, so I'll be able to re-enable the card and just set the GPU per program each time, as I don't see Intel making this a priority in their drivers. Ideally, Windows would let the user define which GPU is the power-saving option and which is the high-performance option.

 

But why would they let the user have that much control over their own system? /s

I'd actually expect it to be a high priority for Intel, considering a large portion of the customers interested in the A380 will likely want one as an encoding GPU in a dual-GPU build once AV1 encoding is released; Linus's "I bought a second GPU, but not for gaming" video is the reason I bought one. No regrets even if this issue doesn't get fixed, as the Quick Sync performance is great, but I expect many people, myself included, would be upset if it isn't fixed before AV1 encoding is out. I opened a support ticket on Intel's site and explained the issue, so hopefully this gets resolved soon.


On 12/5/2022 at 10:04 AM, invaderSnarf said:

Ideally, Windows would let the user define which GPU is the power-saving option and which is the high-performance option.

 

But why would they let the user have that much control over their own system? /s

There actually used to be an option in the Nvidia Control Panel that would have solved the issue for both of us, but Windows blocked the functionality.


On 11/29/2022 at 7:06 PM, invaderSnarf said:

The Intel GPU is in the second slot. My motherboard is populated as follows:

 

PCIe x16 #1: RTX 2080 Ti

PCIe x1 #1: covered by the RTX 2080 Ti

PCIe x16 #2: Intel Arc A380

PCIe x1 #2: covered by the Intel Arc A380

PCIe x16 #3: Elgato 4K60 Pro MK.2

 

The picture is a rough diagram of where and what I have populating my PCIe slots.

 

mobo mockup.jpg

Uffdah! So 8, 4, and 4. You really, really need that Nvidia card to be the default. I'm still wondering whether something is assuming the Intel dGPU is an iGPU, on the logic of "Intel doesn't make dGPUs, so all I have to do is recognize the presence of Intel" or something similar. The 2080 Ti uses just a smidge more than 8 lanes of PCIe 3.0 worth of bandwidth, so it's going to throttle a little. Not much. The top slot will be dead saturated, though. If you upgrade that GPU you're going to need a B550 or better, or a B660 or better.




23 hours ago, Bombastinator said:

Uffdah! So 8, 4, and 4. You really, really need that Nvidia card to be the default. I'm still wondering whether something is assuming the Intel dGPU is an iGPU, on the logic of "Intel doesn't make dGPUs, so all I have to do is recognize the presence of Intel" or something similar. The 2080 Ti uses just a smidge more than 8 lanes of PCIe 3.0 worth of bandwidth, so it's going to throttle a little. Not much. The top slot will be dead saturated, though. If you upgrade that GPU you're going to need a B550 or better, or a B660 or better.

The top PCIe slot I have is x8? Looking over the specs on the site, it says I have a PCIe x16 slot and that the second slot runs at x8. Am I missing something?

 

Taken from Gigabyte's page for the motherboard I have:

Expansion Slots
Integrated in the CPU (PCIEX16/PCIEX8):
  1. AMD Ryzen™ 5000 Series/ 3rd Gen Ryzen™ Processors:
    1 x PCI Express x16 slot, supporting PCIe 4.0 and running at x16 (PCIEX16)
    1 x PCI Express x16 slot, supporting PCIe 4.0 and running at x8 (PCIEX8)
  2. AMD Ryzen™ 5000 G-Series/ Ryzen™ 4000 G-Series/ 2nd Gen Ryzen™ Processors:
    1 x PCI Express x16 slot, supporting PCIe 3.0 and running at x16 (PCIEX16)
    1 x PCI Express x16 slot, supporting PCIe 3.0 and running at x8 (PCIEX8)
    * For optimum performance, if only one PCI Express graphics card is to be installed, be sure to install it in the PCIEX16 slot.
    * The PCIEX8 slot shares bandwidth with the PCIEX16 slot. When using the 3rd Generation AMD Ryzen™ processors/2nd Generation AMD Ryzen™ processors and PCIEX8 slot is populated, the PCIEX16 slot operates at up to x8 mode.
  3. 2nd Generation AMD Ryzen™ with Radeon™ Vega Graphics processors/AMD Ryzen™ with Radeon™ Vega Graphics processors:
    1 x PCI Express x16 slot, supporting PCIe 3.0 and running at x8 (PCIEX16)
Integrated in the Chipset (PCIEX4/PCIEX1):
  1. 1 x PCI Express x16 slot, supporting PCIe 4.0/3.0 and running at x4 (PCIEX4)
  2. 2 x PCI Express x1 slots, supporting PCIe 4.0/3.0

 

I'm running a Ryzen 9 5900X, so I thought my top PCIe slot was Gen 4 running at x16, not x8.



Yes, I think so, because you have two GPUs in it, one of which is PCIe 3.0. A 2080 Ti is PCIe 3.0, not 4.0 (I don't think Turing has PCIe 4.0, but I could be wrong), and X570 will flip between x16/x0 and x8/x8 depending on what gets put in the slots. My understanding is that if a PCIe 3.0 device gets put in the PCIe 4.0 system, that whole section gets flipped to PCIe 3.0. If so, you've got 8 lanes of PCIe 3.0 to both GPUs, which won't bother the A380 (or it's got 8 lanes of PCIe 4.0, in which case it's really, really not bothered) and only slightly bothers the 2080 Ti. I suppose it's possible, because PCIe 4.0 is twice PCIe 3.0, that it treats the 8 PCIe 4.0 lanes as the equivalent of 16 PCIe 3.0 lanes. Strikes me as unlikely, though. Might be the way it is.

 

The X570 block diagram:

 

98365C8E-2157-4337-9B3A-D7FB83D5FCD4.jpeg.feb0cdd5733e0ec571ee4560fe0e17a8.jpeg

It might be possible to put the A380 in one of the lower slots to force it to be controlled by the chipset instead, so it gets only one lane of PCIe 4.0 (which still probably won't bother it) and your 2080 Ti gets 16 lanes again. I'm just vaguely poking at stuff, though. I don't know how X570 chooses which slots get controlled by the CPU; it might not work. It also might be that the slot you've got covered is the only other slot that goes to the CPU, so the A380 is already on the chipset. I don't know. If you had an Ampere GPU, or even a 5500, it wouldn't be an issue, because x8 PCIe 4.0 is way more than enough to handle even a 4090. Looking at your board, if it is a hard-wired thing, getting one of those x1-to-x16 adapters and using it to put the A380 on one of the x1 slots may work. It might also make it impossible for the card to do the thing you wanted it to do, though. Again, I don't know.


In which I attempt (probably poorly) to explain PCIe allocation on Ryzen:


CPUs have a given number of PCIe lanes. With Intel consumer stuff it's 16, plus a separately named link (I forget what it's called) that connects the CPU to the chipset. AMD uses 4 lanes to the chipset, so both end up with 16 lanes to play with. Enterprise chips commonly have lots more. This means that for consumer CPUs, anything that goes through the CPU and not the chipset has to come out of those 16 lanes. Even 8 lanes of PCIe 4.0 is more than the most powerful consumer GPUs can use, though, so it may not matter. I looked at a block diagram of B450 when it came out, and according to that, the top slot gets all 16 (plus perhaps some specific connections) and everything else is handled by the chipset, so up to 4 lanes. With B550 it's more or less the same, except the stuff not on the chipset is PCIe 4.0 instead of 3.0. The connection to the chipset is still PCIe 3.0, though. So B550 is just like B450 except for the top slot (so the video card) and whatever else is attached directly to the CPU. (The block diagram shows what is available.)

X570 (which is what you have) controls up to two slots, not one, and if both of those slots are populated it goes x8 and x8. I don't know how the slots are chosen. All the other slots are PCIe x1 and connected to the chipset, not the CPU.

As an additional wrinkle, just because a slot has an x16 connector doesn't mean it has that much bandwidth. On some boards the contacts aren't even all connected; you can look at the back of the board to see which actually are. It's not always the whole 16. The 5600G, for example (if I understood the explanation GN did, which I may not have), is always 8 and 8 of PCIe 3.0. It doesn't do PCIe 4.0 or the 1x16-to-2x8 thing; it's a hard split. It can't assign more than 8 lanes to the top slot, so those other 8 lanes go poof (I guess), which doesn't matter as long as your video card is less powerful than a 2080 but has enough active pins (some of the low-end AMD cards don't). My understanding of how that all gets decided, and when and how the chipset steps in, is vague.
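[Editor's aside: the bandwidth arithmetic behind the "8 lanes of Gen 4 equals 16 lanes of Gen 3" point above can be sketched directly. Per-lane bandwidth is the transfer rate times the 128b/130b encoding efficiency, and Gen 4 doubles Gen 3's rate; these are raw link numbers only, ignoring protocol overhead.]

```python
# Back-of-the-envelope PCIe link bandwidth, one direction, in
# decimal GB/s: (GT/s per lane) x (128b/130b encoding) / 8 bits,
# times the lane count. Gen 4 runs at twice Gen 3's rate, so
# x8 Gen 4 matches x16 Gen 3 in raw bandwidth.

GT_PER_LANE = {3: 8.0, 4: 16.0}   # transfer rate per lane, GT/s
ENCODING = 128 / 130              # 128b/130b line coding efficiency

def link_bandwidth_gbps(gen: int, lanes: int) -> float:
    """Raw one-direction link bandwidth in GB/s (no protocol overhead)."""
    return GT_PER_LANE[gen] * ENCODING / 8 * lanes

for gen, lanes in [(3, 16), (3, 8), (4, 16), (4, 8)]:
    print(f"PCIe {gen}.0 x{lanes}: {link_bandwidth_gbps(gen, lanes):.2f} GB/s")
# x8 Gen 4 delivers the same raw bandwidth as x16 Gen 3 (~15.75 GB/s).
```

So a 2080 Ti dropped to x8 Gen 3 gets roughly 7.9 GB/s instead of 15.8 GB/s, which is why it throttles a little while a Gen 4 card at x8 would not.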




Thanks for the explanation @Bombastinator; I guess I am still a little confused about what my motherboard is doing, though.

 

I got the X570 to future-proof, as best I could, for better processors while AMD stuck with the AM4 socket. When I upgraded from my original Ryzen 7 3700X to my current Ryzen 9 5900X, I thought I was going to take full advantage of the PCIe 4.0 lanes on the X570.

 

I think I understand: my bottleneck is the 2080 Ti running at PCIe 3.0 speeds, which then downgrades the remaining lanes. That was fine when it was my only GPU, but now that I've put in the A380 I am overtaxing the throttled PCIe 3.0 lanes, since they have to be shared between the two GPUs?

 

Would installing a 3080 or 3090 then flip the PCIe lanes back to Gen 4, opening up more speed for both cards?


2 hours ago, invaderSnarf said:

Thanks for the explanation @Bombastinator; I guess I am still a little confused about what my motherboard is doing, though.

 

I got the X570 to future-proof, as best I could, for better processors while AMD stuck with the AM4 socket. When I upgraded from my original Ryzen 7 3700X to my current Ryzen 9 5900X, I thought I was going to take full advantage of the PCIe 4.0 lanes on the X570.

 

I think I understand: my bottleneck is the 2080 Ti running at PCIe 3.0 speeds, which then downgrades the remaining lanes. That was fine when it was my only GPU, but now that I've put in the A380 I am overtaxing the throttled PCIe 3.0 lanes, since they have to be shared between the two GPUs?

 

Would installing a 3080 or 3090 then flip the PCIe lanes back to Gen 4, opening up more speed for both cards?

Yes. Any PCIe 4.0 video card would do that, and they're even relatively cheap used these days. 2080s still have value; one is about as powerful as a 3070 Ti, so you can probably sell one and buy the other for little if any capital outlay. As long as there aren't two video cards in the system, a 2080 Ti keeps up fine.




On 12/5/2022 at 10:04 AM, invaderSnarf said:

I've currently disabled the Arc GPU in Device Manager; I'm not in a position to use the OBS 29 beta, as I need the ASIO plugin support (which is barely working as is on OBS 28). Hopefully by the time OBS 29 comes out they will be supporting AV1, so I'll be able to re-enable the card and just set the GPU per program each time, as I don't see Intel making this a priority in their drivers. Ideally, Windows would let the user define which GPU is the power-saving option and which is the high-performance option.

 

But why would they let the user have that much control over their own system? /s

Just wanted to let you know I opened a support ticket with Intel and went through the whole process, so hopefully they fix this soon.

image.png

