2080 Ti loss of 30-50% performance

alienwar9

So my GPU was running fine for about a year, or at least it seemed to be; I'm not sure exactly when performance dropped off. I benched and OC'd it when I got it and it ran as expected, but I have been running it and my CPU without an OC most of the time (too many other problems going on). I was playing Division 2 at 1440p max settings in HDR and only getting around 60 fps (I have a 144 Hz monitor). That seemed out of place, so I ran Superposition at 1080p Extreme and only got a score of 4600. I believe my original scores were 8000+, and others with the same card have gotten 120 fps averages in Division 2 at the same settings. Superposition shows 1600-1700 MHz clocks while running.

 

It's a 2080 Ti FE with a waterblock in a custom triple-rad loop that also cools a motherboard monoblock, running on a DDC pump. The GPU is vertically mounted with a riser cable, and custom-wired with cables I made myself. I've done a fresh driver reinstall, and nothing physically changed inside the PC before the performance dropped. Temps immediately rise to and max out at 57C, and idle at 28C. I've set MSI Afterburner to reset to defaults.

 

I've named my PC Sisyphus because of all the problems I've run into with it, pushing into areas where I don't have much experience. I'm at a loss as to what caused this issue, or where to take the diagnostic process next. I'm planning on taking the whole thing apart, but that's a pain, and I hope it's something easier to fix.

 

Could the riser cable have gone bad? Could the wiring not be supplying enough power? Could the thermal pads somehow have shifted on the GPU waterblock, or could a VRM or something have fried? I don't even know how to check any of these things short of taking everything apart.

17 minutes ago, alienwar9 said:


Take a screenshot of Superposition in the middle of the run. Also, what's your CPU?


Max settings is always a horrible idea. It's by far one of the stupidest things to do, no matter the hardware.

 

Your GPU die temps are fine. It could be either VRM temps, if they're not being cooled properly, or just power. You can see in Afterburner if it's a power limitation: put the power slider at max and see if there's any difference. Then check whether you were overclocked before, and test things like that.
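If you'd rather ask the driver directly than eyeball Afterburner, here's a rough sketch using NVIDIA's NVML Python bindings (assumes the nvidia-ml-py package is installed; run it while a benchmark is loading the GPU, and treat it as a starting point rather than a definitive check):

```python
# Minimal sketch: query the driver's reported throttle reasons under load.
# Assumes pynvml from the nvidia-ml-py package is available.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

# Bitmask of currently active throttle reasons.
reasons = pynvml.nvmlDeviceGetCurrentClocksThrottleReasons(handle)

if reasons & pynvml.nvmlClocksThrottleReasonSwPowerCap:
    print("Throttling: software power cap (power limit)")
if reasons & pynvml.nvmlClocksThrottleReasonHwSlowdown:
    print("Throttling: hardware slowdown (thermal or power brake)")
if reasons & pynvml.nvmlClocksThrottleReasonGpuIdle:
    print("GPU is idle; run this during a benchmark instead")

pynvml.nvmlShutdown()
```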

9 minutes ago, _Syn_ said:

Take a screenshot of Superposition in the middle of the run. Also, what's your CPU?

CPU is a Ryzen 2700 (I know. I had it for half a year or more before I got the 2080 Ti, which I wasn't planning on getting in the first place. Planning to upgrade when I redo my setup). I can quick-OC it to 4.1 GHz, but otherwise run it around 3.4-3.8 GHz.

 

 

[Image: Unigine Superposition Screenshot 2019.11.25 - 20.34.04.67.png]


7 minutes ago, Shimejii said:

Max settings is always a horrible idea. It's by far one of the stupidest things to do, no matter the hardware.

I was just seeing what it might run at. I had a few settings turned down for normal play, including AA. It was just easier to compare with others on max/ultra, since I didn't have to go through a checklist of every single setting.

 

With the power slider at max, I got 4400 in Superposition. But Superposition has varied quite a bit, from 4300 on the low end to 4900 on the high end. Using OC Scanner or a manual OC actually seems to give lower results, but I'm also not being terribly stringent about background tasks (iCUE might have been running during a test, for example). If I have more time I'll do more controlled testing and try to isolate some variables.

 

What gets me is that it has been running fine for a year, so something like the VRMs randomly failing to be cooled feels out of nowhere. But I'm not 100% confident in my waterblock installation skills, so sure... maybe a thermal pad slowly squeezed its way out?


3 minutes ago, alienwar9 said:

CPU is a Ryzen 2700 (I know. I had it for half a year or more before I got the 2080 Ti, which I wasn't planning on getting in the first place. Planning to upgrade when I redo my setup). I can quick-OC it to 4.1 GHz, but otherwise run it around 3.4-3.8 GHz.

The 2700 is alright as long as your expectations of it are reasonable, but this isn't a CPU-bound scenario: performance is terrible but everything else looks fine, which is weird.

Try downloading GPU-Z and clicking the "?" icon next to "Bus Interface"; that will test your PCIe slot to check what speed it's running at. It should say "@ x16 3.0", or at least "@ x8 3.0", when you start the render test.
https://www.techpowerup.com/download/techpowerup-gpu-z/

 

You can also go to the Sensors tab in GPU-Z, which lists all the sensors on the card. If it's possible, run Superposition in windowed mode, take a screenshot of GPU-Z while it's running, and expand the window so it shows all the sensors at once.
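If GPU-Z is inconvenient, the same PCIe link check can be scripted with the NVML Python bindings (assumed: nvidia-ml-py installed); this is a sketch of the idea, not a drop-in tool. Note the link can drop to gen1 at idle, so sample it under load:

```python
# Sketch: read current vs. maximum PCIe link width/generation, which is what
# GPU-Z's "Bus Interface" field shows. A bad riser often negotiates a
# narrower or slower link (e.g. x8, or a lower gen under load).
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

cur_gen = pynvml.nvmlDeviceGetCurrPcieLinkGeneration(handle)
cur_width = pynvml.nvmlDeviceGetCurrPcieLinkWidth(handle)
max_gen = pynvml.nvmlDeviceGetMaxPcieLinkGeneration(handle)
max_width = pynvml.nvmlDeviceGetMaxPcieLinkWidth(handle)

print(f"current: x{cur_width} gen{cur_gen} (max: x{max_width} gen{max_gen})")

pynvml.nvmlShutdown()
```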


13 hours ago, _Syn_ said:


So the bus interface seems to be fine (@ x16 3.0). The sensor results, not so much: it looks like a power limit, running at 60% TDP and only 158 W.

 

[Image: gpu-z ss.jpg]
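Those numbers are at least self-consistent: the 2080 Ti FE's stock power limit is 260 W, and 158 / 260 ≈ 61%, which matches the ~60% TDP reading. A quick way to watch this live, sketched with the NVML Python bindings (assumed: nvidia-ml-py installed; unverified, so take it as a starting point):

```python
# Sketch: print power draw against the enforced limit once a second, to see
# whether the card stays pinned well below its cap during a benchmark run.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle) / 1000  # mW -> W

for _ in range(30):  # ~30 seconds of samples
    draw_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000
    print(f"{draw_w:6.1f} W / {limit_w:.0f} W ({100 * draw_w / limit_w:3.0f}% TDP)")
    time.sleep(1)

pynvml.nvmlShutdown()
```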


4 minutes ago, alienwar9 said:

So the bus interface seems to be fine (@ x16 3.0). The sensor results, not so much: it looks like a power limit, running at 60% TDP.

Do you have GPU overclocking software on your PC? I suggest you remove all of it and use DDU to reinstall the drivers; OC software can mess with the registry. It doesn't seem like there's anything wrong with your GPU, but you can double-check GPU VRM temperatures using HWiNFO: just click "Sensors-only" when you first open it.

 

Should look like this
[Image: gaSbQEi.png]
 


DDU:
https://www.guru3d.com/files-details/display-driver-uninstaller-download.html

 

HWiNFO:
https://www.hwinfo.com/download       
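For what it's worth, NVIDIA's own monitoring interface only exposes the die sensor, which is why a third-party tool like HWiNFO is needed for VRM/VRAM temps. A minimal die-temp cross-check with the NVML Python bindings (assumed: nvidia-ml-py installed) looks like this:

```python
# Sketch: read the GPU die temperature. NVML does not expose VRM or VRAM
# sensors on consumer cards, so this is only a cross-check against HWiNFO.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
temp_c = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
print(f"GPU die: {temp_c} C")
pynvml.nvmlShutdown()
```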
 


32 minutes ago, _Syn_ said:

 

Uninstalled all OC software, ran DDU, still the same results. I can't seem to find those sensors in HWiNFO. Is there some setting to change?

 

(Also, I think HWiNFO is conflicting with iCUE; whenever it runs, all my fans spin up to full blast. Honestly, iCUE has been a never-ending shitfest.)

[Image: HWiNFO ss.png]


2 minutes ago, alienwar9 said:

(Also, I think HWiNFO is conflicting with iCUE; whenever it runs, all my fans spin up to full blast. Honestly, iCUE has been a never-ending shitfest.)

You mean case fans? I don't know if iCUE has any control over GPU power/clocks.

3 minutes ago, alienwar9 said:

Uninstalled all OC software, ran DDU, still the same results. I can't seem to find those sensors in HWiNFO. Is there some setting to change?

GPU VRMs should be in a different section; the ITE IT89xxxx.. you see in the previous image is the VRM controller. I don't think you have to change anything?

 


10 minutes ago, alienwar9 said:

Uninstalled all OC software, ran DDU, still the same results. I can't seem to find those sensors in HWiNFO. Is there some setting to change?

It seems like not all RTX 2080 Ti GPUs have VRM temp sensors; I thought all of them did.

 

But anyway, it's unlikely the VRMs are overheating; the whole system would turn off. I'm confused about what the issue is.


1 hour ago, alienwar9 said:


A 1680 MHz core clock is very low.


Your 2080 Ti does not seem to be getting enough power.

 

With my FTW3 Ultra, the power target (limit) can get stuck at 66%, and this can survive a reboot. Fortunately, it does not survive switching the computer off and then on again.

When this first happened, I thought the CPU was causing it, since I could not overclock it and the GPU at the same time. I thought the problem went away with a new CPU, but it came back 10 months later. I did not suspect the PSU, because it ran two 1080 Tis with no issues before the 2080 Ti.

 

Both times this occurred, I was running benches with multiple monitoring/overclocking programs.

I also tested the card in another computer and it had no issues.

 

To test whether you are having this type of issue, use MSI Afterburner or Precision X1 to see if your power target/limit is stuck.
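The same check can also be done in code, sketched below with the NVML Python bindings (assumed: nvidia-ml-py installed; unverified on Turing specifically). If the current limit sits below the default and won't budge, that matches the stuck-target behaviour:

```python
# Sketch: compare the card's current power limit against its default and
# allowed range, to spot a power target stuck below 100%.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

cur_w = pynvml.nvmlDeviceGetPowerManagementLimit(handle) / 1000
def_w = pynvml.nvmlDeviceGetPowerManagementDefaultLimit(handle) / 1000
min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)

print(f"current: {cur_w:.0f} W, default: {def_w:.0f} W, "
      f"range: {min_mw / 1000:.0f}-{max_mw / 1000:.0f} W")
if cur_w < def_w:
    print("Power target is below default, possibly stuck or lowered by software")

pynvml.nvmlShutdown()
```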

 

6 minutes ago, _Syn_ said:

It seems like not all RTX 2080 Ti GPUs have VRM temp sensors; I thought all of them did.

 

But anyway, it's unlikely the VRMs are overheating; the whole system would turn off. I'm confused about what the issue is.

In tests I have done with my EVGA FTW3 Ultra, the components that get the hottest are the VRAM chips between the VRM and the GPU. They can get 20C hotter than the GPU and 30C hotter than the VRM.

 

Fortunately, my +800 overclock on the RAM does not add to this heat.

 

 

 

 

 


17 minutes ago, xg32 said:

A 1680 MHz core clock is very low.

The clock is only 8-10% lower than a normal stock 2080 Ti, but he's getting a 50-60% performance loss comparing the avg FPS to this; the score difference is even bigger.

 


17 minutes ago, jones177 said:


When I run MSI Afterburner with the power limit turned all the way up to 123% and the temp limit at 88C, I end up getting a lower score (4300) than when running at default (4900). TDP is still at 60%, 150 W of power.

 

...

 

Also, 57C seems 10-20C hotter than it should run with no GPU or CPU OC, given the CPU is concurrently at 25C on the exact same water loop, running right before the GPU (and that idles at 28C, so a 4C delta). I have a triple-rad setup: six 120mm fans on two rads plus a beefy 240x60mm rad (HWLabs), all fans (Corsair LLs and HDs) on full blast, with ambient temps in the room around 20C.

 

So it seems like the issue is not just power, but also temp.

 

The only other discussion I found with similar results was here: https://www.techpowerup.com/forums/threads/weird-throttling-issue-rtx-2080-ti.252138/

 

But that OP ended up just getting a new card. I guess I am still within the 3-year warranty period... I really don't want to go through that.

 

...

 

I guess I can do a full teardown and try the stock cooler with stock cables, and see if that changes anything. Maybe plop the card in my other PC.
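Before tearing it down, it might be worth capturing one log per configuration so the swaps can be compared afterwards. A rough sketch using the NVML Python bindings (assumed: nvidia-ml-py installed; the file name is just an example):

```python
# Sketch: log core clock, die temp and power draw to CSV once per second
# during a benchmark, so runs before/after a PSU/riser/cooler swap can be
# compared side by side.
import csv
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

with open("gpu_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["t_s", "core_mhz", "temp_c", "power_w"])
    start = time.time()
    for _ in range(120):  # ~2 minutes, enough for a Superposition run
        writer.writerow([
            round(time.time() - start, 1),
            pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS),
            pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU),
            pynvml.nvmlDeviceGetPowerUsage(handle) / 1000,
        ])
        time.sleep(1)

pynvml.nvmlShutdown()
```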


59 minutes ago, alienwar9 said:


Thanks for the link. 

 

It is not the power used (watts), and it is not the power limit. It is the power % used, which has its own graph.

As you can see, the power % graph jumps around a lot during a bench, but when mine gets stuck at 66% it is a flat line, like when the GPU is at idle.

[Image: XCstock3.jpg]

Here is the same GPU with an overclock and the power limit turned up.

[Image: XCoverclocked2.jpg]
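That flat-line behaviour can also be caught numerically: sample the power % during a bench and check whether it varies at all. A sketch with the NVML Python bindings (assumed: nvidia-ml-py installed; the 1% threshold is a guess, not a calibrated value):

```python
# Sketch: sample power draw as a percentage of the enforced limit for a
# minute under load; a near-zero spread is the "flat line" described above.
import statistics
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
limit_mw = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle)

samples = []
for _ in range(60):  # one sample per second while the benchmark runs
    samples.append(100 * pynvml.nvmlDeviceGetPowerUsage(handle) / limit_mw)
    time.sleep(1)

spread = statistics.pstdev(samples)
print(f"mean {statistics.mean(samples):.1f}%  stdev {spread:.2f}")
if spread < 1.0:  # assumed threshold
    print("Power % is essentially flat, consistent with a stuck power target")

pynvml.nvmlShutdown()
```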

 

I have had lots of strange issues with my 2080 Tis that did not happen with my 1080 Tis or 980 Tis.

 

Before doing an RMA, test the card in another system. My FTW3 Ultra would have been RMAed, but I put it in the computer that had the XC 2080 Ti first, and it ran perfectly.

My hardware likes to make a fool out of me any chance it gets.

 

 


My 2080 Ti, air-cooled and overclocked, runs at 59C, so yours seems a bit hot. Also, as mentioned, the frequency is way down; you should be at 2000 MHz+ if OC'd. Maybe look into your PSU; this could indicate that the GPU is not getting enough power.


3 hours ago, jones177 said:


I'm waiting for a friend to test it out in his rig, as my alt PC is so thoroughly stripped that I might have to melt some case parts to get it back together.

2 hours ago, Dean.P. said:

My 2080 Ti, air-cooled and overclocked, runs at 59C, so yours seems a bit hot. Also, as mentioned, the frequency is way down; you should be at 2000 MHz+ if OC'd. Maybe look into your PSU; this could indicate that the GPU is not getting enough power.

I tried to pull the PSU from my old PC, but it has 2x 6-pin connectors instead of 8-pin, and I only have one adapter cable, so I'll have to wait. That's my first test. If that doesn't work, I'll have to drain the loop and swap the air cooler back on.

 

This PC is a nightmare to drain, though, and that's with a drain port at the lowest point. I have to cartwheel the PC to get it all out. 


An update on testing: I managed to pull the card out and test it on my older PC with its original air cooler. Worked perfectly.

 

Then I tested it with the air cooler on my current PC. Worked perfectly. Tested it with the PCIe riser cable. Worked perfectly. Then I put on a different waterblock I had for it and did a quick loop. Worked perfectly.

 

So I'm not entirely sure what was even wrong with it. It could have been the waterblock getting unseated somehow, but that's unlikely given the number of screws clamping it tightly together, and the fact that it never went past 57C.

 

It could have been the riser cable partially unseating, though that doesn't explain the momentary 100% power usage (which likewise casts doubt on the custom power cables being at fault).

 

I honestly have no idea what went wrong. But I'm glad it is somehow fixed. 

