
Hot News? More Like... HWInfo64 Now Shows Hot VRAM for RTX 3080/3090!

  • 2 weeks later...

Wow! I get busy and don't visit the board for a few months, and go figure: something I posted about long ago has become relevant again!

 

I ended up very luckily getting an EVGA 3080 XC3 Ultra from Best Buy in November. I immediately undervolted it to 1860 MHz at 850 mV, which helped with power consumption and temps in general, but I remained suspicious of the VRAM temps. With this issue still on my mind, I started checking the temps with HWInfo64 as soon as the sensor was enabled, and sure enough, they went as high as 104°C under intense load! While this may be "okay" by design, I was not thrilled about temps like that long term.
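If you'd rather log temps from a script than watch HWInfo64, here's a minimal Python sketch using the pynvml bindings to NVIDIA's NVML (from the nvidia-ml-py package). One caveat: NVML only exposes the core GPU temperature; the GDDR6X junction temperature is not available through this API, which is exactly why HWInfo64 adding that sensor was news.

```python
# pip install nvidia-ml-py  (provides the pynvml module)
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

try:
    for _ in range(60):  # one sample per second for a minute
        core_temp = pynvml.nvmlDeviceGetTemperature(
            handle, pynvml.NVML_TEMPERATURE_GPU
        )
        # NVML has no VRAM junction sensor; for GDDR6X Tj, use HWInfo64.
        print(f"core: {core_temp} C")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```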

 

Unlike ASUS and MSI, EVGA (along with Gigabyte and some others) does not put thermal pads between the backside of the PCB and the backplate. That's an awfully cheap practice for an $800 piece of hardware. I ordered some Gelid 2 mm thermal pads, carefully cut them to shape, and installed them behind the memory modules. Now my VRAM temps peak at about 94°C under load, and they take significantly longer to get that hot, a satisfactory improvement for the cost of a thermal pad alone (though I am disappointed in EVGA for skimping on this in the first place).
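As a rough sanity check on why a simple pad helps, here's a back-of-the-envelope conduction estimate in Python. The numbers are assumptions for illustration only: Gelid's GP-Extreme pads are rated around 12 W/mK, and the ~1.5 W of heat routed through each module's backside is a guess, not a measurement.

```python
# Rough 1-D conduction estimate for a thermal pad on one VRAM module.
# R = t / (k * A); temperature drop = Q * R
pad_thickness_m = 2e-3          # 2 mm Gelid pad
pad_conductivity = 12.0         # W/(m*K), GP-Extreme rating (assumed)
module_area_m2 = 14e-3 * 12e-3  # ~14 x 12 mm GDDR6X package (assumed)
heat_per_module_w = 1.5         # W through the backside path (guess)

resistance = pad_thickness_m / (pad_conductivity * module_area_m2)
delta_t = heat_per_module_w * resistance
print(f"pad resistance: {resistance:.2f} K/W")   # ~0.99 K/W
print(f"temp drop across pad: {delta_t:.2f} K")  # ~1.5 K
```

The takeaway is that the pad itself only costs a degree or so, and it opens a second heat path through the backplate in parallel with the solder balls into the PCB.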

 

Overall I am happy with the 3080 and the performance is incredible, but if you have one of the cards that cheaps out and doesn't leverage the backplate for additional cooling, do yourself a favor and install some thermal pads; they do help.

Current build: AMD Ryzen 7 5800X, ASUS PRIME X570-Pro, EVGA RTX 3080 XC3 Ultra, G.Skill 2x16GB 3600C16 DDR4, Samsung 980 Pro 1TB, Sabrent Rocket 1TB, Corsair RM750x, Scythe Mugen 5 Rev. B, Phanteks Enthoo Pro M, LG 27GL83A-B


6 hours ago, FaxedForward said:

I ordered some Gelid 2 mm thermal pads, carefully cut them to shape, and installed them behind the memory modules. Now my VRAM temps peak at about 94°C under load,

This suggests most of the heat is transferred through the solder points into the PCB. Maybe it's time to integrate an IHS into VRAM modules; the thermal conductivity of the plastic package is no longer up to the task.


Hi,

 

After reading this discussion and several articles on the topic, I have two questions that I feel are related. After some recent research I realize that for modern cards Furmark is essentially a VRM stress test, and one that many consider a heat / power virus. I just thought it was a "stress" test for the card, to make sure I didn't get a lemon.

My FE RTX 3090 seems to run quiet enough in my machine. I get some occasional coil whine, but usually only in certain loading screens or game menus (guessing that's because the framerate can be insanely high on those).

 

I ran Furmark (latest version, for about 30-40 minutes) the day after installing the card. GPU temps looked fine. I used the Standard Burn-in test option in Furmark, not the Extreme Burn-in option, with default settings. I just thought I was checking the stability of the card at stock clocks; stock everything, no changes.

 

No errors, issues, or artifacts were detected that I could tell, in this app or any other. Then I read that VRAM temps can hit 104°C in ray-traced gaming workloads and that you can check them with the latest version of HWInfo. Sure enough, I ran the test again today for another 10 minutes and the temp hovered around 102-104°C.

 

Q1: What is the latest consensus on Furmark? My system is as stock as stock gets: a 10900K with "Enforce All Intel Limits," and the RTX 3090 running exactly as it came out of the box. I just installed the card and the drivers; that's it. Was what I did safe? Stupid? I always thought this was a safe stress test, especially because a modern card / system will throttle down if a thermal or power limit is reached on any part of the card. From my research it sounds like 104°C isn't unheard of for these GDDR6X modules, even though the core runs relatively chilly at 68-72°C. Please advise: could I have caused any damage? See my second point.

 

Q2: I run game benchmarks to judge how a card is performing. I upgraded from an EVGA 1080 Ti SC2, and I noticed the behavior below, to a lesser degree, in some past screen captures. When I first installed the card I think I ran each benchmark once or twice; I can't remember. Shadow of the Tomb Raider averaged 157 fps on the 3090's first run, which it also did before I ran Furmark for the first time. The other game I ran back-to-back benchmarks on was Far Cry 5, which averaged 148 fps on its first run.

 

Here's what I've noticed on subsequent benchmark runs in each game, but especially in Far Cry 5 and Far Cry New Dawn: with each subsequent run the average frame rate drops. Tomb Raider usually shows only small variances, but Far Cry went from 148 to 144 to 143 to 141 fps, and the minimums changed slightly too. On top of that, the total rendered frames reported is lower each time; for example, Far Cry 5's rendered frames per run were 8744, then 8499, then 8416, then 8299. So the average framerate is dropping, but so are the rendered frames. What do I make of this? Does it have anything to do with the Furmark run? Is the card, despite being stable, throttling a bit or becoming inconsistent because of it, or is this totally normal?
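One quick sanity check on those numbers (a sketch, assuming the benchmark always runs for a fixed length of time): if the run duration is constant, the total rendered frames and the average fps should move in lockstep, and they do.

```python
# Far Cry 5 numbers from the post: (total frames, average fps) per run.
runs = [(8744, 148), (8499, 144), (8416, 143), (8299, 141)]

for frames, avg_fps in runs:
    implied_seconds = frames / avg_fps
    print(f"{frames} frames @ {avg_fps} fps -> {implied_seconds:.1f} s")
# Every run implies a ~59 s benchmark, so the falling frame totals are
# just the falling average fps restated, not a second symptom.
```

In other words, there is really only one question here: why the average fps decays across back-to-back passes.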

 

Thank you in advance for reading all this. I have been building systems for a long time, frame-time consistency matters to me, and I hope I didn't mess anything up or fry / damage some part of my card. I didn't even think VRAM temps could be an issue, let alone that the app is considered a power virus. "Furry donut of death" does sound scary, though; I remember the Xbox 360's red ring of death was something you never wanted to see lol. I figured the card would be smart enough to throttle appropriately regardless of the application. I think my 1080 Ti SC2, while running noisier, is a bit better behaved consistency-wise. Maybe it's a driver issue?

 

When I run the Far Cry 5 benchmark on the 1080 Ti SC2 and then repeat it over and over, the average FPS score stays the same and the frames rendered go up slightly with every pass.

 

When I do the same thing with the 3090 FE, the average drops with each pass, as do the rendered frames, by ~100 or sometimes more per run, unless I do something like 10 runs, at which point the drops level out but are still there.

Here's where it gets interesting: if I close Far Cry 5 (after running all those subsequent benchmark passes), sit at the desktop, then immediately relaunch the game and run the benchmark again, I get a score identical to the first pass of the previous session: 8740 frames and a 148 average. I was expecting the results to keep getting lower and lower despite relaunching the game (because in my mind the card, the drivers, or throttling of some kind was the cause). This is repeatable. So what does that mean? A memory leak? It doesn't seem to be a cooling issue, because I opened the side panel of my case and that didn't change the results.

 

System:

 

10900K (stock, Enforce All Intel Limits) w/ Noctua D15S

Asus XII Hero MB

2x32GB (64GB) DDR4-3600 CL18 RAM

NVIDIA RTX 3090 FE

2TB ADATA 8200 Pro NVMe

Corsair AX1600i Titanium PSU
Phanteks P600S Case (3 front Noctua A14 fans and 1 A14 as rear exhaust)

 

I'll be thrilled if any fellow 3090 owners (Founders Edition or AIB cards) can repeat this. I tested on the latest drivers, 461.41.


1 hour ago, Aroyoc said:

What is the latest consensus on Furmark? [...] With each subsequent benchmark run the average frame rate drops. [...] So what does that mean? A memory leak?

Interesting; it does sound like it's throttling slightly under sustained gaming load.

Or it could be that the GPU is boosting less as the card gets warmer.

 

In my case I run a lower GPU power limit, so my cooler doesn't have to work as hard to cool the VRAM, but it'll still be around 106-110°C under a mining workload.
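For anyone who wants to try the same thing, the limit can be set with nvidia-smi; here it's wrapped in Python. The 220 W figure is purely illustrative (pick what suits your card), and the command needs admin/root rights.

```python
# Set a reduced GPU power limit via nvidia-smi (requires admin/root).
import subprocess

# -i 0 targets the first GPU; -pl sets the power limit in watts.
subprocess.run(["nvidia-smi", "-i", "0", "-pl", "220"], check=True)

# Query the current limit to confirm the change took effect.
out = subprocess.run(
    ["nvidia-smi", "-i", "0",
     "--query-gpu=power.limit", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print("power limit now:", out.stdout.strip())
```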

 

I'm still waiting for someone like Gamers Nexus to pick up on this, but I suspect VRAM has been running this hot all along; it's just that we can read the sensor now (on G6X; VRAM before this didn't expose a junction temperature), so it's becoming known.

-sigh- feeling like I'm being too negative lately


10 hours ago, Moonzy said:

we can read the sensor now (on G6X; VRAM before this didn't expose a junction temperature)

FWIW, Navi lets you read the memory junction temp. The max T_case on the memory is 95°C for both the 5700 XT and the 3080; the max T_junction is 105°C for the 5700 XT and 110°C for the 3080.

https://linustechtips.com/topic/1299399-hot-news-more-like-hwinfo64-now-shows-hot-vram-for-rtx-30803090/?do=findComment&comment=14444605


3 minutes ago, leadeater said:

Almost every workload under BOINC would; these are all compute workloads that depend on memory performance, either latency or bandwidth.

On this note, do you or anyone you know use a 3080 or 3090 for distributed computing and face similar VRAM throttling issues?

I expect it to be more of an issue for Folding/BOINC, since the power limit is normally not set as low as for mining workloads, so the cooler has to deal with the extra heat as well.



3 minutes ago, Moonzy said:

On this note, do you or anyone you know use a 3080 or 3090 for distributed computing and face similar VRAM throttling issues?

I expect it to be more of an issue for Folding/BOINC, since the power limit is normally not set as low as for mining workloads, so the cooler has to deal with the extra heat as well.

Heh, I don't personally know anyone with an RTX 30 series GPU yet lol


Dear Linus followers,

I have a bit of something to add to this topic. Since I care about the lifespan of my cards and do a bit of mining at the same time, I have modified the VRAM cooling a bit.

I'm sitting at a 68% power limit now, -200 on the core, +1150 on the memory. The cards are Gigabyte RTX 3080 Gaming OC 10GB. Temps after 9 hours of mining are shown in the pictures; one card is a bit warmer due to its position. Core temps are higher since more heat is being transferred from the VRAM chips through the same cooling system. Happy tweaking.

 

[Attached images: temperature readouts for the two RTX 3080s after 9 hours of mining]


22 hours ago, Richieost said:

I'm sitting at a 68% power limit now, -200 on the core, +1150 on the memory. [...] Core temps are higher since more heat is being transferred from the VRAM chips through the same cooling system.

 

Interesting, you used copper shims to fill the gap between the thermal pads and the heatsink. Was any thermal paste required?


On 1/29/2021 at 6:55 PM, linuxChips2600 said:

Should we start a separate thread where we compile a list of the various RTX 3080/3090s that do NOT thermal throttle even while maxing out VRAM usage?

Actually, after taking a break from the web and looking back at this post, I realized we probably don't need a separate thread: not only would it involve more or less just two SKUs that most people don't have enough money to buy anyway (for the time being at least), but I also need to focus a lot more on my own well-being. I will still drop into this forum to check on things and comment from time to time, but I realized I may have had an unhealthy obsession with LTT before my break.

Also, the search functionality on this forum is good enough that if someone really wants to find something, I trust they'll be able to find it by searching.


I used 14x40 mm and 14x50 mm shims cut from 1 mm copper plate, with Arctic 0.5 mm pads on both sides. If I had a 1.5 mm plate, I might use a 0.5 mm pad plus paste on the other side instead. It's important to check the core clearances first with temporary paste. I moved the original pads to the factory-marked spots on the backplate so I can revert the changes later if needed. With the Arctic pads on both sides of the copper, the copper is guaranteed not to move and short any of the resistors close to the VRAM. Using a 1.5 mm plate with paste on one side would heat the core even more.

Since the VRAM is now within its recommended temperature range of 0-95°C, I'm fine with this solution.
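To put rough numbers on the pad/copper/pad sandwich, here's a sketch with assumed material values (~6 W/mK for the Arctic pads, ~400 W/mK for copper; the area matches the 14x40 mm shim size mentioned above):

```python
# 1-D thermal resistance of the pad + copper shim + pad stack.
# R = t / (k * A) per layer; layers in series add up.
area_m2 = 14e-3 * 40e-3   # 14 x 40 mm shim

layers = [
    (0.5e-3, 6.0),    # 0.5 mm Arctic pad, ~6 W/(m*K) (assumed rating)
    (1.0e-3, 400.0),  # 1 mm copper plate
    (0.5e-3, 6.0),    # 0.5 mm Arctic pad
]

total_r = sum(t / (k * area_m2) for t, k in layers)
print(f"stack resistance: {total_r:.2f} K/W")
# ~0.30 K/W total, of which the copper contributes ~0.004 K/W: the soft
# pads dominate, and the shim's job is just to bridge the mechanical gap.
```

This is also why a thin pad + shim + thin pad stack tends to beat one thick pad: less soft-pad thickness sits in the heat path.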


  • 2 months later...
  • 1 month later...

Just to add more info to this subject: 

I also have a 3090, the ROG STRIX model. I have undervolted it (875 mV), which lets the stock fan curve on the "Quiet" BIOS average around 1425 RPM. At that speed the core sits around 75°C in heavy gaming sessions, and the memory junction temp bounces between 96°C and 98°C; for some reason it never shows 97°C.

Before I used the "Quiet" BIOS, the memory junction temp just stayed at 96°C and the average fan speed was around 1650 RPM.

It seems like many are getting very different results. Kinda weird. 

PC Setup: 

HYTE Y60 White/Black + Custom ColdZero ventilation sidepanel

Intel Core i7-10700K + Corsair Hydro Series H100x

G.SKILL TridentZ RGB 32GB (F4-3600C16Q-32GTZR)

ASUS ROG STRIX RTX 3080Ti OC LC

ASUS ROG STRIX Z490-G GAMING (Wi-Fi)

Samsung EVO Plus 1TB

Samsung EVO Plus 1TB

Crucial MX500 2TB

Crucial MX300 1TB

Corsair HX1200i

 

Peripherals: 

Samsung Odyssey Neo G9 G95NC 57"

Samsung Odyssey Neo G7 32"

ASUS ROG Harpe Ace Aim Lab Edition Wireless

ASUS ROG Claymore II Wireless

ASUS ROG Sheath BLK LTD

Corsair SP2500

Beyerdynamic TYGR 300R + FiiO K7 DAC/AMP

RØDE VideoMic II + Elgato WAVE Mic Arm

 

Racing SIM Setup: 

Sim-Lab GT1 EVO Sim Racing Cockpit + Sim-Lab GT1 EVO Single Screen holder

Svive Racing D1 Seat

Samsung Odyssey G9 49"

Simagic Alpha Mini

Simagic GT4 (Dual Clutch)

CSL Elite Pedals V2

Logitech K400 Plus


9 hours ago, BetteBalterZen said:

the memory junction temp bounces between 96°C and 98°C; for some reason it never shows 97°C.

The step size is two degrees on all my G6X cards.

 

9 hours ago, BetteBalterZen said:

seems like many are getting very different results. Kinda weird. 

Here's some of mine, roughly under the same conditions and ambient temp:

 

Under full VRAM load:

3080 EVGA FTW3 = ~100°C

3x 3080 Zotac Trinity OC White/Black = ~100°C

3090 Gigabyte Gaming OC = 110°C, throttles to 275 W out of 350 W (fail)

3090 MSI Gaming X Trio = 106-108°C, no throttle

3090 GALAX Hall of Fame = 106-108°C, no throttle

 

On the Gigabyte card I changed the thermal pads and the temps went down to 92°C at stock and 106°C at full overclock; the stock thermal pads are awful, on both the front and back of the card.

 

Almost all the cards take +1500 on the memory clock, except two that can't manage it: the Zotac Black and the FTW3 3080 both top out at roughly +1300. The Zotac becomes unstable and loses performance to memory error correction, while the EVGA card just crashes the whole system.
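A practical way to find where that silent error-correction penalty starts (a sketch only; run_benchmark and apply_offset are hypothetical hooks into whatever repeatable benchmark and OC tool you use) is to step the offset up and back off as soon as the score regresses, instead of waiting for a crash:

```python
# Sketch: find where G6X memory error correction starts eating
# performance. run_benchmark() and apply_offset() are hypothetical
# stand-ins for your own score source (hashrate, game benchmark, ...)
# and overclocking tool.
def find_effective_memory_oc(offsets, run_benchmark, apply_offset):
    best_offset, best_score = 0, 0.0
    for offset in offsets:              # e.g. range(0, 1600, 100)
        apply_offset(offset)            # set +MHz on the memory clock
        score = run_benchmark()
        print(f"+{offset} MHz -> {score:.1f}")
        if score > best_score:
            best_offset, best_score = offset, score
        else:
            # Higher clock but lower score: error correction is
            # retransmitting, so the extra MHz is past the useful point.
            break
    return best_offset
```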



3 minutes ago, Moonzy said:

The step size is two degrees on all my G6X cards. [...] On the Gigabyte card I changed the thermal pads and the temps went down to 92°C at stock and 106°C at full overclock; the stock thermal pads are awful, on both the front and back of the card.

I guess ASUS did an okay job with the thermal pads, then?



Just now, BetteBalterZen said:

I guess ASUS did an okay job with the thermal pads, then?

If you're under a gaming load, then 96°C is to be expected.

 

I'm pushing the VRAM to its limit by mining ETH, which utilises the VRAM to the max.



4 hours ago, Nacht said:

Imagine burning your fingers on the backplate of an RTX 3090, expecting it to be cold

I'd expect it to be warm, but not 70°C+ warm xd

My Gigabyte card's backplate was 44°C before I changed the thermal pads; after the change it was 73°C, which means the backplate is finally pulling real heat away from the VRAM.


