
necrophagist


necrophagist's Achievements

  1. How do I know if it's overheating? I know how to find out if I'm throttling because of my core temperature, but what about VRAM? When I look at people's tests online, it's very hard to find someone who monitors or pays attention to anything other than their GPU core temperature (talking about gaming, not mining), and that seems to be all anyone cares about.
  2. It doesn't happen anywhere else. I've had my GPU for 3 years and this is the first time it's refused to boost its clock speed and even downclocked as if it was thermal throttling or something. No other app or game has ever caused it to behave this way. Whew! Thank you very much for explaining. It's surprising how difficult it can be to find info on stuff like this online!
  3. Does this happen to everyone, or? So nothing is wrong with my GPU?
  4. I decided to test FurMark for the first time ever today and noticed that as soon as I started the stress test, my RTX 2080 (1515 MHz base clock and 1710 MHz boost) only boosted up to 1590 MHz, and after a minute it downclocked to 1480 MHz, which I've literally never seen before. I was monitoring my temps and they were great, 60 degrees, and all my thermal limits were at "0", so I'm not overheating. My clock speeds boost normally in games and in other stress tests like Heaven or the GPU-Z test (up to 1920 MHz), and as per the NVIDIA boost algorithm they fall a bit when the temps get toasty, but in FurMark the clock never boosts at all, even in the first second of the stress test when the temps are as cool as possible. I'm just wondering if this is normal?
  5. Yeah, but this article refers to the new 3080, 3080 Ti and 3090 cards that have GDDR6X with a VRAM thermal sensor. My card doesn't have that sensor, so HWiNFO doesn't show me my VRAM temperature. That's why I don't understand how it's possible for other programs to show me my VRAM temp.
  6. Yeah, the warranty is long gone; I've had this card for 3 years at this point. A few times my hotspot reached 96 degrees but the thermal limit value was still "no" or "0" in HWiNFO/HWMonitor, so I really have no idea what this is. I wish someone could tell me for sure, but oh well. I guess I'll just ignore it, as I've had no issues so far. I'm going to keep hoping this is a false reading, because every 2000-series card I've seen shows the hotspot as the same as the memory temp, but none of the GDDR6X cards do. Anyway, thank you very much for replying and trying to help! I appreciate it!
  7. It's a Gigabyte Windforce 2080, non-OC. I've never experienced any throttling or performance loss and had never worried about this hotspot stuff until now, since I've just updated all of those apps and they didn't use to show these readings at all. I don't know how to check whether my specific card has a memory sensor; I've looked online but have come up with nothing. Also, if that actually IS the real VRAM temperature, when would it thermal throttle?
  8. But I thought the 2080 didn't have a memory sensor. I thought only the new GDDR6X cards had them, which is why HWiNFO doesn't show it for me but does for 3080 and 3090 owners. That's why I'm confused. For every 2000-series card the memory sensor and the hotspot read exactly the same, but for the 3080 and 3090 or AMD cards they're always different, at least from what I've seen. Could this be a false reading?
  9. HWiNFO only shows my hotspot temperature, but HWMonitor shows both my memory and my hotspot temperatures as being identical. I have a 2080. What's the difference between them?
  10. I thought the 2000 series didn't have memory sensors? HWMonitor also shows that my memory and hotspot temps are identical (2080), but whenever someone with a GDDR6X GPU shows their stats, those two numbers are different. I thought maybe this memory reading was a placeholder that simply copied the hotspot temp.
  11. Hello, CPUID HWMonitor and GPU-Z both show the hotspot and memory (I assume VRAM) temperatures as being the same; however, I've heard that they aren't the same. Also, I thought only the new 3000-series cards had VRAM sensors. Can I trust these programs? Is this hotspot temp actually my VRAM temp or not? I know it's normal for the delta between the core temp and the hotspot to be around 15 degrees, as it is with me, but I've heard that GDDR6 throttles at 95 C, which would mean I'd throttle when my core temps reach 80 degrees, right? But the throttle temp for the 2080 is 83 C, so I don't understand. Is VRAM throttling the same as normal GPU throttling? Will I throttle when my hotspot reaches 95 C even if my core temp is below the throttle temp? Thank you. P.S. HWiNFO doesn't show my memory temps, only the hotspot temps.
  12. I get that, but that doesn't explain why the app says I'm throttling when my temperatures are normal. I just tested it again and it seems to definitely be some type of bug. I ran RDR2 for 2 hours and was not hitting my temp limit, but as soon as I quit the game, the temp limit marker changed from "0" to "1" which makes zero sense.
  13. Hello! I was playing around yesterday and wanted to test my temps in Star Wars Jedi: Fallen Order by unlocking the frame rate. I usually play with vsync on because I can't stand screen tearing, but this time I decided to play without it. I've been monitoring my temps for years just in case, but yesterday in CPUID HWMonitor I saw that the max value for the GPU temp limit was "1", which means that, allegedly, at some point the GPU hit its temp limit, which my BIOS says is 83 degrees (RTX 2080). However, my maximum readings were 72 degrees core temp and 86 degrees hotspot temp. How can that be the temperature limit? Correct me if I'm wrong, but I think it's very common for most GPUs to go into the 70s under heavy load.

This started happening after I updated CPUID HWMonitor after two years of not updating it, and the only new thing I've noticed is that they've added temp readings for the hotspot. This made me believe that maybe my hotspot was too hot; however, I tested the game again, played for a much longer period than before, and the temperatures were obviously higher (77 core; 92 hotspot), but the max value for the GPU temp limit was at "0" this time, which means the GPU did not hit its temp limit at any point during that session. How can it be throttling at a lower temperature (72) but not throttling when the temperature is considerably higher (77)? None of this makes sense to me. The same thing happened a month ago with the same game. I've tested other games and have yet to run into this peculiar issue.

Does anyone have any opinion on this? It's so puzzling and I don't know what to do. I thought 72 degrees was pretty decent, but the program thinks otherwise. I'm also not experiencing any slowdowns, major performance drops, stutters or anything like that, so I really have no idea what it could be; I've chalked it up to false positives, but I'm not sure. Even when my temps were much higher in the past, when I hadn't cleaned my computer for months, I still didn't hit the GPU temp limit (my max temp was 82 back then). If this is not a false reading, then I guess I have to accept it, because I see no way to keep my GPU's temperature under 70 unless I pour liquid nitrogen on it or something.
  14. Is it normal for GPU voltage not to stay locked during heavy loads? I have an RTX 2080 and my max voltage is 1.050 V stock, but it often hovers around 1.044 V or 1.038 V and drops to 0.9 V or so depending on the game and the load, and it absolutely never stays locked from start to finish. Just checking whether this is normal behavior. Thanks.
  15. Well, the clocks are as stable as can be with this NVIDIA GPU Boost thingy making them fluctuate a bit depending on how hot the GPU is, but yeah, I don't think I have an issue with that. My temps reach 78 degrees at worst, and even though that's toasty it's well below the 83-degree default temp limit the 2080 has, so I'm not throttling. I'm worried that I might be missing out on a lot of performance if I'm not hitting the voltage limit, or that something else might be wrong. I don't know why, but I always thought every GPU had to reach its voltage limit under heavy load.
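Editor's note on the "limit" flags discussed in posts 12 and 13: monitoring tools typically derive those per-limit indicators ("0"/"1") from NVML's clocks-throttle-reasons bitmask. A minimal sketch of decoding that mask, assuming the bit values from NVIDIA's `nvml.h` header (HWMonitor's exact mapping is not documented, so treat the names below as an approximation):

```python
# Sketch: decode an NVML "clocks throttle reasons" bitmask into readable names.
# Bit values are the nvmlClocksThrottleReason* constants from nvml.h; if your
# driver version differs, treat these as assumptions.

THROTTLE_REASONS = {
    0x1:   "GPU idle",
    0x2:   "Applications clocks setting",
    0x4:   "SW power cap (power limit)",
    0x8:   "HW slowdown (thermal or power brake)",
    0x10:  "Sync boost",
    0x20:  "SW thermal slowdown",
    0x40:  "HW thermal slowdown",
    0x80:  "HW power brake slowdown",
    0x100: "Display clock setting",
}

def decode_throttle(mask: int) -> list[str]:
    """Return the human-readable reasons whose bits are set in the mask."""
    return [name for bit, name in THROTTLE_REASONS.items() if mask & bit]

# Example: a mask of 0x4 means the card is clock-limited by its power cap,
# not by temperature - consistent with FurMark downclocking at cool temps.
print(decode_throttle(0x4))
```

On a live system the mask would come from `nvmlDeviceGetCurrentClocksThrottleReasons` (via the `pynvml` package) or `nvidia-smi -q -d PERFORMANCE`; a flag flipping to "1" only at game exit, as in post 12, could simply be the brief idle bit (0x1) being counted by the tool.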
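Editor's note on post 4: FurMark is a power-virus workload, so the card hits its board power limit long before any thermal limit, and GPU Boost lowers the clock to stay under that cap. A rough back-of-envelope model (not NVIDIA's actual algorithm): dynamic power scales with roughly C·V²·f, and since GPU Boost raises voltage roughly in step with frequency, sustained power behaves approximately like f³. The demand ratio below is illustrative, not a measured figure for any card:

```python
# Rough sketch of power-limit downclocking (f proportional to P^(1/3) model).
# This is an illustrative approximation, not NVIDIA's boost algorithm.

def power_limited_clock(boost_mhz: float, demand_over_cap: float) -> float:
    """Clock the power cap allows if the workload would draw
    `demand_over_cap` times the board power limit at full boost."""
    return boost_mhz * demand_over_cap ** (-1.0 / 3.0)

# A workload needing ~2.2x the cap at a 1920 MHz boost clock would be forced
# down toward the ~1480 MHz range seen in FurMark, even at 60 C.
print(round(power_limited_clock(1920, 2.2)))
```

The takeaway matches the thread: low temperature plus reduced clocks in FurMark points at the power limiter, which is reported separately from the thermal limit in monitoring tools.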