Everything posted by LooneyJuice

  1. Well, there are many "ah-hah!"s in that statement. For one, if the guy before you ran it hard, chances are he pumped quite a bit of extra voltage through it, especially on water, which is a bigger fatigue factor than just upping clock speeds. Second, you don't know if he did stuff like running constant voltage, killing ZeroCore, and generally disabling anything that would throttle the card when not under full load (which, again, shortens the lifespan by quite a bit, even though overclocking in general doesn't really cost you that much). Additionally, blower-style reference coolers are pretty poor, yes, regardless of vendor. VisionTek merely put their sticker on those; they're basically made by AMD. So, rounding it up: high temps, some potentially beefy overvolting by the prior owner (with some extra conditions), and the combination of a reference board/reference cooler may have contributed to some degradation. I could be talking out of my ass, but if you underclock it and run the same scenarios, you'll probably gain some stability at the detriment of performance, because if it were a truly terminal issue, you'd be seeing those colored screens far more often than a handful of times.
  2. Well, it's within operational parameters, but not the type of parameters I'd ever want to be running a Tahiti core at. Joking about AMD burning stuff aside, Southern Islands weren't yet brute-forcing 28nm, and were actually pretty power-efficient and cool in comparison to their later counterparts. My 7850s ran at 65°C at max load, even in Crossfire, and machines I built with 7970s rarely exceeded that either. What's your card's vendor? Is it a reference cooler or aftermarket?
  3. I meant underclocked even compared to stock clock speeds; it's a way you can test for degradation. Also, if that's your actual temp during desktop loads, that's seriously warm. I usually feel uncomfortable when I climb into the 70s under full load. You should probably check your max load temps; if it's been climbing into the 100s regularly, that's a pretty surefire way to cause some serious degradation/failure. Got any OSD software to check while running a pretty intensive game?
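If you'd rather log a full session than watch the OSD, here's a minimal sketch of how you could pull the peak temperature out of a hardware-monitor log afterwards. It assumes you've exported a CSV-style log (MSI Afterburner, GPU-Z and HWiNFO can all log to file); the "GPU temperature" column name is an assumption, so rename it to match whatever your tool actually writes.

```python
# Minimal sketch: find the peak GPU temperature in a CSV-style monitoring log.
# The column name and log layout are assumptions -- adjust to your tool's output.
import csv

def max_gpu_temp(log_path, column="GPU temperature"):
    peak = None
    with open(log_path, newline="", encoding="utf-8", errors="ignore") as f:
        for row in csv.DictReader(f):
            try:
                temp = float(row.get(column, ""))
            except ValueError:
                continue
            peak = temp if peak is None else max(peak, temp)
    return peak

print("Peak GPU temp during the session:", max_gpu_temp("gpu_log.csv"), "C")
```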
  4. Yeah, that does sort of look like a sign that the GPU is on its last legs, or at least a sign of some degradation. Have you tried underclocking it a bit and testing stability? Noticed any weird colored blocks when the MB POSTs and you get the MSI splash screen? EDIT: Also, idling at 60-70°C is kinda cooking, tbh. 7970s, depending on vendor, should idle at around the 40°C mark.
  5. Canadians are going to get boned, as is tradition. *cough* HTC Vive *cough* I jest of course.
  6. I hope so; I'm sort of suspicious this time around because this pricing scheme is a first, but we'll see.
  7. Oh, I think they look pretty cool as well, even though I'm not always partial to aesthetic builds; it just carries way too much of a premium for poorer performance (assuming, of course, that AIB partner cards will be in the MSRP range and not overshoot the Founder's).
  8. Oh, I love these types of threads. GeForce2 MX400, followed by its GeForce4 replacement. Those were my first proper GPUs in my own systems, although I had many close encounters with the Riva TNT and the occasional Diamond Stealth when I was a toddler. I still remember Soldier of Fortune as if it was yesterday... ...sobs
  9. Would not even touch a Founder's edition card until AIB partner boards are out. Especially given how, once again, the blower cooler is thermal throttling to bits. Also something doesn't add up... LGA 1150 CPU on an LGA 1155 MB?
  10. Yeah, makes sense, and it really depends on a lot of things like hardware config, drivers, and other software running in the background. Additionally, yesterday I switched application detection completely off and haven't seen it once. Could be that, could be random. Out of sheer aching curiosity I ran a session of The Division with my safe overclock, which is still more than a lot of samples get (1525MHz core / 8000MHz memory), just to stress it a little more and see whether it would trigger anything. Nada, zip, nothing. So this could be a bit of a false alarm. I just hope that if it shits itself, it does it under warranty. Thanks a lot for the input though, much appreciated!
  11. I will agree with this, but only to an extent; there are definitely other issues and analogies that need to be taken into account. As stated above, it's more a case of AMD supporting older products especially well, rather than NV nerfing theirs. I'll also attest that even my old 7850s benefited from recent driver releases, even though they're at the very tail end of the current support spectrum.

I personally had to go for a 970 out of necessity (EDIT: instead of the Sapphire Nitro R9 390 rev 2 I was planning on buying) after the failure of one of my two 7850s, because of my PSU. The R9 390's base performance is better than most 970s, and the frame buffer is insane, not to mention the greater bandwidth, but the R9 390 has a greater TDP than even the 980 Ti, and we're not even close to the same price range or performance. I don't care about perf/watt, but my poor old PSU had to settle for a 970, because that's the most I wanted to risk.

On the other hand, most results don't factor in OC potential, which nowadays has such a negligible lifespan cost that cards will outlast their usefulness anyway, so it's a selling point. Case in point: the Sapphire R9 390 rev 2 vs my own G1 970, which, overclocked, was about 5% faster than even the 390's max OC in a lot of scenarios and synthetic benchmarks. Not to mention it didn't even dent my temps.

This also coincides with AMD's driver-release recovery after an abysmal three quarters of 2015. After the release of the Omega drivers (14.12), their release schedule and support were so poor, especially in many of my scenarios, that I was considering grabbing an NV card just to alleviate a massive array of issues. I won't even mention Xfire, but I put the blame squarely on myself for ever believing I could live with multi-GPU.

So, basically, the R9 200s weren't abysmal, but they were definitely not up to par with NV's stuff due to the usual issues with thermals, driver support and all that. The R9 300s came along (which I regard as more of a refresh than a rebrand) and, admittedly thanks to higher clock speeds and slightly more OC potential, kinda proved that there had at least been some streamlining or fabrication refinement. That also coincided with the formation of the Radeon Technologies Group as a much-needed acknowledgement of AMD's - up until then - poor support for their desktop graphics products, which promptly revised the dated Catalyst platform and came out with the Crimsons. Teething issues with the first couple of releases aside, they have been pumping out releases with every batch of game launches since last autumn and have provided excellent support for the most part. Even before the death of one of my 7850s (which, tbh, I regarded as a legacy product regardless of the explicit support), I saw real gains in a lot of titles in performance, stability, and even Crossfire support, which I never thought I'd say.

So, TL;DR: the premise of this thread is true, but it's more in favor of AMD than against NV. There have been many changes in the industry in the meantime that have to be taken into account, and all of this is still subject to change. AMD hands down has better support for older products, I'll give them that much, and even though I have zero loyalty to any brand (it's not like loyalty ever gave you free GPUs), I have to give them a cookie for that one.
  12. The scene complexity variance in TW3 is huge. Honestly, I wouldn't even trust two different benchmarks on the same machine if they didn't take place in the same scene, weather conditions, time of day, etc. I'm on a G1 970 at 1080p, and 90% of the time I'm stuck at 60 (maxed besides foliage distance, no HairWorks). But then I walk into a dense, windy forest chasing a bear and I drop into the 40s. So, to maintain 60 fps at 1080p in that scene (with a couple of frames of margin on top), you'd need a good 50-55% bump in power. Depending on the 980 Ti, that's the sort of bump over an aftermarket 970 you can expect to see. So, TL;DR: for 1080p, yes, you can maintain 60 maxed; for 1440p, probably not.
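For what it's worth, that 50-55% figure is just ratio arithmetic on the worst-case scene, assuming it's purely GPU-bound (an assumption, not a guarantee). A quick sketch of the sums:

```python
# Required GPU throughput increase to go from a worst-case fps to a target fps,
# assuming the scene is GPU-bound. The numbers mirror the ~40 fps forest scene
# and a 60 fps target with a small margin.
def required_uplift_pct(current_fps, target_fps):
    return (target_fps / current_fps - 1.0) * 100.0

print(round(required_uplift_pct(40, 60)))  # ~50% for a bare 60 fps
print(round(required_uplift_pct(40, 62)))  # ~55% with a couple of frames of headroom
```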
  13. I haven't underclocked the card manually, but I have noticed it once or twice in Assetto Corsa, again when usage was low enough while racing for GPU Boost to throttle down a bit. So I've seen it even at, say, 1178MHz (this card's base clock) and not just 1392MHz (this card's actual boost clock). Again, I've gone on long runs of benchmarks/games with no effect whatsoever, and then suddenly I'll see this one little red line in the corner of my screen. One thing I'm investigating, though, is the chance that I'm getting the little artifact as a result of a momentary hitch when the MSI AB (RTSS) OSD fires up while detecting the application.
  14. Hello dudes/dudettes. First off, the GPU is a G1 Gaming 970 on 365.19, and I want to ask if anyone's encountered this. Very rarely, even when I'm browsing in Chrome or Steam, I get these weird, momentary, elongated red flashes/lines for a split second on one side of the screen. Sometimes it happens when a video is playing, sometimes when I'm scrolling, almost completely randomly, very rarely, and only for an instant. I cannot for the life of me find any kind of documentation online other than connector issues (which I've checked thoroughly and reconnected last night; the pages I found were about rolling red lines everywhere, which I don't have), which is why I'm asking.

The card's a couple of months old, and aside from a two- or three-day period during which I was testing OC profiles for absolute stability, I've been running stock for the most part, so it hasn't been tortured or anything: stock BIOS, no silly overvoltage. I've done long sessions of stuff like Metro Last Light, Doom and The Division, uncapped framerate for testing and extra stress, and nothing; no artifacting in Heaven, Valley or Firestrike either. The only instances during which I've seen those tiny red flashing lines are desktop loads and Assetto Corsa (a fairly intensive driving game, but nothing like The Witcher 3). Again, connectors are checked, stock clocks.

The only reasons I can think of, likely or unlikely, are as follows:
1. A dropped frame/momentary stutter without Vsync maybe exposed a bit of a previous frame (sort of grasping at straws, but the only 3D application I've seen it in is Assetto Corsa)
2. A failing power strip, inconsistent power delivery (although the PSU appears fine; voltage droop is minimal under load, even for an old TT PSU)
3. Interference from some electrical appliance/switch nearby switching on or drawing more power
4. Last but not least (and I sure as f*ck hope that's not the case), very slowly dying VRAM

I'm mainly asking if anyone's encountered this and, if so, what proved to be the cause. I can find nothing else pertaining to this issue. Many thanks to anyone chiming in.
  15. I can sort of justify it in that sense, I guess. If you've tailored your usage around specific pieces of software (AAA titles that are better SLI candidates, for example) and you're trying to hit a framerate target and give yourself some headroom, then it sort of makes sense. On the other hand, we're talking about this right around Pascal availability, and even a single 1080 pips a 980 Ti, although admittedly not by anywhere near enough of a difference to necessitate another upgrade now. If you can hold on for maybe a 1080 Ti / GP100, I'd wager that's a better option, even if you're mainly going to be playing AAA titles, because as we've seen, many of them these days are an occasional messy bag of crap that's far from catering to more niche tech like SLI. Again, I'm thinking more bulletproof, hassle-free operation regardless of scenario, rather than peak performance in fewer cases for the immediate future.
  16. Did you OC both memory and core? Admittedly, the gains depend on the software too. Also, SLI is another bag of cats entirely. It seems like a good idea at the time, but a lot of the multi-GPU rendering implementation is up to the software devs and not Nvidia, and when it doesn't work, it's wasted hardware. In fact, I mentioned this in another thread, so: additionally, no, you very rarely get anywhere near twice the performance. If you're lucky you can maybe expect a 70-80% bump, but that's about it, and synthetic benchmarks will always show better scaling than actual games. So, if you want to do it just to push consistently past 144 in a few of your titles, it's a question of whether those titles even support SLI. In any case, I wouldn't recommend it, as SLI is very temperamental. Other than that, it's your budget, and if you want to just grab a second one, by all means.
  17. Yeah, it doesn't sound like much, granted, but if you gain, say, a 10% bump, and you're ever in a pickle with a game due to performance, even that 10% means 55 FPS becomes 60, or a marginal 60 stays comfortably above it. That's my reason for doing it. Others go for pure benchmark scores, or the knowledge that they've extracted every ounce of performance. And 10% is usually the ballpark average gain on Maxwell (assuming, of course, you're not already on a very highly overclocked edition like an EVGA FTW, for example). But in any case, that's the gist of it: you saw how it works, you went through the methodology, and if and when you ever need the extra boost, it's there. Nonetheless, you're still on what is essentially a flagship product, and even with the advent of Pascal you're still going to be riding up there, so yeah, it's not like you're hurting.
  18. First off, I'm not trying to rain on your parade, but there are plenty of warnings out there, so I'm only going to reiterate them. SLI (or Xfire, for that matter) can be a great way to increase performance when you want to add another GPU of the same type to your system, and it sounds like a great idea, but only if it works and scales properly. When it doesn't, that's when you get diminishing returns, and for any amount of time you can't use the second GPU, you basically have a very expensive heating unit in your case. The most expensive single GPU you can buy at any time is always going to be the better, hassle-free option, because you'll have an overall more powerful single GPU running at full capacity rather than a more "mediocre" single GPU running out of two in SLI when the game does not support it. So, yes, to answer your question, SLI isn't supported in as many games as you thought, unfortunately. It's a very niche market according to the numbers, most devs prefer to code for one GPU, and there are many other caveats to being an SLI setup owner. That being said, you already pulled the trigger, so you can only make the best of it, and when it does work, you will see the performance gain.

For starters, as stated above, it really works on a game-by-game basis. You now have to stay on top of driver releases, which may contain SLI profiles. Additionally, you'll sometimes have to do some snooping around to see whether a game implicitly or explicitly supports SLI. If it's implicit, you can sometimes get some performance out of the SLI AFR (Alternate Frame Rendering) submodes, or you can even dedicate the second GPU to PhysX; it's not hard to do, and the SLI rendering mode is in one of your screenshots even. If a game explicitly supports SLI, then it's all supposed to work out of the box. Other than that, you don't have to worry about the other quality settings in there unless there's a specific problem with a game; SLI will not change basic functionality, and if you just don't have the horsepower for something, you tweak in-game settings as usual.

Also, how are you monitoring VRAM usage? If it's via MSI AB (or RTSS, for that matter), then in SLI/Xfire, depending on the title, it might report double the VRAM usage because of the two cards, even though you still only have 4GB effective. So if AB reports 4.2GB of VRAM usage, for example, you're actually using about 2.1GB.

So the gist of it is:
1. Stay on top of driver releases
2. Check which games support SLI
3. If a game does not have explicit support, play with the SLI submodes in its game profile and observe performance
4. Worst case, some games will be so bad with it on that you'll have to switch it off completely via the game profile

Note: I hope you also have the motherboard for it, because if I recall correctly, SLI, unlike Xfire, does not want to work on anything lower than an x8 PCIe slot, and some budget boards really kill that second slot. And because I haven't done SLI for a while, and my prior multi-GPU setup was using Xfire, if anyone else spots something I've missed, please chime in.
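On the doubled VRAM readout specifically, here's a tiny sketch of the correction, purely as an illustration; whether the overlay actually reports the sum of both GPUs depends on the tool and the title, so treat the division by two as an assumption rather than a rule:

```python
# Approximate per-card VRAM use when an overlay sums both GPUs' allocations
# in SLI/AFR. With AFR each card mirrors roughly the same data, so the
# reported total can be about double what either 4GB card is really holding.
def effective_vram_gb(reported_gb, gpu_count=2):
    return reported_gb / gpu_count

print(effective_vram_gb(4.2))  # ~2.1 GB actually in use per card
```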
  19. If that was on stock voltage, that's it; that's as far as it'll go. Also, once you deem you're at a stable enough clock speed, you have to loop something like Heaven for a good half hour to even have a rough idea of how stable it is. You then have to note what core clock speed you start crashing at, so you can return to that point on higher voltage if you want to. You basically add voltage until you're stable and no longer get artifacting where you used to, which may in turn give you a bit more headroom to push a little further. Also, I have to stress that thanks to GPU Boost, voltage goes up in steps: in my case, my max stock voltage is 1.225V, and the max I can ever reach with a manual offset is 1.250V, with nothing in between. So my card only accepts offsets of +25mV and above. Use either the OSD or the voltage monitoring in something like MSI AB/EVGA Precision to see what your voltage is doing. Disclaimer: it's excess voltage that wears down your card, but depending on the version you have, you may have very little to play with anyway, so the card will still outlive its usefulness. Nonetheless, playing with it is entirely your responsibility.
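To illustrate the stepping behavior (a rough sketch only; the 1.225V stock maximum, 25mV step and 1.250V ceiling are the values from my particular card, so treat them as placeholders for whatever your BIOS exposes):

```python
# Requested offsets get quantized to whole voltage steps and clamped to the
# BIOS cap, so small offsets do nothing and big ones don't go past the ceiling.
def applied_voltage(offset_mv, stock_max_v=1.225, step_mv=25, cap_v=1.250):
    steps = offset_mv // step_mv              # only whole steps register
    return min(stock_max_v + steps * step_mv / 1000.0, cap_v)

print(applied_voltage(10))   # 1.225 -- below one step, nothing changes
print(applied_voltage(25))   # 1.25
print(applied_voltage(87))   # 1.25 -- clamped to the cap despite the bigger offset
```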
  20. No real loyalty, to be honest; it kind of depends on the situation. Yes, EVGA has great support and warranty, and they don't do the whole "it's an OC card, but woe betide you if you kill it while overclocking" thing. They have occasionally somewhat botched it, though, like anyone, I guess. Case in point: for a 970, I'd go with Gigabyte every time. For higher-end stuff, probably EVGA, since a 980 Ti is more of an investment than a 970 or a 980. So, mid-range: Gigabyte. High end/flagship: EVGA/Gigabyte. Although, if support weren't tipping the scales, I'd probably go for GB construction every time; I honestly cannot fault their cooling and build quality, aside from maybe clashing with some aesthetic builds. Both bin chips well enough though, and overclocking is always a good selling point for free performance.
  21. Honestly, there are many ways to do this, and no clear consensus; I'll chime in, then two others will probably chime in with a different method. For starters, with Maxwell cards the gains from memory overclocking can be pretty freakin' substantial, so if you wanna eke out as much as possible, you do both. The story doesn't really end there, though.

Software: I don't know how familiar you are, but there are a few pieces of software you can use. The obvious choices are usually MSI AB (my favorite) and EVGA Precision. Additionally, there's OC Guru II by GB for their cards.

Methods: Regardless of method, it's always best to initially find your core limit on stock voltage, and then ramp the voltage up to alleviate instability/artifacts.
1. Some will say to first OC the core to instability (in +10 or +20 increments) and then back down (again -10/-20) until you're stable, with stock memory. They then follow up with a memory OC until instability/artifacting with the stock core clock (with an initial bump of maybe +200MHz, since memory can be overclocked significantly higher, and then continuing in 50/25MHz increments), and again back down until the effects/instability subside completely. They then test both together and back down incrementally (maybe -5 core / -25 mem), since most of the time you can't run the max OC on both due to voltage, power limits or just instability.
2. Others say to OC the core first until you find your max stable clock speed, and then start pushing your memory (same increments). I tend to go with this one depending on the board/cooler. The reason is that, although the gains from memory overclocking on Maxwell can be pretty damn big, most of the heavy-duty cooling and the manual voltage control are focused on the core, whereas memory cooling may be subpar depending on the cooling solution. So I do it for peace of mind, even knowing how resilient components are nowadays. Additionally, this method helps you eke out some extra MHz from the core, which has a better FPS-per-clock ratio, whereas an extremely silly memory OC may cause general instability and force you to back down both core and memory clocks. Case in point, my G1 970. (There's a rough sketch of this step-up/back-down loop at the end of this post.)

My results: I came up with two OC presets to test this. One preset was 1505 core boost / 8430 mem (yes, very lucky with the memory sample, not even artifacting at +710MHz); the Heaven average framerate for 1080p Extreme was 68.5. The second preset was 1525 core boost / 8310 mem, with an identical Heaven score. And the story remained the same as I moved the balance over to the core, which gave me more peace of mind because of the core cooling. Same deal with 2x MSI 980s I was OCing for a friend's build; granted, they are GM204 not GM200, but the theory is the same.

Again, as a lot of people will say, I cannot stress enough how big a role the silicon lottery plays. Some get silly stable core overclocks, some get silly memory overclocks, some even get both. You have to do your own testing to determine your sample's limit. Also bear in mind that on a stock GPU BIOS you have limited voltage control regardless of OC software, so it's virtually impossible to fry your hardware. The same goes for damage from overclocking: instability will kick in sooner than damage. That doesn't mean you should go crazy, though, hence the methodology.

Testing: Honestly, there's no fool-proof benchmark to test instability. You have to test on different software and be very observant. Sometimes you'll get cues before instability, like artifacts, and sometimes you'll get no warning before a crash.
Occasionally there are instances where you may be artifacting but your benchmark will keep looping for ages. Case in point: testing in Heaven, I was letting it run for half an hour. It ran fine, no crashes, but on one of the loops I noticed that as it went past that copper-domed tower with the round windows, I started seeing small blue flashes/errors on the glass as I flew past; backing down the core clock stopped that completely. So again, you have to be very observant and patient. Marginally unstable core clocks may manifest as missing polygons and rendering errors; marginal or unstable memory overclocks can manifest as momentary flashes, "snow", red stripes flashing across the screen, or even changes in coloration. You can usually get a pretty good warning before crashes, grey screens, black screens and the like. Point is, the most time-consuming aspect of any overclocking process is testing.

I tend to use the typical synthetic benchmarks like Heaven or Valley to test initial stability (I avoid Firestrike for anything other than scores, since it cannot be looped to test stability), and follow up with demanding game benchmarks like the Metro Last Light benchmark, long sessions in The Division, some Witcher 3, GTA V, and sometimes even lighter titles, which can expose instability when GPU Boost sits at a certain voltage step. I tend to avoid stuff like FurMark, as it's a completely unrealistic load that also taxes your VRM a lot. All this is essentially to validate that you have a usable overclock; I personally find no use in an overclock that can barely hang on through a Firestrike run just for the score. It has to be usable every day. Hope this wall of text answers some questions about how to go about overclocking your 980 Ti.
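To make the step-up/back-down loop from method 2 concrete, here's a minimal sketch of the logic only. apply_offsets() and run_stress_loop() are hypothetical stand-ins for "set the offsets in your OC tool" and "loop Heaven or a game for a while and report whether it stayed clean"; neither exists as a ready-made API, so this is the methodology in code form rather than something you can point at MSI AB as-is.

```python
# Sketch of the incremental OC search: climb in small steps until the stress
# loop fails, back off, repeat for memory, then sanity-check the combined
# profile and back both down until it holds.
def find_limit(apply, is_stable, step, back_off):
    offset = 0
    while True:
        apply(offset + step)
        if not is_stable():              # crash or artifacting: stop climbing
            break
        offset += step
    return max(offset - back_off, 0)     # last good offset minus a safety margin

def overclock(apply_offsets, run_stress_loop):
    core = find_limit(lambda c: apply_offsets(core=c, mem=0),
                      run_stress_loop, step=20, back_off=20)
    mem = 200 + find_limit(lambda m: apply_offsets(core=core, mem=200 + m),
                           run_stress_loop, step=25, back_off=25)
    apply_offsets(core=core, mem=mem)
    while not run_stress_loop():         # max core + max mem together rarely hold
        core, mem = core - 5, mem - 25
        apply_offsets(core=core, mem=mem)
    return core, mem
```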
  22. How have you checked both cards individually? Additionally, it could be a busted crossfire bridge.
  23. Also, regarding your settings: occasionally, GPU Boost can cause instability even when not at max boost clock, due to marginal voltage, momentary droop and all that. I'm not quite sure what the voltage limit on that specific card is, but the +87mV voltage offset limit essentially never gives you +87mV; at best, it takes you to the max voltage allowed by the BIOS. In my case (G1 970) it just goes to a max of 1.250V, a mere +25mV over the max observed voltage under load (1.225V). The same applies to, say, the MSI 980 and most 900-series cards I've had to deal with, give or take a few mV depending on the chip. Additionally, they may only go up in increments; mine was either 1.225V or 1.250V, nothing in between. The only way to overvolt past those limits is with a custom BIOS, like Imakuni mentioned. So a lack of voltage (your offset not registering) may be the culprit, and it's worth investigating after, of course, you've checked everything else. Edit: Reminder, MSI AB monitors voltage as well.
  24. Cannot stress enough how much overclockability is the luck of the draw. Also, this. There's only so far you can go. EVGA advertises their binning like Gigabyte does, for example, just higher up the stack, and it basically means just this: the card is already factory overclocked and that factory overclock is basically guaranteed, but because of it, there's not much left after that. Also, similar to this: either try a minimal OC or stock clocks to confirm the stability issue has nothing to do with the OC. Additionally, I don't know how you go about doing things, but the core alone may have limit A and the memory alone may have limit B; that doesn't mean you can necessarily use both max values together while maintaining stability, so you have to start backing down. Edit: Additionally, stating this for clarity, different applications will stress your card differently; a typical example is an OC that loops Valley without crashing but takes a dive within five scenes of Heaven. I aim for stability under every single scenario, otherwise, in my opinion, you don't have a stable OC.
  25. If you aren't already, use MSI Afterburner to monitor usage (VRAM usage etc.). Massive stutters during what is otherwise a very consistent framerate are usually an indication that you're out of VRAM, unless there's another issue (yes, even with your 6GB it's possible, and GTA V with the advanced options on is a memory hog, not to mention the in-game memory usage bar is unreliable). If that's the case, you'll have to keep turning some of the advanced stuff (or other options) down until you drop below 6GB with a bit of a safety margin in all scenarios.
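If you log a session to file from Afterburner/RTSS, here's a minimal sketch of how you could spot that pattern afterwards. The "Memory usage" (MB) and "Frametime" (ms) column names are assumptions; rename them to match whatever your log actually contains.

```python
# Flag log samples where a big frametime spike coincides with VRAM sitting
# near the 6GB budget -- the "consistent fps but huge hitches" pattern above.
import csv

def flag_vram_stutters(log_path, vram_budget_mb=6144, margin_mb=256, spike_ms=40):
    hits = []
    with open(log_path, newline="", encoding="utf-8", errors="ignore") as f:
        for i, row in enumerate(csv.DictReader(f)):
            try:
                vram = float(row["Memory usage"])
                frametime = float(row["Frametime"])
            except (KeyError, ValueError):
                continue
            if frametime >= spike_ms and vram >= vram_budget_mb - margin_mb:
                hits.append((i, vram, frametime))
    return hits

for sample, vram, ft in flag_vram_stutters("gtav_log.csv"):
    print(f"sample {sample}: {vram:.0f} MB VRAM, {ft:.1f} ms frame")
```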