
Falkentyne

Member
  • Posts: 1,860

Everything posted by Falkentyne

  1. First, uninstall the Nvidia drivers from Control Panel > Programs and Features. Then download Wagnard's Display Driver Uninstaller (DDU) and reboot into safe mode. Run DDU in safe mode, with your internet unplugged, to remove the leftover GPU driver files. Boot back into Windows, download the newest drivers, and install normally.
  2. So far the cards confirmed safe are the Founders Editions, Kingpin, ROG Strix, and Galax HOF cards. The only confirmed safe EVGA FTW3 cards are the revision 1.0 PCBs, as they have a new voltage controller and different power stages (no more Alpha and Omega trash). The cards that are most likely to die all seem to be using AOZ power stages.
  3. TDP Normalized has nothing to do with TDP. It's too complicated to explain fully here; I've explained it multiple times on the Nvidia reddit, techpowerup, overclock.net and elsewhere. The short version: TDP Normalized can cause a power throttle, but it does NOT respond to power-limit values below 100%. It only responds at 100% and above, tracking how far the TDP slider itself goes past 100% (up to whatever "maximum" limits the BIOS is coded with). A rough sketch of that behavior is below.
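     A minimal Python sketch of the behavior described above, assuming the normalized limit simply ignores slider values below 100% and tracks the slider above it. This is an illustration of the post's description, not Nvidia's actual firmware logic, and `bios_max_pct` is a made-up parameter name:

     ```python
     # Illustrative only -- models the description above, not Nvidia firmware.
     def normalized_limit_pct(tdp_slider_pct: float, bios_max_pct: float) -> float:
         """Effective TDP-Normalized throttle point, as a percent of stock TDP."""
         clamped = min(tdp_slider_pct, bios_max_pct)  # slider can't exceed the BIOS cap
         return max(100.0, clamped)  # values below 100% do not lower this limit

     for slider in (70, 100, 114, 150):
         print(f"slider {slider}% -> normalized limit {normalized_limit_pct(slider, 114):.0f}%")
     ```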
  4. System specs, please? A 100 Hz monitor? Who the heck has a 100 Hz monitor in 2021?
  5. I mean, they've had DDR5 engineering samples for a while. How do you think the AIBs are even able to design boards without memory? This stuff takes tons of work and constant bug reports and feedback to the chip manufacturers, with everyone under top-secret NDA. But DDR5 only became available in final form recently, and it's in very early shape, just like DDR3 and DDR4 were at their releases...
  6. This stuff has been going on for years. Premium memberships are not new. We made a stink of this when Dell started offering it, and what good did it do? Either deal with it or don't buy.
  7. How is military stuff "classified" when the military is CIVILIAN RUN to begin with? Just post all the specs on Wikipedia so we can make our own tanks, please. Cars are so last century...
  8. Shunt mod. Note that all the input power rails (except total board power) should be multiplied by 1.98x. I was too lazy to add the correction in HWiNFO; a quick sketch of the math is below.
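     A quick Python sketch of that correction, simply scaling each reported rail. The 1.98x factor is the one quoted above (it depends on your stacked-shunt values), and the rail names are illustrative:

     ```python
     # Illustrative only. After the shunt mod the controller under-reads current,
     # so every monitored input rail must be scaled back up by ~1.98x.
     # Total board power is excluded, per the note above.
     SHUNT_FACTOR = 1.98

     def true_rail_power(reported_watts: dict) -> dict:
         return {rail: (w if rail == "total_board_power" else w * SHUNT_FACTOR)
                 for rail, w in reported_watts.items()}

     readings = {"pcie_slot": 30.0, "8pin_1": 95.0, "8pin_2": 92.0,
                 "total_board_power": 430.0}
     print(true_rail_power(readings))
     ```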
  9. Not even close. EVGA 3090 FTW3s and even some 3080 FTW3s were dying like flies (OK, an exaggeration, but too many of them were dying) in *Halo MCC*, GTA5 and League of Legends, and I think there was one other. There were at least four people who had their cards die (fuse pop/smoke or red LED light of death) playing Halo, and a bunch more from the other games. Old, extremely low-load games. This was scattered across a bunch of threads on their forums. You were more likely to kill your card in these older games than in any newer high-stress game--until New World came along. And I'm going to get some hate for saying this, but Amazon isn't wrong in their statement either. If I can hammer my 3090 FE (shunt modded) with 600W and 320 amps on the GPU core voltage rail and it doesn't die, then if a 3x8-pin card dies in New World, well, it isn't Amazon's fault.
  10. If I can throw 600W TDP onto a 3090 FE and not blow anything up, I don't think it has anything to fear from New World.
  11. You can RMA it. If you're comfortable, you can disassemble it and post a picture of the thermal pads and thermal paste on the CORE side of the card, both PCB and heatsink (one or two pictures). A hotspot that high is almost certainly caused by a missing, damaged or too-thin VRM (not VRAM, but VRM) thermal pad. However, it could also be that part of the core has bad thermal paste or a bad mount. The hard part is that "Core Hotspot" is the hottest of the hotspot sensors across *BOTH* the core *AND* the VRMs, so it's often difficult to determine which one is causing it. But at 350W power draw and 77C... it could be either. Your call on what you want to do. If you are not comfortable disassembling the card, just RMA it.
  12. Oh, you're right. I meant double the IPC or close to it, not 50%. I remember now. I had a Pentium 4 Northwood (EE, because I was stupid and wasted extremely expensive amounts of money back then) at 3.7 GHz and compared it to my Core 2 Duo (before I wasted even MORE money on a Core 2 QX9650 Extreme Edition--OH, DID I TELL YOU MY CORE 2 DUO WAS AN X6800 EXTREME EDITION TOO? If I had that money back, I could have just bought a car...), and in a 100% CPU-limited game back then, my FPS was almost double. I think I had to set my Core 2 Duo to 2.0 GHz to roughly match the Pentium 4 @ 3.7 GHz in FPS (equal FPS at 2.0 vs 3.7 GHz works out to about 1.85x per clock). I forgot what I was running. Maybe it was some sort of SNES emulator, I don't remember anymore... But yes, you're right.
  13. ADL is an evolution of RKL combined with hybrid cores. Conroe (Core 2) was a ~50% IPC improvement over Pentium 4 Northwood (I believe Conroe was the evolution of the "Pentium M" line originally; I forget what the core was called back then, Merom or something). RKL is 19% IPC over any Skylake core (CML), and ADL is about the same over RKL (rough compounding below). So if you ignore that RKL existed, then yes, that's a big jump. But the ever-increasing number of "Internal parity errors" on overclocked CML chips (and even on some STOCK chips) in games makes some people VERY happy to have RKL's stability, even with all the Gearing issues with the new memory controller, which suffered because it wasn't designed directly for the RKL backport.
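     Putting rough numbers on the compounding (the two 19% figures are the ones quoted above, nothing measured here):

     ```python
     # Compounding the quoted IPC steps: Skylake -> RKL -> ADL.
     skl_to_rkl = 1.19  # RKL over any Skylake-derived core (CML)
     rkl_to_adl = 1.19  # ADL over RKL, "about the same" per the post

     total = skl_to_rkl * rkl_to_adl
     print(f"ADL vs Skylake IPC: ~{total:.2f}x (~{(total - 1) * 100:.0f}% cumulative)")
     ```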
  14. It will be very competitive. Haven't you seen the leaks? Or are you talking about power consumption? Gaming performance seems to be under as much NDA as USA's nuclear codes. It's obvious the techtubers already know but can't say anything until the embargo lifts.
  15. You can already test this in two "games", plus one in-between case, as sketched below. 1) Path of Exile with GI/Shadows quality = Ultra (GI Quality = High makes it behave more like a normal game). 2) The GPU-Z fullscreen graphics test. 3) The Metro: Exodus main menu sits in between the two (it behaves more like a normal game) and hits Normalized limits at about 570W. POE is the worst offender on GI: Ultra: it will trigger a TDP Normalized power throttle on the NVVDD and MSVDD rails even if you are nowhere close to the TDP (power limit), on cards that can run at >500W TDP or are shunt modded. Even at 400W TDP you can see the clocks drop by -100 MHz versus GI/Shadows = High at the same 400W, but in the latter case you're hitting the regular TDP limit, not Normalized, on the NVVDD voltage rail. GPU-Z will not do this: it allows more than 550W and won't throttle at all on TDP Normalized, but it will throttle on TDP% if your card's power limit is set too low, as it puts a very low load on rasterization (if your card can exceed 550W)--sort of the opposite of POE. But GPU-Z puts a higher load on the 8-pin rails (they will have worse power balance than if you were testing with POE GI: Ultra). Anyone who can pass both of these at 550W should have no problem with New World.
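     A rough Python model of the two limiters as I've described them. The split between a board-level TDP% limit and a rail-level Normalized limit is my reading of the observed behavior, not Nvidia's documented logic, and all numbers below are illustrative:

     ```python
     # Illustrative only: board-level vs rail-level throttling.
     def limiter_hit(board_w: float, board_limit_w: float, worst_rail_pct: float) -> str:
         if worst_rail_pct >= 100.0:
             return "TDP Normalized throttle (rail-level, e.g. NVVDD/MSVDD)"
         if board_w >= board_limit_w:
             return "regular TDP% throttle (board-level)"
         return "no power throttle"

     # POE GI:Ultra style load: a rail maxes out long before board power does.
     print(limiter_hit(board_w=400, board_limit_w=500, worst_rail_pct=104))
     # GPU-Z style load: board power is the binding constraint instead.
     print(limiter_hit(board_w=560, board_limit_w=550, worst_rail_pct=80))
     ```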
  16. Many 4K TVs can natively drop from 4K to 1080p without affecting the quality (besides larger pixels), because the ratio divides exactly: 3840x2160 is precisely 2x 1920x1080 in each dimension, so every 1080p pixel maps to a clean 2x2 block of panel pixels (quick check below). Try that and see if it works.
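     A quick check of the ratio argument (pure arithmetic, nothing vendor-specific):

     ```python
     # Why 1080p is clean on a 4K panel: both dimensions divide exactly,
     # so each source pixel maps to a 2x2 block (integer scaling).
     def integer_scale(panel, source):
         sx, sy = panel[0] / source[0], panel[1] / source[1]
         return (int(sx), int(sy)) if sx.is_integer() and sy.is_integer() else None

     print(integer_scale((3840, 2160), (1920, 1080)))  # (2, 2) -> clean
     print(integer_scale((3840, 2160), (2560, 1440)))  # None   -> blurry resample
     ```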
  17. Did you swap screws around and determine if it's the hole that's stripped? If the screw that wasn't 'connecting' works in another hole, then the hole's done for. Sorry, that's the limit of my knowledge. I know nothing about machining.
  18. The problem with these failures is: 1) Did it take out some of the PCB traces, turning a digital highway into a mountain when the magic smoke came out? If it did, you might as well toss the card (sell it on eBay for parts/not working, with the heatsink--you might get a few quid for it). 2) Finding out WHAT made the VRM explode to begin with: whether the VRM itself just decided to (Q)uit life, or whether you have a short to ground somewhere else (which could mean that even if you replaced the components, the same thing would happen again unless you find precisely what caused the failure). If it was just bad solder on the VRM, well, that's different. I would look for LHR 3060s or (even better) 3060 Tis, or otherwise go for a 3090 FE. These are the easiest cards to get (people don't like paying for a 3090, and people don't like the performance of a 3060 12 GB). People HAVE been getting more cards lately.
  19. Max supported temp is 93C on the core before shutdown. Hotspot temp may affect fans, but neither hotspot nor memory junction temp was even available as a reading until Martin (HWiNFO) and W1zzard (GPU-Z) found the NVAPI calls to monitor them. A 13C delta from core to hotspot could mean you need new thermal paste, could be bad or overheating VRM thermal pads (these affect the hotspot too), or could just be the stock delta. It's impossible to know without seeing results from other 3060 Tis, as the hotspot has an absolute minimum delta above the core that depends on the SKU. A rough triage sketch is below.
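     A rough triage helper based on the numbers above. The 93C shutdown figure is from the post; the 15C "suspect" threshold is my own assumption, since the normal floor varies by SKU:

     ```python
     # Illustrative triage only -- thresholds are assumptions, not Nvidia specs.
     MAX_CORE_C = 93.0  # core shutdown threshold quoted above

     def triage(core_c: float, hotspot_c: float, suspect_delta_c: float = 15.0) -> str:
         delta = hotspot_c - core_c
         if core_c >= MAX_CORE_C:
             return "core at shutdown threshold"
         if delta > suspect_delta_c:
             return f"delta {delta:.0f}C: suspect paste or VRM pad contact"
         return f"delta {delta:.0f}C: plausibly the stock delta for this SKU"

     print(triage(core_c=70.0, hotspot_c=83.0))  # the 13C case from the post
     ```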
  20. I think the term you mean is "stripped"? What is stripped, exactly? The head of the screw? The screw's threads? Or the PCIe bracket hole that the screw actually screws into on the case itself? If it's one of the first two, just use a different screw from a different section of the case. If it's the slot bracket hole itself, well... how the hell did you strip THAT? That's almost impossible to strip unless you over-tighten obsessively. The only way it can strip is if the screw is actually installed at an angle, incorrectly, instead of straight down. Installing at an angle is almost always user error, from not (very gently) tilting the video card at the back to make sure the hole is aligned perfectly. (On some cases it's slightly misaligned at stock; then all you have to do is make sure the screw goes directly into the threads as they were machined.)
  21. The fix for the internal parity error is enough reason to switch. Needing 20 threads instead of 16 threads (10 vs 8 physical cores) for gaming is like someone saying they need an RTX 3090 because the 3080's 10 GB isn't enough RAM... There's been a whole lot of discussion about that error in the six months since RKL launched, and a lot more people have encountered it in newer and newer titles, sometimes having to completely remove their overclocks to stop it. One person did a test over on OCN and determined that disabling 2 cores on the 10900K (turning it into a 9900K, ahem) greatly reduced or eliminated the parity error (except in Minecraft, which is a worst-case example of cache-based Java garbage collection at work, as it loads one full thread at 100% constantly). That's proof that Intel stretched the Skylake ring architecture to its breaking point. Yes, the 10900K clocks better on average than the 9900K, but that's to be expected from any process refinement as the silicon matures. The challenge is getting the memory latency low enough to compensate for the MC (memory controller) changes.
  22. Rocket Lake was the first non-Skylake desktop CPU in years (that's why there was an IPC increase, as well as fixes to prevent the "Internal Parity Error" encountered in quite a few recent games, and legendary in Minecraft). It just got a bad rap because it launched on a stopgap platform bridging existing Skylake-core compatibility (10900K) and Cypress Cove (basically Ice Lake running on 14nm). This caused a latency penalty due to the Gear 2 mode for DDR4 (in Gear 1 mode it was almost as low in latency as Comet Lake). ADL is going to be very interesting. But I wonder what the benchmark results would look like if you disabled all the Atom cores and compared it to Rocket Lake at the same MHz, with the huge latency penalty of early DDR5? (I'm guessing from the leaks that even on DDR4, ADL will still be about 19% faster.) I guess we'll find out in just over a month or so...
  23. It's sort of hilarious. "640K ought to be enough for anybody" is far different from "640K is all you'll ever need." The first speaks in the present tense, about the programs and applications available at the time, and is pointed more at the 8088's memory limitations relative to IBM's "home computer" competitors, the Apple II and the Commodore 64 (both limited to 64K). The Apple IIe, I think, could go up to 128K, similar to the Commodore 128 (which actually used two chips with a hardware "mux" switch for C64 mode). I mean, think about it. It's like saying, for GAMING use: "you need 128 GB of (system) RAM to play Doom Eternal."