Astyanax

Member
  • Posts

    34
  • Joined

  • Last visited


Astyanax's Achievements

  1. Just a heads up: the most recent refresh that changed the NAND type (to 176-layer TLC) on the 1TB and 2TB models also cut the total DRAM cache to 512MB for every capacity tier of 1TB and above. There has never been a 4TB iteration with 4GB of DRAM cache.
  2. This is caused by whatever software your motherboard, headset or sound card vendor has built Nahimic into, if they bothered rebranding it at all. Get rid of Nahimic; it's buggy trash.
  3. You will need to work your way through c:\$WINDOWS.~BT\Sources\Panther\setupact.log to find the culprits; MySQL, Intel Support Assistant and PowerDVD are among the applications reported to block the upgrade. c:\$WINDOWS.~BT\Sources\Panther\setuperr.log is an attempt to cut down on the mass of information, but it is often inconclusive as to where the failure occurred. Using msconfig to set a diagnostic startup is usually enough to get past this issue, because it is caused by third party services being started during the OOBE migration phase; it has literally nothing to do with device drivers, as those were already migrated during the First Boot migration phase. (A small log-scanning sketch is included after this post list.)
  4. This is incorrect: TestMem5 will force applications to page out in order to test all memory, and TestMem5 and Prime95 are better at finding heat-related memory stability issues than anything you can boot into.
  5. Should be fixed in NVIDIA driver 526.
  6. I've been digging into this for some time. There is an object leak occurring in the desktop explorer.exe process on some machines that is antagonised by things like DPI changes; it is GPU driver agnostic, but having the shell on a different GPU would mask the issue by preventing it from affecting game function. I have already contacted Renton about it on reddit, but if anyone else is seeing the same thing, can they list things such as their desktop configuration, the shell extensions and software in use, and whether or not it is an upgrade from 7 or 8, etc. There's a missing variable here, as the issue still replicates on a diagnostic startup while not being present at all on a second machine. Someone else's resolution involved changing the default image handler (which also changes the in-use shell thumbnail provider), but I've had no luck there thus far, and I have also gone through and disabled all shell extensions. (A handle-count monitoring sketch is included after this post list.)
  7. This BSODing is due to an incompatibility between the PCIe ASPM implementation on some motherboards and the 980 Pro: the 980 Pro goes into a low power state, a port reset is issued and the volume handles are invalidated, which causes the kernel to generate a critical exception and throw the WHEA error. The solution is to either set Link State Power Management (LSPM) to Off in the Windows power profile, or disable PCIe ASPM in the BIOS (see the powercfg sketch after this post list).
  8. UEFI/BIOS rebuilds the DMI and nvcache on hardware changes, and ASUS doesn't change CVAR casings or names between firmware revisions; it had nothing to do with lane count settings and everything to do with buggy ASPM reducing the PCIe clock generator and link width. UEFI has no module that "scans" and "sets" a child device to correct settings beyond the CPU bootstrap and memory training. Your experience is irrelevant: there are considerably more 980 Pro users that did (and still do) have the issue with ASPM on Zen, X99 and X299 platforms, and you might have self-mitigated the issue by setting your LSPM power setting to Off. Yes, you can change them by enabling hidden menus with BIOS mods, but in the absence of those mods, not being able to change them to a reduced total in the first place means it had nothing to do with the lane config in the NVRAM; those have predefined defaults that don't change just because you didn't have a device in the slot previously.
  9. The issue was that you had PCIe ASPM enabled for that particular port, which the BIOS reset turned off; the 980 Pros have both performance and reliability issues with this feature turned on.
  10. AMD turned down AER reporting in AGESA 1.0.3 for Zen 2 and later; it only logs fatal errors now.
  11. This stutter is the result of a faulty CPU; it occurs during spikes of correctable PCIe bus errors, which are inherent to the Ryzen platform.
  12. As shrimpbrime doesn't actually appear to know what he's on about when interpreting the results of LatencyMon, and is wrong about hard page faults, and this is a top result for interrupt-to-process latency, I'm gonna chime in here. Hard page faults are a fact of life, and it is not possible to achieve 0 hard page faults on a system running any antivirus or antimalware utilities. The correct and only way to diagnose this is to use Process Explorer, start closing applications using a consistent above-1% amount of CPU, and keep closing applications until the Interrupts process CPU usage drops (a process-listing sketch is included after this post list). PS: a USB cable for a wireless mouse, or an extension with nothing connected, can result in bus noise that spikes this value above 10k.
  13. The connector plastics and metals do not melt just because you exceed 150W. Melting of a molex connection is the result of a high-resistance fault (heat dissipated in the contact, P = I²R) caused by loose fittings and dirty or degraded (oxidized) terminals. The terminals and connectors on the cable should always be rated in line with, or better than, the cable you're using; any legitimate overdraw (e.g. daisy-chaining a 3080) will heat the cable before the terminals themselves and show measurable voltage drop well before anything begins to melt.
      The PCI-SIG could decide, as soon as the next spec, that PCIe boards can take 250W per molex 8-pin without adding a single extra live pin, just by raising their artificial limit on how many amps are drawn per pin (the current limit is 4A). The PCIe ATX spec has only ever been about the device's power draw per each connector on the card; who makes the cable, the PSU, etc. is irrelevant. Where those are relevant is when feeding a high-current multi-socket device: a PSU vendor that has used 9A-capable 12V cables, terminated them properly and mounted them into a PSU backplane capable of outputting the entire draw can pull at least 300W down that cable across two 8-pin connectors. This isn't likely to happen if a graphics card is 300W; the PCIe slot makes it more likely that 130W comes down the cables and 40W (to a max of 66W) from the slot (give or take the card vendor's power balancing). EVGA recently fucked this up with a few of the 30 series products and couldn't be bothered to put out a recall notice, so if you have an EVGA card, take a peek at the power (W) usage with GPU-Z.
      PCIe delivers 66W from the slot if you're only counting the 12V lines when counting supply power for a graphics card. You would have to buy a timebomb to only have 150W available at the PSU side: the PCIe spec only accounts for the minimum supply power at the PSU side and the maximum draw power at the device side. What the PSU can actually supply is entirely up to the PSU vendor in its design of the PSU and the cable quality; a single-rail PSU built to supply 8A per pin, with wiring adequate to that spec, could supply enough for the basic operations of a 3080, so long as the OCP for that socket on the PSU is set so as not to trip at the transients. Become a PCI-SIG member if you want the data, it is locked behind membership. Buildzoid is not a qualified electrical engineer; this is the blind trusting the blind. (A per-connector power arithmetic sketch is included after this post list.)
  14. d3 and d4 are not a cause for concern at all; these are informational LED codes.
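
For post 3 above: a minimal Python sketch of pulling suspicious lines out of setupact.log, assuming that keyword matching on "error"/"failed"/"blocked" is enough to shortlist the culprit. The path is the one from the post; the keyword list and the encoding guess are mine, so adjust both to what your log actually contains.

    # Minimal sketch: shortlist suspicious lines from the Windows upgrade setup log.
    # The keyword list and the encoding are assumptions, not an official marker list.
    from pathlib import Path

    LOG = Path(r"c:\$WINDOWS.~BT\Sources\Panther\setupact.log")
    KEYWORDS = ("error", "failed", "blocked", "abort")

    def suspicious_lines(log_path: Path):
        # The log is huge, so stream it and keep only lines containing a keyword.
        with log_path.open("r", encoding="utf-8", errors="ignore") as fh:
            for lineno, line in enumerate(fh, 1):
                lowered = line.lower()
                if any(key in lowered for key in KEYWORDS):
                    yield lineno, line.rstrip()

    if __name__ == "__main__":
        for lineno, line in suspicious_lines(LOG):
            print(f"{lineno}: {line}")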
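For post 6 above: a rough sketch (assuming Windows and the third-party psutil package) that polls the handle count of every explorer.exe process, since a leak shows up as a count that only ever climbs. Handle count is just one proxy for the object leak; the GDI/USER object columns in Process Explorer tell the same story.

    # Rough sketch: log explorer.exe handle counts over time to spot an object leak.
    # Requires Windows and psutil (pip install psutil); num_handles() is Windows-only.
    import time
    import psutil

    def explorer_processes():
        # There can be more than one explorer.exe; report each separately.
        return [p for p in psutil.process_iter(["name"])
                if (p.info["name"] or "").lower() == "explorer.exe"]

    def watch(interval_s: int = 60):
        while True:
            for proc in explorer_processes():
                try:
                    # A steadily climbing handle count is the leak signature.
                    print(f"pid {proc.pid}: {proc.num_handles()} handles")
                except psutil.NoSuchProcess:
                    pass  # explorer restarted between listing and query
            time.sleep(interval_s)

    if __name__ == "__main__":
        watch()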
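For posts 7-9 above: what "set LSPM to Off" looks like from an elevated prompt, wrapped in a small Python sketch. SUB_PCIEXPRESS and ASPM are powercfg's aliases for the PCI Express > Link State Power Management setting and value 0 is Off; run powercfg /aliases yourself if you want to confirm them before trusting this.

    # Sketch: force PCIe Link State Power Management to Off on the active power plan.
    # Needs an elevated (administrator) session on Windows.
    import subprocess

    def run(cmd):
        print(">", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def disable_pcie_lspm():
        # Apply to both the plugged-in (AC) and battery (DC) values of the current scheme.
        for mode in ("/setacvalueindex", "/setdcvalueindex"):
            # SUB_PCIEXPRESS / ASPM = "PCI Express -> Link State Power Management"; 0 = Off.
            run(["powercfg", mode, "SCHEME_CURRENT", "SUB_PCIEXPRESS", "ASPM", "0"])
        run(["powercfg", "/setactive", "SCHEME_CURRENT"])  # re-apply so the change takes effect

    if __name__ == "__main__":
        disable_pcie_lspm()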
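For post 12 above: a small helper (again assuming psutil) that lists every process sitting above roughly 1% CPU over a short sample window, which is the same shortlist you would build by eye in Process Explorer before you start closing things and watching the Interrupts line.

    # Sketch: list processes using more than ~1% CPU over a short sample window.
    # These are the candidates to close while watching the Interrupts process.
    import time
    import psutil

    def busy_processes(threshold_pct: float = 1.0, sample_s: float = 5.0):
        procs = list(psutil.process_iter(["name"]))
        for p in procs:
            try:
                p.cpu_percent(None)      # prime the counter; the first call returns 0.0
            except psutil.NoSuchProcess:
                pass
        time.sleep(sample_s)             # let usage accumulate over the sample window
        results = []
        for p in procs:
            try:
                pct = p.cpu_percent(None)  # usage since the priming call
                name = p.info["name"]
            except psutil.NoSuchProcess:
                continue
            if pct > threshold_pct:
                results.append((pct, p.pid, name))
        return sorted(results, reverse=True)

    if __name__ == "__main__":
        for pct, pid, name in busy_processes():
            print(f"{pct:5.1f}%  pid {pid:<6} {name}")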
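For post 13 above: the per-connector arithmetic behind the 150W figure, as a tiny sketch. The 3 live 12V pins of an 8-pin PCIe connector and the 4A/7A/9A per-pin figures follow the post; treat the numbers as illustration rather than spec text.

    # Sketch: connector power is just volts * amps-per-pin * live pins; the cap is a
    # per-pin current limit, not anything physical about the plastic shell.

    def connector_watts(live_pins: int, amps_per_pin: float, volts: float = 12.0) -> float:
        return live_pins * amps_per_pin * volts

    # An 8-pin PCIe power connector carries 12V on 3 live pins.
    print(connector_watts(3, 4.0))   # 144 W -> rounds up to the 150W spec figure
    print(connector_watts(3, 7.0))   # 252 W -> roughly the hypothetical 250W per 8-pin
    print(connector_watts(3, 9.0))   # 324 W -> what 9A-capable wiring could carry

    # The x16 slot's 12V allowance is 5.5A, which is where the 66W figure comes from.
    print(connector_watts(1, 5.5))   # 66 W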