Everything posted by Noctia

  1. Force the graphics card fan to 100% with Precision or Afterburner (whichever you prefer) and check that the fans actually spin up to high speed. A bad VBIOS setting can also keep the fan running slow even at high temps; this usually comes from the OEM's "This is a quiet GPU! Buy this!" marketing, and this is the aftereffect you get. But let's not rule out poor case ventilation or poor contact between the GPU surface and the heatsink either. Also, dried paste doesn't ALWAYS mean it's bad or that the card will heat up to extreme temps; it's fresh paste that still needs cure time that heats the GPU more than dried paste, and no, I did not over-apply it. (If you want to watch fan speed against temperature over time, there's a small script sketch at the bottom of this list.)
  2. In my experience coil whine can rebound off surfaces; mine comes from my PSU. If possible, I'd suggest repositioning your rig so the whine doesn't bounce off the walls back at you (try the back side of the desk, down on the floor). Maybe add sound-absorbing foam to the panels and seal the case up tight; I'm not sure that will work, but it feels possible. If an upgrade is a must, I wouldn't suggest anything below a 1060 6GB for the long run. And research owner feedback threads about coil whine, because different OEMs use different components and some models can be bad in that regard.
  3. Both the GPU's memory controller and the VRAM itself can cause artifacts, and depending on which VRAM modules are used on your graphics card, they may simply not overclock well together in the first place. It's a hard scenario to diagnose, because sometimes a game will crash without ever artifacting, and passing a stress test program doesn't guarantee everyday gaming stability; when you do hit a game crash, read the graphs for the last utilization spike or sudden drop right before it. Tune something down, stabilize the power delivery in the Nvidia Control Panel, and continue your days. It can take months to find a setting that is stable in every game. The best game I ever used for testing GPU/VRAM stability was Witcher 3: 3DMark and Heaven stress tests would pass my OC and all other games ran fine, but Witcher 3 kept crashing until I reduced the VRAM to nearly stock speed. Eventually I got tired of it, and I didn't want my VRAM to melt just to leave my expensive, working GPU core useless, so nowadays I just overclock the GPU and leave the VRAM stock: no artifacts, no crashes, no worries. (There's a simple logger sketch at the bottom of this list for catching that last spike or drop in a file when a game dies.)
  4. Hi guys, so I have this Asus GTX670 that uses 2x 6-pin PCIe cables for power, and each input has its own LED indicating that it's powered properly. Since purchase and through 7 years of service, both LEDs on the PCB have always lit up green when the PSU (Corsair 650W) is turned on. Recently only LED A is green and there's no light on B. I went in to diagnose, thinking the PSU might not be supplying the GPU, by plugging in only one power cable at a time; one LED lights up green and the other red:
     Cable A to slot A = LED A green, LED B red
     Cable A to slot B = LED A red, LED B green
     Cable B to slot A = LED A green, LED B red
     Cable B to slot B = LED A red, LED B green
     So the PSU is working, and so are the PCB LEDs, but why does only one LED light up now when I plug in both cables? Is it an indicator that something is going on? I don't know when this happened, but the GPU is functional, as I'm using it right now to write this (no artifacting of any sort), and in the past week of gaming I've had only one CTD that couldn't be explained by looking at the Afterburner graphs, which I suspect is when this happened, but I continued to game afterwards with no CTDs for days. Could someone suggest what this might mean? :( Thanks very much for helping.
  5. In theory, this kind of failure should be caused either by a head crash or by the controller board going bad, but those are usually beyond what tools like WDdiag can repair. This drive seems to have some potential for recovery; I'll have to run a surface test over the next few days to see whether it keeps producing more bad sectors for no reason. I still don't understand what caused it to do this. Can a potential voltage shock through the USB connector make the HDD controller go haywire while the drive stays functional?
  6. So I have this very well cared for WD 640GB 2.5" HDD that failed to be detected as a USB external drive a few days ago, yet when I plug it into a mainboard SATA connection the OS detects it just fine. There was some NTFS security descriptor table damage, but chkdsk repaired it and I salvaged as much data as I could before attempting to repair the drive (some files that failed to copy went through after spamming the retry button several times, but others never got past retrying). I then used the WDdiag tool to run a SMART Quick Test, but it couldn't finish, failing with status code 07 within a few seconds. I resorted to a Full Write Zeros and an Extended Surface Test, and both operations completed with NO errors. The Quick Test then also showed no problems, even after a power cycle; there was some Raw Read Error Rate count in the SMART log, but that count did not increase after the Write Zeros. The drive was then taken out uninitialized and stored in an anti-static bag for a night's rest. The next day, when I plugged it back in to reuse it, a check-up showed the Quick Test failing again, this time past the one-minute mark. A Full Write Zeros completed once more with no errors, but the Surface Test hiccupped in a way that suggested bad sectors; cancelling the Surface Test after a few minutes made the Quick Test pass again, but this time bad sectors actually were detected in the Surface Test and repaired. So how does a drive that was maintained and worked perfectly one day go nuts the next? There was no static electricity, magnetic field or humidity problem in my room; I even warmed the drive up a bit so the arms and motor wouldn't seize from the low temperature, and it just sat next to my working laptop. How is it possible to ruin a drive like that? I'm not convinced the drive has simply gone faulty, because that should stop it from completing any of those maintenance operations, so what gives? x_x (See the small smartctl sketch at the bottom of this list for one way to compare the raw SMART counters between runs.)
  7. Yup, the house looks pretty historic and those outlets in the rooms have absolutely no ground wire, as far as I could inspect. As a major hardcore PC gamer and a first-timer seeing 2-prong outlets, that traumatized me easily. Without much spare money lying around, I'll just have to be as ghetto as possible about grounding this PSU..
  8. Okay, I'll try my best to find a 'thing' to ground the PSU, because the house definitely doesn't have any ground wiring in the rooms, except for the heavy-duty wires for the laundry and kitchen appliances. As for the concern about loose cables in the PSU, I can guarantee the safety on that. So far no one has mentioned frying the mainboard/PSU, so I guess it's all green on that end? I will also be using a 3-to-2 prong adapter. Any more comments from your experience with this are still welcome, as I still have a few days before I start rebuilding my PC.
  9. Thanks for the tips, but getting the house rewired is not possible now or for a long time. Most threads I've read mention electric shock from the PC's metal case when it isn't grounded, but I'll be running the PC bare, with the mobo hooked to a plastic frame for wall hanging where no one can touch it. Is that still viable? I just need to know whether my PC would get fried for not being grounded; the rest of the user safety hazards I can do my best to avoid.
  10. I have a gaming rig with a Corsair TX650M 650W PSU, and I recently relocated to a house that has only 2-prong outlets with no ground/earth wiring. Before piecing my PC back together, I have a question about how the mainboard and PSU will cope with any current that would need to be grounded/earthed. Are there users on this forum running a high-wattage PSU gaming rig fine on 2-prong outlets? I've googled this question but did not find a definite answer.
  11. Hello all, this is my first time here on this forum. Hope you can assist with your overclocking expertise; first, the relevant specs:
     3750K
     Gigabyte Z77-D3H rev 1.0 (F23 (non-beta) latest BIOS)
     GTX670
     Corsair TX650W V2
     Here's what's going on. A month back I did a full maintenance pass on this rig: I delidded the CPU, took everything out to clean, and put it all back. My plan was to update the mobo BIOS and bump the clock speed from 4.2 to 4.5GHz, and I was doing the stability testing on the iGPU to make sure everything was solid before putting the GTX670 in. Using Dynamic VID, the CPU was stable at 1.200v at 4.5GHz through prime95 as well as Aida64 stress tests for an hour. However, when I installed the GTX670, that voltage requirement jumped to 1.284v and can even spike to 1.29v (very rarely). Trying anything lower than 1.28v causes a BSOD. 'God dammit' kicks in in my head. Now the question: is the GTX670 still taking power from the PCIe slot even though it's being fed by 2 cables? Or is there anything I missed? Thanks.
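For post 1: a minimal sketch of one way to watch fan speed against core temperature for a few minutes, assuming an NVIDIA card with nvidia-smi on the PATH. The query fields are real nvidia-smi fields, but the sample count and interval are just my choice. If the temperature climbs while the fan percentage stays flat, the fan curve (or the fan itself) is the suspect.

```python
# Sketch only: poll fan speed (%) and core temperature (C) via nvidia-smi
# and print them, so you can see whether the fan actually ramps with heat.
import subprocess
import time

def read_fan_and_temp():
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=fan.speed,temperature.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    fan, temp = (int(v.strip()) for v in out.strip().split(","))
    return fan, temp

if __name__ == "__main__":
    for _ in range(60):               # about 5 minutes of samples
        fan, temp = read_fan_and_temp()
        print(f"fan {fan:3d}%  temp {temp:3d}C")
        time.sleep(5)
```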
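For post 3: the same nvidia-smi idea, but appending utilization, core clock, VRAM use and temperature to a CSV file every second, so after a game crash there is something to scroll back through for that last spike or sudden drop. This is only a sketch of the Afterburner-graph habit described above; the file name and interval are arbitrary, and you stop it with Ctrl+C.

```python
# Sketch only: log GPU utilization, SM clock, VRAM use and temperature to a
# CSV once per second; review the tail of the file after a game crash.
import csv
import subprocess
import time

FIELDS = "utilization.gpu,clocks.sm,memory.used,temperature.gpu"

with open("gpu_log.csv", "w", newline="") as f:     # fresh log each run
    writer = csv.writer(f)
    writer.writerow(["time"] + FIELDS.split(","))
    while True:                                      # stop with Ctrl+C
        out = subprocess.check_output(
            ["nvidia-smi", f"--query-gpu={FIELDS}",
             "--format=csv,noheader,nounits"],
            text=True,
        )
        writer.writerow([time.strftime("%H:%M:%S")] +
                        [v.strip() for v in out.strip().split(",")])
        f.flush()                    # keep data on disk if the PC hard-locks
        time.sleep(1)
```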
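For post 6: WDdiag mostly reports pass/fail, so here is a rough sketch of reading the raw SMART counters with smartmontools instead (assuming smartctl is installed and the drive shows up as /dev/sdb, which is only a placeholder; it usually needs admin/root rights). Running it before and after a Write Zeros pass makes it easy to see whether Raw Read Error Rate or the reallocated/pending sector counts actually moved.

```python
# Sketch only: pull a few raw SMART counters from `smartctl -A` so they can
# be compared between test runs. Needs smartmontools; run as admin/root.
import subprocess

WATCH = {"Raw_Read_Error_Rate", "Reallocated_Sector_Ct",
         "Current_Pending_Sector", "Offline_Uncorrectable"}

def smart_counts(device="/dev/sdb"):            # placeholder device node
    out = subprocess.check_output(["smartctl", "-A", device], text=True)
    counts = {}
    for line in out.splitlines():
        parts = line.split()
        # attribute rows start with a numeric ID; the name is the second
        # column and the raw value is everything from the tenth column on
        if parts and parts[0].isdigit() and parts[1] in WATCH:
            counts[parts[1]] = " ".join(parts[9:])
    return counts

if __name__ == "__main__":
    for name, raw in smart_counts().items():
        print(f"{name:25s} {raw}")
```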