About BlueJedi

Profile Information

  • Interests
    Computer Hardware, Physics, Computational Physics, Numerical Analysis, Philosophy, Gaming
  • Biography
    Life long PC enthusiast. Former PC and Laptop technician. Currently a PhD Candidate at the University of Saskatchewan studying Quantum Gravity.
  • Occupation
    Grad Student


  • CPU
    Threadripper 1920X
  • Motherboard
    Asus X399 Strix-E
  • RAM
    G.Skill 32GB 3000 CL14
  • GPU
    Sapphire Vega 56 Pulse
  • Case
    Phanteks Evolv X


  1. If Microcenter can't help you, you can try Gigabyte and go the RMA route; it could be that particular card has issues. Having no idea what you've tried, I'll add that it's worth using DDU to remove the old drivers and then install the latest, especially if you've updated Windows 10 to version 2004, which has a lot of GPU-focused changes. Also make sure your RAM is stable as configured. A bad XMP profile can cause issues for Vega/Navi cards.
  2. Sounds like a file-permission issue or an anti-virus issue. My guess is it can't write to the log, the config, or other files. I'd add exceptions for FAHClient in whatever anti-virus is on the system, including Windows Defender. Track down the locations where it keeps the logs and config files and make sure the file permissions are right.
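The permission check above can be sketched as a small script. Note the FAHClient paths below are the usual Windows defaults and are an assumption here; adjust them for your install:

```python
import os

def writable(path):
    """Return True if we can create and delete a file in `path`."""
    if not os.path.isdir(path):
        return False
    probe = os.path.join(path, "fah_write_test.tmp")
    try:
        with open(probe, "w") as f:
            f.write("test")
        os.remove(probe)
        return True
    except OSError:
        return False

# Usual FAHClient data locations on Windows (assumption -- check your install)
for p in [os.path.expandvars(r"%AppData%\FAHClient"),
          os.path.expandvars(r"%AppData%\FAHClient\logs")]:
    print(p, "writable" if writable(p) else "NOT writable")
```

If either path comes back "NOT writable" for the account FAHClient runs under, that's your culprit.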
  3. What clock speed is it hitting before you do that? PPD estimates are based on TPF, which can vary a lot throughout a WU. How much does TPF go down, and does it stay there? How much is PPD pushed up on average?
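To see roughly how TPF drives the estimate, here's the linear part of the relationship. This deliberately ignores FAH's quick-return bonus (which scales points up further as TPF drops), so the numbers are illustrative only:

```python
def ppd_estimate(base_credit, tpf_seconds, frames=100):
    """Linear PPD estimate: WU credit scaled to a 24-hour day.

    Ignores the quick-return bonus, which grows points further as
    TPF drops, so real PPD swings are even larger than this shows.
    """
    wu_seconds = tpf_seconds * frames   # a WU is typically 100 frames
    wus_per_day = 86400 / wu_seconds
    return base_credit * wus_per_day

# e.g. a hypothetical 10,000-point WU at 2:00 TPF vs 1:48 TPF
print(ppd_estimate(10000, 120))  # 72000.0
print(ppd_estimate(10000, 108))  # 80000.0
```

A 10% TPF drop gives at least a 10% PPD bump even before the bonus, which is why small TPF swings within a WU move the estimate around so much.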
  4. As long as your card isn't stuck at idle clocks (<1000 MHz) or thermally constrained, I wouldn't worry. It still appears to be 100% utilized at those lower clocks, so it's probably working as intended. Compute workloads are handled differently by the driver; most GPUs don't try to hit max clocks on certain workloads. Depending on the kind of work, and how it's set up, there might not be much benefit to clocking faster for a particular WU. Some computational work is also more power intensive, so the GPU clocks back a bit to keep power and thermals under control. You can certainly trick it into higher clocks like you did there with Furmark, but the gains will be minimal, and you risk compute errors that may cause the WU to fail.
  5. It may have been defective out of the box then. If the card is unstable at stock settings I'd contact Gigabyte support. It might not be the source of your problem here, but sounds like a strong possibility. If nothing else it's best to rule it out.
  6. Go here: https://www.guru3d.com/files-details/display-driver-uninstaller-download.html Read everything before you download and run; there are instructions there. The general idea is you'll want to boot into safe mode, run this, reboot, then reinstall with the latest drivers. It's a third-party tool to remove everything the AMD/Nvidia drivers leave behind.
  7. The GPU might not be dead, but it could be dying. Components coming back to life after being completely powered down for a while is often a sign of failing hardware. My only other guess is that if Secure Boot is enabled it might be causing issues with the card POSTing correctly.
  8. Did you use DDU to uninstall the old drivers? The normal uninstall leaves a lot behind. Some of it can interfere with performance, especially when you jump brands or a few generations on the card.
  9. Not too sure... but EC2 Spot instances are an Amazon cloud service, so I'm not sure that's an individual. It might be Amazon directly?
  10. I run all AMD GPUs (I've been a budget buyer for years) and I see that exact thing across all my cards; a few other people here have seen it too, so you're not alone. It's not isolated to specific projects, at least in my experience. One or more of the early OpenCL calls are sensitive to instability on the AMD driver; once it gets going it's fine. So card stability definitely plays a big part in how many failures you get. My factory-OC'd 280X, which can run about as hot as your 390, has the most failures out of all my cards. It's on a Linux box and too old to downclock in the power tables; I'll have to throw a custom BIOS on it when I have time. Ultimately the root issue is the AMD OpenCL driver itself, though; instability just makes it worse. I know from my own cards that the problem persists on Tahiti, Polaris, and Vega 10 and 20, on both Windows and Linux. Even my coolest cards running stock have WUs fail to initialize, just less often. I've been talking to AMD about it and it's a known issue, supposedly high priority, but I haven't been given any firm answers on when it will be fixed. That isn't to say FAH or the OpenMM people couldn't find a workaround for Core22; whether it's worth their time, when the Nvidia driver is fine, I couldn't say. I'm with you though, I hope someone, anyone, fixes it. I have 105 failed WUs to 161 completed across all my cards since I started logging for the event, all failures at initialization. I can't imagine it's helping me get WUs right now; my PPD is probably a third of what it could be.
  11. Unfortunately I'm not aware of a way to limit it in that sense. The WUs likely aren't all that big, just computationally intense, so max packet size only helps to a point. Which WUs you're getting probably comes down to which instruction sets your CPU supports; I believe the server uses the CPU your client reports in its configuration to decide what WUs to assign. I imagine the solution is for the FAH people to raise the requirements for those WUs. It might be handing them out to any CPU with AVX2 support, when practically that list needs to be trimmed a bit. This may also be a consequence of them trying to get as many WUs out there as they can.
  12. Check the logs. On Windows they're usually in C:\Users\<YourUser>\AppData\Roaming\FAHClient\logs, and you can open File Explorer and type %appdata% in the address bar to go right to the AppData folder for your user. Take a look at the log files and they should give some idea what FAHClient.exe was failing on, or at least what it was trying to do last. Post any relevant snippets here (it should be the last stuff in the log) and we can sort through it with you.
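If digging through the folder by hand is a pain, a short script can grab the tail of the newest log. The log directory and the `*.txt` naming are assumptions based on the default Windows install; adjust for yours:

```python
import glob
import os

def tail_newest_log(log_dir, n=40):
    """Print the last n lines of the most recently modified log file."""
    logs = glob.glob(os.path.join(log_dir, "*.txt"))
    if not logs:
        print("No logs found in", log_dir)
        return
    newest = max(logs, key=os.path.getmtime)  # pick the latest log by mtime
    with open(newest, errors="replace") as f:
        lines = f.readlines()
    print("".join(lines[-n:]))

# Default FAHClient log location on Windows (assumption)
tail_newest_log(os.path.expandvars(r"%AppData%\FAHClient\logs"))
```

The last 40 lines are usually enough to show what the client was doing when it died.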
  13. True. That's why I don't necessarily think it was the solution for someone running multiple power hungry systems. As a diagnostic step, a small decent UPS rules out power entirely and can be fairly useful for overclocking and testing. It might even shed some light on the quality of the wall power under load.
  14. That's fair; I'm not suggesting a permanent solution for all your systems. If the issue is power dips, then a line conditioner will only get you so far. A good line-interactive UPS is a line conditioner too and makes up for any shortfalls from the wall. They usually give somewhat helpful real-time power stats as well, which can be handy for seeing what the wall power is doing while you're overclocking.
  15. Have you tried a UPS on one system? Something line interactive?