abrackers

Member
  • Posts

    14


abrackers's Achievements

  1. Those numbers seem very low. My Vega 64 gets 1.0 - 1.3M PPD on standard WUs.
  2. I believe that is because any WUs with over 2 million atoms are disabled on AMD GPUs until fixed, as they had an issue where they could never finish the WU.
  3. I have now figured out why I couldn't OC the VRAM past stock. I gave +50 mV to the VRAM voltage (which actually controls the SoC), so it's now 1000 mV and I can OC the VRAM to 1100 MHz.
  4. Does anyone with a Vega 64 know whether I should also be adjusting the VRAM memory timing settings, or the VRAM voltage, while overclocking it? I am overclocking in the Radeon software.
  5. Thank you. Yes, I guessed it was a bug. I'm just worried it might be indicating that something in the GPU core is dying. I was pointing out folding because it was a very good indicator of VRAM stability with my previous GPU (it could run games fine but needed a VRAM underclock to not get errors). Also, the VRAM OC did significantly increase PPD (1.05M PPD to 1.2M PPD).
  6. I recently got a Vega 64 LC and am trying to OC the VRAM. I am using the newest 2020 drivers, which auto-suggested 1051 MHz; that works fine in Folding@home with no errors, but it crashes the GPU drivers with "Radeon host application has stopped" in Half-Life: Alyx and in the Superposition benchmark. Testing 975 MHz also crashed the driver in Superposition. I then tested the stock 945 MHz in Superposition and it works, but Afterburner reports ridiculously high VRAM temp spikes. Does anyone have any idea what could cause these, whether I should be worried about them, and whether there is anything I can do to OC my VRAM without the driver crashing?
  7. Does anyone know: if I change GPU (turn off the system, take out the GPU, put in a new GPU of a different architecture), will folding lose its current work unit on the GPU?
  8. With an R9 295X2, the BIOS temp limit is 75 degrees, since it's watercooled, to stop the CLCs dying. With both GPUs running at full power and full clocks it should run at 60-65 degrees, according to reviews from when it came out. I also know it's not the paste, as I changed that about a year ago to Hydronaut and it ran perfectly then. Yes, I am aware that with air cooling 70 is very cool and most GPUs target 80 degrees instead (the two R7 360s in my parents' PCs downstairs do), but with CLC water cooling that kills the pumps.
  9. I've ordered a new GPU anyway, given it is overheating far too much for it to entirely be airflow through the rad. I'm guessing one of the pumps has gone, although it is very old and also having severe memory issues when folding (it can't get more than 2% in unless I've downclocked the memory). Speaking of which, how do I get Afterburner to save settings over a reboot? I just lost a biggish GPU work unit because it hadn't saved my underclock on the memory from yesterday, and it errored out.
  10. Yes, the radiator gets almost hot enough to burn when the GPU max is set to 70 degrees in Afterburner. Also, the fact that I hadn't noticed before shows how little any of the games I play use both GPUs, so I'm much better off getting a new single card anyway.
  11. So this has taught me I finally need a new GPU. Running an R9 295X2: with one GPU folding, it downclocks to ~900 MHz to stop it overheating. With both running, it downclocks to ~300 MHz and still overheats. I also needed to underclock the VRAM to stop it erroring out of folds. One GPU of it still gives ~400k PPD when it actually gets a WU, though.
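
Regarding the GPU-swap question in post 7: rather than risking the queued unit, you can ask the client to finish its current WU first. This is a hedged sketch using the Folding@home v7 client's local command interface (it listens on port 36330 by default; the port and whether remote access is enabled depend on your config, so treat the details as assumptions to verify against your own setup):

```shell
# Talk to the local FAHClient command socket (default port 36330).
# 'finish' tells the client to complete and upload its current work
# units and then idle, so the GPU can be swapped without dumping a WU.
{
  sleep 1
  echo "slot-info"    # list folding slots and their current state
  echo "queue-info"   # show the work unit(s) currently assigned
  echo "finish"       # finish in-progress WUs, then stop taking new ones
  sleep 1
  echo "exit"
} | telnet 127.0.0.1 36330
```

Once `queue-info` shows the slot has uploaded its results, it should be safe to shut down and swap the card; the same "Finish" action is available per-slot in the FAHControl GUI.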