About catawalks

  1. Haha, I know them feels. I've got a few more places up the ranks I want to climb before I call it quits for a while.
  2. Haha, no worries, I was just joking. Thanks for the hard work putting on the event, and good luck with the work still ahead of you. Godspeed, my man.
  3. Did I meet the requirements? I hope I got enough WUs and points.
  4. I've got at least one more day in my systems. I'll be keeping them all up until the morning of the 6th. After that though I will start setting them to FINISH and commence the cleanup.
  5. That'll help me do more of the configuration through SSH, which I was attempting to do. I didn't know enough about the X authority file, or even where to look or what it all meant, but I'll check that out when I get back to work on Monday. As for the clock offset, would that be like overclocking the card? And would that affect which power state it's in? The issue it looks like I'm having now is that after about 20 minutes of folding, the card drops from the maximum performance state (P0) all the way down to one of the lower power states (P8). When I queried the card with nvidia-smi, its power states ranged from P0 down to P12 at the lowest, so the state it's getting dropped into is really low. It might be my own lack of knowledge when installing the necessary drivers while trying to get the FAH slot folding. After the folding event is over, I think I might wipe the machine and start over. Do you have a specific list of things you install when setting up a Linux folding rig? I tried to follow one of the tutorials on here for the most part, but because I'm using Debian I think some extra steps are needed.
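One way to catch the P0-to-P8 drop described above as it happens is to poll the card from the same SSH session. This is just a sketch using standard nvidia-smi query flags, nothing FAH-specific:

```shell
# Log the performance state and SM clock every 5 seconds while folding;
# the pstate column shows the moment the card falls from P0 to P8.
nvidia-smi --query-gpu=timestamp,pstate,clocks.sm,temperature.gpu,power.draw \
           --format=csv -l 5
```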
  6. I think that 1530MHz clock speed explains some of the difference. I don't know if it's because the Titans were designed more for other tasks, but my Titan V gets up to around 1800MHz (usually bouncing around somewhere in the 1770-1870MHz range) depending on the temp and ambient air. Thanks for checking those clocks for me; it saves me the effort of setting up another VM just for that.
  7. 1530 is indeed what it's running at; it seems that's the max clock for that card. I know when I set up my Titan V it would stay at a lower power level unless I specifically told it to run faster. I use something like Privacy.com to make a virtual credit card. You can keep making new numbers, and you can also set limits; that way, if you accidentally go over on the GCP free credit, you'll be limited in how much they can charge.
  8. @Favebook You should be able to run "nvidia-smi -q -d CLOCK" and it'll show you a list of clock speeds. The top one should be listed as current clocks.
  9. I decided, since I set up a Linux box for the 30c/60t server that I have, that I might as well throw one of my lower-power GPUs in there and see what kind of PPD bump I'd get from going with Linux. After much trial and error I got all the drivers installed, OpenCL installed, and the slot folding. Yay! Then I noticed the clocks were stuck at 1189MHz instead of the 1721MHz max that this little Quadro P2000 can do. Long story short, I got everything configured with "nvidia-settings" and "nvidia-smi", but after folding for 20 minutes or so the card drops its clocks down to 607MHz (P8) and stays there. These are the settings I configured to get it working at full potential; to be honest, I can't remember if the problem was happening when I left the card to do its own thing at 1189MHz.

```shell
# Sets power state to P0 (Maximum Performance)
nvidia-settings -a [gpu:0]/GPUPowerMizerMode=1
# Enables fan control
nvidia-settings -a [gpu:0]/GPUFanControlState=1
# Sets fan speed to 100%
nvidia-settings -a [fan:0]/GPUTargetFanSpeed=100
# Enables persistence mode (on by default, but set it anyway just in case)
nvidia-smi -pm 1
# Sets "Application Clocks" to the highest (1721MHz core, 3504MHz mem) -
# raises the clock above the standard 1189MHz for CUDA workloads
nvidia-smi -ac 3504,1721
```

I'm running Debian 10.1 with Nvidia driver 418.74, the system does have a monitor attached, and I have tried decreasing the clocks by 100MHz. I know this card can handle that 1721MHz clock, though, as it does it just fine when folding in the Windows 10 machine it came out of. I think I'm in over my head and at the end of my Google-fu. If anyone has any ideas I'd appreciate them.
  10. What kind of PPD output do those V100s get? What are the clocks running at, and what power state are they in?
  11. I'm with @Favebook, I like the optimism, but I doubt we'll keep up enough momentum to continue the climb. I do think we can snag [H] before the event ends, though keeping that spot will be hard. I myself will definitely be cutting production back a ton. I've got 9 machines and 4 GCP instances chugging along. I'll let the GCP instances eat up the remaining balance, but the rest of the hardware needs to take a break and cut back on power consumption for a bit.
  12. Ah yes, I've had that happen a few times as I move cards between machines. It happens because the config file still shows two slots but there's only one card in the system. I usually close FAHControl, kill all the processes related to FAHClient in Task Manager, and then delete or rename the config file. Then you can launch FAH again and it'll ask you to set up the client again; just put in your username, folding team, and passkey, then configure the slots as you had them before. Alternatively, you can close everything as above but manually edit the config.xml file and remove the offending GPU slot.
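If you'd rather script the manual config.xml edit described above, here's a minimal sketch. The slot layout in the sample is an assumption based on a typical FAHClient config, not taken from a real file, so inspect your own config.xml before editing it:

```python
# Minimal sketch: strip a stale GPU slot out of a FAHClient config.xml.
# NOTE: the <slot> layout below is an assumed/typical example -- check
# your actual file before running anything like this against it.
import xml.etree.ElementTree as ET

sample = """<config>
  <user value='anonymous'/>
  <passkey value=''/>
  <slot id='0' type='CPU'/>
  <slot id='1' type='GPU'/>
</config>"""

root = ET.fromstring(sample)

# Remove every slot whose type is GPU (the card no longer in the machine)
for slot in root.findall("slot"):
    if slot.get("type") == "GPU":
        root.remove(slot)

cleaned = ET.tostring(root, encoding="unicode")
print(cleaned)
```

The CPU slot and the user/passkey entries survive untouched; only the offending GPU slot is dropped, which mirrors the manual edit.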
  13. Woke up to the download bug on 2 machines while 2 more went offline entirely. And they're machines I keep at work, so I won't be able to fix them for at least another hour or two. I don't think I've gone a full weekend without having some trouble. All week it'll stay fine, but once the weekend hits, those machines better watch out, because one of them is gonna have a problem.
  14. It's not too late! We can still get there if we try.