GOTSpectrum

LTT Folding Team's Emergency Response to Covid-19

This event has ended and I recommend you guys head over to the Folding Community Board for any general folding conversation. 

 

 

Recommended Posts

2 minutes ago, ffbr said:

I used to have 35 GPUs in a homemade mining farm (the good old days of living in a 30-35°C environment the whole summer). Of those, 21 were Asus, and I never had a problem with them (nor with my lone MSI). Can't say the same for the others. However, they did come at a premium.
 

 

Curious, did your investment in those cards pay off?

I feel you!  I'm in central FL and right now the outdoor temp is 89°F and my house is at 85°F.  My A/C is down and the repairman can't come until next week...  I even bought a Noctua industrial fan to try and help with whatever cooling is available, and have the old fan propped up against the rear card area, which will likely have to disappear when the video card comes tomorrow!  CPU temps are in the high 80°C to low 90°C range right now after I lowered the power to Medium.  Once the outdoor temps drop below the indoor ones I'll open up the windows again and be cooler by morning.  I may have to go to the Low power setting when the video card gets installed.  Which might not be too bad, as I hear GPU units have been slow to get assigned.  Haven't had to wait long at all for CPU work units.

Link to post
Share on other sites
51 minutes ago, GOTSpectrum said:

sudo /etc/init.d/FAHClient Finish 

 

I'm guessing here though; @Gorgon is the best one to ask 

 

32 minutes ago, Dutch_Master said:

Tried and failed: the client doesn't recognise the option "finish"; valid options are stop/start/restart/reload/status/log

 

But thx for trying!

This will work:
sudo /usr/bin/FAHClient --send-command finish

But this is better as you don't have to worry about escaping quotes in scripts:
sudo /usr/bin/FAHClient --send-finish

 

6 minutes ago, GalaxyNetworks said:

Just "FAHClient --send-command finish"

if it can't find FAHClient, then "/usr/bin/FAHClient --send-command finish"

I tried this and it appears to have connected via Telnet (port 36330). Not sure it'll work, for most boxes now have WUs for more than 8 hrs!

1 minute ago, Gorgon said:

 


sudo /usr/bin/FAHClient --send-finish

 

I'll keep that in mind if the other one fails. (not that I'm using sudo, but I'll get around that ;) )

 

Thx guys, much appreciated!


"You don't need eyes to see, you need vision"

 

(Faithless, 'Reverence' from the 1996 Reverence album)

1 minute ago, Boyce1 said:

Curious, did your investment in those cards pay off?

I feel you!  I'm in central FL and right now the outdoor temp is 89°F and my house is at 85°F.  My A/C is down and the repairman can't come until next week...  I even bought a Noctua industrial fan to try and help with whatever cooling is available, and have the old fan propped up against the rear card area, which will likely have to disappear when the video card comes tomorrow!  CPU temps are in the high 80°C to low 90°C range right now after I lowered the power to Medium.  Once the outdoor temps drop below the indoor ones I'll open up the windows again and be cooler by morning.  I may have to go to the Low power setting when the video card gets installed.  Which might not be too bad, as I hear GPU units have been slow to get assigned.  Haven't had to wait long at all for CPU work units.

Well, it was an interesting experience, and it definitely paid off in terms of investment (got a car out of it and still have some holdings, but nothing crazy).

It started more as an experiment, and I got a bit crazy when the price started to go up in February/March 2017. I used already-mined holdings to buy new cards, one rig (7 cards) at a time. Once I got bored/tired of it and mining was no longer viable (expensive electricity here), I sold all the GPUs and motherboards and was surprised not to lose money overall (some sold for cheap, some for more than what I paid). 

Overall it was a very intense experience and taught me a lot. Would definitely not redo it :D nowadays. 

TL;DR: didn't get rich, loved the experience, didn't lose money

 


BOINC - F@H

2 minutes ago, Dutch_Master said:

 

I tried this and it appears to have connected via Telnet (port 36330). Not sure it'll work, for most boxes now have WUs for more than 8 hrs!

I'll keep that in mind if the other one fails. (not that I'm using sudo, but I'll get around that ;) )

 

Thx guys, much appreciated!

The web interface should look like this: (GPU 7 has finished and is paused)
[screenshot]
 

1 minute ago, Dutch_Master said:

 

I tried this and it appears to have connected via Telnet (port 36330). Not sure it'll work, for most boxes now have WUs for more than 8 hrs!

I'll keep that in mind if the other one fails. (not that I'm using sudo, but I'll get around that ;) )

 

Thx guys, much appreciated!

Yep, that's how all the communication for FAH works: through telnet at whatever port you have configured, 36330 by default. That really should work; otherwise you have some other problem. It's never failed me and I've used it tens to hundreds of times.
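For the curious, that telnet conversation can be scripted. Below is a minimal sketch in Python, assuming the defaults mentioned above (localhost, port 36330, remote access already allowed in the client config) and that the command server accepts plain newline-terminated commands; the reply format isn't parsed here, just returned raw:

```python
import socket

def encode_command(command):
    """FAHClient commands are plain text terminated by a newline."""
    return command.strip().encode("ascii") + b"\n"

def fah_command(command, host="127.0.0.1", port=36330, timeout=5.0):
    """Send one command to a running FAHClient's command socket and
    return whatever reply text arrives first."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.recv(4096)                       # discard the welcome banner
        sock.sendall(encode_command(command))
        return sock.recv(4096).decode("utf-8", errors="replace")

# e.g. fah_command("finish")   -- tell every slot to finish its current WU
# e.g. fah_command("unpause")  -- resume all slots
```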

1 hour ago, GOTSpectrum said:

I have noticed the same on my end; never have I seen the beta pool get exhausted in such a manner. It is both a beauty and a shame 

I had been running the beta flag for most of the event, and it just recently started sitting idle.  Before that they were coming in one after another, nonstop, for days.  Must have gotten to the bottom of that pile!


El Zoido: 9900k / Hydro X / z390 Aorus Master / 32 GB Corsair Vengeance 3000MHz LPX / RTX 2080 Ti / Fractal Define R6

 

The Box: 3900x / DR4 / B450 ASRock ITX / 32 GB Corsair Vengeance 3000MHz LPX / RTX 2080 Ti / Meshify Mini w/ Noctua 140 Blacks

2 minutes ago, Dutch_Master said:

 

I tried this and it appears to have connected via Telnet (port 36330). Not sure it'll work, for most boxes now have WUs for more than 8 hrs!

I'll keep that in mind if the other one fails. (not that I'm using sudo, but I'll get around that ;) )

 

Thx guys, much appreciated!

I have six Linux boxes and they're all headless, so I normally just use the Advanced Control app on my Windows daily driver to control all the slots.

 

Normally I run a cron job on the systems to set the GPU slots to finish about 2 1/2 hours before my electricity rate doubles and then use:

/usr/bin/FAHClient --send-unpause

to restart the slots when the electricity rate drops down.

 

You can also telnet into the host on 36330 and access the command server directly but you have to enable remote access (see link in sig) first.
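The cron setup described above could be sketched as two crontab entries. The times below are purely illustrative (peak rate assumed here to run from 07:00 to 19:00), so substitute your own tariff hours and verify the FAHClient path on your system:

```shell
# m  h   dom mon dow   command
# Set slots to finish ~2.5 h before the peak rate starts at 07:00
30   4   *   *   *     /usr/bin/FAHClient --send-finish
# Resume folding once the off-peak rate returns at 19:00
0    19  *   *   *     /usr/bin/FAHClient --send-unpause
```

Install with `crontab -e` as the user running the client (or with sudo if the client needs root on your setup).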


also good to see you back @GOTSpectrum !


2 minutes ago, Zberg said:

I had been running the beta flag for most of the event, and it just recently started sitting idle.  Before they were coming in one after another non stop for days.  Must have gotten to the bottom of that pile!

Nope, just assignment server (AS) issues. Lots of Beta WUs left, but if the AS doesn't direct you to a WS then you won't get any.

Posted · Original Poster
Just now, Zberg said:

also good to see you back @GOTSpectrum !

Much appreciated, would like to say it feels good to be back but in fact I'm just trying to distract myself from lack of meds lmfao. 

 

 


My Folding Stats - Join the fight against COVID-19 with FOLDING! - If someone has helped you out on the forum don't forget to give them a reaction to say thank you!

 

The only true wisdom is in knowing you know nothing. - Socrates
 

Please put as much effort into your question as you expect me to put into answering it. 

 

  • CPU
    Ryzen 7 1700 3GHz 8-Core Processor @ 4Ghz
  • Motherboard
    GA-AX370-GAMING 5
  • RAM
DOMINATOR Platinum 16GB (2 x 8GB) @ 3400MHz
  • GPU
    Aorus GTX 1080 Waterforce
  • Case
    Cooler Master - MasterCase H500P
  • Storage
    Western Digital Black 250GB, Seagate BarraCuda 1TB x2
  • PSU
EVGA SuperNOVA 1000W 
  • Display(s)
BenQ XL2430 (144Hz), Dell 24" portrait
  • Cooling
    MasterLiquid Lite 240
Posted · Original Poster
1 minute ago, Gorgon said:

Nope, just assignment server (AS) issues. Lots of Beta WUs left, but if the AS doesn't direct you to a WS then you won't get any.

One thing I have just noticed while checking out the server stats is that C21 seems to have been completely deprecated and is no longer listed on any servers. 


50 minutes ago, REDYul said:

I leave mine at stock speed, making sure the fans are not too noisy, because OC is not very useful when WUs are not constantly available.

Whatever you gain from an OC, if you have to wait 15-60 min for work you lose all your advantage.

 

Hahah, do not worry about me, I've experimented with all kinds of cards, OCs, and BIOSes for almost 2 years.

46 minutes ago, GOTSpectrum said:

They are talking about underclocking here, not overclocking. But with that said, @Favebook and @aselwyn1, you're better off just power limiting than actively underclocking most of the time with folding. 

Basically this 👆

For best results, I find a stable OC and then power limit the hell out of my card (between 60% and 80% is optimal).
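A note for anyone trying this on Linux: `nvidia-smi -pl` takes a limit in watts, not the Afterburner-style percentage quoted above, so you have to convert. A quick sketch of that arithmetic, using the GTX 1080 Ti's 250 W default board power as the example (the helper name is just for illustration):

```python
def power_limit_watts(default_tdp_w, percent):
    """Convert a power-limit percentage (as shown in tools like
    MSI Afterburner) into the wattage that `nvidia-smi -pl` expects."""
    return round(default_tdp_w * percent / 100)

# A GTX 1080 Ti's default board power is 250 W, so a 60-80% limit is:
low = power_limit_watts(250, 60)    # 150 W
high = power_limit_watts(250, 80)   # 200 W

# Applied on Linux (root required):
#   nvidia-smi -pm 1      # persistence mode so the setting sticks
#   nvidia-smi -pl 150    # set the limit in watts
```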


Favebook's F@H Stats

Favebook's BOINC Stats

 

CPU i7-8700k (5.0GHz)  Motherboard Aorus Z370 Gaming 7  RAM Vengeance® RGB Pro 16GB DDR4 3200MHz  GPU  Aorus 1080 Ti

Case Carbide Series SPEC-OMEGA  Storage  Samsung Evo 970 1TB & WD Red Pro 10TB

PSU Corsair HX850i  Cooling Custom EKWB loop

 

Display Acer Predator x34 120Hz

2 hours ago, GOTSpectrum said:

This broke my brain a little lmfao 

 

Yeah, household baseline is usually less than 400 watts; since the event began, it looks more like this 🤨

[image: household power usage graph]


🤔 It's not about the award, but I sure would like to be able to get the certificate for 1000 WUs.

Generator still broken 


48 minutes ago, Metallus97 said:

So how many points did that produce, and which instances and GPUs did you go for?

Overall, it was worth around 26M points. Around 870K PPD per card.

 

3 instances of 4 vCPUs and 5GB of RAM with Tesla T4s. I wanted one vCPU per GPU; in hindsight I'm not sure that was even necessary, as I didn't really test with fewer vCPUs. Although on the 4-card instances CPU usage is around 80%, so perhaps it was a good call.

 

Could have gone to 12 cards, as each instance maxes out at 4, but I calculated that 10 GPUs over 3 VMs would use up the credit pretty close to the end of the event. I wasn't far off.

 

 

22 minutes ago, GalaxyNetworks said:

The web interface should look like this: (GPU 7 has finished and is paused)
[screenshot]
 

 

20 minutes ago, Gorgon said:

I have six Linux boxes and they're all headless, so I normally just use the Advanced Control app on my Windows daily driver to control all the slots.

 

Normally I run a cron job on the systems to set the GPU slots to finish about 2 1/2 hours before my electricity rate doubles and then use:


/usr/bin/FAHClient --send-unpause

to restart the slots when the electricity rate drops down.

 

You can also telnet into the host on 36330 and access the command server directly but you have to enable remote access (see link in sig) first.

Thanks guys! I now see what I missed earlier: the spinning circle on a working system is green, but after I ordered it to finish it turned yellow; I'd missed that previously 😳 So I should be good for these boxes to finish their WUs and stop. After which they'll be decommissioned until such time as their services are called upon again. Which for one could be fairly soon, as it has 2 network ports and I'm interested in experimenting with pfSense. But that's not for this thread ;)


"You don't need eyes to see, you need vision"

 

(Faithless, 'Reverence' from the 1996 Reverence album)

Link to post
Share on other sites
8 minutes ago, RedCom said:

Overall, it was worth around 26M points. Around 870K PPD per card.

 

3 instances of 4 vCPUs and 5GB of RAM with Tesla T4s. I wanted one vCPU per GPU; in hindsight I'm not sure that was even necessary, as I didn't really test with fewer vCPUs. Although on the 4-card instances CPU usage is around 80%, so perhaps it was a good call.

 

Could have gone to 12 cards, as each instance maxes out at 4, but I calculated that 10 GPUs over 3 VMs would use up the credit pretty close to the end of the event. I wasn't far off.

 

 

THAT'S some power, but I guess a bit more CPU could have helped push them a bit more. I saw 950K on those cards sometimes.

 

I had even less time because I opted for V100s... close to 1.8M PPD, but not such nice price-to-performance 


HELP the LTT F@H TEAM FIGHT COVID-19!! It's easy: 

 

15 minutes ago, Inkertus said:

Did your power meter have a stroke there, or...?

My daughter made soup from scratch for lunch.

 

It is cool/creepy seeing the little dips in power consumption as GPUs finish and start WUs.

4 minutes ago, Metallus97 said:

THAT'S some power, but I guess a bit more CPU could have helped push them a bit more. I saw 950K on those cards sometimes.

 

I had even less time because I opted for V100s... close to 1.8M PPD, but not such nice price-to-performance 

I had one V100 that was golden; it consistently pulled 3M PPD for the whole week. No idea why it outperformed the others by so much, but that was nice.


12 minutes ago, Metallus97 said:

THAT'S some power, but I guess a bit more CPU could have helped push them a bit more. I saw 950K on those cards sometimes.

 

I had even less time because I opted for V100s... close to 1.8M PPD, but not such nice price-to-performance 

Yep, I noticed that the $/hr cost went up quite steeply for the V100.

 

The Tesla T4 performance is quite good considering it's a low-profile low-TDP card. Think it's only rated at 75W.

This topic is now closed to further replies.

