LTT Folding Team's Emergency Response to Covid-19

Solved by GOTSpectrum:

This event has ended and I recommend you guys head over to the Folding Community Board for any general folding conversation. 

 

 

18 minutes ago, 42thgamer said:

That can still get quite a bit of work done!

I added system info to my profile if you're still interested in the full specs.


So the bill payer put a stop to my folding, as it was going to increase the electricity bill by ~£30/month, which is understandable.

I've been gone a few days and this topic has blown up even more than it was, wow!

I've decided to run just a couple of WUs each day - I'll get my 780 working and tell it to finish, and my 980 usually runs one or two more than that, so at least it's something and it won't have such a big impact on the electricity bill.

With that said, I had a phone call earlier and have managed to get a job for the next few weeks at least, so I'll be able to contribute towards the bills and may be able to get up and running 24/7 again. But as it's only a 780 and a 980, I think I'd prefer to put a better GPU into the machine, one which will use a similar amount of power but contribute more to the project.

My brother's 1660 Ti is a very affordable card and was doing fairly well with the WUs, so that's an option, though I doubt I'll end up replacing the GPUs as it's not an essential thing to buy at the moment.

:)


28 minutes ago, 42thgamer said:

Don't think so. Maybe dust off your PC a bit ;)

This build is a week old 😛


32 minutes ago, Plexas said:

Are the new GPU WUs any different from before? They're really hammering my 2070 Super. It's hovering 5°C hotter than it used to at the same ambient temp :o

I think it's to do with the atom count of each WU. On the lower ones my 2070S sits around 64°C at ~91% utilisation. On the higher atom count WUs it's at about 68-69°C and ~97-98% utilisation.

 

Perhaps you've been getting a run of higher atom count WUs 🤷‍♂️


21 minutes ago, Sevilla said:

I have 2 machines that will be folding. Will I need to assign different names to them and register twice with 2 different names? New to folding.

No, you can set all machines to the same account / team / folding name.

Make sure to use the same passkey for both
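For anyone wanting to double-check, the user, team and passkey live in FAHClient's config.xml. The sketch below is just an illustration (the path is an assumption and varies by OS) that prints those three values so you can confirm both machines match:

import xml.etree.ElementTree as ET

def fah_identity(config_path):
    # Read user, team and passkey from a FAHClient v7-style config.xml.
    root = ET.parse(config_path).getroot()
    def value(tag):
        node = root.find(tag)
        return node.get("value") if node is not None else None
    return value("user"), value("team"), value("passkey")

# Path is an assumption (Linux package default); adjust for your OS.
print(fah_identity("/etc/fahclient/config.xml"))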

 

MSI B450 Gaming Pro Carbon AC | AMD Ryzen 2700X | NZXT Kraken X52 | MSI GeForce RTX 2070 Armor | Corsair Vengeance LPX 32GB (4x8GB) 3200MHz | Samsung 970 Evo 500GB M.2 NVMe (boot) / Samsung 860 Evo 500GB SSD | Corsair RM550x (2018) | Fractal Design Meshify C White | Logitech G Pro Wireless | Gigabyte Aorus AD27QD


36 minutes ago, Plexas said:

Are the new GPU WUs any different from before? They're really hammering my 2070 Super. It's hovering 5°C hotter than it used to at the same ambient temp :o

Some of the larger atom-count Core22 WUs will give your GPU a workout almost as much as FurMark will. They're just that efficient at using all the shaders/cores on a GPU. Think of it as a really good long-term stability test of the GPU and your case's cooling.

 

My EVGA 2070 Super Hybrids will run at 51°C for weeks on end, but the odd WU will bump them up to 58°C.
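If you want to watch what the heavier WUs do to your card, a quick monitor along these lines works (a rough sketch, assuming an NVIDIA GPU with nvidia-smi on the PATH; the 30-second interval is arbitrary):

import subprocess, time

# Poll GPU temperature, utilisation and power draw every 30 seconds.
while True:
    reading = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=temperature.gpu,utilization.gpu,power.draw",
         "--format=csv,noheader"],
        capture_output=True, text=True,
    ).stdout.strip()
    print(time.strftime("%H:%M:%S"), reading)
    time.sleep(30)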

FaH BOINC HfM

Bifrost - 6 GPU Folding Rig | Linux Folding HOWTO | Folding Remote Access | Folding GPU Profiling | ToU Scheduling | UPS

Systems:

desktop: Lian-Li O11 Air Mini; Asus ProArt x670 WiFi; Ryzen 9 7950x; EVGA 240 CLC; 4 x 32GB DDR5-5600; 2 x Samsung 980 Pro 500GB PCIe3 NVMe; 2 x 8TB NAS; AMD FirePro W4100; MSI 4070 Ti Super Ventus 2; Corsair SF750

nas1: Fractal Node 804; SuperMicro X10sl7-f; Xeon e3-1231v3; 4 x 8GB DDR3-1666 ECC; 2 x 250GB Samsung EVO Pro SSD; 7 x 4TB Seagate NAS; Corsair HX650i

nas2: Synology DS-123j; 2 x 6TB WD Red Plus NAS

nas3: Synology DS-224+; 2 x 12TB Seagate NAS

dcn01: Fractal Meshify S2; Gigabyte Aorus ax570 Master; Ryzen 9 5900x; Noctua NH-D15; 4 x 16GB DDR4-3200; 512GB NVMe; 2 x Zotac AMP 4070ti; Corsair RM750Mx

dcn02: Fractal Meshify S2; Gigabyte ax570 Pro WiFi; Ryzen 9 3950x; Noctua NH-D15; 2 x 16GB DDR4-3200; 128GB NVMe; 2 x Zotac AMP 4070ti; Corsair RM750x

dcn03: Fractal Meshify C; Gigabyte Aorus z370 Gaming 5; i9-9900k; BeQuiet! PureRock 2 Black; 2 x 8GB DDR4-2400; 128GB SATA m.2; MSI 4070 Ti Super Gaming X; MSI 4070 Ti Super Ventus 2; Corsair TX650m

dcn05: Fractal Define S; Gigabyte Aorus b450m; Ryzen 7 2700; AMD Wraith; 2 x 8GB DDR4-3200; 128GB SATA NVMe; Gigabyte Gaming RTX 4080 Super; Corsair TX750m

dcn06: Fractal Focus G Mini; Gigabyte Aorus b450m; Ryzen 7 2700; AMD Wraith; 2 x 8GB DDR4-3200; 128GB SSD; Gigabyte Gaming RTX 4080 Super; Corsair CX650m


1 minute ago, Gorgon said:

Some of the larger atom-count Core22 WUs will give your GPU a workout almost as much as FurMark will. They're just that efficient at using all the shaders/cores on a GPU. Think of it as a really good long-term stability test of the GPU and your case's cooling.

 

My EVGA 2070 Super Hybrids will run at 51°C for weeks on end, but the odd WU will bump them up to 58°C.

Well, that explains the +5°C bump on some workloads.


12 minutes ago, RestlessRancor said:

So the bill payer put a stop to my folding, as it was going to increase the electricity bill by ~£30/month, which is understandable.

I've been gone a few days and this topic has blown up even more than it was, wow!

I've decided to run just a couple of WUs each day - I'll get my 780 working and tell it to finish, and my 980 usually runs one or two more than that, so at least it's something and it won't have such a big impact on the electricity bill.

With that said, I had a phone call earlier and have managed to get a job for the next few weeks at least, so I'll be able to contribute towards the bills and may be able to get up and running 24/7 again. But as it's only a 780 and a 980, I think I'd prefer to put a better GPU into the machine, one which will use a similar amount of power but contribute more to the project.

My brother's 1660 Ti is a very affordable card and was doing fairly well with the WUs, so that's an option, though I doubt I'll end up replacing the GPUs as it's not an essential thing to buy at the moment.

:)

Don't sweat it, take care of yourself and your family first, pay your bills, and do what you can.

A good money-saving measure can be to only fold during off-peak hours if your billing is set up that way.

 

Have a look at whether you get a cheaper electricity rate during the night etc. and only fold then; it can really help when you're on a budget.
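One way to automate that: FAHClient v7 exposes a local command socket on port 36330 that accepts pause/unpause. The sketch below is only an illustration, assuming the client accepts unauthenticated local connections and that your off-peak window is 23:00-07:00; adjust it to your tariff, or use cron with the client's --send-pause / --send-unpause options if your build has them.

import socket, time
from datetime import datetime

def send(cmd):
    # FAHClient's local command socket (default 127.0.0.1:36330).
    with socket.create_connection(("127.0.0.1", 36330), timeout=5) as s:
        s.recv(4096)                      # discard the welcome banner
        s.sendall((cmd + "\n").encode())

while True:
    hour = datetime.now().hour
    # Assumed off-peak window: 23:00-07:00. Fold then, pause otherwise.
    send("unpause" if (hour >= 23 or hour < 7) else "pause")
    time.sleep(600)                       # re-check every 10 minutes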

Hardware & Programming Enthusiast - Creator of LAR_Systems "Folding@Home in the Dark" browser extension and GPU / CPU PPD Database. 


8 minutes ago, Gorgon said:

Some of the larger atom-count Core22 WUs will give your GPU a workout almost as much as FurMark will. They're just that efficient at using all the shaders/cores on a GPU. Think of it as a really good long-term stability test of the GPU and your case's cooling.

 

My EVGA 2070 Super Hybrids will run at 51°C for weeks on end, but the odd WU will bump them up to 58°C.

Thanks for clarifying mate. Right now both my 3700X and 2070 Super are sitting at 62-64°C at 100% usage each (it fluctuates a bit up and down depending on the WU). No thermal issues here, was just wondering :)


Log excerpt:

12:49:55:WARNING:WU01:FS02:Exception: Failed to send results to work server: Transfer failed
12:49:55:WU01:FS02:Trying to send results to collection server
12:49:55:WU01:FS02:Uploading 49.92MiB to 155.247.166.219
12:49:55:WU01:FS02:Connecting to 155.247.166.219:8080
12:50:01:WU01:FS02:Upload 5.76%
12:50:07:WU01:FS02:Upload 12.77%
12:50:13:WU01:FS02:Upload 19.66%
12:50:19:WU01:FS02:Upload 26.54%
12:50:25:WU01:FS02:Upload 33.55%
12:50:31:WU01:FS02:Upload 40.69%
12:50:37:WU01:FS02:Upload 47.45%
12:50:43:WU01:FS02:Upload 54.59%
12:50:49:WU01:FS02:Upload 61.72%
12:50:55:WU01:FS02:Upload 68.86%
12:51:01:WU01:FS02:Upload 75.87%
12:51:07:WU01:FS02:Upload 82.88%
12:51:13:WU01:FS02:Upload 89.89%
12:51:19:WU01:FS02:Upload 96.65%
12:51:22:WU01:FS02:Upload complete
12:51:22:WU01:FS02:Server responded WORK_QUIT (404)
12:51:22:WARNING:WU01:FS02:Server did not like results, dumping
12:51:22:WU01:FS02:Cleaning up

Does this mean that even though the WU completed it didn't go through?

 

Took off the OC on the GPU for now too.

 

MSI B450 Gaming Pro Carbon AC | AMD Ryzen 2700X | NZXT Kraken X52 | MSI GeForce RTX 2070 Armor | Corsair Vengeance LPX 32GB (4x8GB) 3200MHz | Samsung 970 Evo 500GB M.2 NVMe (boot) / Samsung 860 Evo 500GB SSD | Corsair RM550x (2018) | Fractal Design Meshify C White | Logitech G Pro Wireless | Gigabyte Aorus AD27QD


10 minutes ago, Gorgon said:

Think of it as a really good long-term stability test of the GPU and your case's cooling.

 

My EVGA 2070 Super Hybrids will run at 51°C for weeks on end, but the odd WU will bump them up to 58°C.

 

Yup! Plus people can also check in the log whether their overclocks are producing bad work units.

 

My old man's Superclocked EVGA Hybrid 980 Ti produces loads of bad work units, and it's just a factory-overclocked card. It maybe gets a single project done a day without issue.
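A rough way to check is to scan the client log, which records rejected uploads and faulty core returns. A minimal sketch (the log path is an assumption; on Windows it normally lives under AppData\Roaming\FAHClient):

import re

# Flag rejected results, dumped WUs and faulty core returns in the FAH log.
pattern = re.compile(r"BAD_WORK_UNIT|FAULTY|Server did not like results|dumping")

hits = 0
with open("/var/lib/fahclient/log.txt", errors="replace") as log:  # path is an assumption
    for line in log:
        if pattern.search(line):
            hits += 1
            print(line.rstrip())
print(hits, "suspicious lines found")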

5950X | NH-D15S | 64GB 3200MHz | RTX 3090 | ASUS PG348Q + MG278Q

 


Finally, something to load up the home lab: 86 cores + 1 GPU on the job. I'm trying to throw my old GPU into another box, but I don't want to use its video out. Any help?

[screenshot attachment]


Ohhhhh snap! I'm a:

 

[image attachment]

El Zoido: 9900K + RTX 4090 / 32 GB 3600MHz RAM / Z390 Aorus Master

 

The Box: 3900X + RTX 3080 / 32 GB 3000MHz RAM / MSI B550 Mortar


11 minutes ago, oal32 said:

Finally, something to load up the home lab: 86 cores + 1 GPU on the job. I'm trying to throw my old GPU into another box, but I don't want to use its video out. Any help?

[screenshot attachment]

You should be able to add a GPU to a system (I assume a server with on-board graphics in your case) but not use it as the video out, via a setting in the BIOS.

Look for a setting that lets you specify on-board graphics as the default video output device, and manually set that once you have added the graphics card.

 

I have been able to do this on my ASUS / Intel systems.

 

Otherwise, if you have two GPUs in a system you should be able to set which one is the default display via the BIOS, or by simply connecting your monitor to the one you want to use, depending on the BIOS / OS.

Hardware & Programming Enthusiast - Creator of LAR_Systems "Folding@Home in the Dark" browser extension and GPU / CPU PPD Database. 


1 minute ago, LAR_Systems said:

You should be able to add a GPU to a system (I assume a server with on-board graphics in your case) but not use it as the video out, via a setting in the BIOS.

Look for a setting that lets you specify on-board graphics as the default video output device, and manually set that once you have added the graphics card.

 

I have been able to do this on my ASUS / Intel systems.

 

Otherwise, if you have two GPUs in a system you should be able to set which one is the default display via the BIOS, or by simply connecting your monitor to the one you want to use, depending on the BIOS / OS.

Would dummy plugs also help with that?


3 minutes ago, LAR_Systems said:

You should be able to add a GPU to a system (I assume a server with on-board graphics in your case) but not use it as the video out, via a setting in the BIOS.

I can't put them into the server because they're only half-height, plus that would be too easy. I have an old desktop I can put it in. All of my monitors are VGA, but the new/old GPU doesn't have that output, so I can't even see the BIOS to change it.

 


Just now, MidKnight Reign said:

Would dummy plugs also help with that?

Dummy plugs will not help your Folding@Home PPD etc., but if you want to control the machine via some form of remote desktop they are very helpful for getting a decent resolution and a refresh rate on RDP that's not terrible.

Post GPU-mining craze, Amazon sells packs of 3 HDMI dummy plugs for about $15-20.

Hardware & Programming Enthusiast - Creator of LAR_Systems "Folding@Home in the Dark" browser extension and GPU / CPU PPD Database. 


I seem to be getting larger WUs to work on. I am hitting 2 hours for a single GPU WU. I was only getting 1.5-hour ones before.

 

 


2 minutes ago, oal32 said:

I can't put them into the server because they're only half-height, plus that would be too easy. I have an old desktop I can put it in. All of my monitors are VGA, but the new/old GPU doesn't have that output, so I can't even see the BIOS to change it.

 

Yeah, that's a bit more tricky... so the desktop does not have a CPU with on-board graphics to let you configure the BIOS after adding the GPU?

If that's the case, the simplest solution might be an adapter from whatever video output the GPU does have to VGA.

Hardware & Programming Enthusiast - Creator of LAR_Systems "Folding@Home in the Dark" browser extension and GPU / CPU PPD Database. 


1 minute ago, SCSI15 said:

I seem to be getting larger WUs to work on. I am hitting 2 hours for a single GPU WU. I was only getting 1.5-hour ones before.

 

 

Me too...I have one that is taking over 3 hours...


[screenshot attachment]

 

Take that poopy virus!

El Zoido: 9900K + RTX 4090 / 32 GB 3600MHz RAM / Z390 Aorus Master

 

The Box: 3900X + RTX 3080 / 32 GB 3000MHz RAM / MSI B550 Mortar


Just now, SCSI15 said:

I seem to be getting larger WUs to work on. I am hitting 2 hours for a single GPU WU. I was only getting 1.5-hour ones before.

 

 

Getting those chonky WUs! Those can be better; they generally give more PPD and keep you out of the race to get a new WU from the servers for longer.
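That PPD difference comes from the quick return bonus. As commonly described, credit is roughly base points times sqrt(k * deadline / elapsed time), never less than the base, so a big WU that a fast card still returns well inside its deadline earns a disproportionate bonus. The sketch below only illustrates the shape of that formula; k is a per-project constant and every number here is made up:

import math

def estimated_credit(base_points, k, deadline_days, elapsed_days):
    # Quick-return-bonus shape: sqrt(k * deadline / elapsed), floored at 1x.
    bonus = math.sqrt(k * deadline_days / elapsed_days)
    return base_points * max(1.0, bonus)

# Illustrative numbers only, not real project values:
print(estimated_credit(base_points=10000, k=0.75, deadline_days=3.0, elapsed_days=0.15))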

Hardware & Programming Enthusiast - Creator of LAR_Systems "Folding@Home in the Dark" browser extension and GPU / CPU PPD Database. 


ExtremeOverclocking is screwing me!

 

The web client says I have 3.3M points, but ExtremeOverclocking has been showing 2.2M for a day now 😧


This topic is now closed to further replies.

