
Folding Community Board

Solved by GOTSpectrum

 

I've noticed a fairly big temperature difference while folding on my laptop. This afternoon it was at 61°C in the middle of a WU that was giving me 1.4 million PPD (higher than usual), and now it's at 73-74°C despite the room being cooler and the fans spinning up and down. It's nothing compared to what these go through when gaming, but I still wanted to check: are these temp swings okay for long-term use?

Desktop 1 : Ryzen 5 3600 (O/C to 4Ghz all-core) | Gigabyte B450M-DS3H | 24GB DDR4-2400 Crucial(O/C to 2667) | GALAX RTX 2060 6GB | CoolerMaster MWE 650 Gold

 

Desktop 2 : i5 10400 | 32GB DDR4-3200(@ 2667Mhz) |  EVGA GTX 1070 SC 8 GB | Corsair CV450M

                        

Laptop : ASUS ROG Strix G17 : i7-10750H, 16GB RAM, GTX 1660Ti 6GB(90W), 1TB NVMe SSD

 

Yoga 3 14 - i7-5500U, 8GB RAM, GeForce GT 940M, 256GB SSD


6 hours ago, RollinLower said:

Don't remember, sadly, though I believe it wasn't that old, maybe two or three versions back. I did reinstall to 22.04 recently and that also meant reinstalling the drivers. Maybe I just installed the wrong ones? It's Linux, so I ran into some issues doing the reinstall of course, and I finished that at like 2 AM 🙃 anything can happen if you work on a PC until 2 AM.

I believe a fresh 22.04 LTS install ships an earlier 510-stream driver (510.60.02?), i.e. with 22.04 you don't need to add the NVIDIA Drivers PPA like you did with 18.04.

FaH BOINC HfM

Bifrost - 6 GPU Folding Rig  Linux Folding HOWTO Folding Remote Access Folding GPU Profiling ToU Scheduling UPS

Systems:

desktop: Lian-Li O11 Air Mini; Asus ProArt x670 WiFi; Ryzen 9 7950x; EVGA 240 CLC; 4 x 32GB DDR5-5600; 2 x Samsung 980 Pro 500GB PCIe3 NVMe; 2 x 8TB NAS; AMD FirePro W4100; MSI 4070 Ti Super Ventus 2; Corsair SF750

nas1: Fractal Node 804; SuperMicro X10sl7-f; Xeon e3-1231v3; 4 x 8GB DDR3-1666 ECC; 2 x 250GB Samsung EVO Pro SSD; 7 x 4TB Seagate NAS; Corsair HX650i

nas2: Synology DS-123j; 2 x 6TB WD Red Plus NAS

nas3: Synology DS-224+; 2 x 12TB Seagate NAS

dcn01: Fractal Meshify S2; Gigabyte Aorus ax570 Master; Ryzen 9 5900x; Noctua NH-D15; 4 x 16GB DDR4-3200; 512GB NVMe; 2 x Zotac AMP 4070ti; Corsair RM750Mx

dcn02: Fractal Meshify S2; Gigabyte ax570 Pro WiFi; Ryzen 9 3950x; Noctua NH-D15; 2 x 16GB DDR4-3200; 128GB NVMe; 2 x Zotac AMP 4070ti; Corsair RM750x

dcn03: Fractal Meshify C; Gigabyte Aorus z370 Gaming 5; i9-9900k; BeQuiet! PureRock 2 Black; 2 x 8GB DDR4-2400; 128GB SATA m.2; MSI 4070 Ti Super Gaming X; MSI 4070 Ti Super Ventus 2; Corsair TX650m

dcn05: Fractal Define S; Gigabyte Aorus b450m; Ryzen 7 2700; AMD Wraith; 2 x 8GB DDR 4-3200; 128GB SATA NVMe; Gigabyte Gaming RTX 4080 Super; Corsair TX750m

dcn06: Fractal Focus G Mini; Gigabyte Aorus b450m; Ryzen 7 2700; AMD Wraith; 2 x 8GB DDR 4-3200; 128GB SSD; Gigabyte Gaming RTX 4080 Super; Corsair CX650m


9 hours ago, RollinLower said:

Don't remember, sadly, though I believe it wasn't that old, maybe two or three versions back. I did reinstall to 22.04 recently and that also meant reinstalling the drivers. Maybe I just installed the wrong ones? It's Linux, so I ran into some issues doing the reinstall of course, and I finished that at like 2 AM 🙃 anything can happen if you work on a PC until 2 AM.

Upgraded from 510.60.02 to 510.73.05. We'll see what happens when I resume folding at 19:00 EST, when electricity prices drop.



On 5/30/2022 at 12:12 PM, rkv_2401 said:

It's nothing compared to what these go through when gaming, but I still wanted to check: are these temp swings okay for long-term use?

These temperatures sound normal for a laptop - are they the GPU temperature? Different projects tax the GPU to different degrees, depending on both the molecule size and the calculations being done, so it's normal for power use and temperature to vary between projects. I think the most important thing to watch for the long-term life of a laptop is the battery temperature. As long as the battery is kept at an acceptable temperature, and the CPU and GPU temperatures aren't close to the junction temperature for the silicon, there's nothing to worry about. The junction temperature for your CPU is 100°C, by the way.


13 hours ago, gunnarre said:

These temperatures sound normal for a laptop - are they the GPU temperature? Different projects tax the GPU to different degrees, depending on both the molecule size and the calculations being done, so it's normal for power use and temperature to vary between projects. I think the most important thing to watch for the long-term life of a laptop is the battery temperature. As long as the battery is kept at an acceptable temperature, and the CPU and GPU temperatures aren't close to the junction temperature for the silicon, there's nothing to worry about. The junction temperature for your CPU is 100°C, by the way.

Alright, thanks! I found that the Tjmax for my GPU is 95°C (https://www.nvidia.com/en-us/geforce/graphics-cards/gtx-1660-ti/) and yeah, there's a fair amount of margin between that and how hot my laptop's GPU gets.



I'm in the process of building a new gaming PC for someone I know. All of the parts have been ordered, but some aren't going to be here until next week. The GPU arrived yesterday though, and I've been testing it a bit while waiting for the rest of the stuff. Yes, I do have permission from the person who is getting these parts. 

 

I was going to test the RTX 3080 in my main desktop (the Ryzen 7 2700 machine in my signature), but the 450W PSU in that system isn't large enough, and the layout of the case makes it hard to disconnect the CPU power cable. So, I did the next best thing: I paired this brand new RTX 3080 with an old Z68 motherboard and a Core i7-2600. I'm powering everything with a new Corsair RM850x PSU, and it's all installed inside an old Antec case from 2002. I'm very impressed with the folding performance of this RTX 3080, and seeing it in person is making me contemplate purchasing one for myself. 

 

Just figured I'd share some RTX 3080 folding goodness with you all while I have the chance. I won't be able to use this card during the Summer Folding Sprint, but that's alright. 

 

[Screenshot: RTX 3080 Folding@home client stats]

Phobos: AMD Ryzen 7 2700, 16GB 3000MHz DDR4, ASRock B450 Steel Legend, 8GB Nvidia GeForce RTX 2070, 2GB Nvidia GeForce GT 1030, 1TB Samsung SSD 980, 450W Corsair CXM, Corsair Carbide 175R, Windows 10 Pro

 

Polaris: Intel Xeon E5-2697 v2, 32GB 1600MHz DDR3, ASRock X79 Extreme6, 12GB Nvidia GeForce RTX 3080, 6GB Nvidia GeForce GTX 1660 Ti, 1TB Crucial MX500, 750W Corsair RM750, Antec SX635, Windows 10 Pro

 

Pluto: Intel Core i7-2600, 32GB 1600MHz DDR3, ASUS P8Z68-V, 4GB XFX AMD Radeon RX 570, 8GB ASUS AMD Radeon RX 570, 1TB Samsung 860 EVO, 3TB Seagate BarraCuda, 750W EVGA BQ, Fractal Design Focus G, Windows 10 Pro for Workstations

 

York (NAS): Intel Core i5-2400, 16GB 1600MHz DDR3, HP Compaq OEM, 240GB Kingston V300 (boot), 3x2TB Seagate BarraCuda, 320W HP PSU, HP Compaq 6200 Pro, TrueNAS CORE (12.0)


A few posts recently, about old CPUs, and then the 3080 above, have gotten me thinking.

 

QRB obviously skews POINTS production towards modern GPUs.

 

However...for an "amount of work per watt" measure (not points per watt)...do modern GPUs still win, or with the amount of watts they use, are they just faster for more points because of QRB?

 

I ask because something like the 3080 above is impressive, and some old Xeons (as I've often used from my old servers) use stupid amounts of electricity without being fast, but something like the Ryzen 5600 running 8 CPU threads will still put out several hundred thousand PPD and several work units per day, with total system power at the wall being only 60-85 watts (less if you undervolt). Is there anywhere that tracks this sort of thing, or just points with QRB per watt?


1 hour ago, justpoet said:

A few posts recently, about old CPUs, and then the 3080 above, have gotten me thinking.

 

QRB obviously skews POINTS production towards modern GPUs.

 

However...for an "amount of work per watt" measure (not points per watt)...do modern GPUs still win, or with the amount of watts they use, are they just faster for more points because of QRB?

 

I ask because something like the 3080 above is impressive, and some old Xeons (as I've often used from my old servers) use stupid amounts of electricity without being fast, but something like the Ryzen 5600 running 8 CPU threads will still put out several hundred thousand PPD and several work units per day, with total system power at the wall being only 60-85 watts (less if you undervolt). Is there anywhere that tracks this sort of thing, or just points with QRB per watt?

I track this obsessively. I'm currently profiling an RTX 2060 to find where its peak efficiency is (i.e. at which GPU clock peak PPD/W occurs).

 

I believe LAR Systems has some comparisons on his site. It's apples and oranges, though, as he can't measure power draw, only estimate it.

 

In general, more modern GPUs and CPUs have better folding efficiency. This makes sense: as the process node shrinks and instructions per clock (IPC) increase, more work gets done using the same amount of energy.

 

I don't run F@H on my Ryzen CPUs (I run BOINC (WCG/Einstein)), but I have noticed that under-clocking the CPUs from their base+boost clocks to 2.8GHz saves significantly on power with only a minor reduction in yield.

 

The same is true of GPUs. Running a GPU at lower clocks than the default base+boost behavior significantly decreases power draw with only a minimal decrease in yield.

 

I suspect a Ryzen 5 5600X running at an undervolt would out-perform many of the old E5 Xeons with minimal power draw. I see the allure of running "Big Iron", but I just can't afford the power these days to run these 24x7x365.



1 hour ago, justpoet said:

However...for an "amount of work per watt" measure (not points per watt)...do modern GPUs still win, or with the amount of watts they use, are they just faster for more points because of QRB?

Modern hardware definitely wins; roughly, you can expect twice the efficiency each time you go to a newer generation, at least for Nvidia GPUs.

Having a 3080, I don't even fold with my 1070: yes, the 3080 draws twice as much, but it does about 4-6x more points/work in the same time.
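In other words, the work-per-watt advantage is just the output ratio divided by the power ratio; a two-line check using the numbers quoted above:

```python
# 3080 vs 1070: ~2x the power draw for ~4-6x the output
# means roughly 2-3x the work per watt.
power_ratio = 2.0
efficiency_gain = [work / power_ratio for work in (4.0, 6.0)]
print(efficiency_gain)  # → [2.0, 3.0]
```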

F@H
Desktop: i9-13900K, ASUS Z790-E, 64GB DDR5-6000 CL36, RTX3080, 2TB MP600 Pro XT, 2TB SX8200Pro, 2x16TB Ironwolf RAID0, Corsair HX1200, Antec Vortex 360 AIO, Thermaltake Versa H25 TG, Samsung 4K curved 49" TV, 23" secondary, Mountain Everest Max

Mobile SFF rig: i9-9900K, Noctua NH-L9i, Asrock Z390 Phantom ITX-AC, 32GB, GTX1070, 2x1TB SX8200Pro RAID0, 2x5TB 2.5" HDD RAID0, Athena 500W Flex (Noctua fan), Custom 4.7l 3D printed case

 

Asus Zenbook UM325UA, Ryzen 7 5700u, 16GB, 1TB, OLED

 

GPD Win 2


On 5/13/2022 at 4:45 PM, Gorgon said:

With GPU prices approaching MSRP, I bit the bullet and bought an Asus TUF 3070 Ti today at a local brick-and-mortar store at just a smidge over MSRP. I have it installed and am profiling it to see how efficiently it can run.

[Image: 3070 Ti efficiency vs. power limit]

It appears that Ampere GPUs, unlike Pascal and Turing, fall off a cliff at lower power levels. In my case, running a p18039 with a modest +50MHz graphics overclock, efficiency peaked around 150W, stayed pretty consistent until about 225W, then dropped again at the top end.

 

My previous recommendation for Pascal and Turing (run at the lowest power limit for peak efficiency) thus does not apply to Ampere, where I'd suggest profiling your GPU to figure out where it runs best.

 

I use HfM configured to average PPD values over the 3 previous frames; after adjusting the power limit, I wait for 6 frames to complete and then record the PPD. It is important, though, to use the observed GPU power draw from Afterburner or nvidia-smi to analyze the results, rather than the set (requested) power limit, as they will differ, especially approaching the maximum. Though this GPU claims a maximum power limit of 350W, I could not get it to draw above 278W on this WU.

[Image: 3070 Ti power limit vs. PPD]

Update: after observing Ampere's strange behavior when running at a fixed power limit, I investigated further.

 

Using

nvidia-smi -i <X> -lgc <LowerClock>,<UpperClock>

we can force the GPU to a fixed graphics clock speed and measure the yield (PPD) and the average power draw (W) to calculate the efficiency (kPPD/W). In this manner we can run a GPU at much lower effective power than the power-limit settings alone allow. (Note: it appears that Pascal and earlier GPUs do not support locking the graphics clock to a fixed frequency.)

 

What I observed is that Turing GPUs, like Ampere, also fall off in efficiency at lower graphics clocks. The difference, however, is that on Turing the lower power limit sits at the edge of the "hump" in the clock/efficiency curve, whereas on Ampere the lower power limit sits well below the hump.
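The profiling loop described above (lock a graphics clock with nvidia-smi, record PPD and average watts, compute kPPD/W, repeat) can be sketched roughly like this; the helper names and the sample numbers are illustrative only, not real measurements:

```python
def efficiency_kppd_per_watt(ppd: float, watts: float) -> float:
    """Efficiency for one measurement, in kPPD per watt."""
    return ppd / 1000.0 / watts

def peak_efficiency(samples):
    """samples: iterable of (graphics_clock_mhz, ppd, avg_watts).
    Returns the (clock, kPPD/W) pair with the best efficiency."""
    best = max(samples, key=lambda s: efficiency_kppd_per_watt(s[1], s[2]))
    clock, ppd, watts = best
    return clock, efficiency_kppd_per_watt(ppd, watts)

# Made-up numbers, just to show the shape of the sweep:
samples = [(1050, 2_300_000, 95), (1250, 2_600_000, 80), (1500, 2_750_000, 110)]
print(peak_efficiency(samples))  # → (1250, 32.5)
```

In a real run, the PPD column would come from HfM's frame averages and the watts column from `nvidia-smi --query-gpu=power.draw` while the clock is locked with `-lgc`.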



7 hours ago, Gorgon said:

I track this obsessively. I'm currently profiling an RTX 2060 to find where its peak efficiency is (i.e. at which GPU clock peak PPD/W occurs).

...

In general, more modern GPUs and CPUs have better folding efficiency. This makes sense: as the process node shrinks and instructions per clock (IPC) increase, more work gets done using the same amount of energy.

...

I suspect a Ryzen 5 5600X running at an undervolt would out-perform many of the old E5 Xeons with minimal power draw. I see the allure of running "Big Iron", but I just can't afford the power these days to run these 24x7x365.

6 hours ago, Kilrah said:

Modern hardware definitely wins; roughly, you can expect twice the efficiency each time you go to a newer generation, at least for Nvidia GPUs.

Yeah. That's about all the data I can find so far, and it's based on PPD. But as mentioned, PPD is heavily affected by QRB, which I want to remove from the equation. I also agree that more modern hardware will almost always be better, as my example showed.

 

My question is about work per watt, though. GPUs generally use a lot more wattage than CPUs, for example, but have much higher PPD outputs, largely boosted by QRB. If I think only about efficiency of work per watt-hour, rather than points per watt-hour, so QRB no longer affects things, would GPUs still have the advantage?

 

I guess another way to think about it: if I can take the same work unit and run it on a CPU vs. a GPU, the GPU will get done faster, but use more watts while it does it. Will the time savings allow for lower watt-hour usage for that work unit, or will the cost of speeding along and the older process node make the CPU use fewer watt-hours to finish the work unit, despite taking longer to do it?

 


28 minutes ago, justpoet said:

Yeah. That's about all the data I can find so far, and it's based on PPD. But as mentioned, PPD is heavily affected by QRB, which I want to remove from the equation. I also agree that more modern hardware will almost always be better, as my example showed.

 

My question is about work per watt, though. GPUs generally use a lot more wattage than CPUs, for example, but have much higher PPD outputs, largely boosted by QRB. If I think only about efficiency of work per watt-hour, rather than points per watt-hour, so QRB no longer affects things, would GPUs still have the advantage?

 

I guess another way to think about it: if I can take the same work unit and run it on a CPU vs. a GPU, the GPU will get done faster, but use more watts while it does it. Will the time savings allow for lower watt-hour usage for that work unit, or will the cost of speeding along and the older process node make the CPU use fewer watt-hours to finish the work unit, despite taking longer to do it?

 

I tracked this and put it somewhere (CPU vs. GPU); I'll reply soon with the data.



4 hours ago, justpoet said:

I guess another way to think about it: if I can take the same work unit and run it on a CPU vs. a GPU, the GPU will get done faster, but use more watts while it does it. Will the time savings allow for lower watt-hour usage for that work unit, or will the cost of speeding along and the older process node make the CPU use fewer watt-hours to finish the work unit, despite taking longer to do it?

Just because you can run a GPU at 250-500W does not mean you should. In fact, their efficiency is usually greatest when you power-limit them to the lowest available setting or, in some cases, lower the graphics clock below what you'd reach with the minimum power limit for further gains.

 

A GPU is much more efficient than a CPU, as long as the calculations can be done on it. Most of my GPUs running Folding@home run at about 100W for Turing (2070 Supers) and 125W for Ampere (3070 Ti). The 2070 Supers average about 2.7 MPPD and the 3070 Ti about 3.8 MPPD. I don't have numbers for CPUs, but take your yield and divide it by the CPU power consumption for comparison. I can pretty much guarantee it will be a couple of orders of magnitude lower.

 

Here's a plot of Efficiency for a project 18202 (Alzheimer's) WU at various Graphics Clocks for my 2060 Super:

[Image: 2060 Super efficiency vs. graphics clock]

 

This shows the peak efficiency for this WU at about 1250MHz.

 

Here's another view showing the GPU's average power at those efficiencies.

[Image: 2060 Super efficiency vs. power draw]

 

This shows that the GPU is most efficient at about 80W of power draw. The default power limit for this model, however, is 175W, way down the efficiency curve, and the minimum power limit is 125W, which is still below the peak.

 

I run this GPU with the graphics clock capped at 1260MHz and a 125W power limit to keep it at or near peak efficiency. It is currently running 2 Einstein@Home Gamma-ray Pulsar Binary Search WUs concurrently while consuming only 65W, about the same as a mid-tier Ryzen CPU.

 

The 5900x in the same system is down-clocked to 2.8GHz, where it consumes 50W and runs 10 concurrent WUs of the same project, as well as 1 thread feeding 1 folding slot in the system and 2 threads used for the Einstein GPU tasks. (The motherboard in this system has a dead memory channel, so above 14 or so threads the memory access from the Einstein BOINC jobs starts "thrashing" and performance drops dramatically. This isn't an issue with WCG Mapping Cancer Markers tasks, which seem to access memory much less often.)

 

The CPU tasks take about 7 hours 21 minutes to complete and yield 693 points, whereas the GPU jobs take about 11 minutes 35 seconds each and yield 3,465 points 😱

 

Einstein states that the jobs in the tasks are identical; it's just that the GPU tasks have more jobs bundled per task, hence the point discrepancy.

 

From the above numbers we can calculate the relative efficiency of the CPU vs. GPU.

CPU:      453 Points/W

GPU: 13,943 Points/W

 

That is, the GPU is about 30.8 times as efficient as the CPU.
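A quick back-of-envelope check of those figures (numbers taken from the post above; the small gap between the computed GPU value and the quoted 13,943 points/W presumably comes from rounding of the task time and power draw):

```python
def points_per_watt(points_per_task, task_minutes, concurrent, watts):
    """Daily points output divided by power draw."""
    tasks_per_day = (24 * 60 / task_minutes) * concurrent
    return tasks_per_day * points_per_task / watts

# CPU: 10 concurrent WUs, 7h21m each, 693 points, 50W
cpu_eff = points_per_watt(693, 7 * 60 + 21, 10, 50)
# GPU: 2 concurrent WUs, ~11m35s each, 3465 points, 65W
gpu_eff = points_per_watt(3465, 11 + 35 / 60, 2, 65)

print(round(cpu_eff))  # → 453, matching the post
print(round(gpu_eff))  # lands in the same ballpark as the quoted 13,943
```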



Little watercooling upgrade on my FAH server! I can now completely disassemble this system without draining the loop. 😄

[Photos: the updated watercooling loop]

 

Also have a 2080 waterblocked and ready, but I'm missing a couple of 90-degree fittings to get it installed. They should arrive tomorrow though!

[Photo: the waterblocked 2080]


19 hours ago, justpoet said:

A few posts recently, about old CPUs, and then the 3080 above, have gotten me thinking.

 

QRB obviously skews POINTS production towards modern GPUs.

 

However...for an "amount of work per watt" measure (not points per watt)...do modern GPUs still win, or with the amount of watts they use, are they just faster for more points because of QRB?

So last year during the Folding event, you remember we were complaining about getting some monster CPU WUs with 80k base credit? They took 2-3 days to complete on both my Ryzen 5 3600 and i7-10750H. The i7 (remember, this is a mobile 10th gen i7 heavily undervolted and running at/slightly below base clock) during that time had a power draw of 30W.  Let's assume a best case of 48 hours to finish, although it was definitely closer to 2.5 days than 2.

 

My 1660 Ti mobile (to be fair, also undervolted, and one of the most power-efficient prev-gen GPUs) is doing a WU with 70k base credit. It's currently drawing 68W; let's round that up to 70W. It'll finish in... let me check... 4 more hours, and it's 20% done, so it should have a total runtime of 5 hours; let's say it'll definitely finish within 6 hours from start to finish.

 

Power consumed by mobile i7 to finish ~80k base credit WU : 0.72kWh (Consumption for a whole day) * 2 = 1.44kWh

 

Power consumed by mobile 1660Ti to finish ~70k base credit WU : 1.68kWh * 0.25 (6 hours - 1/4th of a day) = 0.42kWh

 

And remember, this isn't considering the extra electricity wasted keeping the rest of the computer active for 42 hours longer, slowdowns if you so much as opened a web browser, etc. The math would skew even further towards GPUs if I considered my desktop R5 3600 + RTX 2060: the Ryzen 3600 idles somewhere between 20 and 30W.
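The energy arithmetic above boils down to watts × hours; a two-function sketch reproducing the numbers:

```python
def kwh(watts: float, hours: float) -> float:
    """Energy in kilowatt-hours for a given draw and duration."""
    return watts * hours / 1000.0

cpu_wu = kwh(30, 48)  # mobile i7 at 30W, ~2 days per monster CPU WU
gpu_wu = kwh(70, 6)   # mobile 1660 Ti at ~70W, ~6 hours per WU

print(cpu_wu)  # → 1.44 (kWh)
print(gpu_wu)  # → 0.42 (kWh)
```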



6 hours ago, RollinLower said:

Little watercooling upgrade on my FAH server! I can now completely disassemble this system without draining the loop. 😄

Nice!

 

I see you're using a pair of Noctua NA-FC1 PWM controllers to get around SuperMicro's abysmal fan control. I have a couple of them and they're really handy.



3 hours ago, Gorgon said:

Nice!

 

I see you're using a pair of Noctua NA-FC1 PWM controllers to get around SuperMicro's abysmal fan control. I have a couple of them and they're really handy.

Yup, I defaulted to these a long while ago when I got my first server-grade motherboard in here. Fan control on server-grade hardware is almost always just a pain!

I just have the fans set to a speed where the system is happy under full load but the fans stay quiet enough for 24/7 operation. Set and forget, basically; no matter what WUs I get, the system stays cool enough.


5 minutes ago, RollinLower said:

Yup, I defaulted to these a long while ago when I got my first server-grade motherboard in here. Fan control on server-grade hardware is almost always just a pain!

i just have the fans set to a speed where under full load the system is happy, but the fans stay quiet enough for 24/7 operation. set and forget basically, no matter what WU's i get the system stays cool enough.

Yeah, an Arduino-based PWM controller with one or two inputs for NTC thermistors would be perfect. I'm surprised no one has made one yet, as Arduinos can natively read NTC thermistors through their A/D inputs and support PWM output.



9 hours ago, RollinLower said:

jup, i defaulted to these a long while ago when i got my first server grade motherboard in here. fan control on server grade hardware is almost always just a pain!

i just have the fans set to a speed where under full load the system is happy, but the fans stay quiet enough for 24/7 operation. set and forget basically, no matter what WU's i get the system stays cool enough.

oh yeah, or a great rad but a crap PWM controller... cough Asus

MSI X399 SLI Plus | AMD Threadripper 2990WX (all-core 3GHz lock) | Thermaltake Floe Riing 360 | EVGA 2080, Zotac 2080 | G.Skill Ripjaws 128GB 3000MHz | Corsair RM1200i | 150TB | Asus TUF Gaming mid tower | 10Gb NIC


2 hours ago, RollinLower said:

so here's a question i've never thought about before: is there actually a way in the Linux fahclient to see current PPD production without a desktop environment?

FAHClient --send-command queue-info
FAHClient --send-command ppd | grep '^[0-9][0-9][0-9].' | tr -d '\n'
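Commands like these can also be wrapped in a small polling loop for headless monitoring. A minimal sketch, assuming FAHClient is on the PATH with its local command socket enabled (the stock Linux install default); the 60-second interval is arbitrary:

```shell
#!/bin/sh
# Log the client's reported PPD once a minute with a timestamp.
# Assumes FAHClient is on PATH and its command socket is enabled.
while true; do
    ppd=$(FAHClient --send-command ppd | grep '^[0-9]' | tr -d '\n')
    printf '%s PPD: %s\n' "$(date '+%F %T')" "$ppd"
    sleep 60
done
```

The grep keeps only lines that start with a digit (the numeric PPD value in the client's reply) and tr strips the trailing newline so the value sits on one log line.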



Hmmm.  Am I the only one reporting back to LAR Systems while folding on an AMD 6800XT and a 6900XT in Linux? When I stop, the Linux line goes away. The dot is me firing them back up in preparation for June's folding sprint.

 

BTW, the long linux line is only the 6800XT.  I fire up the 6900XT for the sprints only, which explains why the dot is way over 5 million points.

 

As for the absence, I was away on an extended trip and shut down all of the folding rigs before I left.

Crop Screenshot from 2022-06-04 09-52-54.png


39 minutes ago, Starman57 said:

Hmmm.  Am I the only one reporting back to LAR Systems while folding on an AMD 6800XT and a 6900XT in Linux? When I stop, the Linux line goes away. The dot is me firing them back up in preparation for June's folding sprint.

 

BTW, the long linux line is only the 6800XT.  I fire up the 6900XT for the sprints only, which explains why the dot is way over 5 million points.

 

As for the absence, I was away on an extended trip and shut down all of the folding rigs before I left.

Crop Screenshot from 2022-06-04 09-52-54.png

could be, i fold on linux on all my cards, but without a desktop environment so no browser to run the extension in. 

i wish there was a way to contribute to this without a browser.


1 hour ago, RollinLower said:

could be, i fold on linux on all my cards, but without a desktop environment so no browser to run the extension in. 

i wish there was a way to contribute to this without a browser.

Forward the F@H web port (7396) via SSH to a Windows system and then run the Browser with extension on Windows.

 

On the Windows System install PuTTY. Configure a PuTTY connection to the Linux Host:

PuTTY_Fwd.jpg.fba5363b88ddda2b678259a7e4bc2a2f.jpg

Don't forget to click "Add"

 

Run this PuTTY shortcut.

 

Open https://client.foldingathome.org or http://127.0.0.1:7396

 

See the Results from the Linux System:

DarkResults.thumb.jpg.7c47e49536e45bd1517143a58dd88262.jpg

 

Works like a charm. I can see the web client using the default URL. Just make sure that if Advanced Control is installed on the Windows system, it is NOT running.
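If the Windows box has OpenSSH installed (or you're forwarding from another Linux/macOS machine), the same tunnel works without PuTTY. A sketch of the equivalent command; `user` and `folding-host` are placeholders for your own login and hostname:

```shell
# Forward local port 7396 to the F@H web interface on the folding box.
# "user" and "folding-host" are placeholders; substitute your own.
ssh -L 7396:127.0.0.1:7396 user@folding-host
# Then browse to http://127.0.0.1:7396 on the forwarding machine.
```

This is the same LocalForward that the PuTTY dialog above configures, just on one line.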


