On 5/22/2021 at 2:53 PM, Gorgon said:

I feel ya.

 

I actually caved this morning to the whining and shut down all systems and turned on the A/C at 06:00. Even though it went down from 30C yesterday to 20C overnight the house was still at 28C and the basement at 30C at 06:00 with the forecast for 29C during the day today.

 

9 hours later and the house is at 26C but at least the humidity is way down and I managed to grab a few hours sleep.

 

You can use Ryzen Master, but apparently you can also just use the BIOS settings for PBO: PPT, EDC & TDC, as well as a negative voltage offset, to tame things down and let PBO do its thing at better energy efficiency. I'll be playing with these in a little bit, but I have to re-arrange a few systems and get lm-sensors working properly on a couple of them.
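For scripting the monitoring side while tuning, newer lm-sensors (3.5.0 and up) can dump readings as JSON with `sensors -j`. Here's a rough Python sketch that pulls readings out by label; the `k10temp`/`Tctl` names in the usage note are just what Ryzen chips typically expose, so treat them as assumptions and check your own `sensors` output:

```python
import json
import subprocess

def read_sensors():
    """Run `sensors -j` (lm-sensors 3.5.0+) and return the parsed JSON tree."""
    out = subprocess.run(["sensors", "-j"], capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

def find_readings(tree, label_prefix):
    """Collect every `*_input` value whose feature label starts with
    label_prefix, keyed by "chip/label" so duplicate chips stay distinct."""
    hits = {}
    for chip, features in tree.items():
        if not isinstance(features, dict):
            continue  # skip non-chip entries
        for label, values in features.items():
            if isinstance(values, dict) and label.startswith(label_prefix):
                for key, val in values.items():
                    if key.endswith("_input"):
                        hits[f"{chip}/{label}"] = val
    return hits
```

Something like `find_readings(read_sensors(), "Tctl")` should then give the Ryzen die temperature, assuming the k10temp driver is loaded.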

 

I favor Linux as it's just easier in so many ways to manage systems but you do lose some useful tools like Ryzen Master and HWinfo64.

I've been messing a tad with Ryzen Master.  So far I've found a fairly decent starting point that restrains the 5800X down to around a 75-80W pull while still allowing it to boost to around 4.2-4.3GHz across all its cores.  Considering it smacks my 1950X in single-thread at just that setting, I think I will tweak around that.  Actually, it came within darn spitting distance of my 1950X in multi-threading, but note: that is benchmark software.  I have to later throw the abuse of a 4K UHD re-encode with H.265 at it to see how it hangs with my 1950X.

 

In case you're wondering about the cooling on the chip, I'm using the monster Noctua NH-D15.

 

For the initial abuse runs I've been using Cinebench R23 and Prime95 small FFTs.

 

Since this is to be my main daily along with my laptop, Linux is a no-go.  

On 5/22/2021 at 9:16 PM, Gorgon said:

If it’s a newer Asus motherboard then you might be getting screwed by Multi Core Enhancement (or whatever it’s called on AMD boards) being enabled in the BIOS by default. Essentially the BIOS is configured to push the CPU hard.

Hmmm, I was not aware of that.  I will later go look to see if this B550 has that on or not.

Just a nutty gal living near the Gulf.

Ya want fun, come down here, we got heat, humidity, and these big arse storms call hurricanes

F@H & BOINC Installation on Linux Guide

My CPU Army: 5800X, E5-2670V3, 1950X, 5960X J Batch, 10750H *lappy

My GPU Army:960 FTW @ 1551MHz, Titan XP, RTX 2070 Max-Q *lappy

My Console Brigade: Gamecube, Wii, Wii U, Switch, PS2 Fatty, Xbox One S, Xbox One X

My Tablet Squad: iPad 9.7" (2018 model), Samsung Tab S, Nexus 7 (1st gen)

3D Printer Unit: Prusa MK3S, Prusa Mini, EPAX E10

VR Headset: Quest 2

 

Hardware lost to Kevdog's Law of Folding

OG Titan, 5960X, ThermalTake BlackWidow 850 Watt PSU

2 hours ago, Ithanul said:

I've been messing a tad with Ryzen Master.  So far I've found a fairly decent starting point that restrains the 5800X down to around a 75-80W pull while still allowing it to boost to around 4.2-4.3GHz across all its cores.  Considering it smacks my 1950X in single-thread at just that setting, I think I will tweak around that.  Actually, it came within darn spitting distance of my 1950X in multi-threading, but note: that is benchmark software.  I have to later throw the abuse of a 4K UHD re-encode with H.265 at it to see how it hangs with my 1950X.

 

In case you're wondering about the cooling on the chip, I'm using the monster Noctua NH-D15.

 

For the initial abuse runs I've been using Cinebench R23 and Prime95 small FFTs.

 

Since this is to be my main daily along with my laptop, Linux is a no-go.  

Hmmm, I was not aware of that.  I will later go look to see if this B550 has that on or not.

What I found in my testing was that at higher cTDP settings on a couple of CPUs I could run a -0.1V Vcore offset, which significantly reduced the power draw, but at lower cTDP settings the offset needed to be reduced.

 

But with that beefy cooler you should be able to run the CPU at the motherboard's stock settings if you’re willing to put up with it running in the 80s (°C) under load.

FaH BOINC HfM

Bifrost - 6 GPU Folding Rig  Linux Folding HOWTO Folding Remote Access

Systems:

dcn01: Fractal Meshify C; Gigabyte Aorus ax570 Master; Ryzen 9 5950x; EVGA 240 CLC; 2 x 16GB DDR4-3200; 512GB NVMe; EVGA RTX 2070 Super XC Hybrid; Corsair TX750M

dcn02: Fractal Define S; Gigabyte ax570 Pro WiFi; Ryzen 9 3950x; Noctua NH-D15; 2 x 16GB DDR4-3200; 128GB NVMe; EVGA RTX 2070 Super XC Hybrid; EVGA RTX 2070 XC Gaming; Corsair TX650M

dcn03: Fractal Meshify S2; Gigabyte Aorus ax570 Pro WiFi; Ryzen 9 3900x; Noctua NH-D15; 2 x 16GB DDR4-3200; 128GB NVMe; EVGA RTX 2070 Super XC Hybrid; EVGA RTX 2060 XC Gaming; Corsair TX650M

dcn04: Fractal Meshify S2; Gigabyte z370 Gaming 5; i9-9900K; EVGA 280 CLC; 4 x 4GB DDR4-2400; 128GB NVMe; EVGA RTX 2070 Super XC Hybrid; EVGA RTX 2060 XC Ultra Gaming; Corsair TX650M

dcn05: Fractal Define R4; Gigabyte ax370 Gaming K7; Ryzen 7 2700x; Hyper 212Evo e/w Noctua NF-A12 iPPc 3000 PWM; 2 x 8GB DDR4-3200; 128GB NVMe; EVGA RTX 2070 Super XC Hybrid; EVGA GTX 1660ti XC Ultra Gaming; Corsair TX650M

dcn06: Fractal Define C; Gigabyte ax570 Gaming X; Ryzen 7 2700; 2 x 4GB DDR4-2400; Samsung 250GB SSD; Gigabyte GTX 1060 6GB; Corsair TX550M

dcn10: Supermicro SC731; Gigabyte Aorus b450m; Ryzen 5 2700; 2 x 8GB DDR4-2400; Cruical 64GB SSD; SuperMicro 300W Bronze

dcn11: Fractal Core 1100; Gigabyte Aorus b450m; Ryzen 5 2600x; 2 x 8GB DDR4-3200; Adata 256GB NVMe; Seagate ES 7200rpm 3TB HDD; EVGA GTX 1070ti SC Gaming; Corsair CX500

dcn12: Fractal Focus G; Gigabyte z370 SLI; Pentium G5500; 1 x 4GB DDR4-2400; Samsung 256GB SSD; Corsair CX450M

dcn18: Acer E3400; AMD Athlon II x64; 2 x 2GB DDR3-1600; 240GB Spinning Rust

dcn19: NUC6i3SYK; Intel i3-6100U; 2 x 8GB DDR4-2133; Samsung 120GB NVMe


will be a 5-day challenge celebrating the 16th anniversary of the launch of PrimeGrid on BOINC. The challenge will be offered on the ESP-LLR application, beginning 12 June 13:00 UTC and ending 17 June 13:00 UTC.
To participate in the challenge, select only the Extended Sierpinski Problem LLR (ESP) project in your PrimeGrid preferences section.

Let's get a few primes for LTT in this event!

On 6/10/2021 at 8:33 AM, dogwitch said:

will be a 5-day challenge celebrating the 16th anniversary of the launch of PrimeGrid on BOINC. The challenge will be offered on the ESP-LLR application, beginning 12 June 13:00 UTC and ending 17 June 13:00 UTC.
To participate in the challenge, select only the Extended Sierpinski Problem LLR (ESP) project in your PrimeGrid preferences section.

Let's get a few primes for LTT in this event!

I do plan on participating. I can't remember the last time I ran ESP tasks, but would it be best to throw all of my threads at one task or run multiple at a time?

1 hour ago, roxasthehunter said:

I do plan on participating. I can't remember the last time I ran ESP tasks, but would it be best to throw all of my threads at one task or run multiple at a time?

Well, it looks like even with 14 threads my 5800X is going to take five hours per task, so that's probably for the best.
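If you want to pin the thread count per task yourself, PrimeGrid's LLR apps take a `-t` flag via an app_config.xml in the project directory. A sketch, with the caveat that the `app_name` below (llrESP) is my guess for the ESP application — check client_state.xml for the exact name:

```xml
<app_config>
  <app_version>
    <app_name>llrESP</app_name>
    <cmdline>-t 14</cmdline>
    <avg_ncpus>14</avg_ncpus>
  </app_version>
</app_config>
```

After saving it, "Read Config Files" from the Options menu (or a client restart) picks it up.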

2 hours ago, roxasthehunter said:

Well, it looks like even with 14 threads my 5800X is going to take five hours per task, so that's probably for the best.

If you've got a GPU, I suggest grabbing an easy WU for it as well during this.

On 6/6/2021 at 6:10 PM, Captainmarino said:

Hey, everyone. I've been having issues with my R9 280 and F@H, so I've decided to stop fussing with it and sign it up for BOINC. I tried to get PrimeGrid working on the GPU but it doesn't seem to download any work for it. I set the preferences to download a day's worth and clicked update, but still nothing. Anyone have any insight? Thanks in advance for your help.

Can't help with your PrimeGrid issue, but with an R9 280 you should look at the Milkyway@home project, as your card has high double-precision compute capability. Some of the older AMD cards are great for that project; my old HD 5850 that cost me $15 outperforms a GTX 1080 Ti because of its double-precision compute capability.

On 6/13/2021 at 2:27 AM, Ragnarsdad said:

Can't help with your PrimeGrid issue, but with an R9 280 you should look at the Milkyway@home project, as your card has high double-precision compute capability. Some of the older AMD cards are great for that project; my old HD 5850 that cost me $15 outperforms a GTX 1080 Ti because of its double-precision compute capability.

Thanks for the suggestion! I managed to figure out the problem. There was a problem with the preferences sticking between the application itself and online. It's been crunching away for a few days now with none of the issues I was seeing with F@H.

 

Do you happen to know if I should expect to get decent PPD from MW@H with the 280? I just decided to go with PrimeGrid when I dropped F@H because PG was offering good PPD during the Pentathlon and I don't have any other experience with BOINC.

 

Thanks again for your help!

14 hours ago, Captainmarino said:

Thanks for the suggestion! I managed to figure out the problem. There was a problem with the preferences sticking between the application itself and online. It's been crunching away for a few days now with none of the issues I was seeing with F@H.

 

Do you happen to know if I should expect to get decent PPD from MW@H with the 280? I just decided to go with PrimeGrid when I dropped F@H because PG was offering good PPD during the Pentathlon and I don't have any other experience with BOINC.

 

Thanks again for your help!

Glad you got it sorted out.  Managing preferences in BOINC is a bit of an art with a learning curve. It’s always blatantly obvious why they failed … after you figure it out. 😀

 

I don’t know about AMD tasks in BOINC, but OpenPandemics on World Community Grid has GPU tasks in beta if you want to continue contributing to COVID research, though they’re somewhat limited.

 

Einstein also has GPU work units, as does PrimeGrid. The beauty of BOINC is you can select several projects at the same time and try them out.

6 hours ago, Gorgon said:

Glad you got it sorted out.  Managing preferences in BOINC is a bit of an art with a learning curve. It’s always blatantly obvious why they failed … after you figure it out. 😀

 

I don’t know about AMD tasks in BOINC, but OpenPandemics on World Community Grid has GPU tasks in beta if you want to continue contributing to COVID research, though they’re somewhat limited.

 

Einstein also has GPU work units, as does PrimeGrid. The beauty of BOINC is you can select several projects at the same time and try them out.

I'll try the GPU tasks on WCG and E@H after I get a couple days of MW@H under my belt to see how those work out for me. Thanks for the suggestions. So far, each of the MW@H tasks takes under 30s to complete. LOL. It's crazy.

 

On a side note, it was interesting to find out that MW@H is run by Rensselaer Polytechnic Institute. I grew up about 10 miles north of it and now live about 45 minutes north. I actually had a chance to go there with a scholarship too but... 10 miles?... that hit a little too close to home 😁

9 hours ago, Captainmarino said:

I'll try the GPU tasks on WCG and E@H after I get a couple days of MW@H under my belt to see how those work out for me. Thanks for the suggestions. So far, each of the MW@H tasks takes under 30s to complete. LOL. It's crazy.

 

On a side note, it was interesting to find out that MW@H is run by Rensselaer Polytechnic Institute. I grew up about 10 miles north of it and now live about 45 minutes north. I actually had a chance to go there with a scholarship too but... 10 miles?... that hit a little too close to home 😁

Worth noting: for an R9 280X (same card as my old HD 7970) you're best off running multiple tasks at once for MW@H. I used to run 3 at once and it would triple my output. I'll find a guide on how to do it tomorrow; it's really easy and well worth doing.

1 hour ago, Ragnarsdad said:

Worth noting: for an R9 280X (same card as my old HD 7970) you're best off running multiple tasks at once for MW@H. I used to run 3 at once and it would triple my output. I'll find a guide on how to do it tomorrow; it's really easy and well worth doing.

Very good to know! Thanks! I didn't know you could do something like that.

On 6/15/2021 at 5:20 PM, Ragnarsdad said:

Worth noting: for an R9 280X (same card as my old HD 7970) you're best off running multiple tasks at once for MW@H. I used to run 3 at once and it would triple my output. I'll find a guide on how to do it tomorrow; it's really easy and well worth doing.

@Captainmarino

 

See these instructions.

 

Create a file named app_config.xml in the project's directory.

<app_config>
  <app>
    <name>the name from the file</name>
    <gpu_versions>
      <cpu_usage>1.0</cpu_usage>
      <gpu_usage>0.33333</gpu_usage>
    </gpu_versions>
  </app>
</app_config>

Season to taste the fractional CPU and GPU per project.
 

Once it's created, select "Read Config Files" from the "Options" menu in BOINC Manager.
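If you end up doing this for a few projects, a tiny generator avoids fat-fingering the fractions. A sketch — the app `name` still has to match whatever appears in that project's client_state.xml:

```python
def app_config_xml(app_name: str, n_concurrent: int, cpu_per_task: float = 1.0) -> str:
    """Build an app_config.xml that runs n_concurrent copies of a GPU app.

    BOINC starts another GPU task whenever the free fraction of the card is
    at least <gpu_usage>, so 1/n_concurrent makes n copies share one GPU.
    """
    gpu_usage = round(1.0 / n_concurrent, 5)
    return f"""<app_config>
  <app>
    <name>{app_name}</name>
    <gpu_versions>
      <cpu_usage>{cpu_per_task}</cpu_usage>
      <gpu_usage>{gpu_usage}</gpu_usage>
    </gpu_versions>
  </app>
</app_config>
"""
```

Write the result into the project's directory as app_config.xml, then re-read the config files as above.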

14 hours ago, Captainmarino said:

Very good to know! Thanks! I didn't know you could do something like that.

OK, so for Milkyway@home with an R9 280X you need to do the following (BTW, I don't know how good you are at this sort of thing, so apologies if I seem like I'm being a bit simple):

 

Create a new blank Notepad document on your desktop and save it with the name app_config.xml

 

Copy and paste this into the new document:

 

<app_config>
  <app>
    <name>milkyway</name>
    <max_concurrent>0</max_concurrent>
    <gpu_versions>
      <gpu_usage>0.33</gpu_usage>
      <cpu_usage>0.33</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

 

Once saved, copy the file into the BOINC data directory for the MW@H project; this is usually something like C:\ProgramData\BOINC\projects\milkyway.cs.rpi.edu_milkyway

(you may have to unhide folders to be able to see it)

 

Finally, restart the BOINC client.

 

At this point you should be able to see three tasks running at once. I tried mine with more but found that after three the benefits dropped off significantly. The advantage of doing this is that your card stays at a constant workload, so it isn't ramping up and down anywhere near as much, plus of course much more work gets completed.

 

If you get stuck with anything, let me know.

1 hour ago, Ragnarsdad said:

OK, so for Milkyway@home with an R9 280X you need to do the following (BTW, I don't know how good you are at this sort of thing, so apologies if I seem like I'm being a bit simple):

 

Create a new blank Notepad document on your desktop and save it with the name app_config.xml

 

Copy and paste this into the new document:

 

<app_config>
  <app>
    <name>milkyway</name>
    <max_concurrent>0</max_concurrent>
    <gpu_versions>
      <gpu_usage>0.33</gpu_usage>
      <cpu_usage>0.33</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

 

Once saved, copy the file into the BOINC data directory for the MW@H project; this is usually something like C:\ProgramData\BOINC\projects\milkyway.cs.rpi.edu_milkyway

(you may have to unhide folders to be able to see it)

 

Finally, restart the BOINC client.

 

At this point you should be able to see three tasks running at once. I tried mine with more but found that after three the benefits dropped off significantly. The advantage of doing this is that your card stays at a constant workload, so it isn't ramping up and down anywhere near as much, plus of course much more work gets completed.

 

If you get stuck with anything, let me know.

That actually sounds perfect. Makes total sense. I'll try that out tonight. Thanks!

Posted (edited)
10 hours ago, Captainmarino said:

That actually sounds perfect. Makes total sense. I'll try that out tonight. Thanks!

Alright, I got it working just fine so thanks for that.

 

There's something I may not be understanding, however: I see that I have 3 tasks running, but each one takes close to 4x as long as a single one does. Is it me, or does that result in an overall decrease in task completion rate?

 

Correction:  it must have been an anomaly. By my current calculations, it's giving me about an 11% boost in tasks completed. Thanks again!

Edited by Captainmarino
7 hours ago, Captainmarino said:

Alright, I got it working just fine so thanks for that.

 

There's something I may not be understanding, however: I see that I have 3 tasks running, but each one takes close to 4x as long as a single one does. Is it me, or does that result in an overall decrease in task completion rate?

 

Correction:  it must have been an anomaly. By my current calculations, it's giving me about an 11% boost in tasks completed. Thanks again!

I don’t know about Milkyway, but adjusting the CPU ratio might improve things. Try 2/3 of a CPU and see what happens. It’s different for every project, and getting the system optimized is half the fun.

 

Once I get my 100 year Diamond badge on OpenPandemics on World Community Grid I’ll likely move a couple of GPUs back over to Einstein@Home

On 6/17/2021 at 2:41 AM, Captainmarino said:

Alright, I got it working just fine so thanks for that.

 

There's something I may not be understanding, however: I see that I have 3 tasks running, but each one takes close to 4x as long as a single one does. Is it me, or does that result in an overall decrease in task completion rate?

 

Correction:  it must have been an anomaly. By my current calculations, it's giving me about an 11% boost in tasks completed. Thanks again!

Glad to hear it is going well. For comparison, my GTX 980 takes almost 4 minutes per work unit running one at a time. AMD cards really are the way to go for double-precision tasks. At the moment I think MW@H is the only project using it, but there may be more again in the future.

 

5 hours ago, Ragnarsdad said:

Glad to hear it is going well. For comparison, my GTX 980 takes almost 4 minutes per work unit running one at a time. AMD cards really are the way to go for double-precision tasks. At the moment I think MW@H is the only project using it, but there may be more again in the future.

 

Wow! You're not kidding. The R9 280 was taking 31s per WU when running one at a time. While running at 3x, I think it's around 41 or 43s. I have it splitting time now between MW@H and PG because why not? 🙂
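A quick way to sanity-check whether N-at-a-time is actually a win is to compare steady-state throughput, n × 3600 / seconds per task, using your own averaged wall-clock times (per-task time usually grows as you stack tasks). A minimal sketch:

```python
def tasks_per_hour(n_concurrent: int, seconds_per_task: float) -> float:
    """Steady-state completions per hour when n_concurrent tasks run
    side by side and each takes seconds_per_task of wall-clock time."""
    return n_concurrent * 3600.0 / seconds_per_task
```

Then compare, say, `tasks_per_hour(1, t_single)` against `tasks_per_hour(3, t_tripled)` for your measured times.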

  • 2 weeks later...
On 6/9/2021 at 11:33 PM, Gorgon said:

What I found in my testing was that at higher cTDP settings on a couple of CPUs I could run a -0.1V Vcore offset, which significantly reduced the power draw, but at lower cTDP settings the offset needed to be reduced.

 

But with that beefy cooler you should be able to run the CPU at the motherboard's stock settings if you’re willing to put up with it running in the 80s (°C) under load.

The high temps at stock settings are already with that cooler in place.

 

I will probably just leave the 5800X where it locks in at 4.3GHz, since that keeps the chip in the 70C range.

I may later tweak a bit more to see if I can get 4.5GHz without high temps.  I'm thinking I may just set two or so profiles depending on workload, since I don't need high clocks all the time.

  • 2 weeks later...

Does anyone know how to configure BOINC to use only one of two NVIDIA GPUs in a computer for its projects?

Desktop build : Ryzen 5 3600 (O/C to 4Ghz all-core) | Gigabyte B450M-DS3H | 16GB DDR4-2400 Crucial(O/C to 2667) | GALAX RTX 2060 6GB | Zotac GTX 1060 Mini 3GB (O/C) | CoolerMaster MWE 650 Gold

                        

Laptop : ASUS ROG Strix G17 : i7-10750H, 16GB RAM, Geforce 1660Ti 6GB(80W), 512GB SSD

4 hours ago, rkv_2401 said:

Does anyone know how to configure BOINC to use only one of two NVIDIA GPUs in a computer for its projects?

By default BOINC will only use the fastest GPU it discovers. To bypass the default behavior you need to first configure BOINC to use all the GPUs, then exclude the one(s) you don't want to use per project in the cc_config.xml file in the BOINC Data directory:

<cc_config>
  <options>
    <use_all_gpus>1</use_all_gpus>
    <exclude_gpu>
      <url>http://www.worldcommunitygrid.org/</url>
      <device_num>0</device_num>
      <type>NVIDIA</type>
    </exclude_gpu>
  </options>
</cc_config>

The above, for example, will exclude NVIDIA device 0 for World Community Grid.

 

Note you may have to re-read the config and/or Local Pref files from the Options menu in BOINC Manager and/or restart the BOINC client for the setting to take effect.
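A quick way to catch a typo before restarting the client is to parse the file yourself. A rough sketch using only the standard library:

```python
import xml.etree.ElementTree as ET

def excluded_gpus(cc_config_text: str):
    """List (project_url, device_num, gpu_type) for every <exclude_gpu>
    entry in a cc_config.xml, as a sanity check before restarting BOINC."""
    root = ET.fromstring(cc_config_text)  # raises ParseError on malformed XML
    entries = []
    for ex in root.findall("options/exclude_gpu"):
        entries.append((
            ex.findtext("url"),
            int(ex.findtext("device_num", default="-1")),
            ex.findtext("type", default="any"),
        ))
    return entries
```

Feed it the contents of your cc_config.xml; if it raises a parse error or the list isn't what you expect, fix the file before bouncing the client.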

11 hours ago, Gorgon said:

By default BOINC will only use the fastest GPU it discovers. To bypass the default behavior you need to first configure BOINC to use all the GPUs, then exclude the one(s) you don't want to use per project in the cc_config.xml file in the BOINC Data directory:


<cc_config>
  <options>
    <use_all_gpus>1</use_all_gpus>
    <exclude_gpu>
      <url>http://www.worldcommunitygrid.org/</url>
      <device_num>0</device_num>
      <type>NVIDIA</type>
    </exclude_gpu>
  </options>
</cc_config>

The above, for example, will exclude NVIDIA device 0 for World Community Grid.

 

Note you may have to re-read the config and/or Local Pref files from the Options menu in BOINC Manager and/or restart the BOINC client for the setting to take effect.

Okay, thanks! I'll give it a shot next weekend.

