
BOINC Community Board

On 11/25/2019 at 4:08 PM, Windows7ge said:

Oh hey, just broke into the top 10 on the team for WCG.

Small victory. Reaching top 6 shouldn't be hard but then it's just a wall.

Yeah, thanks - I’d just finally hit the top 10 and you overtook me.

 

Just got my last power bill, so I’m throttling back during peak hours by running “boinccmd --project <URL> suspend” from cron.

FaH BOINC HfM

Bifrost - 6 GPU Folding Rig  Linux Folding HOWTO Folding Remote Access Folding GPU Profiling ToU Scheduling UPS

Systems:

desktop: Lian-Li O11 Air Mini; Asus ProArt x670 WiFi; Ryzen 9 7950x; EVGA 240 CLC; 4 x 32GB DDR5-5600; 2 x Samsung 980 Pro 500GB PCIe3 NVMe; 8TB NAS; AMD FirePro W4100; MSI 4070 Ti Super Ventus 2; Corsair SF750

nas1: Fractal Node 804; SuperMicro X10sl7-f; Xeon e3-1231v3; 4 x 8GB DDR3-1666 ECC; 2 x 250GB Samsung EVO Pro SSD; 7 x 4TB Seagate NAS; Corsair HX650i

nas2: Synology DS-123j; 2 x 6TB WD Red Plus NAS

nas3: Synology DS-224+; 2 x 12TB Seagate NAS

dcn01: Fractal Meshify S2; Gigabyte Aorus ax570 Master; Ryzen 9 5900x; Noctua NH-D15; 4 x 16GB DDR4-3200; 512GB NVMe; 2 x Zotac AMP 4070ti; Corsair RM750Mx

dcn02: Fractal Meshify S2; Gigabyte ax570 Pro WiFi; Ryzen 9 3950x; Noctua NH-D15; 2 x 16GB DDR4-3200; 128GB NVMe; 2 x Zotac AMP 4070ti; Corsair RM750x

dcn03: Fractal Meshify C; Gigabyte Aorus z370 Gaming 5; i9-9900k; BeQuiet! PureRock 2 Black; 2 x 8GB DDR4-2400; 128GB SATA m.2; MSI 4070 Ti Super Gaming X; MSI 4070 Ti Super Ventus 2; Corsair TX650m

dcn05: Fractal Define S; Gigabyte Aorus b450m; Ryzen 7 2700; AMD Wraith; 2 x 8GB DDR 4-3200; 128GB SATA NVMe; Gigabyte Gaming RTX 4080 Super; Corsair TX750m

dcn06: Fractal Focus G Mini; Gigabyte Aorus b450m; Ryzen 7 2700; AMD Wraith; 2 x 8GB DDR 4-3200; 128GB SSD; Gigabyte Gaming RTX 4080 Super; Corsair CX650m


1 hour ago, Gorgon said:

Yeah, thanks - I’d just finally hit the top 10 and you overtook me.

 

Just got my last power bill, so I’m throttling back during peak hours by running “boinccmd --project <URL> suspend” from cron.

Sorry, but hey there's enough room for both of us up here. I'm moving up to 9th tomorrow so you can have 10th back when you hit 6,037,836 :D

 

I used to live where I do for free but my landlord decided to start charging rent. Now I use it as my excuse to cover the electrical cost. If you're solely running the boinc-client like I am on my rigs, why don't you enable remote access on them? Then you can link the BOINC Manager over the network and use its scheduler to control when projects run.
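For anyone who wants to try the remote-access route, here's a minimal sketch of enabling BOINC's remote RPC on a Linux rig. The data directory path is the usual Debian/Ubuntu package location, and the password and IP are placeholders; all of these vary by setup:

```shell
# On the headless rig: set an RPC password and whitelist the controlling machine.
echo 'changeme' | sudo tee /var/lib/boinc-client/gui_rpc_auth.cfg
echo '192.168.1.10' | sudo tee /var/lib/boinc-client/remote_hosts.cfg
sudo systemctl restart boinc-client

# From the desktop: connect BOINC Manager via "Select computer...", or script it:
boinccmd --host rig1 --passwd changeme --get_state
```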


18 hours ago, Windows7ge said:

Sorry, but hey there's enough room for both of us up here. I'm moving up to 9th tomorrow so you can have 10th back when you hit 6,037,836 :D

 

I used to live where I do for free but my landlord decided to start charging rent. Now I use it as my excuse to cover the electrical cost. If you're solely running the boinc-client like I am on my rigs why don't you enable remote access on them? Then you can link the boinc-manager over the network and use it's scheduler for when to run projects.

Not a problem, I just had to chuckle; WCG moves so slowly that it takes forever to make progress. Must resist picking up a 2920X and an X399 board ...

 

I'm currently running WCG on:

  • 6/8 threads on my e3-1231v3 (2 threads so Windows doesn’t completely suck)
  • 2/4 threads on Pentium Gold 5400 and 5500s (2 threads each for F@H GPUs)
  • 16/16 threads on a 2700x
  • 10/16 threads on a 2700 (6 threads for F@H GPUs)
  • 14/16 threads on another 2700 (2 threads for F@H GPUs)

I do have remote access enabled and use the BOINC Manager from my daily driver to control 5 systems, but I don’t use the scheduler in BM because it would stop the CPU folding, which is trivial in power consumption compared to the 4 GPUs I’m running on Gravitational Wave searches on Einstein. Instead I have a cron job that suspends Einstein during peak hours and resumes it for mid-peak and off-peak.
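The time-of-use cron approach can be sketched like this. The URL shown is Einstein@Home's usual project URL and the hours are illustrative; substitute your own project and your utility's actual peak window:

```shell
# crontab -e entries (example times, weekdays only)
# Suspend the GPU-heavy project when peak pricing starts at 16:00...
0 16 * * 1-5 boinccmd --project http://einstein.phys.uwm.edu/ suspend
# ...and resume it when mid-peak starts at 21:00.
0 21 * * 1-5 boinccmd --project http://einstein.phys.uwm.edu/ resume
```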


13 hours ago, Gorgon said:

Not a problem, I just had to chuckle; WCG moves so slowly that it takes forever to make progress. Must resist picking up a 2920X and an X399 board ...

 

I'm currently running WCG on:

  • 6/8 threads on my e3-1231v3 (2 threads so Windows doesn’t completely suck)
  • 2/4 threads on Pentium Gold 5400 and 5500s (2 threads each for F@H GPUs)
  • 16/16 threads on a 2700x
  • 10/16 threads on a 2700 (6 threads for F@H GPUs)
  • 14/16 threads on another 2700 (2 threads for F@H GPUs)

I do have remote access enabled and use the BOINC Manager from my daily driver to control 5 systems, but I don’t use the scheduler in BM because it would stop the CPU folding, which is trivial in power consumption compared to the 4 GPUs I’m running on Gravitational Wave searches on Einstein. Instead I have a cron job that suspends Einstein during peak hours and resumes it for mid-peak and off-peak.

I just made some room for you:

[screenshot of the team standings]

In ~6 days you'll be back in the top 10 :P.

 

By its nature CPU work earns credit a lot more slowly than GPU work, but what you have to understand is that it's just as slow for everyone else; the only reason progress feels slow is that you're already so close to the top. It seems WCG hasn't had a GPU-compatible project in quite some time, which is how I think alex_kramer hit 32M. He probably got in when there were a few.

 

What I'm currently running WCG on:

  • 2x E5-2698v3's. I believe 54/64 threads. (I don't want Windows capped at 100% because it'll start to choke itself.)
  • 2x E5-2670's. 24/32 threads. (I need some of the CPU available for what I have the server doing.)
  • i7-5960X 4.5GHz 14/16 threads.
  • i7-3930K 10/12 threads.
  • TR-1950X 24/32 threads. (This one is pulling double-duty running WCG & Einstein@home simultaneously.)

If that's what works for your setup then alright.


For the people running Einstein@Home: depending on whether your motive is to further science or just to make your BOINCstats points go up as fast as possible (maybe both), you may want to reconfigure your project preferences.

 

Lately Einstein@Home has been handing out a lot of the following job type:

  • Gravitational Wave search O2 Multi-Directional GPU

These jobs take very nearly 4 times longer than the "Gamma-ray pulsar binary search #1 (GPU)" tasks and hand out very nearly 1/4 as many points. In practice you end up with approx 1/8 as much credit per hour of run time (for me that's about 100,000 points/day instead of 800,000-1,000,000/day).

 

So if you're like me and you want to help science but don't want to feel like a Bitcoin miner whose network difficulty just jumped ~8x, you may want to disable those. There are still plenty of the former jobs available.


On 11/30/2019 at 6:59 PM, Windows7ge said:

For the people running Einstein@Home: depending on whether your motive is to further science or just to make your BOINCstats points go up as fast as possible (maybe both), you may want to reconfigure your project preferences.

 

Lately Einstein@Home has been handing out a lot of the following job type:

  • Gravitational Wave search O2 Multi-Directional GPU

These jobs take very nearly 4 times longer than the "Gamma-ray pulsar binary search #1 (GPU)" tasks and hand out very nearly 1/4 as many points. In practice you end up with approx 1/8 as much credit per hour of run time (for me that's about 100,000 points/day instead of 800,000-1,000,000/day).

 

So if you're like me and you want to help science but don't want to feel like a Bitcoin miner whose network difficulty just jumped ~8x, you may want to disable those. There are still plenty of the former jobs available.

Well dang, that explains quite a lot actually. I was wondering why my production took a hit!


On 12/3/2019 at 10:31 AM, RollinLower said:

Somehow my machines (both my server and main rig) can't receive WUs from Rosetta@home. They keep crunching Collatz and Einstein no problem though. What gives?

The way I understand the BOINC software you can only run one project at a time. I believe it does allow you to set it to switch between projects on a time basis but you can't run two different projects at once (which is why I'd like to explore virtualization).

 

Check your logs. If it says it tried to fetch work but there's no work available then I believe it's out of your hands (unless your preferences on the site say not to give you work for the jobs they have available).


4 hours ago, Windows7ge said:

The way I understand the BOINC software you can only run one project at a time. I believe it does allow you to set it to switch between projects on a time basis but you can't run two different projects at once (which is why I'd like to explore virtualization).

 

Check your logs. If it says it tried to fetch work but there's no work available then I believe it's out of your hands (unless your preferences on the site say not to give you work for the jobs they have available).

You are correct in that it can’t run multiple projects simultaneously on a single thread (or a GPU-plus-thread combination), but it definitely can and does run multiple projects on the same system.

 

I'm running WCG, Einstein and Folding@home on some systems.

 

The project weight, or Resource Share, in BOINC sets the sharing between projects. The shares are relative, so if E@H and WCG are both set to 100 and both are vying for the same resource then each should get 1/2 of it.
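As a worked example of how relative shares divide a resource (a sketch of the arithmetic, not BOINC's actual scheduler code):

```python
def resource_fraction(shares):
    """Divide one resource among projects in proportion to their resource shares."""
    total = sum(shares.values())
    return {project: share / total for project, share in shares.items()}

# Two projects at the default share of 100 split a GPU 50/50.
print(resource_fraction({"E@H": 100, "WCG": 100}))

# One project at 200 against two at 100 gets half of everything (200/400),
# while the other two get a quarter each.
print(resource_fraction({"E@H": 200, "WCG": 100, "Rosetta": 100}))
```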

 

For F@H and BOINC together, I use BOINC's "use at most X% of CPUs" setting to reserve threads exclusively for F@H, pause the GPUs in F@H that I want to make available for E@H, and edit BOINC's cc_config.xml to select or ignore GPUs as required, depending on how I need to assign resources.
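For the cc_config.xml side, the relevant option is <exclude_gpu>; the device number and project URL below are illustrative, so substitute your own:

```xml
<cc_config>
  <options>
    <!-- Keep GPU device 1 away from Einstein@Home; other projects may still use it. -->
    <exclude_gpu>
      <url>http://einstein.phys.uwm.edu/</url>
      <device_num>1</device_num>
    </exclude_gpu>
  </options>
</cc_config>
```

After editing, re-reading config files from the Manager (or restarting the client) picks up the change.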

 

Daily Driver: 6/8 threads on WCG

kvm1: 16 threads WCG

fold6: 6 GPU & 6 threads for F@H; 10 threads for WCG

fold7: 2 GPU & 2 threads for E@H; 14 threads for WCG

fold8: 1 GPU & 1 thread for F@H; 1 GPU & 1 thread for E@H; 2 threads WCG

fold9: 1 GPU & 1 thread for F@H; 1 GPU & 1 thread for E@H; 2 threads WCG

 


2 hours ago, Gorgon said:

You are correct in that it can’t run multiple projects simultaneously on a single thread (or a GPU-plus-thread combination), but it definitely can and does run multiple projects on the same system.

 

I'm running WCG, Einstein and Folding@home on some systems.

 

The project weight, or Resource Share, in BOINC sets the sharing between projects. The shares are relative, so if E@H and WCG are both set to 100 and both are vying for the same resource then each should get 1/2 of it.

 

For F@H and BOINC together, I use BOINC's "use at most X% of CPUs" setting to reserve threads exclusively for F@H, pause the GPUs in F@H that I want to make available for E@H, and edit BOINC's cc_config.xml to select or ignore GPUs as required, depending on how I need to assign resources.

 

Daily Driver: 6/8 threads on WCG

kvm1: 16 threads WCG

fold6: 6 GPU & 6 threads for F@H; 10 threads for WCG

fold7: 2 GPU & 2 threads for E@H; 14 threads for WCG

fold8: 1 GPU & 1 thread for F@H; 1 GPU & 1 thread for E@H; 2 threads WCG

fold9: 1 GPU & 1 thread for F@H; 1 GPU & 1 thread for E@H; 2 threads WCG

 

Hypothetical scenario, let's say I had 4 GPUs and I signed up for four projects setting the distribution to 25% for each. Would each GPU pick up a task for each project or would all the GPUs run a single project for a set period of time then switch to another project, and so on and so forth...


Welp, if anyone wants to gun for my position on WCG (@Gorgon being the most likely candidate), put it into overdrive: my productivity is now down ~32,000 credits per day (I'm telling you before BOINCstats even does) because my largest server is offline and will remain offline until further notice (guesstimate: 2 weeks) for reasons I will post about in a Status Update.

 

So while I'm down, GO GO GO! :D


10 hours ago, Windows7ge said:

Hypothetical scenario, let's say I had 4 GPUs and I signed up for four projects setting the distribution to 25% for each. Would each GPU pick up a task for each project or would all the GPUs run a single project for a set period of time then switch to another project, and so on and so forth...

I think that each GPU would get one task. Distribution is used when you have multiple projects on one system.

Favebook's F@H Stats

Favebook's BOINC Stats

 

CPU i7-8700k (5.0GHz)  Motherboard Aorus Z370 Gaming 7  RAM Vengeance® RGB Pro 16GB DDR4 3200MHz  GPU  Aorus 1080 Ti

Case Carbide Series SPEC-OMEGA  Storage  Samsung Evo 970 1TB & WD Red Pro 10TB

PSU Corsair HX850i  Cooling Custom EKWB loop

 

Display Acer Predator x34 120Hz


1 hour ago, Favebook said:

I think that each GPU would get one task. Distribution is used when you have multiple projects on one system.

That's what I'm trying to figure out. Yes each GPU would pick up a task but would they pick up tasks from different projects at once or would it only run one project at a time switching between projects periodically?

 

I'm basically trying to justify playing with EPYC, virtualization, & GPU pass-through, but if there isn't any real benefit to segregating the projects & GPUs then that kind of ruins the plan I had going...


8 hours ago, Windows7ge said:

That's what I'm trying to figure out. Yes each GPU would pick up a task but would they pick up tasks from different projects at once or would it only run one project at a time switching between projects periodically?

 

I'm basically trying to justify playing with EPYC, virtualization, & GPU pass-through, but if there isn't any real benefit to segregating the projects & GPUs then that kind of ruins the plan I had going...

The only way I know of is running separate clients on the same OS, each set to work on only one GPU - you can use cc_config.xml to restrict which GPUs each client runs on.

 

Though, there is probably a method using config files to get a single client to do the trick.

2023 BOINC Pentathlon Event

F@H & BOINC Installation on Linux Guide

My CPU Army: 5800X, E5-2670V3, 1950X, 5960X J Batch, 10750H *lappy

My GPU Army:3080Ti, 960 FTW @ 1551MHz, RTX 2070 Max-Q *lappy

My Console Brigade: Gamecube, Wii, Wii U, Switch, PS2 Fatty, Xbox One S, Xbox One X

My Tablet Squad: iPad Air 5th Gen, Samsung Tab S, Nexus 7 (1st gen)

3D Printer Unit: Prusa MK3S, Prusa Mini, EPAX E10

VR Headset: Quest 2

 

Hardware lost to Kevdog's Law of Folding

OG Titan, 5960X, ThermalTake BlackWidow 850 Watt PSU


1 hour ago, Ithanul said:

The only way I know of is running separate clients on the same OS, each set to work on only one GPU - you can use cc_config.xml to restrict which GPUs each client runs on.

 

Though, there is probably a method using config files to get a single client to do the trick.

I don't know how happy the OS would be running 4, 5, 6+ instances of the BOINC client. Not to mention the BOINC Manager would be confused connecting to localhost unless each boinc-client instance listened on a unique port.

 

Configured normally, I'm going to guess each project would just fill the queue based on the percentages you configured (and how much work you request) and it would run every project at once.


On 12/4/2019 at 10:27 PM, Windows7ge said:

Hypothetical scenario, let's say I had 4 GPUs and I signed up for four projects setting the distribution to 25% for each. Would each GPU pick up a task for each project or would all the GPUs run a single project for a set period of time then switch to another project, and so on and so forth...

The weights are summed so assuming all projects have a weight of 100 then all 4 projects would run at the same time on separate GPUs.

 

If you were running 3 projects and wanted to assign 2 GPUs to one project then you’d either set the weights for the other two projects to 50 or set the preferred project to 200.

 

Of course things get a little confusing when you're mixing CPU-only projects with GPU projects.


On 12/5/2019 at 10:40 AM, Windows7ge said:

That's what I'm trying to figure out. Yes each GPU would pick up a task but would they pick up tasks from different projects at once or would it only run one project at a time switching between projects periodically?

 

I'm basically trying to justify playing with EPYC, virtualization, & GPU pass-through, but if there isn't any real benefit to segregating the projects & GPUs then that kind of ruins the plan I had going...

Each GPU would pick up a task from separate projects and continue to do so.

 

There’s likely use cases where virtualization and pass-through would be easier to control jobs with but it’s likely a lot of work to get going.

 

It’s almost Christmas, so if you’ve been more nice than naughty then maybe Santa will make it an Epyc one.


1 hour ago, Gorgon said:

The weights are summed so assuming all projects have a weight of 100 then all 4 projects would run at the same time on separate GPUs.

 

If you were running 3 projects and wanted to assign 2 GPUs to one project then you’d either set the weights for the other two projects to 50 or set the preferred project to 200.

 

Of course things get a little confusing when you're mixing CPU-only projects with GPU projects.

So that's how that works. Up until now I thought it only let you run one project at a time.

1 hour ago, Gorgon said:

Each GPU would pick up a task from separate projects and continue to do so.

 

There’s likely use cases where virtualization and pass-through would be easier to control jobs with but it’s likely a lot of work to get going.

So there's no immediately apparent benefit to virtualizing the process. Really just a 1-3% performance reduction with no return. :/

 

Surprisingly not actually.

  1. Install Debian
  2. Install virt-manager (or Cockpit) & OVMF (UEFI support for QEMU VMs)
  3. Enable IOMMU in the BIOS (AMD-Vi, at least in the case of TR) or VT-d (Intel)
  4. Enable IOMMU on the kernel command line in GRUB.
  5. Disable the GPU driver from loading on system startup. This can be done on a per-device basis, or just blacklist the driver system-wide.
  6. Pass through the GPU and everything else in its IOMMU group to the VM.
  7. Run lstopo and edit the VM's XML file to pin the vCPU cores to physical cores. This helps performance quite a bit, especially when you have more than one NUMA node like on TR or a multi-socket motherboard.
  8. Enable huge pages.

From there you're pretty much done. You can set up the VM with drivers (OpenCL/OpenGL/CUDA), install BOINC, then duplicate the VM's virtual disk and spin up as many instances as you have physical GPUs to give them. Assuming nothing goes wrong (something always goes wrong) I could do this in a day or less.
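Steps 4-6 above roughly translate to the following on Debian. The PCI IDs are examples only; substitute your GPU's IDs from lspci -nn, and the driver to blacklist depends on your card:

```shell
# Step 4: add IOMMU flags to the kernel command line in /etc/default/grub, e.g.
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
sudo update-grub

# Step 5: bind the GPU to the vfio-pci stub instead of the real driver.
echo 'options vfio-pci ids=10de:1b06,10de:10ef' | sudo tee /etc/modprobe.d/vfio.conf
echo 'blacklist nouveau' | sudo tee /etc/modprobe.d/blacklist-gpu.conf
sudo update-initramfs -u

# After a reboot, check what landed in which IOMMU group before passing it through:
for d in /sys/kernel/iommu_groups/*/devices/*; do echo "$d"; done
```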

 

It's on my to-do list to write a guide on how to setup low overhead/high performance VMs on Debian. Gaming in a VM is extremely doable with the right hypervisor & configuration.

 

1 hour ago, Gorgon said:

It’s almost Christmas so if you’ve been more nice then naughty then maybe Santa will make it an Epyc one ?

Yeah, that would go over well.

 

"Dear Santa, I've been good (mostly) all year and I just want one thing: to build a crazy VM server. So could you maybe drop an EPYC 7601P, 128GB of DDR4 RAM, an SP3 motherboard with 8 PCIe slots, a pair of Corsair 1000Ws, and eight R9-series high-end AMD GPUs in my stocking? That's all I ask.

 

P.S. And a 2TB NVMe SSD.

P.P.S. And a 225V/15A circuit breaker. 115V/10A isn't going to cut it.

 

Thanks, Windows7ge"


On 12/5/2019 at 8:13 PM, Windows7ge said:

I don't know how happy the OS would be running 4, 5, 6+ instances of the BOINC client. Not to mention the BOINC Manager would be confused connecting to localhost unless each boinc-client instance listened on a unique port.

I've done multiple BOINC clients in previous Pentathlon events. Windows takes a bit more setup and Linux is fairly straightforward if one has basic terminal knowledge. And yes, you set each client to use a different port, which you then aim the manager towards.

 

I mostly only did it for bunkering, as some projects restrict how many WUs you can download at a time.
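A sketch of that multi-client setup on Linux - the directories and port numbers here are arbitrary examples:

```shell
# Each client instance needs its own data directory and GUI-RPC port.
boinc --dir /opt/boinc1 --gui_rpc_port 31416 --allow_remote_gui_rpc &
boinc --dir /opt/boinc2 --gui_rpc_port 31417 --allow_remote_gui_rpc &

# Aim boinccmd (or the Manager's "Select computer...", as host:port) at one instance:
boinccmd --host localhost:31417 --get_state
```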


5 hours ago, Ithanul said:

I've done multiple BOINC clients in previous Pentathlon events. Windows takes a bit more setup and Linux is fairly straightforward if one has basic terminal knowledge. And yes, you set each client to use a different port, which you then aim the manager towards.

 

I mostly only did it for bunkering, as some projects restrict how many WUs you can download at a time.

This is partly where I had the idea of using virtualization for bunkering: you could just save an image of the vDisk, then create and run as many VMs as you have time to bunker with. Unfortunately Oracle VM VirtualBox has way too much overhead, Windows in general has way too much overhead, and not everyone would be comfortable using Debian+QEMU and optimizing the VMs for performance.

 

So that may be one benefit to virtualizing BOINC: bunkering would be much easier and more efficient. Overhead for being a VM should be minimal, at least with Debian+QEMU as the hypervisor, and it'd give you a full additional layer of control over sharing resources with projects - add a GPU, switch a GPU, add CPUs, etc.

 

I'll probably just go ahead and build it anyways. Maybe I'll come up with another purpose for it some day that can seriously leverage the VM aspect of it.


1 hour ago, Windows7ge said:

This is partly where I had the idea of using virtualization for bunkering: you could just save an image of the vDisk, then create and run as many VMs as you have time to bunker with. Unfortunately Oracle VM VirtualBox has way too much overhead, Windows in general has way too much overhead, and not everyone would be comfortable using Debian+QEMU and optimizing the VMs for performance.

 

So that may be one benefit to virtualizing BOINC: bunkering would be much easier and more efficient. Overhead for being a VM should be minimal, at least with Debian+QEMU as the hypervisor, and it'd give you a full additional layer of control over sharing resources with projects - add a GPU, switch a GPU, add CPUs, etc.

 

I'll probably just go ahead and build it anyways. Maybe I'll come up with another purpose for it some day that can seriously leverage the VM aspect of it.

I'd still be interested in reading the write-up, as my experience with passthrough is minimal. I have yet to touch QEMU or do much VMing in Linux; the bit of experience I have is VMware floating Windows Servers (and a good chunk of that is already scripted for our shop, so I'm not heavily versed in building VMs from the bottom up).

 

Hmmm, though, what about Docker? Or does the overhead of using that make it not worth the effort?


27 minutes ago, Ithanul said:

I'd still be interested in reading the write-up, as my experience with passthrough is minimal. I have yet to touch QEMU or do much VMing in Linux; the bit of experience I have is VMware floating Windows Servers (and a good chunk of that is already scripted for our shop, so I'm not heavily versed in building VMs from the bottom up).

 

Hmmm, though, what about Docker? Or does the overhead of using that make it not worth the effort?

Once I'm done with the Rudimentary DAS guide, QEMU + GPU pass-through is next.

 

I've only played with Docker for a project called STORJ, where you rent your HDDs to people; I've not tried using it for any other purposes. If I'm not using a full VM I use containerization (at least on Proxmox), where an image of a guest OS runs directly on the host kernel and has access to hardware resources (like HDDs, network, GPUs, & expansion cards). That allows essentially bare-metal operation for a guest OS, since it's running off the host kernel. You could set up several of those and let them bunker with virtually no penalty.
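On Proxmox the container route looks roughly like this. The container ID, template name, and resource numbers are illustrative placeholders, and 195 is NVIDIA's usual character-device major; adjust all of these for your host:

```shell
# Create a Debian container from a downloaded template and give it some cores/RAM.
pct create 101 local:vztmpl/<debian-template>.tar.gz \
    --hostname crunch1 --cores 8 --memory 8192

# Expose the host's NVIDIA device nodes by appending to /etc/pve/lxc/101.conf:
#   lxc.cgroup2.devices.allow: c 195:* rwm
#   lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file

pct start 101
```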

 

Only downside is they're CLI only. You gotta know your commands.


Unrelated to all that, if anybody wants to know what the WCG African Rainfall Project badge looks like: [badge image]

It's a little cloud. How cute.


1 hour ago, Windows7ge said:

Unrelated to all that, if anybody wants to know what the WCG African Rainfall Project badge looks like: [badge image]

It's a little cloud. How cute.

I just wish I could get some.  For some reason I have yet to get any.

 

Oh well, in the meantime I should soon be hitting a total of 10 mil in Universe@Home.


1 hour ago, Ithanul said:

I just wish I could get some.  For some reason I have yet to get any.

 

Oh well, in the meantime I should soon be hitting a total of 10 mil in Universe@Home.

My computers have only completed 19 ARP jobs and I picked up the project as soon as it became available. One of my servers is reporting that a job is going to take over 24hrs to complete. These things are massive.

