LTT Official Folding Month VI

Solved by GOTSpectrum

 

Message added by TVwazhere,

Daily point updates are posted here:

10 hours ago, Lightwreather JfromN said:

Hey @LAR_Systems (sorry for the ping), but I don't appear to turn up in the league? 

For reference, my folding name is Jeshurun.

You are there; you're just new to this month's ranks by the looks of it. It takes some time for my systems to parse and post, combined with the delay of F@H reporting WUs: https://folding.lar.systems/league/team_ranks?id=223518

 


Hardware & Programming Enthusiast - Creator of LAR_Systems "Folding@Home in the Dark" browser extension and GPU / CPU PPD Database. 


12 hours ago, dogwitch said:

what type you looking AT....

 

The hardware classification is "more efficient cards I cannot afford" lol.


1 hour ago, LAR_Systems said:

The hardware classification is "more efficient cards I cannot afford" lol.

Me thinks that more people need to buy you a coffee!  
 

<link removed>

Edited by SansVarnic
Removed content.

Not much of a Gamer….. But I have a thing about F@H that may be a little over the top. See my builds here


I didn't realize folding month was happening until a recent video mentioned it. I assume the signup deadline is a hard limit? I've been folding for the team for a while and only realized yesterday this was a thing that LTT did. Thanks in advance regardless of if it's too late or not.


36 minutes ago, skybax said:

I didn't realize folding month was happening until a recent video mentioned it. I assume the signup deadline is a hard limit? I've been folding for the team for a while and only realized yesterday this was a thing that LTT did. Thanks in advance regardless of if it's too late or not.

Yeah, sorry mate

Even I only noticed this event thanks to last week's WAN Show (thank God I watched it live)

 

Good luck on the next event

My System: Ryzen 7800X3D // Gigabyte B650 AORUS ELITE AX // 32GB 6000MHz DDR5 Silicon Power Zenith CL30 // Sapphire Pulse AMD Radeon RX 7900 XT OC with mod heatsink on the metal plate  // Phanteks P300A  // Gigabyte Aorus GEN4 7300 PCIE 4.0 NVME // Kingston NV2 Gen4 PCIE 4.0 NVME // 

Seasonic Focus GX-850 Fully Modular // Thermalright Frost Spirit 140 Black V3 // Phanteks M25 140mm // Display: Bezel 32MD845 V2 QHD // Keychron K8 Pro (Mod: Gateron black box ink; Tape mode on PCB and Keycaps) // Razer Cobra Wired Mouse // Audio Technica M50X Headphone // Sennheiser HD 650 // Genius SP-HF180 USB Speaker //

 

And an Acer Nitro 5 AN515-45 laptop for mobility

Phone:

iPhone 11 (battery replaced instead of buying a new phone, for the long term and to avoid submitting (fully) to the Apple Lord)


38 minutes ago, ImWilly said:

Yeah, sorry mate

Even I only noticed this event thanks to last week's WAN Show (thank God I watched it live)

 

Good luck on the next event

Oof. Thanks for letting me know. Gotta keep an eye on it for next time.


28 minutes ago, leadeater said:

Sighh, I'm dumb. I forgot I can just NAT the servers that don't have firewall rules through the ones that do. I can get way more servers going now 😁

 


What back alley stuff do I need to do... to get access to it?
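The NAT arrangement leadeater describes (routing firewalled servers' traffic out through a server that does have access) could be sketched with nftables masquerading; the interface name and gateway address below are assumptions, not details from the thread:

```shell
# On the gateway server that has outbound access (run as root).
sysctl -w net.ipv4.ip_forward=1                  # allow the box to forward packets
nft add table ip nat
nft add chain ip nat postrouting '{ type nat hook postrouting priority 100 ; }'
nft add rule ip nat postrouting oifname "eth0" masquerade   # rewrite source IPs on the way out

# On each blocked server, point the default route at the gateway (10.0.0.1 assumed):
# ip route add default via 10.0.0.1
```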

MSI X399 SLI Plus | AMD Threadripper 2990WX, all-core 3GHz lock | Thermaltake Floe Riing 360 | EVGA 2080, Zotac 2080 | G.Skill Ripjaws 128GB 3000MHz | Corsair RM1200i | 150TB storage | Asus TUF Gaming mid tower | 10Gb NIC


1 hour ago, leadeater said:

Sighh, I'm dumb. I forgot I can just NAT the servers that don't have firewall rules through the ones that do. I can get way more servers going now 😁

 


Let's get that dedicated F@H cluster up and running!


14 minutes ago, RollinLower said:

Let's get that dedicated F@H cluster up and running!

Well I have 8x DL360 Gen9 and 2x DL580 Gen9 set up already (384 cores total)

 

Just installing Ubuntu Server on a DL365 Gen10 Plus with 2x 7713 right now


1 minute ago, leadeater said:

Well I have 8x DL360 Gen9 and 2x DL580 Gen9 set up already

Nice! All I managed to squeeze out of my employer this year is a 16-core EPYC machine and an older hypervisor running dual Gold 5210s.

I need to convince these guys to invest in something GPU-accelerated so I can hijack those machines next year..


2 minutes ago, leadeater said:

@LAR_Systems If I'm only running Ubuntu server I don't have a way to send data to your service right? It's browser extension only?

If you have SSH, or can run sshd on an alternate port, you can port-forward a browser session from a remote machine and run the extension on that
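A minimal sketch of that port-forward, assuming the folding box runs FAHClient's web control on its default port 7396 (the hostname and user below are placeholders):

```shell
# Forward local port 7396 to the folding machine's FAHClient web control.
ssh -L 7396:localhost:7396 user@folding-host
# Then open http://localhost:7396 in a local browser with the extension installed.
```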

 

FaH BOINC HfM

Bifrost - 6 GPU Folding Rig  Linux Folding HOWTO Folding Remote Access Folding GPU Profiling ToU Scheduling UPS

Systems:

desktop: Lian-Li O11 Air Mini; Asus ProArt x670 WiFi; Ryzen 9 7950x; EVGA 240 CLC; 4 x 32GB DDR5-5600; 2 x Samsung 980 Pro 500GB PCIe3 NVMe; 2 x 8TB NAS; AMD FirePro W4100; MSI 4070 Ti Super Ventus 2; Corsair SF750

nas1: Fractal Node 804; SuperMicro X10sl7-f; Xeon e3-1231v3; 4 x 8GB DDR3-1666 ECC; 2 x 250GB Samsung EVO Pro SSD; 7 x 4TB Seagate NAS; Corsair HX650i

nas2: Synology DS-123j; 2 x 6TB WD Red Plus NAS

nas3: Synology DS-224+; 2 x 12TB Seagate NAS

dcn01: Fractal Meshify S2; Gigabyte Aorus ax570 Master; Ryzen 9 5900x; Noctua NH-D15; 4 x 16GB DDR4-3200; 512GB NVMe; 2 x Zotac AMP 4070ti; Corsair RM750Mx

dcn02: Fractal Meshify S2; Gigabyte ax570 Pro WiFi; Ryzen 9 3950x; Noctua NH-D15; 2 x 16GB DDR4-3200; 128GB NVMe; 2 x Zotac AMP 4070ti; Corsair RM750x

dcn03: Fractal Meshify C; Gigabyte Aorus z370 Gaming 5; i9-9900k; BeQuiet! PureRock 2 Black; 2 x 8GB DDR4-2400; 128GB SATA m.2; MSI 4070 Ti Super Gaming X; MSI 4070 Ti Super Ventus 2; Corsair TX650m

dcn05: Fractal Define S; Gigabyte Aorus b450m; Ryzen 7 2700; AMD Wraith; 2 x 8GB DDR 4-3200; 128GB SATA NVMe; Gigabyte Gaming RTX 4080 Super; Corsair TX750m

dcn06: Fractal Focus G Mini; Gigabyte Aorus b450m; Ryzen 7 2700; AMD Wraith; 2 x 8GB DDR 4-3200; 128GB SSD; Gigabyte Gaming RTX 4080 Super; Corsair CX650m


3 minutes ago, Gorgon said:

If you have SSH or can run sshd on an alternate port you can port-forward a browser session from a remote machine and run the extension on that

 

 

Quote

You can only do this for a single system, as the LAR Systems client uses the https://client.foldingathome.org URL, which redirects to localhost:7396 (127.0.0.1:7396).

Damn, that's not ideal. Think I'll flag it then


2 minutes ago, leadeater said:

[screenshot: FAHClient resource usage]

 

Noice, best part is that RAM usage lol

Burn-In Test! Way better than mprime



Just now, leadeater said:

Have to make sure that 6GB ram out of 2TB is stable 🤣

You could always create a RAM Disk and move the work directory (/var/lib/fahclient/work) to that - just to see what, if any, difference that might make
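A rough sketch of that experiment, assuming a systemd-based install with the service and user both named fahclient (names and the tmpfs size are assumptions and may differ per install):

```shell
# Move the F@H work directory onto a tmpfs RAM disk (run as root).
# Note: tmpfs contents vanish on reboot, so in-progress WUs in it would be lost.
systemctl stop fahclient
mv /var/lib/fahclient/work /var/lib/fahclient/work.bak
mkdir /var/lib/fahclient/work
mount -t tmpfs -o size=4G,mode=0755 tmpfs /var/lib/fahclient/work
chown fahclient:fahclient /var/lib/fahclient/work
systemctl start fahclient
```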



17 minutes ago, Gorgon said:

You could always create a RAM Disk and move the work directory (/var/lib/fahclient/work) to that - just to see what, if any, difference that might make

Really I just want to know what Zen 3 can do PPD-wise with lots of cores; I'll have to reinstall ESXi on it next week, sadly. I'll also try out some smaller single-CPU 6426Y servers that just came in; they only have 2 DIMMs, so I hope that won't affect the PPD much.

 

I'd like to get over 1000 cores going over the weekend to collect a full 24 hours' worth, just to see how much that is; I'm at 512 right now.

 

Edit:

2x 7713

FAHClient --send-command ppd
08:21:48:Connecting to 127.0.0.1:36330
PyON 1 ppd
2830814.09838
---

Seems to have settled in at 2.8M PPD; it hasn't increased much for a little while now. That's 21,875 PPD per core, so I'd only need 915 cores to match one 4090.
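The per-core arithmetic can be reproduced by parsing the PyON reply shown above; this is just an illustrative helper (the function name is mine, not part of FAHClient):

```python
def ppd_per_core(pyon_reply: str, cores: int) -> float:
    """Pull the PPD float out of a 'PyON 1 ppd' reply and divide by core count."""
    lines = [line.strip() for line in pyon_reply.strip().splitlines()]
    # The payload sits between the 'PyON 1 ppd' header and the '---' terminator;
    # any log lines before the header (e.g. 'Connecting to ...') are skipped.
    start = lines.index("PyON 1 ppd") + 1
    return float(lines[start]) / cores

reply = """PyON 1 ppd
2830814.09838
---"""
print(round(ppd_per_core(reply, 128)))  # 2x EPYC 7713 = 128 cores
```

The raw value comes out near 22,116 PPD per core; the post's 21,875 figure comes from rounding the total down to the settled 2.8M first.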


9 minutes ago, Dark_Hunter said:

You guys with your baller systems, meanwhile I'm here with a cute RTX 2060 Super and my other cute 1060.

Probably more PPD than my baller systems 🤣


1 minute ago, leadeater said:

Probably more PPD than my baller systems 🤣

Hey, if we're going for cumulative power draw, you're probably on top of the leaderboard!


This topic is now closed to further replies.

