About Gorgon

Profile Information

  • Location
    Great White North
  • CPU
    Intel Xeon E3-1231 v3 @ 3.4 GHz
  • Motherboard
    SuperMicro x10SL7-F
  • RAM
    32 GB Crucial DDR3 ECC
  • GPU
    AMD FirePro W4100
  • Case
    Fractal Design Node 804
  • Storage
    2× 250 GB Samsung 960 EVO in RAID 1; 4× 3 TB Seagate NAS drives in RAID 10
  • PSU
    Corsair TX750M
  • Display(s)
    3× ASUS ProArt 24″
  • Keyboard
    Corsair Gaming
  • Mouse
    Wacom Tablet
  • Sound
    Creative USB
  • Operating System
    Windows 10 Pro

  1. I suspect that after 3 or 5 of these, it would be cheaper to just get a used dual-socket Xeon system and run that. There are quite a few people here with experience buying and running these who could likely help. So instead of multiple little boxes with a few threads per CPU, you'd have a single system with a couple of CPUs at 20-30 threads each. Or, if you really want new hardware, buy AMD Ryzen 7 2700 or 2700X CPUs (they're heavily discounted now) and B450 motherboards, and build a few discrete systems.
  2. You are correct in that it can't run multiple projects simultaneously on a thread, or on a GPU and thread combined, but it definitely can and does run multiple projects on the same system. I'm running WCG, Einstein and Folding at Home on some systems. The Project Weight or Resource Share in BOINC sets the sharing between projects and is a number between 0 and 100. These are cumulative, so if E@H and WCG are both set to 100 and both are vying for the same resource, then each should get half of it. For F@H and BOINC, I use the "use at most X% of CPUs" setting in BOINC to reserve threads exclusively for F@H, pause GPUs in F@H that I want to make available for E@H, and edit the cc_config.xml file for BOINC to select or ignore GPUs as required, depending on how I need to assign resources.
     Daily Driver: 6/8 threads on WCG
     kvm1: 16 threads WCG
     fold6: 6 GPUs & 6 threads for F@H; 10 threads for WCG
     fold7: 2 GPUs & 2 threads for E@H; 14 threads for WCG
     fold8: 1 GPU & 1 thread for F@H; 1 GPU & 1 thread for E@H; 2 threads WCG
     fold9: 1 GPU & 1 thread for F@H; 1 GPU & 1 thread for E@H; 2 threads WCG
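     The GPU selection in cc_config.xml can be sketched like this (a hypothetical fragment, not the author's actual file; the device number is illustrative):

     ```xml
     <!-- cc_config.xml: BOINC client configuration, in the BOINC data directory.
          <exclude_gpu> hides a given device from one project, which is how a GPU
          can be reserved for F@H or E@H; <use_all_gpus> makes every GPU visible
          to the client in the first place. Device number here is an example. -->
     <cc_config>
       <options>
         <use_all_gpus>1</use_all_gpus>
         <!-- keep device 0 out of Einstein@Home so F@H can have it -->
         <exclude_gpu>
           <url>https://einstein.phys.uwm.edu/</url>
           <device_num>0</device_num>
         </exclude_gpu>
       </options>
     </cc_config>
     ```

     The client re-reads this file on a "Read config files" command from the Manager (or on restart), so GPU assignments can be changed without losing work in progress.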
  3. All energy ultimately ends up as heat, so yes, you are saving on the heating bill, but natural gas is less expensive per BTU than electricity, so there are some trade-offs. It would be nice if there were a better way of redistributing the waste heat from distributed computing through the house. 2 kW of space heating in the basement doesn't do the rest of the house much good when it's 26°C in the basement and the first-floor thermostat reads 21.5°C.
  4. Just got a $400 power bill, which is about $200 higher than normal, so I've set up a cron job to finish folding 1½ hours before on-peak starts, unpause at the start of mid-peak, finish again 2½ hours before the second on-peak period starts, and lastly unpause at the start of off-peak. Being in a northern climate, the rates change in October to:
     07:00-11:00 on-peak $0.208/kWh
     11:00-17:00 mid-peak $0.144/kWh
     17:00-19:00 on-peak $0.208/kWh
     19:00-07:00 off-peak $0.101/kWh
     So my crontab looks like:
     30 05 * * 1-5 /usr/bin/FAHClient --send-finish
     00 11 * * 1-5 /usr/bin/FAHClient --send-unpause
     30 14 * * 1-5 /usr/bin/FAHClient --send-finish
     00 19 * * 1-5 /usr/bin/FAHClient --send-unpause
  5. Not a problem, I just had to chuckle, as WCG goes so slowly that it takes forever to make progress. Must resist picking up a 2920X and an X399... I'm currently running WCG on:
     6/8 threads on my E3-1231 v3 (2 threads reserved so Windows doesn't completely suck)
     2/4 threads on Pentium Gold 5400s and 5500s (2 threads each for F@H GPUs)
     16/16 threads on a 2700X
     10/16 threads on a 2700 (6 threads for F@H GPUs)
     14/16 threads on another 2700 (2 threads for F@H GPUs)
     I do have remote access enabled and use the BOINC Manager from my daily driver to control 5 systems, but I don't use the scheduler in BM, as it would stop the CPU folding, which is trivial in power consumption compared to the 4 GPUs I'm running on Gravitational Wave searches on Einstein. So I just have a cron job that suspends Einstein during peak hours and resumes it for mid-peak and off-peak.
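     That cron job can be sketched as a crontab fragment like this (a hypothetical sketch, not the author's actual file; the on-peak windows reuse the rate schedule quoted in the earlier post, and the URL is Einstein@Home's standard project URL):

     ```shell
     # Hypothetical sketch: suspend Einstein@Home for the weekday on-peak windows
     # (07:00-11:00 and 17:00-19:00) and resume it for mid-peak and off-peak,
     # using boinccmd's per-project suspend/resume operations.
     00 07 * * 1-5 /usr/bin/boinccmd --project https://einstein.phys.uwm.edu/ suspend
     00 11 * * 1-5 /usr/bin/boinccmd --project https://einstein.phys.uwm.edu/ resume
     00 17 * * 1-5 /usr/bin/boinccmd --project https://einstein.phys.uwm.edu/ suspend
     00 19 * * 1-5 /usr/bin/boinccmd --project https://einstein.phys.uwm.edu/ resume
     ```

     Unlike pausing the whole client, a per-project suspend leaves WCG's CPU work untouched, which matches the goal of only throttling the power-hungry GPU tasks.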
  6. Yeah, thanks - I'd just finally hit the top 10 and you overtook me. Just got my last power bill, so I'm throttling back during peak hours by running "boinccmd --project <URL> suspend" from cron.
  7. Even in Windows, Precision X1 only seems to be able to control the LEDs on the first 2 GPUs installed, and the 2080 Super is the 6th, so I'll either have to shut down the system, rearrange the cards, boot into Windows, change the LEDs and then reverse it all, or just put up with it. I'll likely just leave it, as the Hybrid cards are not really that much quieter than dual-fan cards in the mining frame. With about 4″ of space between cards they're not choked at all for airflow, and unlike in a regular case they don't need lots of fans to get adequate heat transfer. The Hybrid cards make more sense in regular cases, where you can run the case fans slower with them installed. Granted, you get a 7-10°C temperature drop with them, which should equate to 2 or 3 extra bins of boost frequency, but that's still only a 2-3% performance increase, so the benefit is purely better removal of heat.
  8. The results are in. Going from x8 to x4 under Linux, the 2080 Super showed a 2.48% decrease in production (PPD) while the 2070 Super showed a 0.34% increase, both values being within one standard deviation. We can say with some confidence: under Linux, a higher-end NVIDIA GPU will see no significant decrease in production moving from a PCIe 3.0 x8 to a PCIe 3.0 x4 slot. Comparing Windows to Linux production on a PCIe 3.0 x4 link, we observed:
     -29.3% RTX 2080 Super
     -21.6% RTX 2070 Super
     -18.1% RTX 2070
     -23.0% RTX 2060 Super
     -21.1% RTX 2060
     -20.0% GTX 1070 Ti
     The RTX 2080 Super's production appears to suffer the most under Windows on an x4 link, but this may have been compounded by it sitting in the X570 motherboard slot wired to the chipset, and hence further limited by the shared PCIe 3.0 x4 link between the Ryzen 7 2700 CPU and the chipset. The general observation: under Windows, a higher-end NVIDIA GPU on a PCIe 3.0 x4 link will see a roughly 20% decrease in Folding@Home production, in points per day (PPD), compared to the same card under Linux.
  9. 1 x RTX 2080 Super XC Hybrid, 4 x RTX 2070 Super XC Hybrids, 1 x RTX 2070 Super XC: radiators. I was always that kid in family photos who was sticking out his tongue or rolling his eyes. Let's call this view Karma: so I'll have to boot into Windows and load Precision X1 to change the LEDs from green to blue.
  10. We should start planning for the next Pentathlon: getting people to try out and document various projects so we can gather some best practices for configuration and tuning, and possibly thinking about a method for people to sign up for bunkering groups, with a "marshall" to lead and direct strategically when the bunkers for a given project get released. I know I need to bone up on some of the projects that require VirtualBox and other containers, and on using virtualization in general, to be more effective, so some guides on virtualization would be helpful. I've been mulling over getting a 2920X (or a 2950X) and an X399 motherboard while they're still available and discounted, as I have a feeling the newer Threadrippers are going to be a lot more dear to acquire.
  11. I'm running WCG on all my spare threads and Einstein@Home on a few GPUs, all on Ubuntu, so I can confirm you can run Folding@Home and BOINC headless on Ubuntu, and as that's Debian-based you should be able to do the same. I've written a guide for F@H on Ubuntu which details a few of the gotchas of headless operation (nvidia-control requires a lot of the X libraries to run). I've been running a 6-GPU folding rig with BOINC on the spare threads. It's currently running on Windows (which sucks due to poor PCIe lane performance/overhead), but I'll be switching it back to Ubuntu (it's configured for dual-boot for testing) this weekend. I'm using an X570 motherboard, but similar principles (M.2 to PCIe 3.0 x4 in x16 adapters connected to bifurcated PCIe x16 slots) could be used on a single X299 or X399 motherboard to run up to 16 GPUs using stacked mining frames.
  12. Congratulations to that crazy cat @Ithanul for reaching 100 Million points on BOINC.
  13. Congratulations! There's a badge for that now. Post your stats in the badge request thread.
  14. Windows performance is still poor, but I'm going to continue until the weekend so I can get a good selection of WU data for comparison. Tuesday the download bug reappeared and messed up those WUs, with reboots and having to delete and re-add slots. The EVGA Hybrid kit showed up today, so I shut the rig down for an hour or so and installed it on the RTX 2080 Super XC. I was going to install it on the 2070 Super XC, but it should make more of an improvement on the 2080 Super. The installation went smoothly; I used the strips covering the thermal pads on the kit to cover the thermal strips on the mid-plate of the cooler being replaced, then bagged up the cooler and all the extra parts should I want to reinstall it in the future. The 2080 Super XC was running at 64°C GPU at stock clocks, and with the hybrid kit I'm seeing 57°C, so a little warmer than the 2070 Super Hybrids I'm running, but that's to be expected as it's the same cooling design with more CUDA cores.