Gorgon
Member · Content Count: 2,250
Everything posted by Gorgon

  1. Yup, waiting to see compute benchmarks and real folding performance data on the 3070 and Big Navi; then I'll likely pick up a couple to a few more cards. My guess would be 3 3070s, but I might go for 2 3080s ...
  2. Wasn't much of a battle. One of my 4th-great-grandfathers was a private in the unincorporated York Militia under John Beverley Robinson and Stephen Jarvis, and was captured and paroled that day. He owned a 5-acre lot on the outskirts of town, at the SW corner of King and Yonge. The family kept the property and split the rents until the 1930s, when it was sold after a protracted legal battle over who got what portion of the revenue.
  3. Yes, if you want to kill it to, say, restart it due to a stuck slot, I usually do:

     FAHClient --send-pause
     ps -ef | grep fahclient
     kill -KILL <child PID>
     kill -KILL <parent PID>

     Kind of the root equivalent of hitting it in the head with a hammer. Ubuntu 20.04 LTS isn't quite ready for prime time yet WRT folding, mostly due to the age of the client and partially due to the usual fsckery Canonical does. Oh, and the init script really needs a rewrite.
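     If you do this often it's easy to script; a minimal sketch (the 10-second grace period is my choice, and pkill just saves the ps-and-grep dance):

     #!/bin/bash
     # ask the client to pause politely, give it a moment, then drop the hammer
     FAHClient --send-pause
     sleep 10
     # -KILL is the same hammer as the manual kill above; pkill matches parent and child alike
     pkill -KILL fahclient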
  4. The stats are from a Zabbix VM on my ESXi server. A lot of time was spent setting it up and creating scripts to grab the relevant data. One of my 2070s is underperforming, getting around 2.5 MPPD, but I suspect it's a power supply issue so I'm not going to dig into it until after Folding Month. I'm also on Linux, which might still help even with the new CUDA core.
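     The scripts mostly just scrape the client's local API. A stripped-down sketch of the idea, assuming the v7 client's third-party interface is reachable on localhost:36330 (the parsing is rough, adjust to taste):

     #!/bin/bash
     # ask FAHClient for its estimated points per day and print the bare number;
     # the reply comes back as a PyON message: a "PyON 1 ppd" header, the value, then "---"
     printf 'ppd\nexit\n' | nc -w 3 localhost 36330 | \
         awk '/PyON 1 ppd/ { getline; print; exit }'

     Zabbix then picks it up via a UserParameter line in the agent config, something like UserParameter=fah.ppd,/usr/local/bin/fah-ppd.sh (the script name is mine).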
  5. I'm getting an average over 3 days of 2.85, 2.74, 2.73, 2.65 & 2.38 MPPD on my 2070S cards at 80% under Linux (these are hybrid cards 'tho). The rest are yielding:

     2070a    2.31 MPPD
     2060s    2.28 MPPD
     2060     1.55 MPPD
     1070t    1.27 MPPD
     1660t    1.17 MPPD
     1060 6GB 705.7 kPPD
     1060 6GB 723.8 kPPD
  6. I'm managing 2 kW quite well with the back basement window open all the way, a 12" fan in it, and the 24" box fan I hung from the ceiling last folding year. Luckily it's a cold spell. I expect we'll have the usual Indian summer in a couple of weeks, and then I will roast, as I've got my 4 laptops from work back there as well. I've gained about 20 lbs during COVID, so working a week in a sauna might not be such a bad thing.
  7. Yes, but you will need to port-forward to the remote system on the router at the remote location, and you will want to limit which external IP addresses can connect to it. See the Remote Access link in my sig; there's also a rough sketch below.
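     The folding-side gist, as a minimal config.xml fragment (203.0.113.7 stands in for your home IP, and you'd forward a port of your choice on the remote router to 36330 on the folder):

     <config>
       <!-- allow command connections from localhost and one external address -->
       <allow v='127.0.0.1 203.0.113.7'/>
       <!-- require a password for remote commands -->
       <password v='SomethingStrong'/>
     </config>

     Then point FAHControl at the remote router's WAN address and the forwarded port.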
  8. “... The same effect can be had ...” I can confirm that using a positive clock offset at a reduced power limit will increase the effective performance (PPD) over a GPU with just the same power limit applied.
  9. Linux users have been unable to under-volt NVidia GPUs since the Kepler days, when NVidia removed that facility from the System Management Interface (SMI). In essence, under-volting allows you to run a higher clock frequency at a lower GPU core voltage, and thus at a lower heat load, making GPUs more efficient. The same effect can be had by lowering the power limit and adding a positive clock offset. The issue with this approach is that when a Work Unit (WU) checkpoints, or something else occurs which reduces the GPU load for a longish interval, the offset on return to activity tends to apply to an already fairly high clock frequency, typically causing the WU to fail. However, there is an often-overlooked switch for the NVidia SMI that can be used to prevent this from happening:

     -lgc --lock-gpu-clocks=   Specifies <minGpuClock,maxGpuClock> clocks as a pair (e.g. 1500,1500) that defines the range of desired locked GPU clock speed in MHz. Setting this will supercede application clocks and take effect regardless if an app is running. Input can also be a singular desired clock value (e.g. <GpuClockValue>).

     So one could do something like:

     nvidia-smi -pm 1
     nvidia-smi -i 0 -pl 160
     nvidia-smi -i 0 -lgc 1830,1980
     DISPLAY=:0 XAUTHORITY=/run/user/121/gdm/Xauthority nvidia-settings \
         -a [gpu:0]/GPUFanControlState=1 \
         -a [fan:0]/GPUTargetFanSpeed=75 \
         -a [fan:1]/GPUTargetFanSpeed=75 \
         -a [gpu:0]/GPUPowerMizerMode=1 \
         -a [gpu:0]/GPUGraphicsClockOffsetAllPerformanceLevels=75

     to:

     - Set Persistence Mode for all GPUs (so power limits "stick" across WUs)
     - Set the Power Limit for GPU0 to 160W
     - Set the Minimum GPU Core Clock to 1830MHz and the Maximum to 1980MHz
     - Enable manual fan control
     - Set both fans of a Turing or later GPU to 75% (Pascal and earlier have only 1 fan control register, which controls both fans)
     - Set GPU0 to "Prefer Maximum Performance"
     - Add a +75MHz (5 x 15MHz Turing "bins") GPU clock offset
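     None of this survives a reboot, so for what it's worth, a sketch of the oneshot unit I'd use to reapply it at boot (the unit name and script path are mine, nothing standard):

     # /etc/systemd/system/gpu-tune.service
     [Unit]
     Description=Apply GPU power limit and locked clocks for folding

     [Service]
     Type=oneshot
     ExecStart=/usr/local/bin/gpu-tune.sh

     [Install]
     WantedBy=multi-user.target

     where gpu-tune.sh holds the nvidia-smi lines above; the nvidia-settings fan and offset bits need X up, so those are better left to a desktop autostart. Enable it with systemctl enable gpu-tune.service.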
  10. I'm waiting for the 3070 results. I'm not, however, waiting for Big Navi, as there is still the issue that the OpenCL compute numbers fall far short of the theoretical maximum that the FP32 TFLOPS figure suggests. This is a driver issue and, let's face it, AMD does not have a very good record when it comes to fixing driver issues for gaming, and compute is most likely an even lower priority for them. So even if AMD does come out with a stellar card with lots of potential for compute, the driver issue combined with the optimized CUDA core will likely make it a non-starter. The other factor is that these cards are running closer to the edge of the power envelope, so they might still be much more efficient at 60-70% of the power limit than Turing cards are; i.e. if you drop their power limit by 20-30% you'd see a smaller decrease in PPD than you would with Turing.
  11. Phoronix has posted some numbers testing the RTX 3080 FE on the Phoronix test bench. So: a really nice performance uplift in terms of PPD/$, but Pascal levels of efficiency (PPD/W) rather than Turing. No sign of a review from AnandTech on either the 3090 or the 3080.
  12. The first step is always admitting you have a problem. One down, 11 to go. Just get a copy of the AA Big Book, read it, and substitute "Folding" for "Alcohol".
  13. Um, I thought @Unilevers made soap??? Wait, this is a competition? You mean I shouldn't be folding at a 70% power limit and running 60 threads of OpenPandemics on BOINC?
  14. sudo apt install lm-sensors
      sudo sensors-detect    (and type "Y" to all the questions)
      sensors

      ... and hope the Super I/O chip is a Nuvoton 6770, not an ITE like Gigabyte and some Asus boards use. You should then see something like this for my i9-9900K:

      it8792-isa-0a60
      Adapter: ISA adapter
      DDR Vtt A/B:    +0.58 V  (min = +0.55 V, max = +0.80 V)
      Chipset Core:   +1.04 V  (min = +0.99 V, max = +1.10 V)
      CPU Vdd18:      +0.94 V  (min = +1.74 V, max = +1.85 V)  ALARM
      DDR Vpp A/B:    +2.50 V  (min = +2.38 V, max = +2.63 V)
      3VSB:           +3.36 V  (min = +3.21 V, max = +3.40 V)
      Vbat:           +3.27 V
      SYS5 fan/pump:    0 RPM  (min = 300 RPM)  ALARM
      SYS4 fan:      1442 RPM  (min = 300 RPM)
      PCIe_x8:        +47.0°C  (low = +127.0°C, high = +127.0°C)  sensor = thermistor
      EC_temp2:       +27.0°C  (low = +127.0°C, high = +127.0°C)  sensor = thermistor
      Chassis:        +42.0°C  (low = +127.0°C, high = +127.0°C)  sensor = thermistor

      coretemp-isa-0000
      Adapter: ISA adapter
      Package id 0:   +64.0°C  (high = +86.0°C, crit = +100.0°C)
      Core 0:         +64.0°C  (high = +86.0°C, crit = +100.0°C)
      Core 1:         +55.0°C  (high = +86.0°C, crit = +100.0°C)
      Core 2:         +61.0°C  (high = +86.0°C, crit = +100.0°C)
      Core 3:         +56.0°C  (high = +86.0°C, crit = +100.0°C)
      Core 4:         +59.0°C  (high = +86.0°C, crit = +100.0°C)
      Core 5:         +57.0°C  (high = +86.0°C, crit = +100.0°C)
      Core 6:         +58.0°C  (high = +86.0°C, crit = +100.0°C)
      Core 7:         +61.0°C  (high = +86.0°C, crit = +100.0°C)

      it8686-isa-0a40
      Adapter: ISA adapter
      CPU Vcore:      +1.25 V  (min = +0.35 V, max = +1.45 V)
      +3.3V:          +3.29 V  (min = +3.21 V, max = +3.41 V)
      +12V:          +11.74 V  (min = +11.66 V, max = +12.38 V)
      +5V:            +5.01 V  (min = +4.86 V, max = +5.16 V)
      Vcore SOC:      +0.00 V  (min = +0.90 V, max = +1.26 V)
      CPU Vddp:       +1.03 V  (min = +0.85 V, max = +0.95 V)
      DRAM A/B:       +1.18 V  (min = +1.10 V, max = +1.60 V)
      CPU fan:       1607 RPM  (min = 300 RPM)
      SYS1 fan:         0 RPM  (min = 300 RPM)
      SYS2 fan:         0 RPM  (min = 300 RPM)
      SYS3 fan:      1451 RPM  (min = 300 RPM)
      CPUOPT fan:    1412 RPM  (min = 300 RPM)
      PCIe_x4:        +44.0°C  (low = +127.0°C, high = +127.0°C)  sensor = thermistor
      Chipset:        +48.0°C  (low = +127.0°C, high = +127.0°C)  sensor = thermistor
      CPU:            +63.0°C  (low = +137.0°C, high = +137.0°C)  sensor = Intel PECI
      PCIe_x16:       +46.0°C  (low = +127.0°C, high = +127.0°C)  sensor = thermistor
      VRM MOS:        +63.0°C  (low = +0.0°C, high = -122.0°C)  sensor = thermistor
      EC_temp1:       -55.0°C  (low = +127.0°C, high = +127.0°C)  sensor = thermistor

      Or you can sudo apt install psensor if you prefer a GUI (it also shows GPU stuff).
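      If you'd rather log than eyeball it, sensors has a machine-readable mode that's easy to script; a quick sketch (the chip and label match my coretemp output above, adjust for yours):

      #!/bin/bash
      # 'sensors -u' prints raw values like 'temp1_input: 64.000';
      # temp1 on coretemp is the package temperature
      temp=$(sensors -u coretemp-isa-0000 | awk '/temp1_input/ { print $2; exit }')
      echo "$(date -Is) package_temp=${temp}" >> /var/log/cpu-temp.log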
  15. Umm, that would be 100 m below the surface, as I'm in Kanada and we're metrified, as Jayz2cents would say, and instead of the NSA it would be the CSE here, and we're far too polite. They would have had to ask politely before building a bunker under my place.
  16. Nope, we're in the suburbs and the nearest police/ambulance station is 4-5 km away and on a different part of the grid. The city we used to live in, before it got amalgamated with the larger city beside it, had the foresight to decree that all the power go underground. The only critical government infrastructure in the area is fed directly from the provincial distribution network, with the city grid only as a backup.
  17. Yes, and my hydro company loves me right now: with no time-of-day rates the electricity is expensive all day long, so I can't take advantage of the reduced off-hours rates like I normally do (normally I fold from 19:00-06:00, using cron tasks to "finish" the GPUs starting at 05:00 and "unpause" them at 17:00).
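     The cron side is nothing fancy; roughly this, assuming FAHClient lives at /usr/bin (--send-finish and --send-unpause are the same remote-command flags as the --send-pause above):

     # let running WUs finish but take no new work, starting at 05:00
     0 5 * * * /usr/bin/FAHClient --send-finish
     # resume folding for the evening off-peak window
     0 17 * * * /usr/bin/FAHClient --send-unpause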
  18. If I was going to do something like that, it would involve battery banks, inverters and automatic transfer switches, and certainly nothing in the cloud to control any of my inside systems. We have one IoT device in the house and it's on its own firewall segment. Power in my neighborhood is incredibly reliable: during the 1995 ice storms, when power was knocked out to most of my city for 4-5 days, I lost power for only 20 minutes. All our infrastructure is underground in my area.
  19. 70-80% of the Founders Edition power limit on all cards, with a +50MHz GPU offset and fans tweaked to under 75% where necessary. Yes, I could get another 5-10% more PPD, but at the cost of 30-40% more power, so it's not worth it.
  20. Yeah, I have a few systems that aren't on a UPS, and you really want one with a genny. I've seen what a genny does to systems without UPS or surge suppression in the data centre at work, and it's not pretty. I'd need a 3 kW unit, and it would be such a pain to set up and arrange that it would probably amount to the same amount of downtime just getting it working. I'll still be able to watch media on the home theater, as the UPSes for the networking gear and home theater at least have a couple of hours of runtime, but for the folding systems it's barely enough to get them to shut down cleanly. I'd rather spend that money on new GPUs.
  21. So I got a call on my landline from my hydro provider the other day. I thought they were calling to see why my electricity consumption was way up, but it was just a pre-recorded message notifying me that starting at 02:00 local time this Sunday the power will be going out for up to 6 hours while they re-arrange load at the local sub-station. Yikes. So I'm going to have to go dark before then, and either wait up for the power to come back or wait until Monday morning to start things up again.