Favebook

Posts posted by Favebook

  1. 2 hours ago, Collin_S said:

Thanks. So I am guessing increasing the voltage a bit won't fix the issue? I don't know exactly how it's all measured in this case, but I know in general Power is Voltage * Amperage. Could better cables from the PSU help? I currently just have the relatively thin cable that came with the PSU.

1. It might, a bit. I've heard and seen that the voltage slider is no longer a placebo on the 3000 series, but don't hold me to my word on that one.

2. Yes, no, maybe? Who knows... Probably not. Cables are cables: if they are delivering enough power, there is nothing to improve. A better PSU might give you 0.1% more PPD, or it might not, but it is not worth testing either way.
If the cables came with the PSU, it's better not to change them, because you do not know whether new ones will be compatible.
     

  2. 1 hour ago, Collin_S said:

    I have noticed when I run the OC Scanner in MSI Afterburner that it says that the GPU is power limited (even though I have the power limit set to max at 120%). The RTX 2070 (or at least the one I have) only has one 8 pin power connector. Is there anything I can do to overcome that power limit? And could this be also part of why increasing the GPU fan speed sometimes SEEMS to slow down folding, even though temp has gone down and clock speed went up a bit?

    Also, is it possible that higher GPU clock can actually start creating lower PPD? 

1. No. Unless you plan on meddling with your card physically and/or with its BIOS.

     

2. Both I and @LAR_Systems have already explained to you in previous posts what affects your PPD in your case, and you shouldn't expect any good results out of the test you are doing that way.

     

ELI5: Fans use power; the faster the fans spin, the more power they draw. More power for the fans means less power for the GPU die, which means less PPD.

     

3. No. Unless you are hitting thermal throttle, but at that point your clocks would drop.

     

EDIT: The bolded part is the most important part of your post.
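The fan-power tradeoff in the ELI5 above can be sketched with toy numbers (purely illustrative; the wattages below are made up, not measured from any real card):

```python
# Toy model: the card has a fixed board power limit that fans and die share.
BOARD_POWER_LIMIT_W = 175.0  # hypothetical power limit, not a real measurement

def die_budget(fan_power_w: float) -> float:
    """Power left for the GPU die after the fans take their share."""
    return BOARD_POWER_LIMIT_W - fan_power_w

quiet = die_budget(5.0)    # fans at a low rpm
loud = die_budget(15.0)    # fans at max rpm
print(quiet, loud)         # 170.0 160.0 -- louder fans leave less for the die
```

The absolute numbers don't matter; the point is only that on a power-limited card, every extra watt the fans draw is a watt the die can't use.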

  3. 10 hours ago, Collin_S said:

Hahaha. That's a big temp drop. I've noticed something kind of strange while tinkering with fan speeds over the last hour: it seems like once my GPU goes above 1965 MHz, the PPD performance (at least according to the Folding in the Dark extension's DB performance comparison) starts going down. At 1965 it gets around 6-9% over the average of the database (on the current WU), but when it goes to 1980 it drops to about +4%, then at 1995 it goes to +2%. Any ideas why? Could it need some more memory speed? I am just using the stock memory clock (when the p2 state is off) and the clock is just on the curve automatically made with MSI Afterburner's OC Scanner. It's also still at stock voltage, but the power limit is set to 120% and the temp limit at 88C.

It's due to how different every % of a WU can be. You shouldn't be looking at "current PPD": even a moving mouse can reduce your PPD, since your GPU renders that in the background as well. Turning off your monitor or letting it go to sleep can get you 5% more PPD or even more, while browsing through Windows File Explorer can lower your PPD by up to 4%. That is why the numbers are averages, and why you should always look at the averages.

  4. 48 minutes ago, Collin_S said:

Also, here's a suggestion related to the RAM speed like I mentioned before: You could add an optional field in the extension somewhere (whether on the web control screen or from the extension button) to add more statistics about your system, such as RAM quantity, speed, and timing, maybe number of sticks, etc. This would of course be more of an advanced-user thing, since most people who fold likely don't know that information (and the data couldn't be taken as definitively as the other stuff you directly record, since users could enter incorrect data), but it could be interesting to see.

Not sure why he would even need to collect that info. There aren't that many CPU folders anyway; the sample size would be minuscule and not worth it. Additionally, RAM isn't even that important PPD-wise for the database when you have so many different systems in many different environments:

-some have dedicated folding rigs which do not use the CPU at all apart from folding,

-some fold with the GPU in addition to the CPU while some do not,

-some use the CPU for easy tasks,

-some browse a few hours a day,

-some browse the whole day,

-some game on them,

-etc.

     

RAM would be the least useful stat in such a volatile database.

GPUs are much easier to collect data on, as their PPD is much higher and varies much less (even if you take gaming, overclocking and power limiting into account). Meanwhile, a simple YouTube video might make your CPU go from 50k to 5k PPD (a 90% drop) (just an example, I am probably exaggerating, or maybe I am not?).

When I use my GPU to fold while playing WoW/CS:GO, I drop from 3M down to 2.4M in WoW (a 20% drop) and to a minimum of 1.6M with unlocked fps (a ~45% drop).

    You see my point?

     

Him collecting CPU data is already a stretch.

  5. 5 minutes ago, Bitter said:

Got the 1650! Popped it in; FAH picked it up but wouldn't fold on it, so I had to set OpenCL to 0 and CUDA to 1 manually. Took a few minutes, but it grabbed some work and hammered my CPU because i3; I need to swap in the i5, I guess. Overall PPD took a huge nosedive with both cards running, to about 500K PPD vs the lone P106-100 at 700K PPD. Currently trying to get JUST the 1650 to fold, but FAHControl is being wonky and doesn't seem to issue commands down to the fah process properly. It works fine from my laptop remotely, but at the actual PC it's being weird, which is new in the past few days. Anyway, I finally got the P106 paused, and the 1650 folding by itself at 75W is doing a quickly estimated 600-700K PPD and barely making any noise to do so. I really think I'm going to grab a 2nd 1650 of the same cheap OEM style and just run one or two of those for the long-term folding box. Now to procure an i5 S lower-power CPU; I gave my 4590S away to my dad's PC build.

Disable CPU folding if you plan to use 2 GPUs on a low-tier CPU.

Delete all slots, then add the first GPU > OK > add the second GPU > OK > Save (do not edit gpu-index, opencl-index or cuda-index; just leave them all at -1). You will have much higher PPD, less heat (due to no CPU folding), and both of your cards will work without any problems IF you have correct drivers and a good enough PSU.
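For reference, that slot setup corresponds to a FAHClient config.xml roughly like this (a minimal sketch; your real file also carries user/team/passkey options, and the index settings simply default to -1 when you leave them alone):

```xml
<config>
  <!-- user, team and passkey options omitted -->
  <!-- two auto-detected GPU slots; gpu-index/opencl-index/cuda-index left at default (-1) -->
  <slot id='0' type='GPU'/>
  <slot id='1' type='GPU'/>
  <!-- no CPU slot, so CPU folding stays disabled -->
</config>
```

Adding the slots through FAHControl as described above writes this file for you; editing it by hand is only needed if the GUI misbehaves.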

  6. 6 minutes ago, d0ks said:

set them to finish, then when they're done, shut down the client and restart. It usually reassigns better units, but it's not foolproof.

That will literally not do anything. It doesn't matter what you do on the client side* when the server is the one that assigns WUs. It's pure luck** which server you get and what WU type that server assigns you.

     

    What you are experiencing is most likely a placebo effect.

     

    *Unless you do something that you shouldn't be doing.

    **Most of the time.

  7. 46 minutes ago, GOTSpectrum said:

What is everyone looking forward to after folding month is over? Any specific games or projects you have been putting off to maximise that PPD?

     

    I'm looking forward to finally getting back to streaming after having to take almost a year off due to my health! 

    I am preparing for semi-pro league in CS:GO. Had a break for a year and now I have to grind my way back to the top. Folding gave me an excuse to take a break for another month :)

  8. 1 hour ago, SodaDog said:

    May I suggest a way to schedule folding tasks? We have solar and time of use rates where electricity costs basically double so I’d love the ability to have my computer go all out when the sun shines and shut down when it gets pricy.

That would require additional scripts, which would probably mean extra program downloads or giving the Chrome extension admin access to your device. Not feasible as of now.
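That said, if you're comfortable working outside the extension, a crontab plus the v7 client's send options can approximate a schedule. A sketch assuming a Linux install with FAHClient on the PATH (--send-pause/--send-unpause tell an already-running client to pause or resume; the hours are placeholders for your tariff windows):

```shell
# crontab -e  -- example time-of-use schedule, hours are placeholders
# fold while the sun shines
0 9  * * * FAHClient --send-unpause
# pause when rates double in the evening
0 17 * * * FAHClient --send-pause
```

On Windows, Task Scheduler could run the same two commands at the same times.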

     

    1 hour ago, SodaDog said:

Probably a long shot, but I'd love to see a way to look up and compare (recorded PPD per GPU/CPU) over (manufacturer's stated wattage) in a sortable table.

That would be more realistic to make, but I'm not sure it's worth it. The current tables have more than enough info; however, down the line I expect @LAR_Systems will probably make this one too.

  9. 16 minutes ago, d0ks said:

    I only check the EOC every three hours, when the update occurs.

Since I am constantly folding, I know what points to expect.

    If you are constantly folding then there is no need to check EOC every three hours. Just make sure your PCs are stable and everything else will be all right.

     

    As you can see in your logs, there are multiple lines similar to this one:

    19:02:18:WU01:FS00:Server responded WORK_ACK (400)
    19:02:18:WU01:FS00:Final credit estimate, 192551.00 points

That means the server has received your WU and you will be awarded the points. From that point onward, you shouldn't think about it too much.

Points will be put under your username within the next few updates (usually the first one, but sometimes the servers get glitchy and do not report correctly to the F@H website, or EOC experiences a bug). However, there has never been a case where someone submitted a WU but never got points. There were some rare cases where QRB wasn't calculated correctly with newer WUs, but that was usually fixed in a matter of days/weeks. And as I already said multiple times, do not worry about it. If your GPUs were folding 100% during those 3 hours, you will get your points.
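If you ever want to double-check, you can scan the client log for those acknowledgement lines yourself. A minimal Python sketch (the regex assumes the v7 log format quoted above; the sample lines are the ones from this post):

```python
import re

# Matches e.g. "19:02:18:WU01:FS00:Final credit estimate, 192551.00 points"
CREDIT_RE = re.compile(
    r"^(\d{2}:\d{2}:\d{2}):(WU\d+):(FS\d+):Final credit estimate, ([\d.]+) points"
)

def credited_wus(log_lines):
    """Yield (time, wu, slot, points) for every credited WU in a FAH log."""
    for line in log_lines:
        m = CREDIT_RE.match(line.strip())
        if m:
            t, wu, fs, pts = m.groups()
            yield t, wu, fs, float(pts)

log = [
    "19:02:18:WU01:FS00:Server responded WORK_ACK (400)",
    "19:02:18:WU01:FS00:Final credit estimate, 192551.00 points",
]
print(list(credited_wus(log)))  # [('19:02:18', 'WU01', 'FS00', 192551.0)]
```

Summing the points column over a day should roughly match what EOC eventually reports, minus whatever is still in flight between updates.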

  10. 3 hours ago, Egg-Roll said:

I'm going to have to take my 5700 XT offline after it is done with its WU; something was/is making an awful noise that never used to exist, and I'm sure it is from a capacitor. Right now it sounds like coil whine from the GPU, as it is the only one running, but when paused I hear an endless looping buzz. I'll troubleshoot it when I get time, but not right now; the GPU task is another 3 hours. There's potentially nothing wrong, but it didn't sound right about an hour ago, and no, the water isn't leaking, the loop works fine (tested).

     

    It also doesn't make the noise in the bios.

It is coil whine, almost 100%. If it is not, check your pump; it might be making the noise. But I am 99% sure that it is coil whine, either from the PSU, mobo or GPU. You could try to troubleshoot each by replacing them one by one. I've had this problem since I built my PC back in 2017. I've changed 5-6 GPUs since then and all had coil whine, which means it's not them; rather, it's the PSU or mobo. But since I am too lazy to do cable management again, I will not troubleshoot it further. I will surely remember not to buy a Corsair PSU or a Gigabyte mobo again, even though it's probably just bad luck that I got one of those.

    5 minutes ago, d0ks said:


     

    Is there something wrong with the points mechanism?

    My logs say that work is being uploaded but the EOC forum... doesn't.


    There is nothing wrong with your logs or points mechanism, do not worry about it too much. Check it once or twice a day on EOC and check your F@H Advanced Control whenever you want to see if everything is going smoothly.

     

EOC updates only every 3 hours. You cannot expect the points to show up whenever you reload.

     

If your client says they uploaded, they uploaded, and you shouldn't worry from that point. The only problem that can happen on the client (your) side is an upload or download getting stuck. Otherwise, there is nothing you can do. Servers go as fast as servers can go.

  11. 1 hour ago, Red :) said:

    Well, this is happening rn


    Also the advanced control is kinda weirdly updating

Try restarting the PC, or just F@H; that can unstick it. Otherwise, just wait.

    51 minutes ago, rodarkone said:

Be careful: limiting the power will not prevent the GPU from spiking... and at that point it can cause a system restart.

Well, true, but not necessarily. I have a 2080 and a 2080S on some low-tier 500-600W PSU, and it cannot handle them at 100% PL or above, but at 90% they've been folding since the beginning of the event without problems.

  12. 6 hours ago, Bitter said:

What are the odds that both PCs get the exact same project and are both running it at the same time when set to any project? Different chunks, but still, freaky!

     

So either every single project the P106 has been getting since fixing the CUDA thing is just a really great PPD producer, or it wasn't using CUDA that whole time; it's been up around 700+K PPD. The Vega 56 is doing its usual ~1M PPD with its mild undervolt and slight underclock for higher efficiency.

What you almost surely experienced is that they had the same project but different PRCGs (Project, Run, Clone, Gen).

  13. 4 hours ago, LAR_Systems said:

I imagine when the event is over there will be a slight drop in PPD as people set their power profiles back to "don't burn the house down" so their electric bill is not terrible all winter.

For the 2080S you shouldn't worry that much. I am running all 4 of mine at 70% PL with a small OC, and I will keep them running at the same settings even after the event.

    3 hours ago, NetoriusNick said:

    I also take it that mining cards can be used if flashed to stock?

Of course. They might even work without flashing; it depends on the card and BIOS you are running.

    3 hours ago, Shlouski said:

I think this site is interesting and offers useful information. I mean no disrespect, but it's basically userbenchmark.com for folding; there are vast numbers of variables that can't be accounted for, so any info I see on there I take with a grain of salt.

    Its an impossible task for this site to eliminate the vast number of variables to achieve accurate numbers, but it does a good job under the circumstances, I'm impressed.

I will repeat what has been said many times in this thread already. That database is up to date, which makes it a good reference for buying GPUs right now, and its PPD is an average across all OCs, PLs, BIOSes and any other settings you could change. There is no database better or more up to date than this one, IMHO. You cannot EVER eliminate all the variables UNLESS you make a database that takes EVERY variable into consideration. That would take much more time, much more energy, much more space, and it wouldn't be worth it.

What @LAR_Systems made of that database is damn near perfect, and soon I am going to ask him for a donation address so I can give something in return, as his database as well as his extension has helped me more than anything else folding-related.

    50 minutes ago, TheDailyProcrastinator said:

    Anyone else having WU issues? Completion time has (seemingly) randomly gone up significantly. My 1080 is taking over 4hrs now, vs 2-2.5hrs it would take previously. 

That is not an issue; it's just WU variance. Some can take 20 min, some can take a FEW hours. Get used to it, it's the luck of the draw :).

  14. 7 hours ago, yaboistar said:

I'd just like to take the time to say a colossal "fuck you" to the UK's car tax system, because for what I just paid today I could've straight up bought a 2080 Ti at a price I saw one go for on fleabay yesterday.

Just be happy that you are not in Serbia. A 3.0L would cost you ~1000 EUR per year in tax. I wanted to get a Camaro 6.2L imported, but then I realized that I am not a charity gifting money to this damn country.

     

Therefore, I'll have to settle for a 2.0L until I run away with it.

  15. 1 hour ago, RollinLower said:

I bet those drives you hot-swapped were mostly SATA-based. If you want to hot-swap your NVMe drives, you need a motherboard that supports PCIe hot swap, and AFAIK that's mostly reserved for server systems.

True, they were SATA. I wouldn't know which motherboards support PCIe hot swap, but if it is true that only server systems have it, that makes it much harder. Anyway, you can still shut down your PC, put the NVMe in, and get it up and running in under 5 minutes in most cases.

In your case, @shaz2sxy, your NVMe can wait until the end of folding month, I guess :).
