
porina

Member · 15,664 posts

Reputation Activity

  1. Like
    porina got a reaction from leadeater in BOINC Pentathlon 2024   
    Speak for yourself 🙂 I just started some systems off, and one was estimating it would finish a little before the start, so I'm pausing it briefly to make sure that doesn't happen. That reminds me, I also need to switch to the LTT team for PrimeGrid, as I'm normally with another team there.
     
    Also, these units are bringing out coil whine in my PCs that I've never heard before. What is it doing? 😄 
  2. Like
    porina reacted to momowhyy in Benchmark and CPU Threads Error   
    Thanks a lot! It works perfectly fine and it's back to 6 cores and 12 threads.
  3. Agree
    porina got a reaction from unclewebb in Benchmark and CPU Threads Error   
    The number of cores could be limited in either the BIOS or Windows. If you have tried resetting the BIOS and that doesn't help, let's check Windows.
     
    In Windows, run msconfig, go to the "Boot" tab, and press the Advanced Options button. You should see the dialog shown above. Is there a CPU limit applied in the top left?
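    For a quick sanity check after changing that setting and rebooting, a minimal sketch of how to see how many logical processors Windows currently exposes (the example counts in the comment are hypothetical):

    # Print how many logical processors Windows currently exposes.
    # If the msconfig "Number of processors" limit is active, this will be
    # lower than the CPU's full thread count (e.g. 2 instead of 12).
    import os
    print(os.cpu_count())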
  4. Like
    porina reacted to LAwLz in MSI confirms focus on GeForce RTX, as MSI Radeon cards are disappearing from stores   
    I *think* it is a combination of:
    1) Nvidia working with far more system integrators and thus getting a much larger share of prebuilt system buyers (who are the majority).
    2) AMD only being competitive in certain aspects but being behind in others. 
     
    I also think that people on this forum overestimate the value proposition of AMD. From what I have seen, although this varies a lot by price point and country, Nvidia and AMD have fairly similar price-to-performance ratios. I think the real myth is the whole "Nvidia is expensive and AMD are the price:performance kings".
  5. Like
    porina got a reaction from Egon3 in BOINC Pentathlon 2024   
    Generally you only want to run on real cores. You can either set the max threads per task accordingly, or adjust the "use CPU %" in BOINC settings to lower it. You'll have to do it after this unit completes, as it can't be changed for a unit once it has been sent out.
     
    One task. This falls under the mesh cache special case. They have a lot of L2, so it'll be fine. If they didn't have the L2 it would still be 1 task, but it would be more limited by memory bandwidth.
  6. Like
    porina got a reaction from leadeater in BOINC Pentathlon 2024   
  7. Like
    porina got a reaction from momowhyy in Benchmark and CPU Threads Error   
    Run CPU-Z and post a screenshot of what you see on the CPU tab here.
  8. Agree
    porina got a reaction from LAwLz in MSI confirms focus on GeForce RTX, as MSI Radeon cards are disappearing from stores   
    Doesn't their gaming category also include console chips? It feels like we're in the stagnant period for consoles, as anyone who is going to buy this gen likely already has, so there are only smaller incremental sales or replacements. Unless there is some way to separate out dGPUs, it is difficult to tell what is going on.
     
    As much as some hate on it, I do think it is the best indicator we have of what PC gamers have and use. Or more precisely, Steam gamers. The question has long been: where is current gen AMD? Forums like this, with a high self-build crowd, likely see a distortion of AMD being more popular than it is in the wider market.
     
    It depends on how you measure it. Performance numbers are easy. Image quality is less so. IMO FSR2/3 as it currently stands is a distant 3rd place behind DLSS and XeSS in image quality due to lack of temporal stability. AMD's teaser of FSR 3.1 might finally address that, but we'll have to wait and see.
     
    I like TechPowerUp since they do averages across several titles, as individual titles can show greater variation. They put the 4070S about 25% faster than the 7900 GRE in RT games at 1080p and 1440p, dropping to only a 4% advantage at 4k. In raster the 4070S is 1% to 3% slower across the resolutions, so basically about the same.
     
    In the UK the current cheapest in-stock GRE is £529 vs £544 for the Super, so close enough. For about the same price and raster performance, the Super gets you higher average RT performance as well as NVIDIA features. About the only advantage the red option has is 16GB vs 12GB, which might be a contributor to the 4k RT perf closing the gap.
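    To put rough numbers on that value comparison, a small sketch using only the figures quoted above (the 1.00/0.98/1.25 weights are just those relative performance numbers restated, not additional data):

    # Rough perf-per-pound comparison from the prices and relative
    # performance figures quoted above (RT at 1440p, raster roughly tied).
    cards = {
        "7900 GRE":   {"price": 529, "raster": 1.00, "rt": 1.00},
        "4070 Super": {"price": 544, "raster": 0.98, "rt": 1.25},
    }
    for name, c in cards.items():
        print(f"{name}: raster/£ {c['raster'] / c['price'] * 1000:.2f}, "
              f"RT/£ {c['rt'] / c['price'] * 1000:.2f}")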
     
    EVGA sold GPUs in the UK too, but they failed to differentiate themselves from other makes and didn't get significant traction.
     
    This is one of those games where once you get to adequate fps you're done; more doesn't really add value. Outside of 4k, reaching that isn't a difficult problem.
  9. Agree
    porina got a reaction from starsmine in MSI confirms focus on GeForce RTX, as MSI Radeon cards are disappearing from stores   
  10. Like
    porina reacted to Stahlmann in Elite Dangerous adds the option to purchase pre-built ships for money, sharply increases many store prices at the same time   
    When I was a teenager, F2P games were my lifeblood. I didn't have any money, so these games were the only ones I could really play. Nowadays I'm disgusted and lose interest in any game at the slightest hint of pay2win or pay4convenience. Because at that point the developers are actively incentivized to make the game less convenient, more grindy, or generally worse for the people who don't pay.
  11. Agree
    porina got a reaction from Arxxuss in Elite Dangerous adds the option to purchase pre-built ships for money, sharply increases many store prices at the same time   
    I bought this game at the lifetime update tier on Kickstarter, although I haven't been active in it for about 6 years. Wait, how old is this game now? 10 years in December.
     
    I always wondered how sustainable it would be, as it was a one-time payment to buy (same again for expansions) with no ongoing sub. It is live service even in "single player" mode, so they would need ongoing revenue to keep it going. Cosmetics were only going to go so far.
     
    I have no idea what the current playerbase is like as I dropped out of the community. We might see the trend of other MMOs as they age, where they try to bring in new players by going F2P and rely on microtransactions.
  12. Like
    porina reacted to Stahlmann in Budget productivity monitor suggestions wanted   
    In that case stop and get the iiyama monitor before it gets out of hand.
  13. Funny
    porina got a reaction from Stahlmann in Budget productivity monitor suggestions wanted   
    Showing my age, I recall they were well regarded during the CRT era although I never owned one of their products. There are the other usual business brands in that area too as well as some entry level gaming ones.
     
    As said, the only reason I'm considering 4k at all is that I could view my 4k video natively without having to go to my TV elsewhere. Using 1440p hasn't been a problem anyway, as that is what I was using before: a 1440p TN gaming display, but it is a really good TN! This is in part why I said I'm not tied to looking at IPS.
     
    However, 4k would go above my original budget statement, and if I do that, I'm on the slippery slope where I look at slightly higher models and things escalate.
     
    Edit: someone stop me. I'm now looking at a new gaming display, which would free up my old gaming display for use on the productivity system. I need more AFK time.
  14. Like
    porina reacted to Stahlmann in Budget productivity monitor suggestions wanted   
    Tbh, there's just not much the monitor brands can screw up with displays like this, so there's not as much emphasis on 3rd party reviews.
    iiyama is a reputable brand though. We just don't hear much about them anymore because they're mostly B2B these days.
    100Hz is a nice bonus though, even for work monitors.
     
    32" 4K might be worth considering for productivity. Best case you can use lower scaling for more screen real estate, worst case you use bigger scaling and have better text clarity.
     
    At 28", I'd say 4K generally isn't worth the premium over 1440p.
  15. Like
    porina got a reaction from HoldSquat in BOINC Pentathlon 2024   
    The marathon project has been announced. It is PrimeGrid's Cullen subproject. For those not familiar, it is similar to running Prime95 large FFTs, so prepare yourself!
     
    Rough optimisation guide:
    The tasks currently use about 21MB of data each (see the edit at the bottom for updates) and generally run fastest if you run as many tasks as will fit, in total, within the L3 cache of your CPU. If even one task exceeds the L3 cache of your CPU, run 1 task. The number of threads per task can be controlled in PrimeGrid settings. If you have multiple different systems, you can use more than one venue. Generally speaking, run it only on real cores, so don't fill up all the threads.
     
    e.g. on my 7800X3D with 96MB of L3 cache, in theory I should run 4 units at the same time for maximum performance.
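    As a rough sketch of that sizing rule (the ~8 bytes per FFT element is an inference from the sizes quoted in the edit at the bottom; the cache size and core count are just example inputs):

    # How many Cullen tasks fit in L3 at once, and threads per task.
    # Assumes a task's working set is roughly FFT length x 8 bytes
    # (2688K -> ~21 MiB, 2880K -> ~22.5 MiB, matching the sizes below).
    def plan(l3_mib, physical_cores, fft_k=2688):
        task_mib = fft_k * 8 / 1024                      # ~21 MiB per task
        tasks = max(1, min(physical_cores, int(l3_mib // task_mib)))
        threads_per_task = physical_cores // tasks
        return tasks, threads_per_task

    print(plan(96, 8))   # 7800X3D: 96 MiB L3, 8 cores -> (4, 2)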
     
    This can be checked by running a benchmark in Prime95.
     

    Try using the settings above as a starting point. 2688K is the largest FFT size the project currently uses; this may go up during the challenge! Note the software can use different optimal FFT sizes depending on CPU architecture, so if you get no result, try lowering the minimum size a bit to find the nearest smaller FFT size.
    Tick the 2nd check box, as the project uses LLR. It doesn't seem to make much difference so probably doesn't matter if you forget.
    Uncheck "Benchmark Hyperthreading" as generally it doesn't help but still increases power consumption on Intel CPUs. You can try it anyway if you want to see what happens.
    "Number of workers" is the number of tasks to try at the same time. On my system it defaulted to 1, 2, 8. You can edit it for other divisions of your core count. I added 4 here, as that is interesting.
    You can reduce the time to run at the bottom to the minimum 5 seconds to make it a bit faster.
     
    You should see output something like the above. Basically you want the biggest number after "throughput". As theory predicted, 4 tasks (4 workers) at the same time gave the highest throughput, but note it isn't that much faster than one task using all the cores, about 4%. You may choose slightly lower throughput if you want shorter individual task runtimes, which could be handy at the end if you want to squeeze out a unit before the deadline.
     
    Note benchmark numbers can't be directly compared at different FFT sizes! They get lower as FFT size gets bigger.
     
    Special considerations:
    For CPUs with split L3 caches (Zen 2 and earlier with 6 or more cores, Zen 3 and newer with 12 or more cores), consider each CCX as its own CPU initially. Generally you'll probably run 1 task per CCX if the cache is big enough; if not, one task per whole CPU. 3rd party software like Process Lasso can help prevent Windows from messing up scheduling, but I can't advise on the exact settings required.
    Similar to the above applies if you have NUMA or multi-socket systems.
    Because the software doesn't generally benefit from HT/SMT, if you have it enabled you should set the BOINC client to use the % of cores vs threads (see the sketch below). e.g. if you have an 8 core 16 thread system, set the BOINC client to use 50% of CPU cores. Alternatively you can turn off HT/SMT and run 100%.
    I don't have experience with hybrid Intel CPUs. You'll have to test whether using E cores helps or hinders. I'd guess only using P cores is probably optimal.
    If you have Intel mesh cache CPUs (e.g. Skylake-X, Cascade Lake-X and similar Xeons), count L2 + L3 instead of L3 alone. These are heavily weighted to L2 and it does help. On most other CPUs L2 is insignificant.
    If you don't have enough cache for 1 task, it will use RAM and therefore RAM performance will impact overall performance. Bandwidth generally helps more than timings.
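    Referring back to the HT/SMT point above, a minimal sketch of that percentage calculation (psutil is a third-party package and an assumption here; you can just as easily plug the counts in by hand):

    # What to enter for BOINC's "use % of CPUs" setting so that only
    # real cores are used when HT/SMT is enabled.
    import psutil  # pip install psutil

    physical = psutil.cpu_count(logical=False)   # e.g. 8
    logical = psutil.cpu_count(logical=True)     # e.g. 16 with SMT on
    print(f"Set BOINC to use {100 * physical // logical}% of CPUs")  # -> 50%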
    Edit: as expected, the FFT size is increasing over time.
    Max FFT size and associated data size per work unit:
    2 May AM 2688k - 21MB
    4 May PM 2880k - 22.5MB
  16. Informative
    porina got a reaction from Chachunka in BOINC Pentathlon 2024   
  17. Informative
    porina got a reaction from Lightwreather in BOINC Pentathlon 2024   
  18. Informative
    porina got a reaction from Beskamir in No one reads the fine print - Discord quietly tries to block your ability to sue them   
    Only skimming the details, the forced arbitration clause only applies to US residents. That is a lot of people for sure, but people in the rest of the world are not affected by this. Even if you are affected in the US and do not opt out, there are still exceptions that may apply.
  19. Informative
    porina got a reaction from Favebook in BOINC Pentathlon 2024   
  20. Informative
    porina got a reaction from leadeater in BOINC Pentathlon 2024   
    7920X 12-core Skylake-X: bench ~989 with 1 worker, just over 7.5 hours estimated run time per unit.
    11700K 8-core Rocket Lake: bench 531 with 1 worker, about 13.5 hours estimated.
     
    I haven't run it on the 7800X3D (yet), but based on its bench compared to the above, it should be under 6 hours a unit in throughput terms.
     
    I'm guessing they picked these long units so the server doesn't get killed by an insane number of smaller units. One small check unit gets generated per long main unit so there is no escaping that.
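    As a sanity check on those estimates, a small sketch assuming run time scales inversely with the benchmark throughput number (the constant is calibrated on the 7920X figures above; nothing here comes from the project itself):

    # Estimate per-unit run time from a Prime95 benchmark throughput number,
    # assuming run time is inversely proportional to throughput.
    K = 989 * 7.5  # calibrated on the 7920X: bench ~989 -> ~7.5 hours/unit

    def est_hours(throughput):
        return K / throughput

    print(f"11700K (531): ~{est_hours(531):.1f} h")  # ~14 h vs ~13.5 h observed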
  21. Like
    porina got a reaction from leadeater in BOINC Pentathlon 2024   
  22. Agree
    porina got a reaction from thechinchinsong in Intel says "Buy an overclockable motherboard that disables Current Excursion Protection, Set PL1 to 4000 amps, and your i9-14900KS may burn out"   
    If one manufacturer does something to get ahead, everyone else has to do it to keep up. So they end up all about the same anyway.
  23. Agree
    porina got a reaction from Sauron in Intel says "Buy an overclockable motherboard that disables Current Excursion Protection, Set PL1 to 4000 amps, and your i9-14900KS may burn out"   
    Overclocking has always been at the user's risk. The problem we have is that mobo manufacturers haven't made it clear which settings count as overclocking, and/or they set them as default. Even on an AM5 build I recently got, I might have the opposite problem: the Asus description for a setting in the BIOS was "overclock CPU and RAM for more performance", so I disabled it, and the CPU wouldn't turbo. Put that setting back to Auto and I get turbo.
  24. Like
    porina got a reaction from Stin6667 in transistor broke while unboxing   
    It is a capacitor, often used for power smoothing, but it has many other uses too. C749 is its placement reference on the board, not the model of the device itself.
     
    Can you take a better photo of the pad it came off of, without the capacitor in the way? I can't tell from the existing photo, but it could be bad soldering and therefore a manufacturing defect.
  25. Agree
    porina got a reaction from StDragon in Intel says "Buy an overclockable motherboard that disables Current Excursion Protection, Set PL1 to 4000 amps, and your i9-14900KS may burn out"   