
Intel 12th Gen Core Alder Lake for Desktops: Top SKUs Only, Coming November 4th +Z690 Chipset

Lightwreather
11 hours ago, Jito463 said:

The better solution would be for Intel to just allow their CPUs to reduce their power draw at idle.

That's... literally what they're doing here.


8 hours ago, leadeater said:

Edit: Never mind, scratch this, it seems that actually is around what an 11700K system will idle at. Still seems oddly high though 🤷‍♂️

 


What kinda system draws 240W at idle?

My 3900X draws about 70-90W at idle with one GPU, and each additional GPU adds about 20-30W to that

 

Edit: measured from the wall

-sigh- feeling like I'm being too negative lately


1 hour ago, Moonzy said:

What kinda system draws 240W at idle?

My 3900X draws about 70-90W at idle with one GPU, and each additional GPU adds about 20-30W to that

 

Edit: measured from the wall

I know, I agree but it actually seems to be fairly common so 🤷‍♂️

 

My old AF 4930K's idle package power is 78W and the 6800XT's is 30W (software reporting); however, I know my actual from-the-wall power is much higher, but I'd have to shut it down and get some baselines on my UPS power draw to give an actual figure, as other things are powered by it too.


2 hours ago, poochyena said:

That's... literally what they're doing here.

No, they're creating completely separate cores to run on, requiring a complete revamp of the Windows scheduler.  I'm talking about limiting the power draw of the performance cores when idle.  Even if I set my 3800X to run at high performance - meaning 100% minimum for the CPU clocks - it still only draws 90-100W at idle from the wall.  So why is Intel drawing nearly 3x as much at idle (based on @Kisai and @leadeater's posts)?


1 minute ago, Jito463 said:

I'm talking about limiting the power draw of the performance cores when idle.

how?


@Jito463 @poochyena

 

Just FYI, power stats of multiple different Intel-based servers, taken from their 20-minute power graphs, with the specs and minimum reading for each below.

 

2x E5-2643v4, 64GB, 4x SATA SSD, 4x NVMe SSD, 4x 10Gb - Minimum: 116W

2x E5-2643v4, 64GB, 4x SATA SSD, 4x NVMe SSD, 4x 10Gb - Minimum: 145W

2x E5-2643v4, 64GB, 4x SATA SSD, 4x NVMe SSD, 4x 10Gb - Minimum: 126W

2x 6254, 384GB, 2x SATA SSD, 4x 25Gb - Minimum: 102W

2x 6242R, 384GB, 2x SATA SSD, 4x 25Gb - Minimum: 277W

2x 6242R, 384GB, 2x SATA SSD - Minimum: 282W

2x E5-2690v4, 384GB, 2x 10K RPM HDD, 4x 10Gb - Minimum: 94W

2x E5-2667, 64GB, 4x SATA SSD, 12x 10K RPM HDD, 4x 10Gb - Minimum: 144W

2x E5-2630v2, 32GB, 2x 10K RPM HDD, 10x NL-SAS HDD, 2x 10Gb - Minimum: 197W

4x E7-8890v4, 256GB, 2x 10K RPM HDD, 2x 10Gb - Minimum: 299W

4x E5-4650, 512GB, 2x 10K RPM HDD, 2x 10Gb - Minimum: 183W

Based on the above, high system idle is most likely a system/platform configuration issue, and if it's around 250W then massive improvements could be made on the current system by changing only BIOS settings.


1 hour ago, leadeater said:

@Jito463 @poochyena

…

Based on the above, high system idle is most likely a system/platform configuration issue, and if it's around 250W then massive improvements could be made on the current system by changing only BIOS settings.

Yeah, it's all about BIOS and OS settings for idle power states. See the first half of this post I have on it.

 

 

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DYI FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


3 hours ago, poochyena said:

how?

Beats me, I'm not an engineer.  Limiting clock speeds?  Turning off features unless they're called on?  I can imagine a number of ways, but I say that as an end user, not as a designer of processors.  All I know is that if AMD can have performance cores and the system draws under 100W when idle, then Intel - with their massive budget - should be able to as well.  That's not a dig at Intel; it's just acknowledging that they are the behemoth in the room, and as such they should be able to do things that AMD simply can't, based on their size.

 

Just to append to my previous post, I should also clarify that I have 3 HDDs and 2 SSDs, all of which are set to never spin down/sleep.  They run at full speed 100% of the time.  I also disable PCIe Link State Power Management in my High Performance profile (so it's not power throttling my RX 590 or sound card) and have my 'Minimum Processor State' set to 100%.  Despite all this, it still draws less than 100W at idle.

 

I can't say how Intel should do it, but you would think they could find a way with all the money they have to throw at engineering and production, without simply replicating a mobile platform solution in the desktop.  That's my opinion, take it for what you will.


10 minutes ago, Jito463 said:

Beats me, I'm not an engineer.  Limiting clock speeds?  Turning off features unless they're called on?

Why do you think desktop and laptop CPUs aren't the same? If CPUs can just magically reduce their power, why not put desktop CPUs in laptops?


5 minutes ago, poochyena said:

Why do you think desktop and laptop CPUs aren't the same? If CPUs can just magically reduce their power, why not put desktop CPUs in laptops?

I don't think they can just "magically" reduce power draw.  That's why laptop CPUs have fewer cores and run at a limited power draw.  Though there have certainly been instances of laptop manufacturers using a desktop CPU.  I'm just making the observation that if one company can do it, the other should be able to, as well.  I don't know how much clearer I can make my position.


7 minutes ago, Jito463 said:

That's why laptop CPUs have fewer cores and run at a limited power draw.

Do you think a laptop CPU with the same cores and unlocked power/voltage, with adequate cooling, would run the same as a desktop CPU?


18 hours ago, Jito463 said:

The better solution would be for Intel to just allow their CPUs to reduce their power draw at idle.  I don't mind them trying something new, I just don't think it's a viable long term solution.

Uhhh... what do you think they already do?  SpeedStep and Speed Shift downclock and lower voltage under lighter load, and there is power gating on core/cache when the OS signals that not all cores need to be active.  But all the charts people make of 200W+ idle are just platform misconfiguration.  Set Windows to the "High Performance" power profile and the CPU won't downclock.  Plus there are a half dozen settings in the BIOS that will disable lower power states.

 

Part of the "problem" is that people want a ton of stuff running in the background, which keeps waking the cores up, so that's where the ARM big.LITTLE approach helps: shuffle all the random shit in the background onto the higher-perf/watt Atom cores.
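The Windows-side settings mentioned above can be scripted with powercfg. A small sketch that builds the relevant commands: SUB_PROCESSOR/PROCTHROTTLEMIN and SUB_PCIEXPRESS/ASPM are standard powercfg setting aliases, but the actual values (5% minimum processor state, maximum ASPM savings) are just example choices, not recommendations:

```python
# Sketch of the powercfg commands controlling the idle-relevant settings
# discussed above. SUB_PROCESSOR/PROCTHROTTLEMIN (minimum processor state,
# in percent) and SUB_PCIEXPRESS/ASPM (PCIe Link State Power Management:
# 0=off, 1=moderate, 2=maximum savings) are standard powercfg aliases.
def powercfg_idle_friendly(min_proc_pct=5, aspm_mode=2):
    """Build the powercfg invocations that re-enable idle power savings."""
    return [
        ["powercfg", "/setacvalueindex", "SCHEME_CURRENT",
         "SUB_PROCESSOR", "PROCTHROTTLEMIN", str(min_proc_pct)],
        ["powercfg", "/setacvalueindex", "SCHEME_CURRENT",
         "SUB_PCIEXPRESS", "ASPM", str(aspm_mode)],
        ["powercfg", "/setactive", "SCHEME_CURRENT"],  # re-apply the scheme
    ]

for cmd in powercfg_idle_friendly():
    print(" ".join(cmd))
```

Run the printed commands from an elevated prompt (or pass each list to `subprocess.run`); `powercfg /query SCHEME_CURRENT SUB_PROCESSOR` shows what your plan is currently set to.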

Workstation:  14700nonK || Asus Z790 ProArt Creator || MSI Gaming Trio 4090 Shunt || Crucial Pro Overclocking 32GB @ 5600 || Corsair AX1600i@240V || whole-house loop.

LANRig/GuestGamingBox: 13700K @ Stock || MSI Z690 DDR4 || ASUS TUF 3090 650W shunt || Corsair SF600 || CPU+GPU watercooled 280 rad pull only || whole-house loop.

Server Router (Untangle): 13600k @ Stock || ASRock Z690 ITX || All 10Gbe || 2x8GB 3200 || PicoPSU 150W 24pin + AX1200i on CPU|| whole-house loop

Server Compute/Storage: 10850K @ 5.1Ghz || Gigabyte Z490 Ultra || EVGA FTW3 3090 1000W || LSI 9280i-24 port || 4TB Samsung 860 Evo, 5x10TB Seagate Enterprise Raid 6, 4x8TB Seagate Archive Backup ||  whole-house loop.

Laptop: HP Elitebook 840 G8 (Intel 1185G7) + 3060 RTX Thunderbolt Dock, Razer Blade Stealth 13" 2017 (Intel 8550U)


1 hour ago, poochyena said:

Do you think a laptop CPU with the same cores and unlocked power/voltage, with adequate cooling, would run the same as a desktop CPU?

Well, to a degree they actually could. The CPU cores are the same in a Coffee Lake-S desktop CPU as in a Coffee Lake-H mobile CPU. Both have the UHD 630 iGPU, or some better Iris Pro options for the mobile parts. Both have the same IMC, at least in bit width (dual channel), and support DDR4-2666 official spec, with Coffee Lake-H additionally supporting LPDDR3-2133. Both have 20 PCIe lanes, with 4 used for DMI to the chipset.

 

Additionally, the die on an 8700K and an 8750H is the exact same size and dimensions, therefore identical, and they come from the same silicon wafers.

 

The only differences are the microcode tuning, voltage profiles and clock tables.

 

With all that said, that does not mean an 8750H die could be loaded with 8700K microcode and clocked the same; it may not have the correct binning to do that and would not be stable. As the saying goes, not all CPU dies are made equal, even though in a manufacturing sense they literally are made the same.

 

So if you are experiencing very high idle power usage, then it's the platform at fault, which means the motherboard, and it's likely fixable too. Idle will still be higher on an 8700K than an 8750H because they operate with different voltages and clock tables, but you can still get an 8700K's idle power down very low if you want to.


1 hour ago, AnonymousGuy said:

But all the charts people make of 200W+ idle are just platform misconfiguration.  Set Windows to the "High Performance" power profile and the CPU won't downclock.

As I stated previously, I have mine set to high performance, with no downclocking of anything, and it still doesn't draw a lot of power.  It just makes me wonder what's so different about Intel chips that they would require so much extra power.  Obviously there are the process node differences, but that's still a massive disparity.


If Microsoft does not fix the Ryzen issue with Windows 11, and the new Ryzen 6000 series is not that worth it....

I am going image.png.5f5f7e1b4276815e416bfb7aaefb0a77.png

If Ryzen SMUSHES Intel in productivity, then.........

I am going image.jpeg.63de30133bbd27821ef9a6f144192ebe.jpeg

That is my opinion

quote me

Pls Mark a solution as a solution, would be really helpful.

BTW pls correct me, iam really stoobid at times.


16 hours ago, Jito463 said:

No, they're creating completely separate cores to run on, requiring a complete revamp of the Windows scheduler.  I'm talking about limiting the power draw of the performance cores when idle.  Even if I set my 3800X to run at high performance - meaning 100% minimum for the CPU clocks - it still only draws 90-100W at idle from the wall.  So why is Intel drawing nearly 3x as much at idle (based on @Kisai and @leadeater's posts)?

In theory, it shouldn't take too much of a change, since Intel is doing most of the heavy lifting this time around. Intel designed a scheduler on the die for these processors, so it really only requires that Windows recognize these processors' hardware IDs and follow the low-level telemetry provided by the hardware scheduler:

https://coreteks.tech/articles/index.php/2021/07/02/the-alder-lake-hardware-scheduler-a-brief-overview/

Quote

To avoid frequent operation system updates, the patent describes that a hysteresis-based technique will be used in such way that bidirectional thresholds serve as a low pass filter, with different thresholds for transitions between two regions to remove frequent transients. Since hardware scheduler feedback is computed before performing the power budgeting based in the energy performance preferences, this ensures that even if a thread is scheduled in/out by the operation system the feedback will not change, thus providing the present power/thermal state of the system irrespective of the thread scheduled.
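The hysteresis technique that patent excerpt describes amounts to a classifier with different up/down thresholds acting as a low-pass filter. A toy sketch of the idea, where all thresholds and sample values are invented purely for illustration (not Intel's actual metrics):

```python
# Toy illustration of hysteresis with bidirectional thresholds: a transition
# up requires crossing a high threshold, a transition down requires crossing
# a lower one, so brief transients don't cause state flapping.
def classify(samples, up=0.7, down=0.3):
    """Return a 0/1 state per sample using two different transition thresholds."""
    state, out = 0, []
    for s in samples:
        if state == 0 and s > up:      # only promote above the high threshold
            state = 1
        elif state == 1 and s < down:  # only demote below the low threshold
            state = 0
        out.append(state)
    return out

# Mid-range transients (0.5) never flip the state in either direction:
print(classify([0.1, 0.5, 0.1, 0.9, 0.5, 0.9, 0.5]))  # [0, 0, 0, 1, 1, 1, 1]
```

With a single threshold, a value hovering near it would toggle the state every few samples; the gap between `up` and `down` is what removes those frequent transients the patent mentions.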

https://www.anandtech.com/show/16881/a-deep-dive-into-intels-alder-lake-microarchitectures/2

Quote

This new technology is a combined hardware/software solution that Intel has engineered with Microsoft focused on Windows 11. It all boils down to having the right functionality to help the operating system make decisions about where to put threads that require low latency vs threads that require high efficiency but are not time critical.

 

First you need a software scheduler that knows what it is doing. Intel stated that it has worked extensively with Microsoft to get what they want into Windows 11, and that Microsoft have gone above and beyond what Intel needed. This fundamental change is one reason why Windows 11 exists.

 

So it’s easy enough (now) to tell an operating system that different types of cores exist. Each one can have a respective performance and efficiency rating, and the operating system can migrate threads around as required. However the difference between Windows 10 and Windows 11 is how much information is available to the scheduler about what is running.

Windows 10 is currently handling this much better than Windows 11 on these CPUs. I am assuming something is wonky with the extra granular detection methods and it causes glitches in the switching states, but I imagine this will be patched soon, likely with a microcode update on the processors themselves.

 

@my name is guru iam tech I am refusing to quote you out of principle now. Also, Windows 11 already patched the AMD issue in Build 22000.282: https://support.microsoft.com/en-us/topic/october-21-2021-kb5006746-os-build-22000-282-preview-03190705-0960-4ba4-9ee8-af40bef057d3. You also need a chipset update from AMD to fully take advantage of the changes: https://www.amd.com/en/support/kb/faq/pa-400.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


These two statements seem in conflict with each other:

6 minutes ago, MageTank said:

In theory, it shouldn't take too much of a change since Intel is doing most of the heavy lifting this time around.

6 minutes ago, MageTank said:

First you need a software scheduler that knows what it is doing. Intel stated that it has worked extensively with Microsoft to get what they want into Windows 11, and that Microsoft have gone above and beyond what Intel needed. This fundamental change is one reason why Windows 11 exists.

It does say that it should be simpler now to tell the OS what kind of cores exist, it didn't say that it was simple to make the change.

10 minutes ago, MageTank said:

So it’s easy enough (now) to tell an operating system that different types of cores exist.

Again, just to qualify, I don't begrudge Intel trying something new.  Innovation is what drives the industry.  I just don't think this is a move that makes sense long term.  I have a feeling we'll look back at this similarly to how we look back at the Itanic....ahem, Itanium.


16 minutes ago, Jito463 said:

These two statements seem in conflict with each other:

It does say that it should be simpler now to tell the OS what kind of cores exist, it didn't say that it was simple to make the change.


I am not discounting the effort Microsoft needed to adapt to Intel's "Thread Director" hardware scheduler, just noting that it doesn't require them to undertake the full scheduling of these CPUs from scratch. If what Intel claims is true, their hardware implementation should make it easier for Microsoft, as the telemetry would tell them exactly what to prioritize rather than leaving that for Microsoft to extrapolate.

 

It's funny how they claim W11 was designed for this hardware scheduling change, yet it somehow performs better in W10 based on my testing. It's almost as if something isn't working as intended, which is weird given that W10 seems to ignore the granularity of Alder Lake's Thread Director reporting and uses a far simpler scheduling technique.

 

Quote

Again, just to qualify, I don't begrudge Intel trying something new.  Innovation is what drives the industry.  I just don't think this is a move that makes sense long term.  I have a feeling we'll look back at this similarly to how we look back at the Itanic....ahem, Itanium.

I don't disagree with this statement at all. Personally, I am against the idea of big-little in desktops at the moment. It's not that I doubt the OS can handle the scheduling (even W10's basic scheduler appears to be doing decently at this task); it's more that we already have existing alternatives where efficiency is concerned. Dynamic frequency scaling already exists in low-power states. This isn't unique to Intel either, as AMD has a similar implementation that drops current in low-power states. It seems easier (with my limited engineering background) to refine this design rather than reinvent the wheel used on ARM processors, attempt to implement it in an x86 CISC environment, and hope that it works. It also opens the door for confusion among consumers who don't know any better when you advertise a 16-core processor that is actually 8P+8E. Consumers already don't understand HTT/SMT and falsely equate them to processing cores; we really don't need added confusion on top of this.

 

Still, it's petty to hold back innovation simply because the market may not understand it, even I am not that naive. I just don't want this to be Intel's "Bulldozer" incident (for a more modern reference that the kids of today might understand, lol).



I see no reason why Intel would consume so much power at idle, given they've had SpeedStep for years, which downclocks cores, undervolts them, and puts parts of the CPU to sleep (C-states). I had a 5820K and it had very low power consumption at idle because of this. I see no reason why much newer models wouldn't use this.
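You can actually watch those C-states at work: on Linux the kernel exposes per-state idle residency via the standard cpuidle sysfs interface. A small sketch; the sysfs path layout is the kernel's documented one, while the sample numbers at the bottom are invented for illustration:

```python
from pathlib import Path

def residency_shares(usec_by_state):
    """Given {state_name: residency_in_microseconds}, return each state's
    share of total recorded idle time as a fraction."""
    total = sum(usec_by_state.values()) or 1
    return {name: t / total for name, t in usec_by_state.items()}

def read_cpuidle(cpu=0):
    """Read per-state idle residency for one CPU from Linux sysfs
    (/sys/devices/system/cpu/cpuN/cpuidle/stateM/{name,time})."""
    base = Path(f"/sys/devices/system/cpu/cpu{cpu}/cpuidle")
    out = {}
    for state in sorted(base.glob("state*")):
        name = (state / "name").read_text().strip()
        out[name] = int((state / "time").read_text())  # microseconds
    return out

# Example with made-up numbers: a healthy idle system spends most
# of its idle time in a deep state like C6.
print(residency_shares({"POLL": 1000, "C1": 9000, "C6": 90000}))
```

On a properly configured system most idle time lands in the deepest state; if nearly everything sits in POLL/C1, something (BIOS settings, a chatty background process) is keeping the cores awake.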


57 minutes ago, RejZoR said:

I see no reason why Intel would consume so much power at idle, given they've had SpeedStep for years, which downclocks cores, undervolts them, and puts parts of the CPU to sleep (C-states). I had a 5820K and it had very low power consumption at idle because of this. I see no reason why much newer models wouldn't use this.

They can and do, when properly configured. Some motherboards have that turned off by default in the name of associating stronger benchmark scores with the platform, in an era of ever more marginal gains.

 

And then if you use intentionally inefficient settings for your OS, that can also hurt quite a bit.



27 minutes ago, Curufinwe_wins said:

They can and do, when properly configured. Some motherboards have that turned off by default in the name of associating stronger benchmark scores with the platform, in an era of ever more marginal gains.

 

And then if you use intentionally inefficient settings for your OS, that can also hurt quite a bit.

Exactly. Every single board I have tested since Coffee Lake has had current limits set at 255A and power limits set at 4096W. This was NEVER the case prior to CFL, and it is insane to think this is a good idea. Ignoring power consumption entirely, these are safety features being treated like performance features. One wrong current spike at high voltage from an amateur overclocker and you have a recipe for a dead CPU or a popped VRM.

 

With the original limits in place, PL2 was basically law (assuming thermals were in check). You couldn't exceed that value beyond a short burst and would throttle to ensure it was adhered to, even under some heavy AVX loads. It's what's currently used on notebooks quite effectively. It's also why undervolting is useful in those scenarios: it increases power overhead without exceeding the limits of these safety features.
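The burst-then-throttle behavior described above can be illustrated with a toy model of a PL1/PL2-style running power budget. This is a sketch, not Intel's actual algorithm, and the limits (125W PL1, 250W PL2, tau of 8 seconds) are invented example numbers: the chip may burst to PL2, but once a moving average of power over the tau window reaches PL1, it clamps down to PL1.

```python
# Toy model of a PL1/PL2-style power budget: burst to PL2 is allowed while
# an exponentially weighted moving average of package power stays below PL1;
# once the average reaches PL1, sustained power is clamped to PL1.
# All numbers are illustrative, not Intel's real defaults.
def simulate(requested_watts, pl1=125.0, pl2=250.0, tau=8, dt=1.0):
    """Return the power actually granted each step under the limits."""
    avg, granted = 0.0, []
    alpha = dt / tau                      # EWMA smoothing factor
    for req in requested_watts:
        cap = pl2 if avg < pl1 else pl1   # burst allowed while average is low
        p = min(req, cap)
        avg += alpha * (p - avg)          # update the running average
        granted.append(p)
    return granted

g = simulate([250.0] * 30)  # a sustained all-core load requesting PL2
print(g[0], g[-1])          # bursts at 250 W at first, settles to 125 W
```

Setting the limits to 4096W/255A, as the boards above do, effectively makes `cap` infinite, so the "short burst" becomes permanent and the safety clamp never engages.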

 

Now, the overclocker in me is perfectly happy to exceed these limits, but only on my own terms. It should never be the default for these protections to be disabled, as the ignorant will undoubtedly fall prey to this practice. I originally thought this was exclusive to MSI, as that was the first brand of boards I encountered it on when this first came about, but now I know it impacts every board from every vendor, even the cheaper ones.



18 hours ago, MageTank said:

11 already patched the AMD issue

Just 1 out of 2 

The direction tells you... the direction

-Scott Manley, 2021

 

Softwares used:

Corsair Link (Anime Edition) 

MSI Afterburner 

OpenRGB

Lively Wallpaper 

OBS Studio

Shutter Encoder

Avidemux

FSResizer

Audacity 

VLC

WMP

GIMP

HWiNFO64

Paint

3D Paint

GitHub Desktop 

Superposition 

Prime95

Aida64

GPUZ

CPUZ

Generic Logviewer

 

 

 


I was looking at some laptops, and after researching Pentium N6000 CPUs I came across info that the Pentium N6000 actually uses Atom cores, but based on the Tremont architecture, which is apparently highly efficient and fabricated on Intel's 10nm+ process. After further research: this new Alder Lake uses Gracemont E-cores, which are beefed-up Tremont cores. They essentially bolted an Atom processor onto their existing Core i3/i5/i7/i9 processors and added logic to manage workloads between them.


4 hours ago, Mark Kaine said:

Just 1 out of 2 

Are you referring to the issue that was addressed with AMD's chipset driver? Does that really require an OS level patch if a driver addresses it? Doesn't seem like AMD or Microsoft is pushing for an OS-level fix either: https://www.amd.com/en/support/kb/faq/pa-400

Quote

Additional Information

As of October 21, 2021: Windows 11 update KB5006746 fully resolves the performance impact of Issue 1 described in this article. AMD Chipset Driver 3.10.08.506 fully resolves the performance impact of Issue 2 described in this article. AMD has verified that the performance and behavior of compatible AMD processors are working as intended on Windows 11 subsequent to the installation of these updates. AMD and Microsoft recommend that users promptly install this update on affected systems.

I imagine if they are not targeting specific CHIDs, they could push a universal update through Windows Update and that'll take care of it for everyone who runs WU. We baked the chipset update into our AMD systems and haven't had any issues even after generalizing, so OEMs likely won't encounter any performance issues on their AMD systems; we got these updates quite early for our images.



