Why does Intel's big-little design consume so much more power than their previous architectures?

Just feel like creating some discussion. I remember back in the day when Apple first switched from their single-core PowerPC to Intel, the reasoning was that a dual core could handle the same workload with less power. So why has Intel breaking up their big P cores into several smaller E cores resulted in increased power? Theoretically, shouldn't many smaller cores compute the work more efficiently than a single large one?

 

Not that I am complaining about what they brought to the table, just curious. I don't think it's their P cores that consume more, because they don't consume that much in a pure P core workload like gaming; it's only when the E cores are thrown into the mix that heat and power start to go through the roof.

Just now, e22big said:

I don't think it's their P cores that consume more, because they don't consume that much in a pure P core workload like gaming; it's only when the E cores are thrown into the mix that heat and power start to go through the roof.

If "Efficient" cores used more power than "Performance" cores, it would go against the naming scheme, don't you think?

because there are SO MANY cores, and they (especially the P cores) are driven SO HARD. 

 

you don't solve a power problem with big-little, you create the possibility to solve it, but the solution is in the software. and given that a lot of x86 software is made to just 'max out everything more fasts', it makes sense that adding more 'efficient' cores still adds more power. it's like intel took a desktop cpu, and glued a laptop cpu to the side.

In terms of performance per watt, the E cores are more efficient. The problem is that Intel pushes the cores to the limit on the higher SKUs, meaning they're well outside the efficiency window. The 12900K is basically pre-overclocked. If Intel ran it at the same all-core speed as the 12400, with a 4.2GHz all-core boost, and put the E cores at something like 3.3GHz, it would consume much less power while giving you most of the performance. But it wouldn't beat the 5950X anymore, which is clearly what Intel was targeting.

 

The issue isn't big.LITTLE, it's that Intel red-lines their chips now to compete with AMD. That's why they're so inefficient.
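The red-lining effect is easy to put numbers on with a toy model. The assumptions here are illustrative, not measured: performance scales about linearly with clock, and near the top of the curve voltage rises roughly linearly with clock, so dynamic power grows roughly with f·V² ~ f³. The baseline of 4.2GHz at 65W is made up for the sake of the sketch.

```python
# Toy model of why red-lining hurts efficiency (illustrative assumptions:
# perf ~ f, and near the top of the curve power ~ f^3 since V rises with f).

def relative_power(freq_ghz, base_freq=4.2, base_power=65.0):
    """Power relative to a hypothetical baseline clock, assuming P ~ f^3."""
    return base_power * (freq_ghz / base_freq) ** 3

for f in (4.2, 4.7, 5.2):
    perf = f / 4.2                               # relative performance
    eff = perf / (relative_power(f) / 65.0)      # relative perf per watt
    print(f"{f:.1f} GHz: {relative_power(f):6.1f} W, "
          f"perf x{perf:.2f}, perf/W x{eff:.2f}")
```

Under these assumptions, the last GHz roughly doubles power for about a 24% performance gain, which is why backing the clocks off a few hundred MHz recovers so much efficiency.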

Because there are simply more cores. The i7-12700K has 8 P cores like the previous-gen i7s, whose power consumption and heat output were already high, and on top of that you add 4 more efficiency cores, which themselves add more heat and power consumption into the mix.

7 minutes ago, e22big said:

So why has Intel breaking up their big P cores into several smaller E cores resulted in increased power? Theoretically, shouldn't many smaller cores compute the work more efficiently than a single large one?

In many cases, especially gaming, no, definitely not. To take advantage of many cores you need applications that are heavily multi-threaded, of which there aren't many, and in some cases it is difficult or impossible to split a task into smaller subtasks that can be completed separately to optimise for multi-threading. And creating a CPU with so many cores also presents challenges and potentially lowers yields.

It doesn't consume much more power than previous architectures; they just tuned some of these CPUs even harder for performance over efficiency, in part to beat AMD's options in benchmarks.

Also, it's important to differentiate between absolute power consumption and power efficiency: the 12900K uses a ridiculous amount of power, but it's way more efficient than the previous i9 CPUs because it's way faster.

It still uses a lot of power because they're still brute-forcing performance with the P cores. In terms of efficiency, AMD still has a big advantage.

1 hour ago, YoungBlade said:

In terms of performance per watt, the E cores are more efficient. The problem is that Intel pushes the cores to the limit on the higher SKUs, meaning they're well outside the efficiency window. The 12900K is basically pre-overclocked. If Intel ran it at the same all-core speed as the 12400, with a 4.2GHz all-core boost, and put the E cores at something like 3.3GHz, it would consume much less power while giving you most of the performance. But it wouldn't beat the 5950X anymore, which is clearly what Intel was targeting.

 

The issue isn't big.LITTLE, it's that Intel red-lines their chips now to compete with AMD. That's why they're so inefficient.

 

Then I guess part of the reason is that Intel 10nm/Intel 7 is still not as efficient as TSMC's node. Because if they were on the same node, theoretically many smaller cores should be more efficient at the same task than a single big one; it could be slower, but not hotter. Alder Lake kind of surprised me in that it was actually much faster per piece of silicon while not as efficient, which is the opposite of the conventional expectation.

1 hour ago, manikyath said:

because there are SO MANY cores, and they (especially the P cores) are driven SO HARD. 

 

you don't solve a power problem with big-little, you create the possibility to solve it, but the solution is in the software. and given that a lot of x86 software is made to just 'max out everything more fasts', it makes sense that adding more 'efficient' cores still adds more power. it's like intel took a desktop cpu, and glued a laptop cpu to the side.

But they aren't. Physically, the 12900K is just a 10-core part: they replaced 2 big P cores with 8 smaller E cores, which increases the core count to 16. But the silicon itself is still roughly the size of a 10-core die, and it shouldn't consume more raw power just because you break 2 of those cores into smaller ones.

6 minutes ago, e22big said:

But the silicon itself is still roughly the size of a 10-core die, and it shouldn't consume more raw power just because you break 2 of those cores into smaller ones.

First, have you actually compared the efficiency of 12th gen to 10th and 11th gen? 12th gen actually is more efficient even with the E cores enabled (the 11900K topped out at just under 300W compared to the roughly 250W of the 12900K); it's just so much thermally denser that it's way harder to actually cool.

 

Also, splitting the E cores up increases the number of operations that can happen in the same amount of space. It's kind of like how hyperthreading actually increases power consumption by ~30% in workloads that take advantage of it, just cranked up a bit more, since the E cores do better than hyperthreading in the workloads that take advantage of them.

1 hour ago, YoungBlade said:

In terms of performance per watt, the E cores are more efficient. The problem is that Intel pushes the cores to the limit on the higher SKUs, meaning they're well outside the efficiency window. The 12900K is basically pre-overclocked. If Intel ran it at the same all-core speed as the 12400, with a 4.2GHz all-core boost, and put the E cores at something like 3.3GHz, it would consume much less power while giving you most of the performance. But it wouldn't beat the 5950X anymore, which is clearly what Intel was targeting.

 

The issue isn't big.LITTLE, it's that Intel red-lines their chips now to compete with AMD. That's why they're so inefficient.

This makes sense. It's not the architecture per se; it's just that Intel chooses to ship them heavily overclocked.

 

Think about it: not too long ago, users used to OC Intel chips to 5GHz, and that was considered a significant OC. Now Intel runs the 12900K up to a 5.2GHz boost clock out of the box. They started doing that with the 9900K, I believe, which ran at 5.0GHz out of the box. Sure, the architectures and process nodes have become more efficient in the past few gens, but I don't think they have fundamentally changed.

 

But that still wouldn't explain why they are more inefficient than the 9th-11th gen. Is that just an effect of the CPUs having MORE maxed-out cores (both P and E), and of software/Windows not knowing to use ONLY the E cores for lighter tasks?

29 minutes ago, maartendc said:

But that still wouldn't explain why they are more inefficient than the 9th-11th gen. Is that just an effect of the CPUs having MORE maxed-out cores (both P and E), and of software/Windows not knowing to use ONLY the E cores for lighter tasks?

The 12th gen parts are way more efficient than the previous Intel parts; it's just the red-lining issue. If you turn on MCE for the 10900K, that puts you in the ballpark of the 12900K for power consumption, but the 12900K is way faster.

 

1 hour ago, e22big said:

 

Then I guess part of the reason is that Intel 10nm/Intel 7 is still not as efficient as TSMC's node. Because if they were on the same node, theoretically many smaller cores should be more efficient at the same task than a single big one; it could be slower, but not hotter. Alder Lake kind of surprised me in that it was actually much faster per piece of silicon while not as efficient, which is the opposite of the conventional expectation.

With the E cores, it can be just as efficient as TSMC's node. If you look at the 12600K, it can match the 5800X in many multicore workloads while consuming about the same amount of power. To me, this is a fair fight, because AMD is also inefficient with the 5800X, likely so that it matched the 10900K at release.

 

The 12900K is the big problem: Intel stopped caring about efficiency in any capacity in order to match the 5950X in benchmarks. That fight looks particularly awful because AMD tunes the 5950X for efficiency, so the result is understandably lopsided.

First let's clarify what is or isn't an overclock, as that is important. Overclocking is running outside specification so should be disregarded for this discussion, as you can make it as bad as you like. On Intel, adjusting power limit is not an overclock, but anything directly tinkering with voltage or clocks is, including MCE. On AMD, pretty much everything is an overclock, including PBO. A big difference between them is AMD had an enforced stock power limit, at least up to Zen 2: PPT. Note many higher end enthusiast mobos may enable MCE (or PBO) by default, so they're the ones doing the overclocking.

 

With that out of the way, we have other factors to consider. Long-time overclockers will know of the voltage or power curve. Note curve, it isn't a straight line. The more clock you want, the more voltage you need, the more power it uses. Efficiency generally goes down as clock goes up, at least at the higher end of performance. So you choose and balance both your peak performance and power efficiency. There will be give and take there. Intel traditionally not requiring system builders to work to a power limit has resulted in them showing much bigger values here. Conversely, AMD CPUs could perform better with a relaxed power limit, depending on the task.
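That curve can be sketched numerically using the standard CMOS dynamic-power relation P ≈ C·V²·f. The V/f points and the capacitance value below are made up purely for illustration; real curves vary per chip (silicon lottery).

```python
# Sketch of the voltage/power curve described above.
# CMOS dynamic power is roughly P = C * V^2 * f.

C = 4.0  # effective switched capacitance, arbitrary units (made up)

vf_curve = [  # (clock in GHz, required core voltage) - hypothetical values
    (3.6, 0.90),
    (4.2, 1.00),
    (4.8, 1.15),
    (5.2, 1.32),
]

for f, v in vf_curve:
    print(f"{f:.1f} GHz @ {v:.2f} V -> {C * v * v * f:5.1f} (relative power)")
```

In this made-up table, the final 400 MHz step adds about 8% clock for over 40% more power, which is the "curve, not a straight line" point in action.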


I think I've seen testing somewhere comparing AMD and Intel perf/watt through the range, but I don't recall the results. Will see if I can find it again. It's more interesting to look at equal-power-limit performance comparisons. I last did this myself (it should still be on this forum somewhere) with a 3600 vs an 8086K, the 6-core parts of the time. The 8086K could take the absolute performance lead, but at much higher power usage.

 

Don't underestimate the impact of changes to the fab process. Look at Skylake and Kaby Lake. Pretty much identical architecturally, but the improved process on Kaby Lake gave it on average about 100 MHz over Skylake. This isn't just marketing: if you overclock them to the limit, you can also see the average clocks rise similarly. Likewise, when Coffee Lake came out, that gained about another 100 MHz on average. AMD showed similar within a product generation. I bought Zen 2 at launch, and you had to overclock them to within an inch of their lives to get much past 4.2 GHz all-core stable. A year or so later, reports were that this was being easily beaten as routine.

 

In all the above, I'm assuming like-for-like cooling, and nothing extreme. Better cooling can result in better performance in general.

 

So back to the thread topic with all that background: without personal testing, chances are both types of 12th gen cores are more efficient than previous generations when comparing their operating curves, but Intel still runs them pretty far up the curve. We'd still need a comparison point vs AMD.

 

Edit: found my old testing. Now to look for similar testing by others on recent gens.

 


18 hours ago, RONOTHAN## said:

First, have you actually compared the efficiency of 12th gen to 10th and 11th gen? 12th gen actually is more efficient even with the E cores enabled (the 11900K topped out at just under 300W compared to the roughly 250W of the 12900K); it's just so much thermally denser that it's way harder to actually cool.

 

Also, splitting the E cores up increases the number of operations that can happen in the same amount of space. It's kind of like how hyperthreading actually increases power consumption by ~30% in workloads that take advantage of it, just cranked up a bit more, since the E cores do better than hyperthreading in the workloads that take advantage of them.

On that I agree: 12th gen is actually much more efficient than Intel's previous gens, they just run it higher for added performance, and there's nothing wrong with that.

 

But I don't agree on the rest: given the nature of the E cores, it should also run more efficiently than AMD's all-big-core design. Or maybe not, but it shouldn't have to be run this hard to catch up. They practically beat AMD's 12-core part in performance with what is physically more like a 9-core design, and 9 cores shouldn't use more power than 12. Maybe it's just like you've said, that splitting a P core up into E cores does increase operations and thus leads to more power consumption. But if so, why did the Mac switching from PowerPC to Intel actually decrease power?

6 minutes ago, e22big said:

But if so, why did the Mac switching from PowerPC to Intel actually decrease power?

That's before my time in this space. I'd assume it's because Intel designed their CPUs more efficiently than IBM, but I haven't done enough research to know (again, before my time, and not an era of computing I'm particularly interested in).

 

15 minutes ago, e22big said:

9 cores shouldn't use more power than 12.

It does when Intel 7 (Intel's rebranded 10nm process) clocks higher than TSMC 7nm but is about as power efficient per clock. Intel's CPUs are clocked higher and use more voltage, and therefore draw more power.

8 minutes ago, RONOTHAN## said:

That's before my time in this space. I'd assume it's because Intel designed their CPUs more efficiently than IBM, but I haven't done enough research to know (again, before my time, and not an era of computing I'm particularly interested in).

 

It does when Intel 7 (Intel's rebranded 10nm process) clocks higher than TSMC 7nm but is about as power efficient per clock. Intel's CPUs are clocked higher and use more voltage, and therefore draw more power.

Actually, now that I think about it, big-little probably really is more efficient than AMD's all-big-core design. I mean, if Intel used a P+E design with the same physical die area as AMD's all-big-core design, and clocked everything way down to the same or even lower wattage, they would still have beaten AMD in multicore performance. Just not as versatile for gaming and other frequency-sensitive tasks.

 

I've come to realise that an Intel laptop part like the Core i9-12900HK, with 6P + 8E cores, is basically equivalent to an AMD 8-big-core part, and it beats the Ryzen 5700X in multicore performance while drawing only 115W at turbo. Still more than the 80-90W turbo of a Ryzen chip, but a lot closer.

 


As an example of what is happening with the 12900K's power consumption, look at the 12700K at stock and overclocked.

At stock it uses 160W and stays around 4.6GHz on the P cores; overclocked, it uses 220W at 5GHz on the P cores.

And that increase in power results in ~3% better performance. The 12900K is basically doing what the overclocked 12700K does: increasing power consumption by a massive amount for that last little bit of performance.
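Putting the stock-vs-overclocked 12700K figures into a quick perf/watt calculation makes the trade explicit (treating the benchmark score as the performance metric):

```python
# The 12700K stock-vs-overclock numbers from the comparison above.
stock_power, oc_power = 160.0, 220.0   # watts
perf_gain = 0.03                       # ~3% more performance when overclocked

extra_power = oc_power / stock_power - 1.0                    # fraction more power
perf_per_watt = (1.0 + perf_gain) / (oc_power / stock_power)  # vs stock

print(f"+{extra_power:.1%} power for +{perf_gain:.0%} performance")
print(f"perf/W falls to {perf_per_watt:.0%} of stock")
```

So the overclock spends 37.5% more power on a 3% gain, dropping efficiency to about three quarters of stock.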

Here is an example of what happens when you power-limit the 12900K (TechPowerUp's Cinebench chart; they show a matching Blender chart):

https://tpucdn.com/review/intel-core-i9-12900k-alder-lake-tested-at-various-power-limits/images/cinebench-multi.png

See how the score barely changes when going from 190W to 241W? Unfortunately TechPowerUp didn't do a step at ~160W, but my guess is that it would fall slightly behind the 5950X, at around 25,000 points in Cinebench and 100s in Blender. Intel wanted the 12900K to be on top, though, so they cranked the power consumption up for the last drop of performance.

 

The same can be seen on the AMD side, but Ryzen reaches the point where that last little bit of performance requires almost double the power much sooner. The 5800X loses only ~7% performance when limited to 80W, or ~12% when limited to 65W, compared to stock at 140W.
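The same arithmetic applied to those 5800X figures shows how far past its efficiency window the stock configuration runs (again treating benchmark score as the performance metric):

```python
# The 5800X power-limit numbers quoted above: perf retained vs power saved.
stock_watts = 140.0
limited = {80.0: 0.93, 65.0: 0.88}   # power limit -> fraction of stock perf

for watts, perf in limited.items():
    eff_gain = perf / (watts / stock_watts) - 1.0   # perf/W gain vs stock
    print(f"{watts:.0f}W limit: {perf:.0%} of stock perf, perf/W up {eff_gain:.0%}")
```

At an 80W limit, perf/watt improves by roughly 60% while giving up only 7% of performance, which is exactly the "almost double the power for the last bit of performance" pattern in reverse.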
