Intel 12th Gen Alder Lake T-Series 35W CPUs Reportedly Hit 4.9 GHz

Lightwreather

Summary

We've already seen the alleged specifications for the K-series chips, and today, FanlessTech has shared the potential specification for the T-series parts.

Quotes

Quote

The Core i9-12900T allegedly features the same 8+8 configuration as the Core i9-12900K. However, Intel would have to gimp the operating clocks to keep it within the 35W thermal envelope. According to FanlessTech, the Core i9-12900T comes with a 4.9 GHz boost clock, which is only 300 MHz lower than its K-series counterpart. The Core i9-12900T in all likelihood has a lower base clock, but FanlessTech didn't share that value.

Apparently, the Core i7-12700T could arrive with a 4.7 GHz boost clock. The rumored boost clock speed for the Core i7-12700K is 5 GHz, so it seems to have the same 300 MHz reduction as the Core i9 SKU.

The Core i5 models would take the biggest performance hit. The Core i5-12600K, which has a 125W TDP, reportedly sports six Golden Cove cores and four Gracemont cores. With the Core i5-12600T, however, it seems that Intel has eliminated the Gracemont cores altogether. In addition to the 300 MHz lower boost clock, the Core i5-12600T also has a lower total core count (six as opposed to ten).

 

My thoughts

Well, this is interesting. Hopefully this means that Intel chips won't require a fusion reactor in order to run (/s). But in all seriousness, clock speed ≠ performance; however, considering these chips use the same µarch and the top-of-the-line T-series part is just 300 MHz behind the K-series, they might end up with pretty similar performance. As with all rumours and leaks, take these with a grain of salt until the official launch (which might be soon), and take benchmarks with a grain of salt as well until independent reviews appear.

(Though it still bugs me why there aren't any Gracemont cores on the i5-12600T.)
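As a quick reference, here's a small sketch tabulating the rumored boost-clock deltas. All figures are the leaked values cited above (the 12900K's 5.2 GHz is implied by the "300 MHz lower" claim), not confirmed Intel specs, and the i5-12600K's boost clock wasn't given, so the i5 is left out:

```python
# Rumored Alder Lake boost clocks (GHz) from the FanlessTech leak above.
# Leaked/rumored numbers only -- NOT confirmed Intel specifications.
rumored_boost = {
    "Core i9-12900": {"K": 5.2, "T": 4.9},
    "Core i7-12700": {"K": 5.0, "T": 4.7},
}

for sku, clocks in rumored_boost.items():
    delta_mhz = (clocks["K"] - clocks["T"]) * 1000
    pct = 100 * delta_mhz / (clocks["K"] * 1000)
    print(f"{sku}T: {delta_mhz:.0f} MHz ({pct:.1f}%) below the {sku}K")
```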

Sources

Tom's Hardware

FanlessTech

"A high ideal missed by a little, is far better than low ideal that is achievable, yet far less effective"

 

If you think I'm wrong, correct me. If I've offended you in some way, tell me what it is and how I can correct it. I want to learn, and along the way one can make mistakes; being wrong helps you learn what's right.


16 minutes ago, J-from-Nucleon said:

4.9 GHz boost clock, which is only 300 MHz lower than its K-series counterpart.

The question is: How long?

 

Press quote to get a response from someone! | Check people's edited posts! | Be specific! | Trans Rights

I am human. I'm scared of the dark, and I get toothaches. My name is Frill. Don't pretend not to see me. I was born from the two of you.


Clock speed ≠ performance only applies between different architectures; within the same architecture it's directly comparable. Processors with the same number of cores of each type should perform the same at the same clock speed. If they don't, Intel broke out the voodoo magic gimping for the T series.
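To put a rough number on that, a tiny sketch under the linear-with-clock assumption, using the rumored clocks from the OP (how long the T part can actually hold that boost is the open question):

```python
# If performance scales ~linearly with clock within the same µarch, the
# rumored boost deficit is small. Rumored clocks, not confirmed specs.
k_boost_ghz = 5.2  # i9-12900K boost, implied by the article
t_boost_ghz = 4.9  # i9-12900T rumored boost

print(f"~{t_boost_ghz / k_boost_ghz:.1%} of K-series boost performance")
# -> ~94.2% at boost clocks, with identical core counts
```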

¯\_(ツ)_/¯

 

 

Desktop:

Intel Core i7-11700K | Noctua NH-D15S chromax.black | ASUS ROG Strix Z590-E Gaming WiFi  | 32 GB G.SKILL TridentZ 3200 MHz | ASUS TUF Gaming RTX 3080 | 1TB Samsung 980 Pro M.2 PCIe 4.0 SSD | 2TB WD Blue M.2 SATA SSD | Seasonic Focus GX-850 Fractal Design Meshify C Windows 10 Pro

 

Laptop:

HP Omen 15 | AMD Ryzen 7 5800H | 16 GB 3200 MHz | Nvidia RTX 3060 | 1 TB WD Black PCIe 3.0 SSD | 512 GB Micron PCIe 3.0 SSD | Windows 11


4 minutes ago, SorryClaire said:

The question is: How long?

 

porb lasts for 30 seconds if you get what i mean

Main:

  • CPU
    10700k 5ghz All core 47 ring ratio 1.275v
  • Motherboard
    MSI Z590 A Pro
  • RAM
    4x8 viper steel samsung bdie: 4200-17-17-30
  • GPU
    Gigabyte RTX 3080 Gaming Oc
  • Case
    O11 air mini
  • Storage
    970 Evo Plus 500gb OS, Sn550 1tb, 860 evo 500gb, 2tb MX500
  • PSU
    RM750x (2018)
  • Display(s)
    Acer Nitro vg272up, Kogan 24 1080 120hz
  • Cooling
    arctic 280aio, EK M.2 NVMe Heatsink on 970 evo plus

Second:
Cpu: i5-8400
Cooling: ID-Cooling Frostflow 120x
Ram: 2x8gb 2666 c16 hyperx fury, tuned the absolute balls out of it but def not stehble.
Mobo: Asus Prime H310M-K
Gpu: Igpu
Case: Coolermaster MB311L

Intel's TDP numbers have long been little more than a suggestion; in reality, to actually get the chip to perform at these speeds consistently you'll most likely need better cooling than a 35W TDP would imply.

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*


13 minutes ago, Sauron said:

Intel's TDP numbers have long been little more than a suggestion; in reality, to actually get the chip to perform at these speeds consistently you'll most likely need better cooling than a 35W TDP would imply.

That's the thing about boost clocks and TDP: they are different specs saying different things. I don't know where the idea that these two should be, or ever have been, directly tied to each other came from. I can only assume it started when motherboards began doing MCE and nobody really noticed the power difference because there were only four cores at the time.


Just now, leadeater said:

That's the thing about boost clocks and TDP: they are different specs saying different things. I don't know where the idea that these two should be, or ever have been, directly tied to each other came from. I can only assume it started when motherboards began doing MCE and nobody really noticed the power difference because there were only four cores at the time.

At some point I think it indicated a peak value; since then they started to just pick the number that looked better...

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*


12 minutes ago, Sauron said:

At some point I think it indicated a peak value; since then they started to just pick the number that looked better...

It's actually always been for base clock. Before Core 2, Turbo Boost did not exist.

 

Pentium 4 (only has base clocks and no C States)


 

Then in Nehalem Intel introduced Turbo Boost; TDP was then, and still is now, based on base clock. It's literally never been anything else, ever: not peak, not some weird abstracted value (looking at you, AMD Zen).


54 minutes ago, SorryClaire said:

The question is: How long?

 

And also - how heavy a workload are we talking about?

PC Setup: 

HYTE Y60 White/Black + Custom ColdZero ventilation sidepanel

Intel Core i7-10700K + Corsair Hydro Series H100x

G.SKILL TridentZ RGB 32GB (F4-3600C16Q-32GTZR)

ASUS ROG STRIX RTX 3080Ti OC LC

ASUS ROG STRIX Z490-G GAMING (Wi-Fi)

Samsung EVO Plus 1TB

Samsung EVO Plus 1TB

Crucial MX500 2TB

Crucial MX300 1.TB

Corsair HX1200i

 

Peripherals: 

Samsung Odyssey Neo G9 G95NC 57"

Samsung Odyssey Neo G7 32"

ASUS ROG Harpe Ace Aim Lab Edition Wireless

ASUS ROG Claymore II Wireless

ASUS ROG Sheath BLK LTD

Corsair SP2500

Beyerdynamic TYGR 300R + FiiO K7 DAC/AMP

RØDE VideoMic II + Elgato WAVE Mic Arm

 

Racing SIM Setup: 

Sim-Lab GT1 EVO Sim Racing Cockpit + Sim-Lab GT1 EVO Single Screen holder

Svive Racing D1 Seat

Samsung Odyssey G9 49"

Simagic Alpha Mini

Simagic GT4 (Dual Clutch)

CSL Elite Pedals V2

Logitech K400 Plus


1 hour ago, SorryClaire said:

The question is: How long?

True

1 hour ago, BobVonBob said:

Clock speed ≠ performance only applies between different architectures; within the same architecture it's directly comparable. Processors with the same number of cores of each type should perform the same at the same clock speed. If they don't, Intel broke out the voodoo magic gimping for the T series.

I think I mentioned that here:

1 hour ago, J-from-Nucleon said:

however, considering these chips use the same µarch and the top-of-the-line T-series part is just 300 MHz behind the K-series, they might end up with pretty similar performance.

 

"A high ideal missed by a little, is far better than low ideal that is achievable, yet far less effective"

 

If you think I'm wrong, correct me. If I've offended you in some way, tell me what it is and how I can correct it. I want to learn, and along the way one can make mistakes; being wrong helps you learn what's right.


1 hour ago, Sauron said:

Intel's TDP numbers have long been little more than a suggestion; in reality, to actually get the chip to perform at these speeds consistently you'll most likely need better cooling than a 35W TDP would imply.

The meaning of TDP and boost clocks has been misunderstood by the enthusiast niche for a while, and for better or worse people think/want it to mean something it doesn't.

 

The ultra-short version: if you have a cooler rated at TDP, you get at least base clock. Clocking above that is opportunistic. It is not directly indicative of power consumption, which is the most common misunderstanding. AMD's application of TDP is similar to Intel's, so it is not just an Intel thing.

 

57 minutes ago, leadeater said:

That's the thing about boost clocks and TDP: they are different specs saying different things. I don't know where the idea that these two should be, or ever have been, directly tied to each other came from. I can only assume it started when motherboards began doing MCE and nobody really noticed the power difference because there were only four cores at the time.

Up to Kaby Lake, peak "stock" (not MCE) power consumption with Prime95-like workloads was pretty close to TDP. It was around Coffee Lake that peak power draw started to significantly exceed TDP when run power-unlimited - which is NOT an overclock condition; it is Intel giving system builders the choice of performance vs power usage. MCE is an overclock, and TDP is meaningless if you have that on.
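If you'd rather measure than trust the spec sheet, here's a minimal sketch, assuming Linux with the intel_rapl powercap driver loaded (standard powercap sysfs paths, the same counters tools like turbostat read; on recent kernels reading energy_uj may require root):

```python
# Sample the CPU package energy counter to get average package power.
# The counter wraps at max_energy_range_uj; that's ignored here since
# the sampling window is short.
import time

ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_energy_uj() -> int:
    with open(ENERGY) as f:
        return int(f.read())

e0, t0 = read_energy_uj(), time.monotonic()
time.sleep(5)  # run the workload of interest during this window
e1, t1 = read_energy_uj(), time.monotonic()

print(f"average package power: {(e1 - e0) / 1e6 / (t1 - t0):.1f} W")
```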

 

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


4 hours ago, leadeater said:

It's actually always been for base clock. Before Core 2, Turbo Boost did not exist.

 

Pentium 4 (only has base clocks and no C States)


 

Then in Nehalem Intel introduced Turbo Boost; TDP was then, and still is now, based on base clock. It's literally never been anything else, ever: not peak, not some weird abstracted value (looking at you, AMD Zen).

While this is true, up until Haswell or so the TDP suggestions were highly conservative, to the point where even at boost (outside of particularly heavy loads) chips managed to stay within the TDP reference. That has changed by a huge amount in the time since.

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DIY FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


3 hours ago, porina said:

The meaning of TDP and boost clocks has been misunderstood by the enthusiast niche for a while, and for better or worse people think/want it to mean something it doesn't.

 

The ultra-short version: if you have a cooler rated at TDP, you get at least base clock. Clocking above that is opportunistic. It is not directly indicative of power consumption, which is the most common misunderstanding. AMD's application of TDP is similar to Intel's, so it is not just an Intel thing.

 

Up to Kaby Lake, peak "stock" (not MCE) power consumption with Prime95-like workloads was pretty close to TDP. It was around Coffee Lake that peak power draw started to significantly exceed TDP when run power-unlimited - which is NOT an overclock condition; it is Intel giving system builders the choice of performance vs power usage. MCE is an overclock, and TDP is meaningless if you have that on.

 

Should have read this before making my comment hahaha. I was going to suggest Haswell as the demarcation line for basically always being under the TDP reference, but I would also agree with the suggestion that Coffee Lake was the first era where it was significantly above.

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DIY FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


bruh my cpu can do 6GHz for a whole 1.25 seconds i have such a beast machine

"If a Lobster is a fish because it moves by jumping, then a kangaroo is a bird" - Admiral Paulo de Castro Moreira da Silva

"There is nothing more difficult than fixing something that isn't all the way broken yet." - Author Unknown

Spoiler

Intel Core i7-3960X @ 4.6 GHz - Asus P9X79WS/IPMI - 12GB DDR3-1600 quad-channel - EVGA GTX 1080ti SC - Fractal Design Define R5 - 500GB Crucial MX200 - NH-D15 - Logitech G710+ - Mionix Naos 7000 - Sennheiser PC350 w/Topping VX-1


9 hours ago, J-from-Nucleon said:

With the Core i5-12600T, however, it seems that Intel has eliminated the Gracemont cores all together

Everybody is talking about boost clocks; meanwhile, this part confuses me the most. "Gaming" 'K' CPUs get power-saving cores, yet locked CPUs, obviously targeted at casual users doing casual things, don't get them? I thought their primary implementation would be in low-end CPUs, so they would end up in offices etc., but this doesn't make any sense to me.

 

Why have a hybrid CPU, with the potential scheduling inefficiencies of anything new, on an unlocked part focused on very inefficient overclocking and higher base clocks/TDP? Does anyone have ideas?


1 hour ago, Ydfhlx said:

Everybody is talking about boost clocks; meanwhile, this part confuses me the most. "Gaming" 'K' CPUs get power-saving cores, yet locked CPUs, obviously targeted at casual users doing casual things, don't get them? I thought their primary implementation would be in low-end CPUs, so they would end up in offices etc., but this doesn't make any sense to me.

 

Why have a hybrid CPU, with the potential scheduling inefficiencies of anything new, on an unlocked part focused on very inefficient overclocking and higher base clocks/TDP? Does anyone have ideas?

In a desktop it doesn't matter to the end user whether you have low-power cores or not. And if the OS handles it well you don't need to worry about scheduling, because for more stuff than you would believe the small cores are sufficient, and you don't notice at all which cores you are using unless you monitor it.
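For anyone who wants to poke at this themselves, a minimal sketch, Linux-only since it uses sched_setaffinity (the core indices are hypothetical; check which logical CPUs map to the small cores on your particular chip):

```python
# Pin the current process to a chosen set of cores, e.g. to see how a
# workload behaves when restricted to the small cores only.
import os

SMALL_CORES = {8, 9, 10, 11}  # hypothetical small-core logical CPU ids
os.sched_setaffinity(0, SMALL_CORES)  # 0 = the current process
print("now restricted to CPUs:", sorted(os.sched_getaffinity(0)))
```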
 

This is my experience as an M1 Mac Mini owner. I shit you not, 90% of the CPU time in my workflow is on the low-power cores (Excel, AutoCAD (Rosetta), MS Teams (Rosetta), mail, web browser, PDF editing, etc.).

Lots of CPU time on the high-power cores is something I only see when playing the only game I play on the computer (Civ 6) or running synthetic shit like Cinebench.

 


 



 


Intel's been doing 4.9 GHz and 5 GHz for a decade; bring out a 6 GHz stock chip, guys...

CPU | AMD Ryzen 7 7700X | GPU | ASUS TUF RTX3080 | PSU | Corsair RM850i | RAM | 2x16GB X5 6000MHz CL32 | MOTHERBOARD | Asus TUF Gaming X670E-PLUS WIFI | STORAGE | 2x Samsung Evo 970 256GB NVME | COOLING | Hard Line Custom Loop O11XL Dynamic + EK Distro + EK Velocity | MONITOR | Samsung G9 Neo


12 hours ago, porina said:

The meaning of TDP and boost clocks has been misunderstood by the enthusiast niche for a while, and for better or worse people think/want it to mean something it doesn't.

 

The ultra-short version: if you have a cooler rated at TDP, you get at least base clock. Clocking above that is opportunistic. It is not directly indicative of power consumption, which is the most common misunderstanding. AMD's application of TDP is similar to Intel's, so it is not just an Intel thing.

TDP is, more or less, a measure of heat output.

But I think it's better to use power consumption rather than TDP, which can be manipulated with different ambient temperatures.

A PC Enthusiast since 2011
AMD Ryzen 7 5700X@4.65GHz | GIGABYTE GTX 1660 GAMING OC @ Core 2085MHz Memory 5000MHz
Cinebench R23: 15669cb | Unigine Superposition 1080p Extreme: 3566

14 hours ago, J-from-Nucleon said:

however, considering these chips use the same µarch and the top-of-the-line T-series part is just 300 MHz behind the K-series, they might end up with pretty similar performance.

So 1 second. Nice.

Press quote to get a response from someone! | Check people's edited posts! | Be specific! | Trans Rights

I am human. I'm scared of the dark, and I get toothaches. My name is Frill. Don't pretend not to see me. I was born from the two of you.


5 hours ago, Vishera said:

TDP is, more or less, a measure of heat output.

But I think it's better to use power consumption rather than TDP, which can be manipulated with different ambient temperatures.

Well, we don't actually have to guess, not once the actual specs and white paper are published. When we get those we'll get the PL1 and PL2 information, along with the per-core and all-core clocks, turbo tables, etc.

 

As long as you are running stock Intel reference parameters, you can pull this information for any Intel CPU and know how much power it will use, for how long, and all manner of other wonderful information. The problem is that when it comes to gaming motherboards it's rare that they run the reference parameters; the same is actually true of laptops as well.

 

If everyone could learn to talk about PL1 and PL2, which are actually configurable values, then we could throw out TDP and never talk about it again like we should.
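In that spirit, a minimal sketch of actually reading them, assuming Linux with the intel_rapl powercap driver (standard powercap sysfs layout; note that board firmware can apply limits this view doesn't show, and writing these files needs root):

```python
# constraint_0 is the long-term limit (PL1, what gets marketed as TDP);
# constraint_1 is the short-term limit (PL2).
BASE = "/sys/class/powercap/intel-rapl:0"

def read_int(path: str) -> int:
    with open(path) as f:
        return int(f.read())

for idx, label in ((0, "PL1"), (1, "PL2")):
    watts = read_int(f"{BASE}/constraint_{idx}_power_limit_uw") / 1e6
    window = read_int(f"{BASE}/constraint_{idx}_time_window_us") / 1e6
    print(f"{label}: {watts:.0f} W (time window {window:g} s)")
```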


6 hours ago, Vishera said:

TDP is, more or less, a measure of heat output.

But I think it's better to use power consumption rather than TDP, which can be manipulated with different ambient temperatures.

TDP isn't a measurement; it is a specification. It is effectively the minimum cooler rating you need to sustain at least the base performance level.

 

Power consumption is the measurement you can manipulate, more so on Intel than AMD due to their different enforcement policies at "stock" settings. Intel gives the system builder the choice of where they want to run the power limit, including practically unlimited, without breaking warranty. AMD picks one for you, which caps power usage but also caps performance. That power limit also isn't the same as TDP.

 

33 minutes ago, leadeater said:

The problem is that when it comes to gaming motherboards it's rare that they run the reference parameters; the same is actually true of laptops as well.

With such mobos there is a historic goal of getting as much performance out as possible. If an unlimited power limit is allowed by the CPU manufacturer, guess what they're going to set?

 

I'm not so sure about laptops. Those are thermally constrained systems, and all but the most extreme models will have to balance size and cooling capability. The cooling solution can be at, above or below TDP, and the CPU runs accordingly. That is generally allowed by Intel, since their reference values are more of a serving suggestion than a requirement.

 

33 minutes ago, leadeater said:

If everyone could learn to talk about PL1 and PL2, which are actually configurable values, then we could throw out TDP and never talk about it again like we should.

Suggested PL1 = TDP; however, the name change might finally get it through to some people that TDP is not by itself a power usage indicator. Higher-end (overclocking chipset) mobos typically set PL1 = PL2 = unlimited by default. Cheaper non-OC boards are more likely to set PL1 and PL2 to Intel's suggested values, presumably in part because they're built cheaper, without the expectation of massive power delivery needs.

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


14 minutes ago, porina said:

I'm not so sure about laptops. Those are thermally constrained systems, and all but the most extreme models will have to balance size and cooling capability. The cooling solution can be at, above or below TDP, and the CPU runs accordingly. That is generally allowed by Intel, since their reference values are more of a serving suggestion than a requirement.

Laptops often lower the PL1, or Intel offers a default TDP-down configuration, which is just a lower PL1. So if you're looking at laptops there is the issue of two laptops using the same CPU model where one has a significantly higher PL1, 35W vs 25W, but nowhere is this actually stated in any of the laptop specs. It's actually really bad.

 

You can usually tell, though: if one is a thin-and-light and the other is a more standard size, you can usually guess which is going to perform better or has the higher PL1, but this is not always true.

 

PL2 values can differ too.

 

Laptops, man: the gold standard in having to test the literal device to know how it will perform. The CPU model alone isn't great for comparing like for like.

 

Also, a note about PL2: if set sufficiently high it becomes less of an upper limit, and actual power usage will depend on workload, instruction set used, thread utilization, etc. My PL2 is 1000W; safe to say when it boosts it's not using 1000W lol. I'd really like to see a lot more power scaling benchmarks, from say 10W up to unlimited in 10W increments, across everything from Sandy Bridge to now. I think that would be really interesting.
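That sweep is scriptable with the same powercap interface as above; a rough sketch, assuming Linux, root, and a board that actually honors the written limit ("bench.py" is a hypothetical placeholder workload, not a real tool):

```python
# Raise PL1 in 10 W steps and time a fixed workload at each level.
import subprocess
import time

PL1 = "/sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw"
WORKLOAD = ["python3", "bench.py"]  # substitute your benchmark of choice

for watts in range(10, 260, 10):
    with open(PL1, "w") as f:
        f.write(str(watts * 1_000_000))  # powercap takes microwatts
    start = time.monotonic()
    subprocess.run(WORKLOAD, check=True)
    print(f"PL1 {watts:3d} W -> {time.monotonic() - start:.1f} s")
```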


13 minutes ago, leadeater said:

Also, a note about PL2: if set sufficiently high it becomes less of an upper limit, and actual power usage will depend on workload, instruction set used, thread utilization, etc. My PL2 is 1000W; safe to say when it boosts it's not using 1000W lol.

That's why I used the term "practically unlimited" previously. A limit isn't much of a limit if you're not going to reach it. AMD's power limits are set low enough that under most multithread loads, you're likely to hit it. Depending on the stress of the load, you'll instead see the clocks go down, rather than the power go up. On Zen, a heavy load similar to Prime95 can run many hundreds of MHz lower than a light workload like Cinebench. Running unlimited power on Intel we go the other way: the clocks remain largely unaffected, but power consumption varies with load.

This gives a perception problem when it is poorly presented by the tech community. If you run an AVX-512 load without limit, you can get massive power draw. Many look at the power draw, neglecting that it is doing a LOT of work at the time. In my own testing I found Rocket Lake is about the same perf/watt as Comet Lake. Not surprising, given the same process tech, but the perf uplift is there and can be quite significant.
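A first-order way to see why that trade is so lopsided, using the textbook CMOS dynamic power approximation (an approximation, not measured data for these parts): near the top of the V/f curve, voltage has to rise roughly in step with frequency, so

```latex
P_{\text{dyn}} \approx C V^{2} f, \qquad V \propto f \;\Rightarrow\; P_{\text{dyn}} \propto f^{3}
```

By that approximation, the rumored 12900T's ~6% boost deficit only buys about (4.9/5.2)^3 ≈ 0.84 of the K-series dynamic power at boost; the rest of the 125W-to-35W gap would have to come from the much lower (still unannounced) base clocks.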

 

13 minutes ago, leadeater said:

I'd really like to see a lot more power scaling benchmarks, from say 10W up to unlimited incrementing by 10W across from say Sandy Bridge to now. I think that would be really interesting. 

I did a more limited test in the past, which is somewhere on this forum but I can't find it again. I think it was Zen 2 vs something-lake. No surprises: under typical running conditions Zen 2 was more power efficient. The slope of the curve was more interesting, as AMD's was steeper. It remained more efficient than Intel at all tested points, but it got worse faster at the high end. I think that was an indication of its lower clock wall. At the other end it scaled down better, making it more interesting for mobile devices.

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


2 minutes ago, porina said:

AMD's power limits are set low enough that under most multithread loads, you're likely to hit it. Depending on the stress of the load, you'll instead see the clocks go down, rather than the power go up. On Zen, a heavy load similar to Prime95 can run many hundreds of MHz lower than a light workload like Cinebench.

I think much of this is also due to the voltage, clock and temperature scaling of the Zen arch and TSMC 7nm. I'm sure if the CPUs could run stably at higher power then power usage would be more Intel-like; however, Ryzen is far more temperature dependent than it is vcore dependent.

 

Intel CPUs don't mind running hot as much, so if you want to run higher clocks you mostly just have to increase vcore to get them stable, and then have enough cooling to stop Tj spikes that would crash or lock the system.

 

AMD ~= 95% core temp limited

Intel  ~= 80% vcore limited

 

Or something like that.


14 hours ago, Spindel said:

This is my experience as an M1 Mac Mini owner. I shit you not, 90% of the CPU time in my workflow is on the low-power cores (Excel, AutoCAD (Rosetta), MS Teams (Rosetta), mail, web browser, PDF editing, etc.).

Yeh, but the post you quoted said specifically for gaming... tbh I don't *really* know what the point of the "little" cores is, but I do have the feeling those CPUs are going to suck for gaming in general, and coupled with weird DDR5 shenanigans (latency issues) it's probably gonna suck even more. We will see when there are proper benchmarks, but this is what it looks like to me currently.

 

The direction tells you... the direction

-Scott Manley, 2021

 

Software used:

Corsair Link (Anime Edition) 

MSI Afterburner 

OpenRGB

Lively Wallpaper 

OBS Studio

Shutter Encoder

Avidemux

FSResizer

Audacity 

VLC

WMP

GIMP

HWiNFO64

Paint

3D Paint

GitHub Desktop 

Superposition 

Prime95

Aida64

GPUZ

CPUZ

Generic Logviewer

 

 

 

