Intel 12th Gen Alder Lake T-Series 35W CPUs Reportedly Hit 4.9 GHz

Summary

We've already seen the alleged specifications for the K-series chips, and today, FanlessTech has shared the potential specifications for the T-series parts.

Quotes


The Core i9-12900T allegedly features the same 8+8 configuration as the Core i9-12900K. However, Intel would have to gimp the operating clocks to keep it within the 35W thermal envelope. According to FanlessTech, the Core i9-12900T comes with a 4.9 GHz boost clock, which is only 300 MHz lower than its K-series counterpart. The Core i9-12900T in all likelihood has a lower base clock, but FanlessTech didn't share that value.

Apparently, the Core i7-12700T could arrive with a 4.7 GHz boost clock. The rumored boost clock speed for the Core i7-12700K is 5 GHz, so it seems to have the same 300 MHz reduction as the Core i9 SKU.

The Core i5 models would take the biggest performance hit. The Core i5-12600K, which has a 125W TDP, reportedly sports six Golden Cove cores and four Gracemont cores. With the Core i5-12600T, however, it seems that Intel has eliminated the Gracemont cores altogether. In addition to the 300 MHz lower boost clock, the Core i5-12600T also has a lower total core count (six as opposed to ten).

 

My thoughts

Well, this is interesting. Hopefully this means that Intel chips won't require a fusion reactor in order to run (/s). But in all seriousness, clock speed ≠ performance. However, considering these chips use the same µarch and the top of the line for the T series is just 300 MHz lower than the K series, it might mean they'll have pretty similar performance. As with all rumours and leaks, take these with a grain of salt until the official launch (which might be soon), and take benchmarks with a grain of salt as well until independent reviews appear.
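For fun, here's a quick back-of-the-envelope sketch of that argument (assuming identical IPC since the µarch is the same, and using the rumored boost clocks; it ignores any sustained-load throttling from the 35W limit entirely):

# Back-of-the-envelope: with identical IPC, single-thread performance
# at boost scales roughly with clock speed. Clocks are rumored values.
k_boost_ghz = 5.2  # Core i9-12900K rumored boost clock
t_boost_ghz = 4.9  # Core i9-12900T rumored boost clock (300 MHz lower)

relative_perf = t_boost_ghz / k_boost_ghz
print(f"Estimated boost perf vs. K-series: {relative_perf:.1%}")
# -> about 94%, i.e. a ~6% deficit at boost, before the 35W envelope
#    forces any throttling under sustained load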

(Though it still bugs me why there aren't any Gracemont cores for the i5-12600T.)

Sources

Tom's Hardware

FanlessTech

16 minutes ago, J-from-Nucleon said:

4.9 GHz boost clock, which is only 300 MHz lower than its K-series counterpart.

The question is: How long?

 



"Clock speed ≠ performance" only applies between different architectures; within the same architecture it's directly comparable. Processors with the same number of cores of each type should perform the same at the same clock speed. If they don't, Intel broke out the voodoo magic gimping for the T series.

¯\_(ツ)_/¯

 

 


4 minutes ago, SorryClaire said:

The question is: How long?

 

porb lasts for 30 seconds if you get what i mean


Intel's TDP numbers have long been little more than a suggestion; in reality, to actually get the chip to perform at these speeds consistently you'll most likely need better cooling than a 35W TDP would imply.


13 minutes ago, Sauron said:

Intel's TDP numbers have long been little more than a suggestion; in reality, to actually get the chip to perform at these speeds consistently you'll most likely need better cooling than a 35W TDP would imply.

That's the thing about boost clocks and TDP: they are different specs saying different things. I don't know where the idea that the two should be, or ever have been, directly tied to each other came from. I can only assume it started when motherboards began doing MCE, and nobody really noticed the power difference because there were only 4 cores at the time.

Just now, leadeater said:

That's the thing about boost clocks and TDP: they are different specs saying different things. I don't know where the idea that the two should be, or ever have been, directly tied to each other came from. I can only assume it started when motherboards began doing MCE, and nobody really noticed the power difference because there were only 4 cores at the time.

At some point I think it indicated a peak value; since then they started to just pick the number that looked better...


12 minutes ago, Sauron said:

At some point I think it indicated a peak value; since then they started to just pick the number that looked better...

It's actually always been for base clock. Before Core 2, Turbo Boost did not exist.

 

Pentium 4 (only has base clocks and no C-states)


 

Then in Nehalem Intel introduced Turbo Boost; TDP was then, and still is now, based on base clock. It's literally never been anything else, not peak, not some weird abstracted value (looking at you, AMD Zen).

54 minutes ago, SorryClaire said:

The question is: How long?

 

And also - how heavy a workload are we talking about?


1 hour ago, SorryClaire said:

The question is: How long?

True

1 hour ago, BobVonBob said:

"Clock speed ≠ performance" only applies between different architectures; within the same architecture it's directly comparable. Processors with the same number of cores of each type should perform the same at the same clock speed. If they don't, Intel broke out the voodoo magic gimping for the T series.

I think I mentioned that here:

1 hour ago, J-from-Nucleon said:

however, considering these chips use the same µarch and the top of the line for the T series is just 300 MHz lower than the K series, it might mean they'll have pretty similar performance.

 

1 hour ago, Sauron said:

Intel's TDP numbers have long been little more than a suggestion; in reality, to actually get the chip to perform at these speeds consistently you'll most likely need better cooling than a 35W TDP would imply.

The meaning of TDP and boost clocks has been misunderstood by the enthusiast niche for a while, and for better or worse people think/want it to mean something it doesn't.

 

The ultra-short version: if you have a cooler rated at the TDP, you get at least base clock. Clocking above that is opportunistic. TDP is not directly indicative of power consumption, which is the most common misunderstanding. AMD's application of TDP is similar to Intel's, so it is not just an Intel thing.
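To make that concrete, here's a toy simulation of how a PL1/PL2/tau-style budget plays out (a sketch only, not Intel's exact algorithm; the PL2 and tau values are made up for illustration on a 35W part):

# Toy model: the chip draws PL2 while a moving average of package power
# is still under PL1, then settles at PL1 for the rest of the load.
PL1 = 35.0   # W, sustained limit (Intel's suggested PL1 == TDP)
PL2 = 106.0  # W, short-term boost limit (made up for illustration)
TAU = 28.0   # s, averaging window (made up for illustration)
DT = 0.1     # s, simulation timestep

avg_power = 0.0  # exponentially weighted average of package power
for step in range(600):  # 60 s of a sustained all-core load
    draw = PL2 if avg_power < PL1 else PL1  # boost while budget lasts
    avg_power += (draw - avg_power) * (DT / TAU)
    if step % 50 == 0:  # report every 5 s
        print(f"t={step * DT:4.1f}s  draw={draw:5.1f}W  avg={avg_power:4.1f}W")

# Boost length works out to -TAU * ln(1 - PL1/PL2), about 11 s here --
# one possible answer to "how long?" for this toy configuration.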

 

57 minutes ago, leadeater said:

That's the thing about boost clocks and TDP they are different specs saying different things, I don't know where this idea that these two should or have been directly tied to each other came from. I can only assume when motherboards started doing MCE and then nobody really noticed the power difference because there was only 4 cores at the time.

Up to Kaby Lake, peak "stock" (not MCE) power consumption with Prime95-like workloads was pretty close to TDP. It was around Coffee Lake that peak power draw started to significantly exceed TDP when run power-unlimited, which is NOT an overclock condition; it is Intel giving system builders the choice of performance vs power usage. MCE is an overclock, and TDP is meaningless if you have that on.

 

4 hours ago, leadeater said:

It's actually always been for base clock. Before Core 2, Turbo Boost did not exist.

 

Pentium 4 (only has base clocks and no C-states)


 

Then in Nehalem Intel introduced Turbo Boost; TDP was then, and still is now, based on base clock. It's literally never been anything else, not peak, not some weird abstracted value (looking at you, AMD Zen).

While this is true, up until Haswell or so the TDP suggestions were highly conservative, to the point where even at boost (outside of particularly heavy loads) chips managed to stay within the TDP reference, which has changed by a huge amount in the time since.

3 hours ago, porina said:

The meaning of TDP and boost clocks has been misunderstood by the enthusiast niche for a while, and for better or worse people think/want it to mean something it doesn't.

 

The ultra-short version: if you have a cooler rated at the TDP, you get at least base clock. Clocking above that is opportunistic. TDP is not directly indicative of power consumption, which is the most common misunderstanding. AMD's application of TDP is similar to Intel's, so it is not just an Intel thing.

 

Up to Kaby Lake, peak "stock" (not MCE) power consumption with Prime95-like workloads was pretty close to TDP. It was around Coffee Lake that peak power draw started to significantly exceed TDP when run power-unlimited, which is NOT an overclock condition; it is Intel giving system builders the choice of performance vs power usage. MCE is an overclock, and TDP is meaningless if you have that on.

 

Should have read this before making my comment, hahaha. I was going to suggest Haswell as the demarcation line for basically always being under the TDP reference, but I would also agree with the suggestion that Coffee Lake was the first era where it was significantly above.


bruh my cpu can do 6GHz for a whole 1.25 seconds i have such a beast machine

"There is nothing more difficult than fixing something that isn't all the way broken yet." - Author Unknown

"A redline a day keeps depression at bay" - Author Unknown

Spoiler

Intel Core i7-3960X @ 4.6 GHz - Asus P9X79WS/IPMI - 12GB DDR3-1600 quad-channel - EVGA GTX 1080ti SC - Fractal Design Define R5 - 500GB Crucial MX200 and 2 x Seagate ST2000DM006 (in RAID 0 for games!) - The good old Corsair GS700 - Yamakasi Catleap 2703 27" 1440p and ASUS VS239H-P 1080p 23" - NH-D15 - Logitech G710+ - Mionix Naos 7000 - Sennheiser PC350 w/Topping VX-1

 

Avid Miata autocrosser :D

9 hours ago, J-from-Nucleon said:

With the Core i5-12600T, however, it seems that Intel has eliminated the Gracemont cores altogether

Everybody is talking about boost clocks; meanwhile, this part confuses me the most. "Gaming" K CPUs get power-saving cores, yet locked CPUs, obviously targeted at casual users doing casual things, don't get them? I thought their primary implementation would be low-end CPUs, so they would end up in some offices etc., but this doesn't make any sense to me.

 

Why have a hybrid CPU with potential scheduling inefficiencies (as with everything new) on an unlocked part focused on very inefficient overclocking and higher base clocks/TDP??? Does anyone have ideas?


1 hour ago, Ydfhlx said:

Everybody is talking about boost clocks; meanwhile, this part confuses me the most. "Gaming" K CPUs get power-saving cores, yet locked CPUs, obviously targeted at casual users doing casual things, don't get them? I thought their primary implementation would be low-end CPUs, so they would end up in some offices etc., but this doesn't make any sense to me.

 

Why have a hybrid CPU with potential scheduling inefficiencies (as with everything new) on an unlocked part focused on very inefficient overclocking and higher base clocks/TDP??? Does anyone have ideas?

In a desktop, for the end user it doesn't matter whether you have low-power cores or not. And if the OS handles it well, you don't need to worry about scheduling, because for more stuff than you would believe the small cores are sufficient, and you don't notice at all which cores you are using unless you monitor it.

This is my experience as an M1 Mac Mini owner. I shit you not, 90% of the CPU time in my workflow is on the low-powered cores (Excel, AutoCAD (Rosetta), MS Teams (Rosetta), mail, web browser, PDF editing, etc.).

Lots of CPU time on the high-power cores is something I only see when playing the only game I play on the computer (Civ 6) or running synthetic shit like Cinebench.

 


 



 


Intel's been doing 4.9 GHz and 5 GHz for a decade; bring out a 6 GHz stock chip, guys...


12 hours ago, porina said:

The meaning of TDP and boost clocks has been misunderstood by the enthusiast niche for a while, and for better or worse people think/want it to mean something it doesn't.

 

The ultra-short version: if you have a cooler rated at the TDP, you get at least base clock. Clocking above that is opportunistic. TDP is not directly indicative of power consumption, which is the most common misunderstanding. AMD's application of TDP is similar to Intel's, so it is not just an Intel thing.

TDP is, more or less, a measurement of heat output.

But I think it's better to use power consumption rather than TDP, which can be manipulated with different ambient temperatures.

14 hours ago, J-from-Nucleon said:

however, considering these chips use the same µarch and the top of the line for the T series is just 300 MHz lower than the K series, it might mean they'll have pretty similar performance.

So 1 second. Nice.


5 hours ago, Vishera said:

TDP is, more or less, a measurement of heat output.

But I think it's better to use power consumption rather than TDP, which can be manipulated with different ambient temperatures.

Well, we don't actually have to guess, not once the actual specs and white paper are published. When we get those, we'll get the PL1 and PL2 information, the per-core and all-core clocks, turbo tables, etc.

 

As long as you are running stock Intel reference parameters, you can pull this information for any Intel CPU and know how much power it will use, for how long, and all manner of other useful information. The problem is that when it comes to gaming motherboards it's rare that they run with the reference parameters; the same is actually true of laptops as well.

 

If everyone could learn to talk about PL1 and PL2, which are actually configurable values, then we could throw out TDP and never talk about it again, like we should.
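On Linux you don't even have to guess what the board configured; a minimal sketch, assuming the intel_rapl driver is loaded and package 0 sits at the usual powercap path (reading may need root on recent kernels):

from pathlib import Path

# RAPL powercap domain for CPU package 0; constraint 0 is the
# long-term limit (PL1), constraint 1 the short-term limit (PL2).
pkg = Path("/sys/class/powercap/intel-rapl:0")

for idx in (0, 1):
    name = (pkg / f"constraint_{idx}_name").read_text().strip()
    limit_w = int((pkg / f"constraint_{idx}_power_limit_uw").read_text()) / 1e6
    window_s = int((pkg / f"constraint_{idx}_time_window_us").read_text()) / 1e6
    print(f"{name}: {limit_w:.0f} W over a {window_s:.3f} s window")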

6 hours ago, Vishera said:

TDP is, more or less, a measurement of heat output.

But I think it's better to use power consumption rather than TDP, which can be manipulated with different ambient temperatures.

TDP isn't a measurement; it is a specification. It is effectively the minimum cooler you need to get at least the base performance level sustained.

 

Power consumption is the measurement you can manipulate, more so on Intel than AMD due to their different enforcement policies when running "stock". Intel gives the system builder the choice of where they want to run the power limit, including practically unlimited, without breaking warranty. AMD picks one for you, which caps power usage but also caps performance. That power limit also isn't the same as TDP.

 

33 minutes ago, leadeater said:

The problem is that when it comes to gaming motherboards it's rare that they run with the reference parameters; the same is actually true of laptops as well.

With such mobos there is a historic goal of getting as much performance out as possible. If running an unlimited power limit is allowed by the CPU manufacturer, guess what they're going to set?

 

I'm not so sure about laptops. Those are thermally constrained systems, and all but the most extreme models will have to balance size and cooling capability. The cooling solution can be at, above or below TDP, and the CPU run accordingly. That is generally allowed by Intel, since their reference values are more of a serving suggestion than a requirement.

 

33 minutes ago, leadeater said:

If everyone could learn to talk about PL1 and PL2, which are actually configurable values, then we could throw out TDP and never talk about it again, like we should.

Suggested PL1 = TDP; however, the name change might finally get it through to some people that TDP is not by itself a power usage indicator. On higher-end (overclocking chipset) mobos, PL1 = PL2 = unlimited is typically the default. Cheaper non-OC boards are more likely to set PL1 and PL2 to Intel's suggested values, presumably in part because they're built cheaper, without the expectation of massive power delivery needs.

14 minutes ago, porina said:

I'm not so sure about laptops. Those are thermally constrained systems, and all but the most extreme models will have to balance size and cooling capability. The cooling solution can be at, above or below TDP, and the CPU run accordingly. That is generally allowed by Intel, since their reference values are more of a serving suggestion than a requirement.

Laptops often lower the PL1, or Intel offers a default TDP-down configuration, which is just a lower PL1. So if you're looking at laptops, there is the issue of two laptops using the same CPU model where one has a significantly higher PL1, 35W vs 25W, but nowhere is this actually stated in any of the laptop specs. It's actually really bad.

 

You can usually tell, though: if one is a thin-and-light and the other is a more standard size, you can usually guess which is going to perform better or has the higher PL1, but this is not always true.

 

PL2 values can be different too.

 

Laptops, man: the gold standard in having to test the literal device to know how it will perform. The CPU model alone isn't great for comparing like for like.

 

Also, a note about PL2: if set sufficiently high, it becomes less of an upper limit, and what the power usage will be will depend on the workload, instruction set used, thread utilization, etc. My PL2 is 1000W; safe to say when it boosts it's not using 1000W lol. I'd really like to see a lot more power scaling benchmarks, from say 10W up to unlimited, incrementing by 10W, across everything from say Sandy Bridge to now. I think that would be really interesting.
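A rough sketch of what one step of that sweep could look like on Linux, again via powercap (writing the limit needs root; intel-rapl:0 is assumed to be package 0, and the Python loop is just a stand-in for a real fixed-work benchmark):

import time
from pathlib import Path

PL1_FILE = Path("/sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw")

def fixed_workload() -> int:
    # Stand-in for a real fixed-work benchmark (e.g. a Cinebench run).
    return sum(i * i for i in range(20_000_000))

for watts in range(10, 130, 10):  # 10 W .. 120 W in 10 W steps
    PL1_FILE.write_text(str(watts * 1_000_000))  # interface takes microwatts
    start = time.perf_counter()
    fixed_workload()
    print(f"PL1={watts:3d} W -> {time.perf_counter() - start:.2f} s")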

13 minutes ago, leadeater said:

Also, a note about PL2: if set sufficiently high, it becomes less of an upper limit, and what the power usage will be will depend on the workload, instruction set used, thread utilization, etc. My PL2 is 1000W; safe to say when it boosts it's not using 1000W lol.

That's why I used the term "practically unlimited" previously. A limit isn't much of a limit if you're not going to reach it. AMD's power limits are set low enough that under most multithread loads you're likely to hit them. Depending on the stress of the load, you'll instead see the clocks go down rather than the power go up. On Zen, a heavy load similar to Prime95 can run many hundreds of MHz lower than a light workload like Cinebench.

Running unlimited power on Intel, we go the other way: the clocks remain largely unaffected, but power consumption varies with load. This creates a perception problem when it is poorly presented by the tech community. If you run an AVX-512 load without limits, you can get massive power draw. Many look at the power draw, neglecting that it is doing a LOT of work at the time. In my own testing I found Rocket Lake is about the same perf/watt as Comet Lake. Not surprising, given the same process tech, but the perf uplift is there and can be quite significant.
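For anyone wanting to try that kind of perf/watt comparison themselves, here's a minimal sketch using the RAPL energy counter on Linux (it assumes the same intel-rapl:0 package domain; reading energy_uj needs root on recent kernels, and the toy workload stands in for a real benchmark):

import time
from pathlib import Path

ENERGY = Path("/sys/class/powercap/intel-rapl:0/energy_uj")  # cumulative µJ

def read_joules() -> float:
    return int(ENERGY.read_text()) / 1e6

def workload() -> int:
    return sum(i * i for i in range(20_000_000))  # one fixed unit of "work"

e0, t0 = read_joules(), time.perf_counter()
workload()
e1, t1 = read_joules(), time.perf_counter()
joules, secs = e1 - e0, t1 - t0
# NB: the counter wraps around at max_energy_range_uj; ignored here.
print(f"{joules:.1f} J in {secs:.1f} s (avg {joules / secs:.1f} W)")
print(f"efficiency: {1.0 / joules:.2e} work units per joule")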

 

13 minutes ago, leadeater said:

I'd really like to see a lot more power scaling benchmarks, from say 10W up to unlimited, incrementing by 10W, across everything from say Sandy Bridge to now. I think that would be really interesting.

I did something like that in the past, which is somewhere on this forum but I can't find it again. It was much more limited; I think it was Zen 2 vs something-lake. No surprises: under typical running conditions Zen 2 was more power efficient. The slope of the curve was more interesting, as AMD's was steeper. It remained more efficient than Intel at all tested points, but it was getting worse faster at the high end. I think that was an indication of its lower clock wall. At the other end, it scaled down better, making it more interesting for mobile devices.

2 minutes ago, porina said:

AMD's power limits are set low enough that under most multithread loads you're likely to hit them. Depending on the stress of the load, you'll instead see the clocks go down rather than the power go up. On Zen, a heavy load similar to Prime95 can run many hundreds of MHz lower than a light workload like Cinebench.

I think much of this is also due to the voltage, clock, and temperature scaling of the Zen arch and TSMC 7nm. I'm sure if the CPUs could run stably at higher power then power usage would be more Intel-like; however, Ryzen is far more temperature dependent than it is vcore dependent.

 

Intel CPUs don't mind running hot as much, so if you want to run higher clocks you mostly just have to increase vcore to get it stable and then have enough cooling to stop Tj spikes that'll crash or lock the system.

 

AMD ~= 95% core temp limited

Intel  ~= 80% vcore limited

 

Or something like that.

14 hours ago, Spindel said:

This is my experience as an M1 Mac Mini owner. I shit you not, 90% of the CPU time in my workflow is on the low-powered cores (Excel, AutoCAD (Rosetta), MS Teams (Rosetta), mail, web browser, PDF editing, etc.).

Yeh, but the post you quoted said specifically for gaming... tbh I don't *really* know what the point of the "little" cores is, but I do have the feeling those CPUs are going to suck for gaming in general, and coupled with weird DDR5 shenanigans (latency issues) it's probably gonna suck even more. We will see when there are proper benchmarks, but this is what it looks like to me currently.

 


