
Same results, lower power, worse name. Intel Core Ultra 7 155H beats 7840HS and 13500H in TimeSpy and Cinebench but battery life is nothing special

filpo

Summary

Chinese leakers have access to Intel's new Core Ultra 100-series CPUs and have gathered CPU, GPU, and battery life test results.

The latest benchmarks come from Bilibili content creator 及格实验室, who has uploaded some of the results ahead of the December 14 launch of the next-gen Intel Meteor Lake CPUs. The two laptops tested were configured with the 16-core Core Ultra 7 155H and the 14-core Core Ultra 5 125H CPUs.

 

Quotes

Benchmark results

Quote

[Image: Intel Meteor Lake Core Ultra 7 155H]

 

Quote

Here, battery life was tested. In this test, the Meteor Lake CPUs were demolished by the AMD Ryzen 7 7840HS "Zen 4" CPU, which delivered vastly better battery life (measured in minutes) in standby, video playback, and browsing.

Intel has a dedicated media engine that helps by offloading video playback tasks from the compute tile to the low-power E-cores on the SoC tile, which should help conserve battery life. It is possible that this particular laptop is not functioning correctly with regard to this feature.

[Image: leaked battery life comparison chart]

Well, this doesn't look too good for Meteor Lake, and with the Ryzen 8040 refresh, AMD has further upped its AI performance with a faster-clocked XDNA engine, delivering up to 39 TOPS (16 TOPS from the NPU). It's always better to wait for final reviews and results, and we hope Meteor Lake shows better performance than what has leaked out today. Once again, Intel's Meteor Lake "Core Ultra" CPUs debut next week, so stay tuned for proper tests soon.

 


 

Quote

Out of the two single-threaded tests, the AMD Ryzen 7 7840HS (Zen 4) and Core i5-13500H (Raptor Lake) won one each. In the multi-threaded tests, the Core i5-13500H and Ryzen 7 7840HS won two each, while the Core Ultra 7 155H came out ahead in just one test.

[Image: leaked CPU benchmark comparison chart]

 

[Images: leaked CPU benchmark comparison charts]

 

Quote

In OpenCL benchmarks, which evaluate GPU performance, the new Intel Arc Xe-LPG GPU in the Meteor Lake CPUs delivered almost a 2x uplift versus the older Iris Xe GPU in the Raptor Lake CPU. However, the chip stumbled against the Radeon 780M "RDNA 3" iGPU. This could mean that AMD's OpenCL performance is simply very strong, or that Intel has focused the bulk of its optimizations on the DX11 and DX12 APIs.

[Image: leaked OpenCL GPU benchmark comparison chart]

Specs of laptops used

Quote

The Intel Core Ultra 7 155H CPU features 16 cores in a 6+8+2 configuration, 22 threads, a base clock of 3.8 GHz, a boost clock of up to 4.8 GHz, 24 MB of L3 cache, and a 28W TDP. Meanwhile, Intel's Core Ultra 5 125H offers up to 14 cores, 18 threads, boost clocks of up to 4.50 GHz, and 18 MB of total L3 cache. The leaker doesn't mention which laptops were used or their particular configurations, which play an important role in determining performance since thermal/power limits on various laptops can drastically affect results.

 

In terms of performance, the Intel Core Ultra 7 155H & Core Ultra 5 125H "Meteor Lake" CPUs were tested against the Core i5-13500H "Raptor Lake" and Ryzen 7 7840HS "Phoenix" CPUs. The lineup is as follows:

  • Intel Core Ultra 7 155H - 16 Cores / 22 Threads / 4.8 GHz Boost / 24 MB L3 / 28W TDP
  • Intel Core Ultra 5 125H - 14 Cores / 18 Threads / 4.5 GHz Boost / 18 MB L3 / 28W TDP
  • Intel Core i5-13500H - 12 Cores / 16 Threads / 4.7 GHz Boost / 18 MB L3 / 45W TDP
  • AMD Ryzen 7 7840HS - 8 Cores / 16 Threads / 5.1 GHz Boost / 16 MB L3 / 35W TDP

My thoughts

These benchmarks don't look too bad on the GPU side of things, and it's good to see that they've reduced power consumption while performing about the same, but strangely the battery life hasn't gotten that much better even though they reduced the TDP of the chip by 17W. I know TDP isn't everything, but that should at least make some sort of a dent and put it on track with the 7840HS. And since AMD is set to release its Phoenix and Hawk Point chips, Intel might be in hot water, especially with their advertising 👀

 

Sources

Intel Core Ultra 7 155H & Core Ultra 5 125H "Meteor Lake" CPU Benchmarks Leak: Poor Battery Times Versus AMD Ryzen 7040 APUs (wccftech.com)

Intel Core Ultra 7 155H Arc iGPU tested in 3DMark TimeSpy, faster than AMD Radeon 780M - VideoCardz.com

Message me on discord (bread8669) for more help 

 

Current parts list

CPU: R5 5600 CPU Cooler: Stock

Mobo: Asrock B550M-ITX/ac

RAM: Vengeance LPX 2x8GB 3200MHz CL16

SSD: P5 Plus 500GB Secondary SSD: Kingston A400 960GB

GPU: MSI RTX 3060 Gaming X

Fans: 1x Noctua NF-P12 Redux, 1x Arctic P12, 1x Corsair LL120

PSU: NZXT SP-650M SFX-L PSU from H1

Monitor: Samsung WQHD 34 inch and 43 inch TV

Mouse: Logitech G203

Keyboard: Rii membrane keyboard


54 minutes ago, filpo said:

These benchmarks don't look too bad on the GPU side of things, and it's good to see that they've reduced power consumption while performing about the same, but strangely the battery life hasn't gotten that much better even though they reduced the TDP of the chip by 17W.

The VideoCardz article says the battery capacities of the laptops are unknown, so a meaningful comparison of power isn't possible right now. TDP might, at best, give an indication of power usage under sustained load. Basically, there is not enough info to draw any useful conclusions in this area. We'll find out, possibly in a week or so, when it launches.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


On 12/9/2023 at 8:59 AM, filpo said:

These benchmarks don't look too bad on the GPU side of things, and it's good to see that they've reduced power consumption while performing about the same, but strangely the battery life hasn't gotten that much better even though they reduced the TDP of the chip by 17W. I know TDP isn't everything, but that should at least make some sort of a dent and put it on track with the 7840HS. And since AMD is set to release its Phoenix and Hawk Point chips, Intel might be in hot water, especially with their advertising 👀

PL2 on laptop CPUs is vastly higher than TDP, to the point that TDP is nearly irrelevant unless you run lots of very long sustained workloads that keep the CPU at PL1 the entire time; otherwise the power usage is PL2 over the total time spent in that state.

 

[Image: recommended power limit (PL1/PL2/tau) table from Intel's processor datasheet]

https://cdrdv2.intel.com/v1/dl/getContent/743844

 

As you can see above, PL2 is way higher than TDP, way, way higher. AMD's is typically ~35% above TDP unless the vendor has changed it, as it is configurable (user/software configurable too).

 

Sure, these CPUs have a 28W TDP, but a realistic expected PL2 figure for them, in my opinion, would be ~70W. The 7840HS PPT (sustained boost) is 47W, and that appears to be what is commonly configured on actual laptops on the market.
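
To put rough numbers on that, here's a quick sketch of how a rolling-average turbo budget behaves. This is a simplified model, not Intel's exact algorithm, and the figures are assumptions: PL1 = 28 W from the spec above, PL2 = 70 W is my estimate, and tau = 28 s is just a plausible value picked for illustration.

```python
# Simplified rolling-average turbo budget model (illustrative only).
# PL1 = 28 W comes from the 28W TDP above; PL2 = 70 W and TAU = 28 s are assumptions.
PL1, PL2, TAU = 28.0, 70.0, 28.0   # watts, watts, seconds
DT = 0.1                            # simulation time step in seconds

def simulate(load_seconds: float) -> list[float]:
    """Instantaneous package power over a fully sustained load."""
    ewma = 0.0    # rolling (exponentially weighted) average of recent power
    trace = []
    for _ in range(int(load_seconds / DT)):
        # The chip may sit at PL2 while the rolling average stays under PL1,
        # otherwise it clamps to PL1.
        p = PL2 if ewma < PL1 else PL1
        ewma += (DT / TAU) * (p - ewma)
        trace.append(p)
    return trace

for secs in (5, 60, 300):
    trace = simulate(secs)
    avg = sum(trace) * DT / secs
    at_pl2 = sum(DT for p in trace if p == PL2)
    print(f"{secs:>3}s sustained load: avg ~{avg:.0f} W, time at PL2 ~{at_pl2:.0f}s")
```

A 5-second burst runs entirely at PL2, while a 5-minute load averages only a little above PL1, which is why TDP mostly matters for long sustained work.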

 

Battery capacity differences aside, the AMD CPU being a little better for battery run time is probably correct. That said, Intel 13th Gen is actually quite close to, and sometimes better than, the AMD CPU depending on the configured CPU power and the workloads being run, so 14th Gen could actually be ever so slightly better.


1 hour ago, leadeater said:

PL2 on laptop CPUs is vastly higher than TDP, to the point that TDP is nearly irrelevant unless you run lots of very long sustained workloads that keep the CPU at PL1 the entire time; otherwise the power usage is PL2 over the total time spent in that state.

Since it was cropped out: the three columns of numbers are the minimum, recommended, and maximum possible settings. PL1 dictates the minimum long-term capability of the cooling solution, but tau could vary if the cooling exceeds the minimum requirements or has relatively high thermal mass. An over-optimistic tau value could end up causing thermal throttling.

 

With that in mind, the recommended value is on the order of a minute or so. So for tasks much shorter than that, it could boost up to PL2. For tasks much longer than that, it tends towards PL1. You could also end up in between if the workload, while sustained, isn't that intense. These are limits, not a target!

 

I don't know if it applies to x86, but in mobile-phone-like applications it is a power optimisation technique to bunch up operations so you run the CPU hard for a short time, rather than spreading multiple smaller loads over a longer time. It's about time-integrated power usage, considering the idle savings as well. Intel implements this for video decode on Arc: do a batch decode then sleep longer, rather than waking up for each frame. I have no idea what the support level is like.
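
As a back-of-the-envelope illustration of why that helps (every number below is made up for the example, not a measurement of any real chip): the decode time per second stays the same either way; what batching saves is the fixed cost of each wake/sleep transition.

```python
# Toy model of "batch decode then sleep" vs waking for every frame.
# All power and timing figures are assumed purely for illustration.
ACTIVE_W = 3.0    # package power while the decode block is busy (assumed)
IDLE_W   = 0.3    # package power while in a deep sleep state (assumed)
WAKE_J   = 0.02   # fixed energy cost of one wake/sleep transition (assumed)
FRAME_MS = 2.0    # time to decode a single frame (assumed)
FPS      = 60

def joules_per_second(frames_per_wake: int) -> float:
    """Energy spent per second of 60 fps playback for a given batch size."""
    active_s = FPS * FRAME_MS / 1000   # total decode time is unchanged by batching
    idle_s   = 1.0 - active_s
    wakes    = FPS / frames_per_wake   # bigger batches -> fewer wake-ups
    return active_s * ACTIVE_W + idle_s * IDLE_W + wakes * WAKE_J

for n in (1, 8, 30):
    print(f"{n:>2} frame(s) per wake: {joules_per_second(n):.2f} J/s")
```

In practice the bigger win is that longer idle stretches let the hardware reach deeper sleep states at all, which this toy model doesn't capture.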



11 minutes ago, porina said:

I don't know if it applies to x86, but in mobile-phone-like applications it is a power optimisation technique to bunch up operations so you run the CPU hard for a short time, rather than spreading multiple smaller loads over a longer time. It's about time-integrated power usage, considering the idle savings as well. Intel implements this for video decode on Arc: do a batch decode then sleep longer, rather than waking up for each frame. I have no idea what the support level is like.

I don't think this happens on the CPU side outside of standard instruction decoding and queuing, and to a lesser extent branch prediction. Purposefully delaying things would really just give a worse user experience or inconsistent performance, since video playback has buffering and you just need to deliver enough frames in time to not run out; how and in what way essentially doesn't matter.

 

CPUs are pretty quick at changing power states now and Intel is leaning quite hard into the hybrid magic. Even the table I posted isn't fully detailed and exact; those values are for the P-cores, not the E-cores.


2 minutes ago, leadeater said:

Purposefully delaying things would really just give a worse user experience or inconsistent performance 

It is for non-user actions, more for background processes like checking for push content, e-mail, social media, OS-stuff, that kinda thing.

 

2 minutes ago, leadeater said:

CPUs are pretty quick at changing power states now 

Don't know what the values are, but there are finite wake up times and return to sleep levels. Deeper sleep, longer wake up, so you can still gain by reducing those transitions.



38 minutes ago, porina said:

With that in mind, the recommended value is on the order of a minute or so. So for tasks much shorter than that, it could boost up to PL2. For tasks much longer than that, it tends towards PL1. You could also end up in between if the workload, while sustained, isn't that intense. These are limits, not a target!

It's actually more complicated because of Thermal Velocity Boost. P-cores are quite aggressive in their frequency and power boosting, so being at or very close to PL2 is pretty much the norm. Games, for example, don't typically fully utilize the CPU cores; however, the way I look at it is that CPUs will execute something as fast as they possibly can within the allowed frequency and power envelope, so if work is dispatched from the queue to all (or enough) execution units, then it'll spike to PL2 even if only on a micro time scale.

 

So I would say it's a factor of how often the queue has something in it to do, and whether what is in it actually populates the execution resources. Personally, I assume execution resources are more often populated than not, even if they aren't always doing useful work, rather than the inverse.

 

For laptops, from what I've seen, the primary differences in battery run time come from vendor configuration and optimizations rather than just CPU vs CPU. You can have a more efficient CPU or a lower set TDP but then lose all of that to a badly calibrated screen. Desktops are easier 🙂


13 minutes ago, porina said:

It is for non-user actions, more for background processes like checking for push content, e-mail, social media, OS-stuff, that kinda thing.

For those, the OS itself delays them, not the CPU; once the CPU is given something, it does it. This is an OS-side thing rather than a CPU thing.

 

13 minutes ago, porina said:

Don't know what the values are, but there are finite wake up times and return to sleep levels. Deeper sleep, longer wake up, so you can still gain by reducing those transitions.

Very quickly. C and P states aren't the only power control mechanisms in a CPU. AMD CPUs, for example, can change power and frequency states in 25 MHz increments.

 

Something similar should be true of Intel

Quote

As of the Skylake architecture, the operating system can leave the control of the P-states to the CPU (Speed Shift Technology, Hardware P-states).[3] With Kaby Lake, these functions have been further optimized.[4]

https://www.thomas-krenn.com/en/wiki/Processor_P-states_and_C-states#cite_note-4
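
If you want to see who is actually steering P-states on a Linux machine, the standard cpufreq sysfs nodes will tell you; a quick sketch (paths are the usual cpufreq locations, adjust if your kernel exposes them differently):

```python
# Check which driver/governor is controlling CPU frequency on Linux.
# intel_pstate in active mode (or amd-pstate-epp) means frequency selection
# is largely delegated to the hardware (Speed Shift / CPPC) rather than the OS.
from pathlib import Path

CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def read(node: Path) -> str:
    return node.read_text().strip() if node.exists() else "n/a"

print("scaling_driver  :", read(CPUFREQ / "scaling_driver"))
print("scaling_governor:", read(CPUFREQ / "scaling_governor"))

# intel_pstate additionally exposes a status node when that driver is loaded.
print("intel_pstate    :", read(Path("/sys/devices/system/cpu/intel_pstate/status")))
```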


On 12/10/2023 at 2:28 PM, leadeater said:

For those, the OS itself delays them, not the CPU; once the CPU is given something, it does it. This is an OS-side thing rather than a CPU thing.

 

Very quickly. C and P states aren't the only power control mechanisms in a CPU. AMD CPUs, for example, can change power and frequency states in 25 MHz increments.

 

Something similar should be true of Intel

https://www.thomas-krenn.com/en/wiki/Processor_P-states_and_C-states#cite_note-4

Certain things are done much better with Intel, particularly if you're overclocking. Which is pretty rough with AMD for some reason. Maybe the Ryzen 7000 series changed that, but for the 5000 series it wasn't good. You can either run it full tilt when overclocked or none of that. I've had many years of Intel CPUs that were overclocked to the edge of capability, but were always ramping voltages and clocks up and down depending on load. I really liked how you could have two separate voltages on Intel for "idle" when CPU isn't under load and when it is under load. This way you could actually undervolt it for idle and overvolt for load scenarios, and it was amazing. With the 5800X and 5800X3D (which is a bit specific because of the 3D V-Cache anyway) all I could do was undervolt it as far as I could without affecting upper performance. So I just ensured it uses the least voltage to produce the least heat and power draw while consistently hitting max clocks. And that's it. You can't set the multiplier higher in any way, not even on the 5800X, without being forced to run the CPU at that multiplier at all times. Which is just stupid; I don't want to run my CPU at 4.9 GHz or whatever at all times. And I have anything but a budget board (ASUS ROG Strix X570-E). Maybe even higher-end boards have adaptive multiplier and voltage options, but on Intel even lower-tier boards had all these settings available.


6 hours ago, RejZoR said:

Certain things are done much better with Intel, particularly if you're overclocking. Which is pretty rough with AMD for some reason.

TSMC's process nodes are not, and have never been, frequency-optimized, because few customers actually want that. Also, AMD's CPU core logic is denser than Intel's, which impacts achievable frequencies. So there are a few reasonable and understandable reasons why. Intel having their own fabs allows them to do what they want, which is always an advantage, if they achieve it heh.

 

6 hours ago, RejZoR said:

I really liked how you could have two separate voltages on Intel for "idle" when CPU isn't under load and when it is under load.

AMD has and does that; all CPUs do, actually. I'm not sure every board lets you set both, however running the lowest load voltage possible will drop the idle voltage, and you can also increase LLC so the load voltage droops less, letting you set it lower. Even if you can't directly set the idle voltage as low as would be possible with specific control over it, the difference is actually minor; power is drawn, not pushed, and voltage is only part of that equation.

 

Many things aren't actually worth worrying about, and for AMD this is one of them. CPUs internally power-gate and ramp down voltage and clocks.


27 minutes ago, leadeater said:

TSMC's process nodes are not, and have never been, frequency-optimized, because few customers actually want that. Also, AMD's CPU core logic is denser than Intel's, which impacts achievable frequencies. So there are a few reasonable and understandable reasons why. Intel having their own fabs allows them to do what they want, which is always an advantage, if they achieve it heh.

 

AMD has and does that; all CPUs do, actually. I'm not sure every board lets you set both, however running the lowest load voltage possible will drop the idle voltage, and you can also increase LLC so the load voltage droops less, letting you set it lower. Even if you can't directly set the idle voltage as low as would be possible with specific control over it, the difference is actually minor; power is drawn, not pushed, and voltage is only part of that equation.

 

Many things aren't actually worth worrying about, and for AMD this is one of them. CPUs internally power-gate and ramp down voltage and clocks.

It doesn't if it's constantly running at almost 5 GHz, because it has no concept of a variable multiplier, and for some dumb reason you can't set one. It's either AUTO or, I don't know, 48x. And when you set it to 48x (instead of the factory 45x), it just runs at 4800 MHz at all times. Intel's only do that if you set them to (SpeedStep or whatever it's called set to OFF), something everyone kept saying to keep disabled, yet I've run mine in adaptive mode for over a decade across several CPUs and the OC worked just fine.

 

On AMD, it's all been baked in since Ryzen, and the only thing that actually does anything of worth is adjusting the Curve Optimizer as far negative as it will go without affecting stability or performance, and then just letting the CPU do its own thing. Doing manual overclocks is just so time-consuming and gives no gains unless you only run all-core loads and nothing else. Doing that for games even makes performance worse.


6 minutes ago, RejZoR said:

It doesn't if it's constantly running at almost 5 GHz, because it has no concept of a variable multiplier, and for some dumb reason you can't set one. It's either AUTO or, I don't know, 48x. And when you set it to 48x (instead of the factory 45x), it just runs at 4800 MHz at all times. Intel's only do that if you set them to (SpeedStep or whatever it's called set to OFF), something everyone kept saying to keep disabled, yet I've run mine in adaptive mode for over a decade across several CPUs and the OC worked just fine.

 

On AMD, it's all been baked in since Ryzen, and the only thing that actually does anything of worth is adjusting the Curve Optimizer as far negative as it will go without affecting stability or performance, and then just letting the CPU do its own thing. Doing manual overclocks is just so time-consuming and gives no gains unless you only run all-core loads and nothing else. Doing that for games even makes performance worse.

At least before Ryzen 7000, any and all OC led to worse performance anyway; PBO, voltage, and power adjustments were the only good go-tos, and you'd leave the multiplier alone. Going with a fixed multiplier just meant lower single- and dual-core boost, which was almost never worth it.

 

But like I said, some things are not worth worrying about, and Intel is not AMD and vice versa. CPUs are never going to be the same; just be aware the mistake is trying to make one product do something purely because a different one can.

