
Qualcomm Snapdragon X Elite Adreno GPU matches AMD Radeon 780M in gaming. Additional metrics on the SoC as a whole revealed

filpo
2 hours ago, RejZoR said:

Anyone tried Linux on these ARM laptops? How does that work given that Windows on ARM is absolute poop? Is Linux on ARM processors any good? I'd totally have an ARM laptop with Ubuntu on it or something.

Note: I’m literally thinking as I go.
 

One question in the back of my mind is if there’s enough motive in PC-space for a performant x86-64 emulator. 
 

Apple Rosetta shows it can be done, but outside PC gaming, there are few consumer or even production applications that don’t already have ARM-compatible versions, and anything else (such as running old machinery) doesn’t need especially fast performance. 

 

Nvidia already makes SoCs and has experience with fast custom ARM designs, so I feel it makes sense (if they felt the market was worth the investment) that they’d be a big player in hypothetical desktop ARM SoCs. Fast x86 emulation could be a pretty big selling point for Nvidia, assuming Intel and AMD don’t step up to the plate. 
 

Intel could also be uniquely positioned to really capitalize on the PC gaming market, as they have extensive experience in x86 (they built it) and ARM designs (and everything in between), and have massively ramped up their GPU tech. Intel has the pieces they need, so if they’ve put investment into designing good ARM cores over the past few years, the moment desktop shifts to ARM could see Intel snapping up huge market share very quickly. 
 

I’m unsure if dedicated GPUs will even be a thing in the ARM desktop world. Though there will always be a need for more compute (for more than just gaming), so I’m reasonably confident that dGPUs are not going away. In which case, the status quo seems more likely. I don’t see Qualcomm putting much investment into speedy x86 emulation, not like Apple did with Rosetta anyway.

My eyes see the past…

My camera lens sees the present…


6 hours ago, RejZoR said:

Anyone tried Linux on these ARM laptops? How does that work given that Windows on ARM is absolute poop? Is Linux on ARM processors any good? I'd totally have an ARM laptop with Ubuntu on it or something.

You can run Linux on MacBooks with M-series chips; does that count?

Apart from that, you can run Linux on some other ARM laptops, like the X13s. It runs fine; the problem is that the current ARM CPUs themselves are pretty weak, so it's hard to compare.

3 hours ago, Zodiark1593 said:

Note: I’m literally thinking as I go.
 

One question in the back of my mind is if there’s enough motive in PC-space for a performant x86-64 emulator. 
 

Apple Rosetta shows it can be done, but outside PC gaming, there are few consumer or even production applications that don’t already have ARM-compatible versions, and anything else (such as running old machinery) doesn’t need especially fast performance. 

 

Nvidia already makes SoCs and has experience with fast custom ARM designs, so I feel it makes sense (if they felt the market was worth the investment) that they’d be a big player in hypothetical desktop ARM SoCs. Fast x86 emulation could be a pretty big selling point for Nvidia, assuming Intel and AMD don’t step up to the plate. 
 

Intel could also be uniquely positioned to really capitalize on the PC gaming market, as they have extensive experience in x86 (they built it) and ARM designs (and everything in between), and have massively ramped up their GPU tech. Intel has the pieces they need, so if they’ve put investment into designing good ARM cores over the past few years, the moment desktop shifts to ARM could see Intel snapping up huge market share very quickly. 
 

I’m unsure if dedicated GPUs will even be a thing in the ARM desktop world. Though there will always be a need for more compute (for more than just gaming), so I’m reasonably confident that dGPUs are not going away. In which case, the status quo seems more likely. I don’t see Qualcomm putting much investment into speedy x86 emulation, not like Apple did with Rosetta anyway.

Linux already has x86 emulation tooling that's almost on par with Rosetta; see FEX-Emu and Box86/Box64.

It is said that there will be laptops pairing the SD Elite with Nvidia dGPUs. There are already ARM drivers for Nvidia, AMD and Intel GPUs on Linux, and I believe there should be internal ARM versions for Windows too.

 

I guess it'd make more sense for Nvidia to create ARM workstations with their Grace offerings instead of regular consumer devices, since those could ship with just Linux for ML/DL development, akin to their DGX stations.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


15 hours ago, igormp said:

I guess it'd make more sense for Nvidia to create ARM workstations with their Grace offerings instead of regular consumer devices, since those could ship with just Linux for ML/DL development, akin to their DGX stations.

I think within a few years we will see NV consider shipping laptop and regular desktop ARM SoC offerings. They have the ability to put pressure not only on game and SW vendors but also on OEMs. I'm sure NV would love to sell SoC+RAM packages to OEMs just like they do for GPUs (think of the margin they are giving up right now only selling the GPU+VRAM when they could be selling all the core system chips)... once this AI/ML bubble flattens they will be back to take the PC Windows gaming space for sure. 


19 hours ago, Zodiark1593 said:

Intel could also be uniquely positioned to really capitalize on the PC gaming market, as they have extensive experience in x86 (they built it) and ARM designs (and everything in between), and have massively ramped up their GPU tech.

Building the x86 CPU is not what is going to get you a good translation layer. The high-perf tools (like Rosetta 2) are based on offline lifting and transcompiling; this is all a compiler expert's job, and Apple, being the long-term lead of the LLVM project, is best placed for this.
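(To make "offline lifting and transcompiling" a bit more concrete, here's a toy Python sketch of the idea. It's purely illustrative: the two-instruction subset, the tiny IR and the register mapping are invented for the example, and a real translator like Rosetta 2 obviously handles flags, memory, calling conventions and far more.)

```python
# Toy sketch only: "lift" a tiny subset of x86-64 (Intel syntax) into a neutral
# IR, then emit AArch64 assembly from that IR ahead of time.
X86_TO_A64_REG = {"rax": "x0", "rdi": "x1", "rsi": "x2"}  # assumed mapping

def lift(line: str):
    """Parse one x86-64 instruction into an (op, dst, src) IR tuple."""
    op, rest = line.split(maxsplit=1)
    dst, src = (t.strip() for t in rest.split(","))
    return op, dst, src

def emit_a64(op, dst, src):
    """Emit AArch64 assembly for the tiny IR; only mov/add are covered here."""
    d = X86_TO_A64_REG[dst]
    s = X86_TO_A64_REG.get(src, f"#{src}")  # immediates pass through with '#'
    if op == "mov":
        return f"mov {d}, {s}"
    if op == "add":
        return f"add {d}, {d}, {s}"  # x86 add is 2-operand, AArch64 is 3-operand
    raise NotImplementedError(op)

for insn in ["mov rax, 5", "add rax, rdi"]:
    print(emit_a64(*lift(insn)))
```

The whole value of doing this offline is that the expensive analysis happens once, before the program runs, which is exactly the kind of compiler-heavy work described above.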

I would not expect Intel to do that good a job; they have failed so many times when trying to ship new architectures that I just don't know if that ship can turn. 

 

19 hours ago, Zodiark1593 said:

I’m unsure if dedicated GPUs will even be a thing in the ARM desktop world.

I don't see why not, however I do see the market shifting for sure. When you end up building an ultra-wide ARM design like Apple's, you need so much bandwidth just to feed the CPU, the NPU and all the video decode and encode that you might as well put a mid-sized GPU on the die, otherwise most of that bandwidth will sit unused almost all the time. 


Tested Windows on ARM in Parallels a bit and it seems fine. The translation software works pretty well without a huge performance hit.

AMD 7950x / Asus Strix B650E / 64GB @ 6000c30 / 2TB Samsung 980 Pro Heatsink 4.0x4 / 7.68TB Samsung PM9A3 / 3.84TB Samsung PM983 / 44TB Synology 1522+ / MSI Gaming Trio 4090 / EVGA G6 1000w /Thermaltake View71 / LG C1 48in OLED

Custom water loop EK Vector AM4, D5 pump, Coolstream 420 radiator


On 4/28/2024 at 9:22 PM, igormp said:

Linux already has x86 emulation stuff that's almost on par with Rosetta, see FEX-Emu and Box86/Box64.

Hmm, not really on par. I recently tried running some x86 services on an ARM server we got in the lab, and it isn't that compatible. 


  • 3 weeks later...
On 10/30/2023 at 7:03 PM, saltycaramel said:

First of all, what even is the equivalent Apple chip to compare against the Qualcomm Snapdragon X Elite (or SxE)? The M2, the M2 Pro or the M2 Max? Or maybe more appropriately the actual Apple chips for 2024 (M3, M3 Pro and M3 Max, being released tonight)?

 

The SxE is a 12 P-core homogeneous design, with no E-cores.

 

Additionally, the SxE, compared to Apple’s chips, is heavily skewed towards allocating its transistor budget (or die area, if you will) to the CPU section of the SoC, and has a comparatively smaller GPU. Apple’s M3 chips will run circles around the SxE in terms of GPU, with roughly the M3 matching the SxE, the M3 Pro being 2x and the M3 Max being 4x in GPU results, not to mention they’ll most certainly have hardware ray tracing, which the SxE lacks (edit: or lacks only temporarily, apparently).

 

The best way to approximate what the SxE is ballpark-comparable to could be “the CPU of an M2 Max in MT terms + the CPU of an M2 Pro in ST terms + the GPU of a base M2”. (Or swap “M3” for “M2” after tonight’s M3 unveiling)

 

Hence:

- comparing the ST performance to the M2 Max is theatre, ‘cause you might as well compare it to an M2 Pro or even a well-cooled M2; it’s not like ST changes much across the M2 family, but saying “faster in ST than the M2 Max” sounds more impressive

- comparing the MT performance to the humble base M2 (with its 4p+4e cores, vs 12p cores in the SxE) is…what is it, even? Really?

- comparing the GPU performance to the humble base M2…is cherry picking the only M2 chip the SxE can beat in terms of GPU. (Probably the base M3 tonight will take away even that)

 

Just pick one Apple chip to go against Qualcomm, and then be consistent with the comparison, power efficiency and all. 
 

The way they did it has been just confusing. 



So, yesterday at the “Copilot Plus PC” event Microsoft indulged a lot in direct comparisons vs the M3 MacBook Air. 

 

To add to my perplexities above (about comparing a 12-big-core CPU in the Snapdragon X Elite vs a 4-big-4-little-core CPU in the vanilla base M2/M3), one little trick the Surface Laptop 7 ARM has up its sleeve compared to the fanless MacBook Air M3:

 

[attached image]


On 5/21/2024 at 11:46 AM, saltycaramel said:

To add to my perplexities above (about comparing a 12-big-core CPU in the Snapdragon X Elite vs a 4-big-4-little-core CPU in the vanilla base M2/M3), one little trick the Surface Laptop 7 ARM has up its sleeve compared to the fanless MacBook Air M3:

I mean, while a 4+4 core design trading punches with a 12-core design is impressive, realistically it doesn't matter.

What matters is performance (both single-core and multicore) and efficiency at various target wattages. Well, that and possibly features as well (like NPU performance). Of course price too.

 

As a consumer, if two processors perform the same, use the same wattage and have the same features at the same price, I don't really care if on paper one processor has 8 cores and the other one has 12 cores.


44 minutes ago, LAwLz said:

As a consumer, if two processors perform the same, use the same wattage and have the same features at the same price, I don't really care if on paper one processor has 8 cores and the other one has 12 cores.

100%. Number of cores specifically doesn't matter at all, it's what they can do, how and in what way etc. Apple's big cores simply have more execution resources compared to other designs, which is something anyone else could do, but only if it makes sense for that ISA, architecture and platform etc.

 

Intel could release a CPU with 8 Pentium 4 cores and it'd be complete garbage, taking "the number of cores doesn't matter" to the extreme. Or AMD Bulldozer, heh.


5 hours ago, LAwLz said:

As a consumer

 

As a "geeky commentator" ranting about niche-y topics such as the subtleties of (what I perceive to be) apples-to-oranges CPU performance comparisons coming from the likes of Qualcomm and Microsoft, I don't mean any of this necessarily applies to the "as a consumer" perspective. That said, there's also something to be said about misleading consumers just-a-little-bit-even-if-in-the-end-it-doesn't-actually-matter. That's what I'm positing here.

 

My "tinfoil hat" theory is that the CPU part of the SxE/SxP is in a class that's too big (in terms of transistors) to be compared to the CPU part of an M3, but it's also too small to be compared to the CPU part of the M3 Pro.  

 

And yet both Qualcomm and now especially Microsoft have gone (pardon my French) balls deep with the MacBook Air M3 comparisons. It was almost unexpectedly egregious to hear how many times MS mentioned "M3 MacBook Air" during the event, and even the hands-on briefings featured MacBook Airs. So that's why we're talking about this: they've chosen the M3 and the fanless MacBook Air as the comparison. They brought this upon themselves.

 

You mentioned "if they're the same price..", but "price" doesn't count as an anchoring factor here because these CPUs are only available within a whole SoC, and said SoCs are only available within complete systems. That's an inception, a double whammy of "you can't compare prices". Some SoCs have bigger GPUs or NPUs than others, and conversely less transistor budget allocated to CPU cores. Some complete systems have better displays or materials or build, and conversely less of the price allocated to the SoC or to the CPU part specifically. Some OEMs may allocate more of the price to the CPU and other OEMs may allocate more of the price to the display. Price comparisons of complete systems are moot and can't be used to establish the "class" of the CPU. 

 

Again, maybe the number of transistors allocated to the CPU part of the SoC could be a way to ballpark the class and to approximate if it's more appropriate to compare the SxE to an M3 or to an M3 Pro.

And yet the number of transistors in the Snapdragon X Elite is the most well kept secret on the internet.

We know it's fabbed on TSMC's N4P and it's maybe around 171mm^2 (whole SoC die).  

Apple's M2, as a comparison, is fabbed on TSMC's N4P as well and it's 155.25mm^2 (whole SoC die).

 

But even after applying the "transistor count criterion", we would hit the "thermal design criterion" wall. 

You mentioned "if they use the same wattage". Do they, tho? 

Is it true that out of this whole first wave of OEM designs

[attached image of the first-wave OEM designs]

 

not a single one, not one, is a fanless design?

I've heard it over here:

https://youtu.be/PRK8P0dTysk?si=FbUpC1yAyG3tbOhI&t=561

 

Weird if true, given the obsession with comparing the SxE to the fanless M3 MBA. 

 

 

Does any of this matter to the "consumer"? 

Probably not. 

Yet, as far as online "CPU wars" go (or "CPU part of the SoC" wars, in this case), my point is I find it pretty convenient for Qualcomm/MS to pit the "probably slightly bigger", "probably slightly more power hungry" and "most definitely actively cooled" Snapdragon X Elite against the fanless M3. 

Personally I would pit the actively cooled SxE against an actively cooled M3 Pro and then we're talking. But that would be just as arbitrary. 

Unfortunately (or, inevitably) each party (Apple is as guilty of this as anyone) will always come up with an explanation for the “comparison target” they picked, so there’s no right answer; all we can do is debate it. Probably MS’s reasoning here for picking the MacBook Air is the price anchoring, but as I’ve argued, that doesn’t really work for complete systems. 


4 hours ago, leadeater said:

100%. Number of cores specifically doesn't matter at all, it's what they can do, how and in what way etc.

That is only true for latency-insensitive, preemptible applications. If you have something latency-sensitive, or something that doesn't behave well with preemption (e.g. an OS), you are definitely going to notice. Fewer cores => more contention and context switches => higher latency. That is precisely why AMD/Intel/everybody else started dividing the functional units of a large CPU core into multiple smaller CPU cores. It's also why it is possible and desirable to dedicate a core to the OS (e.g. consoles).
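(If you want to see the contention effect for yourself, a rough sketch like the one below works on Linux/macOS; it just oversubscribes the cores with busy workers and reports how often the kernel preempted the worst-hit one. The worker counts and spin time are arbitrary choices for illustration, not a rigorous benchmark.)

```python
# Rough, illustrative sketch: compare involuntary context switches for a lone
# CPU-bound worker vs many workers fighting over the same cores.
import multiprocessing as mp
import os
import resource
import time

def busy_worker(seconds, out_q):
    """Spin the CPU for a while, then report how often the kernel preempted us."""
    end = time.monotonic() + seconds
    while time.monotonic() < end:
        pass
    out_q.put(resource.getrusage(resource.RUSAGE_SELF).ru_nivcsw)

def run(n_procs, seconds=2.0):
    q = mp.Queue()
    procs = [mp.Process(target=busy_worker, args=(seconds, q)) for _ in range(n_procs)]
    for p in procs:
        p.start()
    counts = [q.get() for _ in procs]
    for p in procs:
        p.join()
    return max(counts)  # worst-case preemption among the workers

if __name__ == "__main__":
    cores = os.cpu_count() or 1
    print("uncontended worker, involuntary switches:", run(1))
    print(f"{4 * cores} workers on {cores} cores, worst case:", run(4 * cores))
```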


1 minute ago, saltycaramel said:

not a single one, not one, is a fanless design?

I expected that to be the case as soon as they said it had 53% higher sustained performance than the M3. Apple's fault for not putting in an AirJet Mini 🙂


57 minutes ago, Forbidden Wafer said:

That is only true for latency-insensitive, preemptible applications. If you have something latency-sensitive, or something that doesn't behave well with preemption (e.g. an OS), you are definitely going to notice. Fewer cores => more contention and context switches => higher latency. That is precisely why AMD/Intel/everybody else started dividing the functional units of a large CPU core into multiple smaller CPU cores. It's also why it is possible and desirable to dedicate a core to the OS (e.g. consoles).

That's really old; the context-switching penalty in modern CPUs isn't that large or much of a problem. Heck, even the 386 had in-hardware (CPU) measures to make this not a big problem: the Task State Segment (TSS).

 

And it's not why Intel/AMD etc. are doing it. They are doing it because increasing frequency was impractical/impossible, and both are actively making their CPU cores wider with more execution resources, not smaller.

 

Almost everything you do on a computer is able to be decoded, preempted, cached, branch predicted and so on.

 

Typical L1 cache hit ratios are 95%; L2 I would imagine is getting near 70% on Intel given its size and how Intel utilizes L2 cache.

 

Most actual applications that are latency sensitive are frequency sensitive, because frequency and time are one and the same. Network operations, for example, which are classically single threaded, want frequency above all else, even when lots of functions are hardware offloaded to the NIC processor, since there is still DMA communication etc. between the processors, so the sooner those happen the better, aka more times per second.

 

Edit:

Also it's not like single-core, single-thread CPUs ever really had this problem. CPU context switching is a computer science theoretical thing that is good to know about, but rarely ever a problem or something that needs to be looked at. 

 

Most switches are software switches, not hardware switches, handled by the OS, so a lot of what you'd read about what happens to CPUs when context switching isn't actually happening either.

 

Quote

While selective flushing of the TLB is an option in software-managed TLBs, the only option in some hardware TLBs (for example, the TLB in the Intel 80386) is the complete flushing of the TLB on an address-space switch. Other hardware TLBs (for example, the TLB in the Intel 80486 and later x86 processors, and the TLB in ARM processors) allow the flushing of individual entries from the TLB indexed by virtual address.

 

Quote

In 2008, both Intel (Nehalem)[25] and AMD (SVM)[26] have introduced tags as part of the TLB entry and dedicated hardware that checks the tag during lookup.

https://en.wikipedia.org/wiki/Translation_lookaside_buffer

 

Quote

Recent Intel and AMD processors sport a tagged TLB, which allow you to tag a given translation with a certain address space configuration. In this scheme TLB entries never get "stale", and thus there is no need to flush the TLB.

https://wiki.osdev.org/Context_Switching


It is very early days. Let's see where the dust settles once real products are tested. Even if there aren't exact matches with the Apple offerings, it is the nearest we have for now.

 

So many questions:

  • How does it perform with native Arm code vs emulated x86 - what native Apps are there?
  • What's the real battery life in real-world situations (taking care to normalise things like display type and brightness)?
  • On battery life, that's a mix of capacity vs power usage.
  • How is perf impacted by temperature? Think bursty load vs sustained.
  • Gaming???

Think about it in a particularly targeted way, like a business user. Microsoft and some other productivity apps are reportedly going native on it, which might be enough for many users. Not all by any means, but I could see these being very popular for hybrid office users as well as field workers if they deliver on the claims.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Alienware AW3225QF (32" 240 Hz OLED)
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, iiyama ProLite XU2793QSU-B6 (27" 1440p 100 Hz)
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


On 5/23/2024 at 4:30 PM, saltycaramel said:

Microsoft has gone (pardon my French) balls deep with the MacBook Air M3 comparisons.

 

I’m listening to this week’s WAN Show and Linus is perplexed by Microsoft (including Nadella himself in the media tour) going with the “finally we can be competitive with Apple’s offerings” angle. Linus’ objection is “Macs have, what, a market share of 16%? Why is MS mentioning Apple left and right?”.

 

3 and a half years after calling M1 Macs “glorified iPads” (or something to that effect), he still doesn’t fully get what is happening here.

 

Apple has changed the laptop landscape once again.

 

In the 2010s, Apple sparked the Ultrabook (or, MacBook Air lookalikes) movement.

 

This time around, Apple sparked the “ARM-based laptops with all day battery life” movement, which is now (exactly 4 years after the announcement of the Apple Silicon transition for Macs, an interesting experiment to determine the “turnaround time” for this sort of thing: apparently 4 years is what it took MS+Qualcomm to react to Apple’s move) in full-ahead mode in the Windows OEM world.

 

Also, for those missing this critical piece of information, by far most of the PCs sold nowadays (be it Windows PCs, Macs or otherwise) are laptops. Laptops are the main battleground, not desktop PCs. And from late 2020 to mid 2024 there was a widespread perception that Apple Silicon laptops were untouchable (except for gaming purposes).

 

When the first reviews of this new breed of ARM Windows laptops are out, we’ll know if it’s actually time to challenge said “perception”. A perception Linus is apparently oblivious to, or he wouldn’t be perplexed by Microsoft openly trying to dispel it.


5 hours ago, saltycaramel said:

This time around, Apple sparked the “ARM-based laptops with all day battery life” movement, which is now (exactly 4 years after the announcement of the Apple Silicon transition for Macs, an interesting experiment to determine the “turnaround time” for this sort of thing: apparently 4 years is what it took MS+Qualcomm to react to Apple’s move) in full-ahead mode in the Windows OEM world.

 

This is hardly an ARM thing, and ARM won't be doing a lot better than x86 can. There are already laptops with just shy of 16-hour battery life, which is not far below the 18 hours of a MacBook.

 

If anyone thinks there is going to be a significant gain in battery run time from ARM laptops, Qualcomm, then they are in for an unwanted awakening.

 

5 hours ago, saltycaramel said:

Also, for those missing this critical piece of information, by far most of the PCs sold nowadays (be it Windows PCs, Macs or otherwise) are laptops.

Pre-2010 is calling and wants its talking point back 😉

 

[attached chart: Canalys PC sales report, notebook vs desktop shipments]

 

Laptops have been stomping on desktops in terms of sales for agesssss.

 

When anyone talks about ARM having inherent technical benefits like performance or power efficiency, they are wrong; the ISA doesn't matter for that. It really has nothing to do with why this is happening. That's not just my opinion either; I hold it because those actually creating ISAs, CPU architectures etc. say it, and I very much believe them, and that came from the person who created Apple Silicon in the first place.

 

Also:

Quote

We designed a series of microbenchmarks to determine the power consumption of the instruction decoders in our x86-64 processor. We model the power consumption using linear regression analysis. Our linear model predicts the power consumption of different components including the execution units, instruction decoders, and L1 and L2 caches. The result demonstrates that the decoders consume between 3% and 10% of the total processor package power in our benchmarks. The power consumed by the decoders is small compared with other components such as the L2 cache, which consumed 22% of package power in benchmark #1. We conclude that switching to a different instruction set would save only a small amount of power since the instruction decoder cannot be eliminated completely in modern processors. In the future, we plan to port our microbenchmarks to an ARM platform. This allows us to directly compare the energy consumption of different components between two different architectures. Another interesting platform to benchmark would be the Intel Atom, which is a low-power x86 design. The Atom does not support AVX instructions, which should simplify the decoders and thus reduce their power consumption.

https://www.usenix.org/system/files/conference/cooldc16/cooldc16-paper-hirki.pdf

 

Modern ARM has decoders as well, because ARM has had and used complex instructions for a good while now, so the amount saved by moving is less than the 3%-10% range above, because you'd have to get the equivalent range for ARM and then look at the difference. But let's just say it is the worst case: 10% isn't a strong incentive to go through the pain of moving away from x86 compared to the other reasons for why it is happening in some segments and devices.
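(Back-of-envelope version of that bound; the package power and the ARM decoder share below are my own assumed numbers, purely to show how small the ceiling is:)

```latex
% Upper bound on what an ISA switch could save on decode alone, using the
% paper's 3-10% x86 decoder share and an assumed (hypothetical) ARM share.
\[
\Delta P_{\text{ISA}} = P_{\text{pkg}}\,\bigl(s_{\text{x86}} - s_{\text{ARM}}\bigr),
\qquad s_{\text{x86}} \in [0.03,\ 0.10]
\]
\[
\text{e.g. } P_{\text{pkg}} = 15\,\text{W},\quad s_{\text{x86}} = 0.10,\quad
s_{\text{ARM}} = 0.04 \;\Rightarrow\; \Delta P_{\text{ISA}} \approx 0.9\,\text{W}
\]
```

Even taking the paper's worst case at face value and giving ARM's decoders no credit at all, that's about 1.5 W on an assumed 15 W package; with the assumed ARM share it's under a watt.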


Microsoft: a Qualcomm/ARM-based Surface Laptop 7 will perform up to twice as well in a battery test compared to an Intel/x86-based Surface Laptop 5

 

Quote

 

Microsoft’s comparisons to the MacBook Air M3 also extend to battery life. During the tests, I saw Microsoft simulate battery life across web browsing and video playback. Microsoft uses a script to simulate web browsing. On 2022’s Intel-based Surface Laptop 5, it took eight hours, 38 minutes to completely deplete a battery; the new Surface Laptop lasted two times that, hitting 16 hours, 56 minutes. That beats the same test on a 15-inch MacBook Air M3, which lasted 15 hours, 25 minutes.

Microsoft ran a similar test for video playback, which saw the Surface Laptop last over 20 hours, with the MacBook Air M3 reaching 17 hours, 45 minutes. That’s also nearly eight hours more than the Surface Laptop 5, which lasted 12 hours, 30 minutes.

 

https://www.theverge.com/2024/5/30/24167745/microsoft-macbook-air-benchmarks-surface-laptop-copilot-plus-pc

 

 

2020 called and it wants its "battery life on ARM laptops won't be that much better than the battery life on existing/upcoming x86 laptops" talking points back.

 

On 5/26/2024 at 5:05 AM, leadeater said:

This is hardly an ARM thing, and ARM won't be doing a lot better than x86 can. There are already laptops with just shy of 16-hour battery life, which is not far below the 18 hours of a MacBook.

 

If anyone thinks there is going to be a significant gain in battery run time from ARM laptops, Qualcomm, then they are in for an unwanted awakening.

 

 

Of course correlation is not causation and it may all be a pure coincidence that ARM laptops happen to be more efficient, ISA has got nothing to do with it specifically. In Intel Lunar Lake we trust to put these ARM laptops back in their place once and for all, it's high time after 4 years. This time is different.


7 hours ago, saltycaramel said:

Microsoft: a Qualcomm/ARM-based Surface Laptop 7 will perform up to twice as well in a battery test compared to an Intel/x86-based Surface Laptop 5

 

 

https://www.theverge.com/2024/5/30/24167745/microsoft-macbook-air-benchmarks-surface-laptop-copilot-plus-pc

 

 

2020 called and it wants its "battery life on ARM laptops won't be that much better than the battery life on existing/upcoming x86 laptops" talking points back.

 

 

 

Of course correlation is not causation and it may all be a pure coincidence that ARM laptops happen to be more efficient, ISA has got nothing to do with it specifically. In Intel Lunar Lake we trust to put these ARM laptops back in their place once and for all, it's high time after 4 years. This time is different.

Those are some very good results, but I'd be wary of these kinds of first-party benchmarks, especially since Microsoft isn't exactly what I'd call trustworthy when it comes to battery benchmarks.

See: Chrome vs Edge battery tests that nobody was able to replicate. 

 

If these laptops truly get that level of performance and battery life then I'll be very happy, because it seems to be far beyond what I have seen AMD- and Intel-equipped laptops get (regardless of what the advertised battery life says). 


7 hours ago, saltycaramel said:

Of course correlation is not causation and it may all be a pure coincidence that ARM laptops happen to be more efficient, ISA has got nothing to do with it specifically. In Intel Lunar Lake we trust to put these ARM laptops back in their place once and for all, it's high time after 4 years. This time is different.

Yep, but looking at a Microsoft comparison against their own Intel-based Surface Laptop from 2022 isn't exactly the most current metric, or even the best possible at the time.

 

Unless you are picking the actual best option of the time and, more importantly, equalizing on battery capacity, comparisons don't mean much in terms of ARM vs x86 other than device vs device. Choosing a laptop with 50% of the battery Wh of another one and then concluding the one with more battery Wh must be more efficient would be woefully flawed, for example. But I'm pretty sure you know that.
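(Spelled out with invented numbers, the normalization I mean is simply this:)

```latex
% Runtime is battery energy over average draw; equal runtimes can hide very
% different efficiency when battery capacities differ (numbers made up).
\[
t_{\text{runtime}} = \frac{E_{\text{battery}}\ [\text{Wh}]}{P_{\text{avg}}\ [\text{W}]},
\qquad
\frac{50\ \text{Wh}}{4\ \text{W}} = 12.5\ \text{h}
\quad\text{and}\quad
\frac{75\ \text{Wh}}{6\ \text{W}} = 12.5\ \text{h}
\]
```

Same runtime, but the 4 W machine is the more efficient one; hours alone tell you nothing until capacity is equalized.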

 

Either way Qualcomm isn't going to have significantly better battery run time on their 28W SoCs compared to AMD 28W competitors.

 

Quote

Battery life – excellent runtimes with Hawk Point

There’s a 75 Wh battery inside this 2024 Asus Zenbook 14, properly sized for its segment.

Here’s what we got in our battery life tests, with the screen’s brightness set at around 120 nits (~50 brightness) and at its default 120Hz refresh rate. I’ve also set the Windows 11 power modes on Best Power Efficiency.

  • 5 W (~15 h of use) – idle, Standard Mode, screen at 50%, Wi-Fi ON;
  • 6.5 W (~12 h of use) – text editing in Google Drive, Standard Mode, screen at 50%, Wi-Fi ON;
  • 6 W (~12 h of use) – 1080p fullscreen video on Youtube in Edge, Standard Mode, screen at 50%, Wi-Fi ON;
  • 6 W (~12 h of use) – Netflix fullscreen in Edge, Standard Mode, screen at 50%, Wi-Fi ON;
  • 9W (~6-8 h of use) – browsing in Edge, Standard Mode, screen at 50%, Wi-Fi ON;
  • 38 W (~2 h of use) – Gaming  – Dota 2, Standard Mode, screen at 50%, Wi-Fi ON.

https://www.ultrabookreview.com/68296-asus-zenbook-14-oled-um3406ha/

 

Quote

I saw over 16 hours of battery life in our standard battery test, which loops a 4K file of the short film Tears of Steel. That result is well ahead of many competitors and in some cases about 50 percent better than the results recorded from alternatives

https://www.pcworld.com/article/2289137/asus-zenbook-14-oled-review-3.html

 

So to be honest I don't care about you quoting the run times of the 2022 Surface because it doesn't mean anything.

 

So again, as I already stated, ARM will not revolutionize battery run times, and ARM itself has nothing to do with that. You could probably credit it with bringing efficiency and run time more into focus, but I would attribute that more specifically to Apple and Apple silicon, not ARM.

 

If we are going to cherry-pick to compare ISAs, how about we agree to pick the best cases for both, not ones that you can find to back an already-held belief. Go find the best possible.


15 minutes ago, leadeater said:

A few things I would like to point out with that review, however.

1) They set the screen brightness, in my opinion, really low. 120 nits.

2) They set the power mode to "best power efficiency", which is not exactly a fair way to compare battery life.

Basically, ultrabookreview disabled a bunch of cores, locked the clock speed and did a lot of other things I would not really consider typical use. It's in my opinion an unrealistic scenario.

3) ultrabookreview said that they got similar battery life with Intel Meteor Lake laptops. Something to keep in mind, since I have seen you often talk about how AMD laptops get longer battery life than Intel ones. According to this source you decided to use, that is not the case.

 

 

 

Is it just me or is it basically impossible to find information about what the different power modes in Windows actually do? 

 

 

I think we can both agree that these 12 hours of usage numbers are not realistic. You won't get anywhere near that with something like the Asus Zenbook UM3406HA.

In order for those reviews to get the battery life they are quoting, they had to bring the chip from its default 28W configuration down to 6 watts. It is not an honest, real-world test.

 

 

33 minutes ago, leadeater said:

So to be honest I don't care about you quoting the run times of the 2022 Surface because it doesn't mean anything.

I think it is a very relevant comparison because we can be fairly confident that the tests they did were the same, in a controlled environment. 

Things like a "browser test" are not comparable across reviews because it highly depends on which sites you browse and how you do it. 

 

I am not saying we should blindly trust the Microsoft benchmarks either, but I don't think the benchmarks you posted are that good, nor do they give a result that is comparable to the one Microsoft posted.

 


38 minutes ago, LAwLz said:

2) They set the power mode to "best power efficiency", which is not exactly a fair way to compare battery life.

Basically, ultrabookreview disabled a bunch of cores, locked the clock speed and did a lot of other things I would not really consider typical use. It's in my opinion an unrealistic scenario.

That, as far as I know, doesn't disable cores; they were talking about Windows power modes. Unrealistic? Probably, but the same will be done in these Qualcomm reviews coming up, heh.

 

38 minutes ago, LAwLz said:

1) They set the screen brightness, in my opinion, really low. 120 nits.

Probably, but I don't have a direct way to look at 120 nits to see what it would be like, as I don't have a light meter.

 

38 minutes ago, LAwLz said:

I think we can both agree that these 12 hours of usage numbers are not realistic. You won't get anywhere near that with something like the Asus Zenbook UM3406HA.

In order for those reviews to get the battery life they are quoting, they had to bring the chip from its default 28W configuration down to 6 watts. It is not an honest, real-world test.

That's not how power usage of CPUs works, you know this. 6W is the usage for that task; I don't think you honestly believe the 8840HS will be using 28W to play a 1080p video through its dedicated video decoder...

 

38 minutes ago, LAwLz said:

I think it is a very relevant comparison because we can be fairly confident that the tests they did were the same, in a controlled environment. 

It's relevant only to comparing devices like I said. It's completely irrelevant to "ARM is more efficient than x86", which was the whole point, since everyone is praising the holy grail of ARM coming to save us from bad battery run times when it's not even that bad if you choose sensibly (if that is what you want/need), and CPUs/SoCs in x86 land already have the same rated power figures as these Qualcomm ones, so the only way to be more energy efficient is on micro time scales based on specific loadings of the SoC, performance requirements, etc.

 

Looking at which CPU is actually more energy efficient is highly complicated if you actually want to do it thoroughly. 

 

Either way I'm pretty confident these Qualcomm devices aren't going to be blowing away the best x86 of the time when they come out.

 

38 minutes ago, LAwLz said:

3) ultrabookreview said that they got similar battery life with Intel Meteor Lake laptops. Something to keep in mind, since I have seen you often talk about how AMD laptops get longer battery life than Intel ones. According to this source you decided to use, that is not the case.

I think with the latest, 2023-release CPUs, Intel is on par again or sometimes better, at least around mild usage and idle; once you start bringing the CPU load up, it's more than AMD. They are very competitive now, yes. I just don't follow enough laptop reviews to be current enough to know the best Intel one off hand.

 

I could have spent more time finding the absolute best I could, but I didn't think it was necessary when a 12th Gen was present in the discussion.

 

38 minutes ago, LAwLz said:

I am not saying we should blindly trust the Microsoft benchmarks either, but I don't think the benchmarks you posted are that good, nor do they give a result that is comparable to the one Microsoft posted.

Given that the Microsoft-quoted time for the old 12th Gen base laptop was in the 12-hour range, I'm pretty sure an 8840HS would do a bit better than that. But we don't have that comparison for the specific test Microsoft did, so 🤷‍♂️


On 5/30/2024 at 11:44 PM, leadeater said:

That, as far as I know, doesn't disable cores; they were talking about Windows power modes. Unrealistic? Probably, but the same will be done in these Qualcomm reviews coming up, heh.

I couldn't find info about it (seriously, is it just me or does Microsoft not publish proper information about it?), but it seems like it makes cores get parked very aggressively (as in, the core is shut down).

It doesn't get disabled as it would if you were to shut it down in the BIOS, but it won't get used for executing instructions and no power is sent to the core (if the chip allows that to happen).

 

As for the "it will be done in the other review too", this is why I think it's a terrible idea to try and compare battery benchmarks from two different outlets against one another. We have no idea how the tests were done, so one review cannot, and should not, be used as a reference when comparing another review.

The same laptop might get 10 hours of battery life in one review, and 5 hours of battery life in another. Battery life tests, unless they are done with a specific battery life program, are only valid when compared to exactly the same battery test, which typically requires the test to be made by the same reviewer/outlet.

 

 

 

On 5/30/2024 at 11:44 PM, leadeater said:

Probably, but I don't have a direct way to look at 120 nits to see what it would be like, as I don't have a light meter.

From what I can tell, the recommended and typical brightness for screens used indoors is about 200 nits.

For comparison, my Lenovo X1 peaks at 500 nits.

The laptop in the review has a peak brightness of 600 nits (although that's with HDR content, 400 is the SDR peak).

 

If you ask me, I don't think 120 nits is comfortable to use. It is way too dim.

 

 

 

On 5/30/2024 at 11:44 PM, leadeater said:

That's not how power usage of CPUs works, you know this. 6W is the usage for that task; I don't think you honestly believe the 8840HS will be using 28W to play a 1080p video through its dedicated video decoder...

I think you misunderstood what I said. Of course I understand that it doesn't use the full TDP if it's decoding a video stream, since that barely uses any power at all.

Maybe I am misunderstanding how the test was made (it would really help if that website was more descriptive about how their tests were done), but it seems to me like they limited the platform power to various levels. The numbers feel a bit off otherwise. I mean, I am 100% sure they limited the power since they put it in "best power efficiency" mode, which limits the power, but I feel like they did some tweaking between the tests. 

 

 

On 5/30/2024 at 11:44 PM, leadeater said:

It's relevant only to comparing devices like I said.

I don't understand what you mean by this sentence. Can you please elaborate?

I agree that a chip using 28 watts won't result in longer battery life than another one using 28 watts of power. But the equation isn't that simple. Power consumption in a vacuum is a meaningless number. It's only when you combine it with "amount of work done" that it starts mattering. But "amount of work done" is something that changes from test to test, and it is only when you run the exact same test that you can start to extrapolate the numbers we are after in this case (what battery life these laptops will get compared to x86 laptops).
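(In symbols, that's just my shorthand way of saying power only becomes meaningful relative to the work being done:)

```latex
% Power alone tells you nothing; efficiency needs the work done in the test.
\[
\text{efficiency} \;=\; \frac{\text{work done}}{\text{energy used}}
\;=\; \frac{\text{performance (work/s)}}{\text{power (J/s)}}
\]
```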

 

I agree with your points but I think the links you posted can't and shouldn't be used to make the points you are trying to make. Microsoft comparing their previous Surface laptop to this new Snapdragon laptop in the same test and going "this is how much better our new laptop is" isn't invalidated by you posting a completely different test and going "but in this workload this other laptop gets similar battery life so this isn't impressive".

 

 

If we trust Microsoft, then what we do know is that their Snapdragon X Elite laptop will have significantly longer battery life than their previous Surface laptop (with an Alder Lake CPU). Why it gets better battery life remains to be seen.

It might be because the processor is more efficient (uses the same amount of power when executing but finishes the task in a shorter time).

It might be because of software-related changes (maybe they finally got the various C-states to work properly? I've heard that they don't work that well).

It might be because other parts of the laptop are more efficient (RAM, screen, motherboard, etc).

It might be because other parts of the chip might be more efficient. For example, the CPU portion might be comparable, but the video decoding block might be more efficient in the Snapdragon. Or maybe their test involved workloads that could be offloaded to the NPU on the Snapdragon but had to run on the CPU on the Intel platform.

 

Those are some questions that remain to be answered. In either case, it seems like if you were interested in buying a Surface laptop then this new one will be a significant upgrade.

 

 

 

On 5/30/2024 at 11:44 PM, leadeater said:

Given that the Microsoft-quoted time for the old 12th Gen base laptop was in the 12-hour range, I'm pretty sure an 8840HS would do a bit better than that. But we don't have that comparison for the specific test Microsoft did, so 🤷‍♂️

It depends on the workload and if we are looking at the CPU itself, or the entire laptop. I think the statement you are making needs to be far more well-defined before we can confidently say yes or no.

At a medium to high degree of load on the CPU, with a long-running workload, the CPU itself on the 8840HS will most likely use less power to do the same amount of work as the Alder Lake CPU inside the Surface laptop 5. If that's the statement then I agree with you.


1 hour ago, LAwLz said:

I think you misunderstood what I said. Of course I understand that it doesn't use the full TDP if it's decoding a video stream, since that barely uses any power at all.

Maybe I am misunderstanding how the test was made (it would really help if that website was more descriptive about how their tests were done), but it seems to me like they limited the platform power to various levels. The numbers feel a bit off otherwise. I mean, I am 100% sure they limited the power since they put it in "best power efficiency" mode, which limits the power, but I feel like they did some tweaking between the tests. 

Why would you think that? It's some fairly basic maths you can do to verify this. The device has a 75 Wh battery; for that run time, the average total device power usage works out to 6W (75 Wh / 6 W = 12.5 h). That is a total of 1W more than completely idle to play that 1080p video.

 

You're wanting to find error that just isn't there; these CPUs/SoCs are very power efficient when playing back video. Everything other than the CPU/SoC (screen, WiFi, SSD, fans) makes up the majority of the power consumption in that type of test, because the CPU cores are idle and the video decode usage for 1080p video is extremely low.

 

1 hour ago, LAwLz said:

It depends on the workload and if we are looking at the CPU itself, or the entire laptop. I think the statement you are making needs to be far more well-defined before we can confidently say yes or no.

At a medium to high degree of load on the CPU, with a long-running workload, the CPU itself on the 8840HS will most likely use less power to do the same amount of work as the Alder Lake CPU inside the Surface laptop 5. If that's the statement then I agree with you.

That literally is the statement, and everything is relative to that. Look, 12th gen is far worse in every aspect of power efficiency (idle, medium, high usage), so it is completely factually correct to state the 8840HS would do better. We can debate how much better, but it doesn't change that it is. If 12th Gen mobile can't beat Ryzen 5000 mobile, then it's not beating Ryzen 7000/8000 mobile. This isn't a difficult or erroneous extrapolation to make.

 

If, and this is only for argument's sake, the average across many reviews, many different workloads and test configurations gives 30% better power efficiency for CPU A vs CPU B, then you can take that average and apply it to other tests that haven't directly compared these two CPUs. Unless the workload is sufficiently different from the average for some reason, it's going to give a decent approximation that isn't 100% accurate, but it's also not 100% inaccurate either.

 

You can take the above further: if B is more power efficient than A, and C is more power efficient than B, then C is more power efficient than A. You can use the same law of averages to try and get an informative approximation of C vs A without a direct C vs A comparison, if one is not available.
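(With invented ratios, the chaining looks like this, where η is performance per watt; the 1.3 and 1.2 are purely illustrative numbers:)

```latex
% Hypothetical ratios, only to illustrate chaining review averages.
\[
\frac{\eta_B}{\eta_A} = 1.3,\qquad \frac{\eta_C}{\eta_B} = 1.2
\;\;\Longrightarrow\;\;
\frac{\eta_C}{\eta_A} = 1.3 \times 1.2 = 1.56
\]
```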

 

1 hour ago, LAwLz said:

I don't understand what you mean by this sentence. Can you please elaborate?

It's not complicated and I've already said it: if someone is going to try and dunk on x86 for being power inefficient and ARM being better, then proceeds to choose a CPU from 2 years ago that wasn't the best x86 available in terms of power efficiency on the market, then all that proves, and the only data point that can be taken from it, is that the person doing so is intentionally or unintentionally making a flawed comparison.

 

It would be like someone saying EVs don't accelerate faster than most ICE vehicles by choosing a Gen 1 Nissan Leaf and a 2023 Z06: completely flawed. B is faster than A, which does nothing to prove or disprove the actual statement.

 

1 hour ago, LAwLz said:

I agree with your points but I think the links you posted can't and shouldn't be used to make the points you are trying to make. Microsoft comparing their previous Surface laptop to this new Snapdragon laptop in the same test and going "this is how much better our new laptop is" isn't invalidated by you posting a completely different test and going "but in this workload this other laptop gets similar battery life so this isn't impressive".

It has nothing to do with Microsoft or invalidating their tests. It has EVERYTHING to do with "lol x86, it's shit at power efficiency, here see this". It's not my fault someone provided such an easily countered and flawed data point/evidence to support their claim.

 

I don't know if what I picked is actually the absolute best of today, it won't be in future, but I do know it's better than what was presented.

 

Trying to insinuate that x86 itself is worse for power efficiency compared to ARM is wrong; it's not. People just refuse to listen to those who actually created Apple silicon in the first place, AMD Ryzen, Tesla (drive/vehicle), Intel (only kinda). x86 is not the problem. If Jim Keller is saying it's not specifically important for that aspect, then it's not; he is an actually authoritative person on the matter.

 

I even provided a research paper on how much power x86 decoding uses; legitimately, what more evidence do you or anyone else want?

 

Qualcomm ARM is not going to 'double' the battery run times compared to similar-TDP x86 laptops of the same time period and silicon node; not going to happen. To try and portray that it will is bad, and this won't help anyone.

 

  

1 hour ago, LAwLz said:

 

Those are some questions that remain to be answered. In either case, it seems like if you were interested in buying a Surface laptop then this new one will be a significant upgrade.

If only that were actually what this particular conversation was about. Because it wasn't and it remains not. I think you have an issue with identifying the subject matter of a conversation.

