
Qualcomm Snapdragon X Elite Adreno GPU matches AMD Radeon 780M in gaming; additional metrics on the SoC as a whole revealed

filpo

Summary

Qualcomm recently unveiled performance metrics for its Snapdragon X Elite SoC. Its Adreno GPU appears to match the Radeon 780M in both the 28W and 80W variants, and the chip scores nearly double the Apple M2 in Cinebench 2024 (multi-threaded).

Two reference designs for the SoC have also been showcased.


 

Quotes

Quote

Qualcomm recently lifted the embargo on performance data shared with journalists last week, showcasing their new Snapdragon X Elite chip tailored for lightweight laptops. The company boldly compared their chip to leading industry counterparts, such as the AMD Ryzen 9 7940HS, Intel Core i7-13800H, and Apple M2. The competition appears intense, and Qualcomm’s performance metrics may suggest it stands alongside the prominent brands.

 

Quote

During the hands-on presentation, Qualcomm had two reference laptops set up for the audience: a 28W efficiency-focused model and an 80W high-performance variant. The thin-and-light configuration was running a Snapdragon X Elite CPU with a 4.0 GHz (2-core) and 3.4 GHz (all-core) boost frequency, while the performance-focused system was running the chip at 4.3 GHz (2-core) and 3.8 GHz (all-core) boost.

 

 

Laptop #1 - Snapdragon X Elite CPU 80W / 4.3-3.8 GHz Boost / 16.5" Display / LPDDR5x-8533

Laptop #2 - Snapdragon X Elite CPU 28W / 4.0-3.4 GHz Boost / 14.5" Display / LPDDR5x-8533

 

Quote

There are additional differences between the two reference laptops: the 80W configuration used a 16.5" screen with a maximum resolution of 3840x2160 and a bigger 87Wh battery, while the 28W configuration used a 14.5" OLED screen with a maximum resolution of 2880x1800 and a 58Wh battery. Both models featured active cooling and were equipped with the latest LPDDR5x-8533 system memory. The thinner laptop, as the name suggests, was just 15mm thick, while the 80W variant had a thickness of 16.8mm.

 

Quote

In Cinebench 2024, the Snapdragon X Elite CPU with its Oryon core architecture is said to offer up to 20% higher single-core and twice the multi-threaded performance of the competition. Breaking it down, the 80W chip is 9% faster than the Apple M2, 15% faster than the Core i7-13800H, and 21% faster than the Ryzen 9 7940HS, while the 28W X Elite CPU is 1%, 6%, and 12% faster than the same chips, respectively.

 

Looking at multi-threaded scores, the X Elite 80W laptop is 22% faster than Intel, 25% faster than AMD, 28% faster than the 28W chip, and over 2x faster than the Apple M2 CPU. Similarly, the 28W configuration is neck and neck with the much higher wattage Apple and Intel counterparts and 66% faster than the Apple M2.
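To make the quoted "X% faster" figures concrete, here is a minimal Python sketch of how such deltas are derived from raw benchmark scores. The X Elite numbers match the Cinebench 2024 multi-thread figures listed later in this thread; the Apple M2 score is a placeholder assumption, not an official figure:

```python
# Hypothetical Cinebench 2024 multi-threaded scores, for illustration only.
# X Elite values come from the reference-laptop figures quoted later in this
# thread; the Apple M2 value is an assumed placeholder, not an official number.
scores = {
    "X Elite 80W": 1220,
    "X Elite 28W": 950,
    "Apple M2": 580,
}

baseline = scores["Apple M2"]
for chip, score in scores.items():
    # "X% faster" is the score ratio minus one, expressed as a percentage.
    delta = (score / baseline - 1) * 100
    print(f"{chip}: {score} ({delta:+.0f}% vs Apple M2)")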

 

Quote

AMD is set to introduce their Strix Point series, featuring the newer Zen 5 CPU architecture, while Intel is just 1.5 months away from the Meteor Lake launch. Apple also has an upcoming event, possibly to unveil the M3 chip. Consequently, while Qualcomm’s X Elite chip may currently make a strong impression, the PC market will have evolved considerably by the time it becomes available.

 

My thoughts

I think it's quite beneficial that we have many competitors, such as AMD, Apple, and Intel. But as we are only about a month and a half away from next-gen chips from Intel and AMD, I believe we have to 'wait and see' until about Christmas or even January before coming to any conclusions. That said, these metrics do look promising.

 

Sources

Qualcomm Snapdragon X Elite Adreno GPU performance matches AMD Radeon 780M in gaming - VideoCardz.com

Qualcomm Unveils Even More Snapdragon X Elite PC CPU Benchmarks: 23W & 80W Reference Laptops Tested (wccftech.com)


Desktop ARM is truly exciting, just gotta hope that Microsoft fixes Windows on ARM. I'm also interested to see how gaming on Linux ARM will work. We can kinda see this with the current open-source work on OpenGL for Apple Silicon, but it definitely needs more work on an x86 translation layer, or Steam and/or Vulkan support.


The confusing (and, let’s admit it, intentionally misleading) ways Qualcomm is doing the comparisons with Apple Silicon have been giving me headaches, both at the time of the initial unveiling and at today’s controlled-environment press hands-on event.
 

It’s a constant game of apples to oranges comparisons and deliberate confusion.

 

First of all, what even is the equivalent Apple chip to compare against the Qualcomm Snapdragon X Elite (or SxE)? The M2, the M2 Pro or the M2 Max? Or maybe more appropriately the actual Apple chips for 2024 (M3, M3 Pro and M3 Max, being released tonight)?

 

The SxE is a 12 P-core homogeneous design, with no E-cores.

 

Additionally, the SxE, compared to Apple’s chips, is heavily skewed towards allocating its transistor budget (or die area if you will) to the CPU section of the SoC, and has a comparatively smaller GPU. Apple’s M3 chips will run circles around the SxE in terms of GPU, roughly with the M3 being 1x the SxE, the M3 Pro being 2x and the M3 Max being 4x in terms of GPU results, not to mention they’ll most certainly have hw ray tracing, which the SxE lacks (edit: or lacks temporarily, apparently).

 

The best way to approximate what the SxE is ballpark-comparable to could be “the CPU of an M2 Max in MT terms + the CPU of an M2 Pro in ST terms + the GPU of a base M2”. (Or swap “M3” for “M2” after tonight’s M3 unveiling)

 

Hence:

- comparing the ST performance to the M2 Max is theatre, ‘cause you might as well compare it to an M2 Pro or even a well cooled M2, it’s not like ST changes much across the M2 family, but saying “faster in ST than the M2 Max” sounds more impressive

- comparing the MT performance to the humble base M2 (with its 4p+4e cores, vs 12p cores in the SxE) is…what is it, even? Really?

- comparing the GPU performance to the humble base M2…is cherry picking the only M2 chip the SxE can beat in terms of GPU. (Probably the base M3 tonight will take away even that)

 

Just pick one Apple chip to go against Qualcomm, and then be consistent with the comparison, power efficiency and all. 
 

The way they did it has been just confusing. 
 

ps: someone smarter than me should explain what’s the deal with the Linux GB6 ST score of 3200 that according to Anandtech benefits from the fans being at full blast at all times…was it misleading to quote that score in their presentation or is it fair game? The Windows GB6 ST score with a normal fan curve profile is much closer to the M2 score, probably on par with the M3 score.


1 minute ago, saltycaramel said:

being released tonight)?

Not released tonight but showcased, but yeah, I get what you mean.

 

1 minute ago, saltycaramel said:

First of all, what even is the equivalent Apple chip to compare against the Qualcomm Snapdragon X Elite (or SxE)? The M2, the M2 Pro or the M2 Max? Or maybe more appropriately the actual Apple chips for 2024 (M3, M3 Pro and M3 Max, being released tonight)?

I feel like it should be the M2 (or now M3), as Qualcomm has released a seemingly slightly more budget-friendly version (I assume it'll be priced at about $1500) for an APU system.

 

2 minutes ago, saltycaramel said:

Additionally, the SxE, compared to Apple’s chips, is heavily skewed towards allocating its transistor budget (or die area if you will) to the CPU section of the SoC, and has a comparatively smaller GPU. Apple’s M3 chips will run circles around the SxE in terms of GPU, roughly with the M3 being 1x the SxE, the M3 Pro 2x and the M3 Max 4x in terms of GPU results, not to mention they’ll most certainly have hw ray tracing which the SxE lacks. 

The thing that Apple lacks now is optimization for games (or most games). Once they get that, they'll be flying.

 

3 minutes ago, saltycaramel said:

ps: someone smarter than me should explain what’s the deal with the Linux GB6 ST score of 3200 that according to Anandtech benefits from the fans being at full blast at all times…was it misleading to quote that score in their presentation or is it fair game? The Windows GB6 ST score with a normal fan curve profile is much closer to the M2 score, probably on par with the M3 score.

I was confused by that too. Unfortunately, I don't have an answer 

 

4 minutes ago, saltycaramel said:

The way they did it has been just confusing. 

Completely agree. They may have tried to push out some quick benchmarks before Apple, AMD, and Intel showcase their new chips, in hopes of being 'the first ones to arrive at the race, but not necessarily the winners'.


On 10/30/2023 at 7:09 PM, filpo said:

Not released tonight but showcased, but yeah, I get what you mean.


(by the way, the M3 chips will be in customers’ hands next week…at this pace the actual year-for-year competitor of the “mid 2024” Snapdragon X Elite may end up being the M4 😄)


It would be nice if it worked, but seeing how Microsoft has been fumbling around with Windows on ARM for over a decade and it's still terrible, I have very little faith in it. Maybe an ARM-powered laptop will make sense with a Linux distro that can run on ARM and has good support and backwards compatibility, but I have almost zero faith in it on Windows. It would be nice, but it probably won't ever work well.


7 minutes ago, RejZoR said:

It would be nice if it worked, but seeing how Microsoft has been fumbling around with Windows on ARM for over a decade and it's still terrible, I have very little faith in it. Maybe an ARM-powered laptop will make sense with a Linux distro that can run on ARM and has good support and backwards compatibility, but I have almost zero faith in it on Windows. It would be nice, but it probably won't ever work well.

Windows on ARM is great, my guy, for the 9 people that use it.


  • 5 months later...
On 10/30/2023 at 7:03 PM, saltycaramel said:

The confusing (and, let’s admit it, intentionally misleading) ways Qualcomm is doing the comparisons with Apple Silicon have been giving me headaches, both at the time of the initial unveiling and at today’s controlled-environment press hands-on event.
 

It’s a constant game of apples to oranges comparisons and deliberate confusion.

 

First of all, what even is the equivalent Apple chip to compare against the Qualcomm Snapdragon X Elite (or SxE)? The M2, the M2 Pro or the M2 Max? Or maybe more appropriately the actual Apple chips for 2024 (M3, M3 Pro and M3 Max, being released tonight)?

 

The SxE is a 12 P-core homogeneous design, with no E-cores.

 

Additionally, the SxE, compared to Apple’s chips, is heavily skewed towards allocating its transistor budget (or die area if you will) to the CPU section of the SoC, and has a comparatively smaller GPU. Apple’s M3 chips will run circles around the SxE in terms of GPU, roughly with the M3 being 1x the SxE, the M3 Pro being 2x and the M3 Max being 4x in terms of GPU results, not to mention they’ll most certainly have hw ray tracing, which the SxE lacks (edit: or lacks temporarily, apparently).

 

The best way to approximate what the SxE is ballpark-comparable to could be “the CPU of an M2 Max in MT terms + the CPU of an M2 Pro in ST terms + the GPU of a base M2”. (Or swap “M3” for “M2” after tonight’s M3 unveiling)

 

Hence:

- comparing the ST performance to the M2 Max is theatre, ‘cause you might as well compare it to an M2 Pro or even a well cooled M2, it’s not like ST changes much across the M2 family, but saying “faster in ST than the M2 Max” sounds more impressive

- comparing the MT performance to the humble base M2 (with its 4p+4e cores, vs 12p cores in the SxE) is…what is it, even? Really?

- comparing the GPU performance to the humble base M2…is cherry picking the only M2 chip the SxE can beat in terms of GPU. (Probably the base M3 tonight will take away even that)


I didn’t think Qualcomm picking the M2 family for their comparisons could get any more irrelevant or any funnier, but apparently it will: Bloomberg reports we’re getting M4 Macs this fall already.
 

This means that, save for like the first 5 months from June to October, these Qualcomm chips will go against the M4.

 

Benchmarked against the M2.

Actually going against the M4.

 

That’s the magic of previewing stuff 9 months in advance, mere hours before the M3 was released. 


  • 2 weeks later...
On 10/30/2023 at 7:03 PM, saltycaramel said:

The confusing (and, let’s admit it, intentionally misleading) ways Qualcomm is doing the comparisons with Apple Silicon has been giving me headaches, both at the time of the initial unveiling and at today’s controlled-environment press hands-on event. 
 

It’s a constant game of apples to oranges comparisons and deliberate confusion.


Sometimes my instinct is right 
 

https://www.semiaccurate.com/2024/04/24/qualcomm-is-cheating-on-their-snapdragon-x-elite-pro-benchmarks/
 

Concerning.


On 10/30/2023 at 6:46 PM, JTuyen said:

Desktop ARM is truly exciting, just gotta hope that Microsoft fixes Windows on ARM. I'm also interested to see how gaming on Linux ARM will work. We can kinda see this with the current open-source work on OpenGL for Apple Silicon, but it definitely needs more work on an x86 translation layer, or Steam and/or Vulkan support.

Microsoft is "fixing" Windows for ARM for the last 15 years... If you think it'll suddenly miraculously get better, you're an hyper optimist.


2 hours ago, saltycaramel said:

lol

Why is Charlie always so salty? He calls Qualcomm "pathetic" three times in two paragraphs. His style of writing always comes across to me as that of a whiny teenager. It doesn't help that he still calls Windows on ARM "WART", a phrase he himself coined in 2012 that nobody else uses.

For someone who goes on about how the information provided by Qualcomm is "fluff" or "blurry", he himself is extremely light on details. For example, he claims less than 50% of the performance score that Qualcomm showed, but he doesn't state which benchmark(s). He claims that the numbers are not "achievable with the settings they claim", but then never elaborates on which settings he is referring to.

 

I always take what Charlie says with a few shovels of salt. Let's see whether he is exaggerating this time as well, like he tends to, or whether what he says is true.

Will we get less than half of the performance Qualcomm stated in their presentations? 

 

 

Maybe SemiAccurate would be taken more seriously and get more favorable treatment (which they seem to really want) if they didn't do things like:

"plan to ask Qualcomm about their cheating on benchmarks". They don't exactly come across as open-minded or friendly if they go into an interview with the mindset that they will expose lies and put someone in their place.

It's kind of like if I said "I am going to interview Linus and ask him why he is such a fucking dickhead", and then got surprised when he doesn't want to answer my questions.

Charlie even admits that he is mad at Qualcomm in the other article he links to. It's hard to write objective articles when you are biased and out for blood because you feel personally wronged by a company.

 

 

Even if what Charlie says is correct, he would be a far better writer/blogger if he cut the attitude. The whole "I don't give a fuck and I am a badass" act doesn't exactly give the impression of a rational and trustworthy person, at least not in my eyes.

 

 

 

 

 

 

On 4/12/2024 at 7:30 PM, saltycaramel said:

I didn’t think Qualcomm picking the M2 family for their comparisons could get anymore irrelevant and any funnier, but apparently it will: Bloomberg reports we’re getting M4 Macs this fall already. 

They compared it to the M2 because the M3 wasn't out. Apple announced the M3 just a few days after the X Elite SoC was announced.

I think it will be interesting to see how the X Elite stacks up against whatever Apple offers at the time. My guess is that the Apple chip will offer better single-core performance, but lower multi-core performance. GPU-wise I wouldn't be surprised if Qualcomm gets the edge too, like they have on their phone SoCs.

Apple will still have a massive lead on software, though. So even if (and that's a very big if) the X Elite is better than the M-series chip, chances are the experience still won't be as good as it is on a Mac.


43 minutes ago, LAwLz said:

They compared it to the M2 because the M3 wasn't out. Apple announced the M3 just a few days after the X Elite SoC was announced.


I would think the timing of the soft unveiling of Qualcomm’s big multi-year high-performance chip endeavour wasn’t left to chance. I would assume the time was picked carefully, and I suspect they did everything in their power to unveil it before the M3. I have no way to prove it, yet I find it funny that the launches were that close, not even 2 full days apart, except the M3 was a real launch and Qualcomm’s was a soft launch. (By the way, Apple sent out the invites for the obviously-M3 event on October 24th, 6 days before Qualcomm’s unveiling.)


2 minutes ago, saltycaramel said:

I would think the timing of the soft unveiling of Qualcomm’s big multi-year high-performance chip endeavour wasn’t left to chance. I would assume the time was picked carefully, and I suspect they did everything in their power to unveil it before the M3. I have no way to prove it, yet I find it funny that the launches were that close, not even 2 full days apart, except the M3 was a real launch and Qualcomm’s was a soft launch. (By the way, Apple sent out the invites for the obviously-M3 event on October 24th, 6 days before Qualcomm’s unveiling.)

Apple sent out invites for their event on October 24.

Qualcomm announced its event on June 2.

 

I find it very hard to believe that Qualcomm rushed out their presentation so that they could get ahead of the M3 launch. Their event was planned long in advance. It's not like they could have rushed reporters and the media to Hawaii on short notice.

Seems more like a coincidence to me.


>Graphics card designed by ATi and licensed by AMD is faster than another AMD graphics card

 

This is not the first time in history an iGPU has surpassed the performance of a recent dGPU. Nvidia's ION iGPU surpassed several low-end GeForce 6000/7000-series mobile chips (and all of the GeForce4 MX mobile series), and that was '08. Similarly, AMD's first true integrated graphics chip on the Turion 64 X2 line beat out cards from the Radeon 9000 series.

 

I feel like this achievement is being marketed as something completely revolutionary when it's not quite so.


48 minutes ago, da na said:

>Graphics card designed by ATi and licensed by AMD is faster than another AMD graphics card

 

This is not the first time in history an iGPU has surpassed the performance of a recent dGPU. Nvidia's ION iGPU surpassed several low-end GeForce 6000/7000-series mobile chips (and all of the GeForce4 MX mobile series), and that was '08. Similarly, AMD's first true integrated graphics chip on the Turion 64 X2 line beat out cards from the Radeon 9000 series.

 

I feel like this achievement is being marketed as something completely revolutionary when it's not quite so.

Pretty notably as well, the Intel HD 3000 (Sandy Bridge) matched or beat the then-current Radeon HD 5450 and outpaced the GeForce 310. That kind of killed the bottom-rung stuff in short order.


Here I was, joking with people when I first entered this forum that I'm Snapdragon CPU-chan.

Now Snapdragon is the real shit, beating every single laptop CPU on the market, but I predicted this happening ages ago.

All those sales from phones... yes. This is probably four times faster than the 4-core Alder Lake N100 found on my Asus Prime ITX thingy.


To get back to Charlie's claims that Qualcomm is lying and that the X Elite is only able to achieve "far sub-50%" of the performance Qualcomm claims:

Let's go back and look at the performance claims Qualcomm made during the announcement, and then write down what performance numbers Charlie claims we will get in the final product (below half). It will be interesting to see who is closest once we start seeing hardware in reviewers' and consumers' hands. My guess is that the official numbers from Qualcomm will be far closer than the numbers Charlie claims we will get.

 

If I had to guess, Charlie will backpedal and cherry-pick some very specific benchmarks that Qualcomm never even mentioned. For example, maybe he will take one benchmark where Qualcomm compared their chip to an AMD chip and the Qualcomm chip won by 10%. Then Charlie will find a completely different program, benchmark that, and when the Snapdragon loses heavily in that test, he will say Qualcomm lied because "they said the Snapdragon was faster than the AMD chip, but it isn't in this test".

Or maybe he will run non-ARM-native benchmarks and the performance will be worse, which would be fair. But that doesn't mean Qualcomm lied when they showed their performance numbers using ARM-native software.

 

80-watt TDP config:

Spoiler

Geekbench 6.2:

Single-core performance - ~2,950

Multi-core performance - ~15,200

 

Notepad++ compiled in Visual Studio: 25.72 seconds 

 

Cinebench 2024

Single-thread - 131

Multi-thread - 1220

 

PCMark 10 applications: ~12,900

 

GFXBench Aztec Ruins (normal) - 355 FPS

3DMark Wildlife Extreme - 44 FPS

 

 

 

 

 

With the 23-watt TDP config, the numbers look like this:

Spoiler

Geekbench 6.2:

Single-core performance - ~2,750

Multi-core performance - ~13,900

 

Notepad++ compiled in Visual Studio: Unknown

 

Cinebench 2024

Single-thread - 122

Multi-thread - 950

 

PCMark 10 applications: ~13,000

 

GFXBench Aztec Ruins (normal) - 295 FPS

3DMark Wildlife Extreme - 39 FPS

 

 

Does anyone believe that the numbers we get in the final products are around half of this? Because that is what Charlie claims.
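For reference, here's a quick sketch in Python of what "around half" would translate to in absolute numbers, using the claimed 80W figures listed above. Applying the sub-50% scaling uniformly across all benchmarks is my assumption; Charlie never specifies which benchmarks or settings he means:

```python
# Qualcomm's claimed 80W reference-laptop results, as listed in the
# spoiler above.
claimed_80w = {
    "Geekbench 6.2 single-core": 2950,
    "Geekbench 6.2 multi-core": 15200,
    "Cinebench 2024 single-thread": 131,
    "Cinebench 2024 multi-thread": 1220,
    "3DMark Wildlife Extreme (FPS)": 44,
}

# Apply the "far sub-50%" claim uniformly (an assumption on my part).
for name, score in claimed_80w.items():
    print(f"{name}: claimed {score}, sub-50% implies under {score * 0.5:g}")
```

So Charlie's claim would mean a GB6 single-core score under ~1,475 and a Cinebench 2024 multi-thread score under ~610, which we can check against shipping hardware.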


50 minutes ago, LAwLz said:

Does anyone believe that the numbers we get in the final products are around half of this? Because that is what Charlie claims.

I wouldn't doubt it in select models for ARM-native applications (something stupid like that super-thin MacBook Apple released years ago), but I wouldn't expect that to be the case in most. For the good models, in x86 emulation, I am expecting something as bad as 50%.


Didn't they already try this a long while ago, but it was a flop?

So now they're back?


1 hour ago, 12345678 said:

Didn't they already try this a long while ago, but it was a flop?

So now they're back?

Qualcomm has taken a couple of tries at making Snapdragon chips for PCs. This is definitely more promising, though, since it sounds like the company has seen what Apple did and followed suit.

 

With that said, Qualcomm is definitely thirsty here, so I'll be cautious about the Snapdragon X Elite (and X Plus) until we have some testing beyond the company's own. I don't think any performance gap would be quite as cavernous as some fear, but it might not be an M3 buster in practice. The big question is whether or not Windows and apps have improved to the point where ARM truly feels quick and well-supported.


13 hours ago, Forbidden Wafer said:

I wouldn't doubt it in select models for ARM-native applications (something stupid like that super-thin MacBook Apple released years ago), but I wouldn't expect that to be the case in most. For the good models, in x86 emulation, I am expecting something as bad as 50%.

I am not sure what you are saying. 

What are "good models", and 50% compared to what?

 

The claim is that Qualcomm is lying in their benchmarks: that the numbers they presented were "baked", as in made up. The programs they ran were ARM-native, as you can see from my list.

 

It would be really stupid to say they lied in their benchmarks and then run different benchmarks and go "see, I get different numbers". Reporting the results of a benchmark is not lying just because the numbers from a different benchmark aren't the same.


On 4/12/2024 at 7:30 PM, saltycaramel said:

This means that, save for like the first 5 months from June to October, these Qualcomm chips will go against the M4.

 

Benchmarked against the M2.

Actually going against the M4.

 

That’s the magic of previewing stuff 9 months in advance, mere hours before the M3 was released. 

 

BREAKING NEWS: there's a “strong possibility” that Apple’s M4 (codename T8132) will be unveiled on May 7 at the “Let Loose” iPad event, Bloomberg’s Mark Gurman says.

 

So the M4 will literally be on shelves before these Qualcomm chips. And I thought I was joking last year when I wrote this:

 

On 11/1/2023 at 7:33 AM, saltycaramel said:


(by the way, the M3 chips will be in customers’ hands next week…at this pace the actual year-for-year competitor of the “mid 2024” Snapdragon X Elite may end up being the M4 😄)

 


Has anyone tried Linux on these ARM laptops? How does that work, given that Windows on ARM is absolute poop? Is Linux on ARM processors any good? I'd totally have an ARM laptop with Ubuntu on it or something.

