
Leaked MacBook Air GB5 benchmark shows score higher than 16-inch MacBook Pro; SC higher than 5950X

Solved by Spindel:

*DISCLAIMER* All pictures below are stolen from the Affinity forum.

 

Since apparently Geekbench is bad, let's look at the Affinity benchmark.

 

This is an i9-10900 with an RTX 2070 Super

[Screenshot: Affinity benchmark result, i9-10900 + RTX 2070 Super]

 

 

 

This is a 3900X with a GTX 1080

[Screenshot: Affinity benchmark result, 3900X + GTX 1080]

 

 

This is the M1

[Screenshot: Affinity benchmark result, M1]

 

 

33 minutes ago, Brooksie359 said:

I am just pointing out that for most people who use laptops its for portability reasons and not because of lack of an outlet. If you have to bring your computer with you then yeah a laptop makes sense but often times you still are going to be able to plug your laptop in at your destination or at least at some point before you would require 20 hours of battery life. I am not saying that there are zero cases where people require their laptop to last them 20 hours without access to an outlet i am just saying such cases are rare. Yes laptops are more common for consumers than desktops but that isn't because they don't have access to an outlet and they are intending to run their computer off the battery but rather for portability. Sure better battery life is great if you don't have to give anything up for it but that isn't really the case here because you are giving up the x86 platform for said battery life. 

It's not the lack of available plug points, it's that sometimes they're inconvenient. For example, if I want to work on my bed, or at a table placed in the middle of the room, rather than looking for extension cords or stretching a charger across the room, I can just run on battery for longer at a time. Sure, if I wanted to do some heavy video editing or 3D work, I'll go find a place with decent access to an outlet, but for what most of us do 80% of the time on our PCs, the extra battery life means a lot. There's literally no downside to having this ability.

 

From the way you say it, I feel like you haven't really used a laptop with good battery life.


If you're happy, go buy it, don't let anyone stop you. But don't expect anyone outside the Apple marketing bubble to do anything but sit back and wait.

 

There have been many attempts at doing the same thing:

 

Microsoft's ARM Surface

Chromebooks

 

But these were at least honest and didn't try to replace all products, knowing the limitations and strengths of the ARM architecture.

 

I suspect some people will quickly find out the hard way, once the product is released, what it's good for and what it isn't.

 

The fact that Photoshop isn't available on launch day should give everyone pause about which workloads will work well, or better, on this new platform for Apple's laptops.

 

 


21 minutes ago, tech.guru said:

You cannot use a benchmark to compare processors of different architectures.

Why not?

What do you suggest instead?

 

 

22 minutes ago, tech.guru said:

A lot of those tasks you mentioned aren't even performed exclusively on a CPU today. CUDA and OpenCL on Nvidia and AMD handle these AI workloads. GPUs can accelerate encoding tasks, and even some CPUs with integrated graphics use Quick Sync, which is faster than CPU-only encoding. A laptop with a CUDA graphics card would no doubt crush those scores.

And why do you feel like anything you just said matters?

Just because you can do video encoding on a GPU doesn't mean it isn't a valid CPU test as well.

Just because you can do some AI tasks on a GPU doesn't mean it isn't a valid CPU test as well.

 

 

23 minutes ago, tech.guru said:

x86 is a general-purpose processor, and cherry-picking some tasks not optimized for x86 isn't going to help ARM win.

I am going to request evidence that the tasks I mentioned are not optimized for x86. I find that claim completely and utterly unfounded and just a way for you to save face.

 

24 minutes ago, tech.guru said:

In addition, people do many things at once on a modern computer, not one task, and this is where ARM has traditionally struggled.

Can you provide some evidence or at the very least clarification on what you mean?

You are being extremely vague right now and I suspect that is because you have run out of arguments.


Just now, tech.guru said:

If you're happy, go buy it, don't let anyone stop you.

But don't expect anyone outside the Apple marketing bubble to sit back and wait.

 

There have been many attempts at doing the same thing:

 

Microsoft's ARM Surface

Chromebooks

 

But these were at least honest and didn't try to replace all products, knowing the limitations and strengths of the ARM architecture.

 

I suspect some people will quickly find out the hard way, once the product is released, what it's good at and what it isn't.

If you're replying to someone, there is a quote feature so they will get notified.

 

Ahem, Apple has in-house software and hardware. They're the only ones who can successfully pull this off. And if you didn't know, they've done this in the past.


23 minutes ago, Spindel said:

I hope I'm not breaking any rules by quoting myself...

It's amazing how quickly they arrived at the "benchmarks don't matter" point, right?

I thought it would be at least a week before we started seeing that kind of argument thrown around.

 

I also like how @tech.guru went from "this is only good for watching YouTube videos, if you wanna do actual work get a real computer" to "well a GPU is better for real work so CPU benchmarks don't matter anyway!" in just a single post.


Some people are missing the "SC" (single-core) part of the topic title when referring to the 5950X. Nobody said the M1 is better than the 5950X in full-blown MC. It could very well be only up to 4 threads.

 

In 4-6 months we'll likely see (on shelves, not just announced) two more CPUs from Apple: a 12-core (8 big cores) for laptops, and then Apple's first desktop CPU, the 16-core M1T for the new iMac, with 12 big cores.

 

If things scale up accordingly, what are you people gonna say then?

 

The clock is ticking. 

120-180 days and it’ll be game over.

No place to hide, no place to run.

What’s the next move in the x86 camp to counter all of this?

Are they willing to give up all the consumer and small pro market?

Or to “survive” in the mediocrity of “what’s the point of all this speed anyway” like the android market?


1 minute ago, LAwLz said:

It's amazing how quickly they arrived at the "benchmarks don't matter" point, right?

I thought it would be at least a week before we started seeing that kind of argument thrown around.

 

I also like how @tech.guru went from "this is only good for watching YouTube videos, if you wanna do actual work get a real computer" to "well a GPU is better for real work so CPU benchmarks don't matter anyway!" in just a single post.

HA! The amount of brain-dead anti-Apple sentiment on this forum is amazing.

 

People simply can't celebrate a great stride in semiconductor technology, simply because it was made by Apple.


Also, ffs, the M1 is a 15W CPU in a laptop.

And some people are not impressed by it beating a fully cooled 5950X, even if just in SC.


7 minutes ago, LAwLz said:

It's amazing how quickly they arrived at the "benchmarks don't matter" point, right?

I thought it would be at least a week before we started seeing that kind of argument thrown around.

 

I also like how @tech.guru went from "this is only good for watching YouTube videos, if you wanna do actual work get a real computer" to "well a GPU is better for real work so CPU benchmarks don't matter anyway!" in just a single post.

What I said is: realistically, is anyone going to do AI on a 15 W ARM processor in a laptop?

It puts out a great number, but in real life you have a problem you're trying to solve, and AI can be complex.

 

So if it takes 1555 days versus 2000 days to finish, how important is that benchmark?

If a graphics card can complete it much faster, down to a week, would you ever run it on the ARM chip in the first place?


3 minutes ago, saltycaramel said:

Also, ffs, the M1 is a 15W CPU in a laptop.

And some people are not impressed by it beating a fully cooled 5950X, even if just in SC.

And we're talking about the same audience, including Linus, who gets excited and starts making RIP Intel memes when an equivalent AMD chip has a 20% performance improvement over Intel 😂


14 minutes ago, LAwLz said:

Why not?

What do you suggest instead?

 

 

And why do you feel like anything you just said matters?

Just because you can do video encoding on a GPU doesn't mean it isn't a valid CPU test as well.

Just because you can do some AI tasks on a GPU doesn't mean it isn't a valid CPU test as well.

 

 

I am going to request evidence that the tasks I mentioned are not optimized for x86. I find that claim completely and utterly unfounded and just a way for you to save face.

 

Can you provide some evidence or at the very least clarification on what you mean?

You are being extremely vague right now and I suspect that is because you have run out of arguments.

I suggest you educate yourself on the differences outside your bubble.

 

https://www.androidauthority.com/arm-vs-x86-key-differences-explained-568718/

 

Yes, even supercomputers use ARM, but it's for certain computation tasks.


20 minutes ago, Blademaster91 said:

Geekbench has a browser-based benchmark, but whether it's browser-based or not, Geekbench is about as irrelevant as it gets for a benchmark that can realistically compare processors. I get that you're desperate to say the M1 beats absolutely everything, but I'm not going to believe the Apple marketing hype.

Holy crap it is incredible how many things you can get wrong in a single post.

1) Geekbench does not have a browser-based benchmark.

2) Geekbench is not irrelevant as a benchmark. If you think it is, then I want you, in your own words, to explain to me why it is irrelevant.

3) I don't believe Apple's marketing hype. I believe the independently tested, third party numbers we got from the Apple A14.

4) I am not desperate to say the M1 beats everything. But I live in the real world and my words reflect what is true.

 

 

22 minutes ago, Blademaster91 said:

Up to 4 cores, except the topic comparing it to a 16 core/ 32 thread cpu.

And if you read more than the thread title, you will see that in literally the third sentence they say it is faster in single-core score.

The 16 core AMD 5950X gets over twice the multicore score (around 16,000 vs Apple's 7,500).

 

 

26 minutes ago, Blademaster91 said:

Except the M1 is being compared to a different class of device, so it's fair enough to run programs that scale on the 5950X.

I guess that depends on how you look at it.

If you only look at the title and think it is trying to say "an M1 is better than the 5950X" then yes I can understand why you are upset.

The way I look at this thread is that we have some solid numbers on how well Apple's CPU architecture performs and there is a lot of data to be gathered from that. Instead of comparing specific products I look at this data as comparisons of the overall architectures and how they perform.

The important info we should focus on is that a Firestorm core at 3.2 GHz matches a Zen3 core at 4.9 GHz.

That the Zen3 core it was compared to happens to be in a 5950X is fairly irrelevant.
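To put a rough number on that per-clock comparison, here's a back-of-the-envelope sketch. The clocks are the ones quoted above; the "equal single-core score" premise comes from the benchmark results in this thread, not from any measurement of mine:

```python
# Implied per-clock (IPC-ish) comparison, assuming both cores post the
# same single-core score at the clocks quoted in the thread.
firestorm_ghz = 3.2  # M1 Firestorm core
zen3_ghz = 4.9       # Zen3 core in the 5950X (boost clock)

# Equal score at a lower clock means more work done per clock cycle.
per_clock_advantage = zen3_ghz / firestorm_ghz
print(f"Firestorm does ~{(per_clock_advantage - 1) * 100:.0f}% more work per clock")
# → Firestorm does ~53% more work per clock
```

This is exactly why the specific product the Zen3 core sits in doesn't matter much: the comparison is between architectures at their respective clocks.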

 

 

31 minutes ago, Blademaster91 said:

Then those people don't need single-core performance at all; those people more likely need Boot Camp, and giving up x86 compatibility is still a downside, even on a 13" laptop.

Why?
What programs do you think the average MacBook user uses that require Boot Camp, and what programs do those same users use that don't benefit from single-core performance?


16 minutes ago, tech.guru said:

I suggest you educate yourself on the differences outside your bubble.

 

https://www.androidauthority.com/arm-vs-x86-key-differences-explained-568718/


The most accurate way to describe the Apple Silicon Macs is not just "an ARM PC" but a personal computing console based on fully custom silicon, comprising:

- a custom Apple CPU based on the ARM ISA (not an ARM designed CPU)

- Apple’s own GPU

- Apple’s own Neural Engine

- Apple’s own ML accelerators

- Apple’s own storage controller

- Apple’s own ISPs

- and a bunch of other stuff

 

And more pieces can be added to the mix every year; who's gonna stop Apple from adding other dedicated silicon for whatever function and supporting it in macOS?

 

So it's not just about the CPU, the ARM ISA, or whatever BLANKET assumption you have about "ARM". This is as fully custom as custom can get. Even the ISA itself probably has custom stuff added to it.


17 minutes ago, tech.guru said:

If you're happy, go buy it, don't let anyone stop you. But don't expect anyone outside the Apple marketing bubble to do anything but sit back and wait.

Oh look, the classic "buy it if you want" or "don't buy it if you don't like it". The same arguments that always get brought up when someone has run out of reasons for defending or attacking a product...

"Oh shit I got proven wrong and I can't argue back. I'll just say that they can buy it if they want and then they'll leave me alone and let me keep my unreasonable stance on this".

 

 

21 minutes ago, tech.guru said:

There have been many attempts at doing the same thing:

 

Microsoft's ARM Surface

Chromebooks

What Microsoft did was shit for a wide variety of reasons, both software- and hardware-related. None of that is an issue for Apple.

Chromebooks have been successful going to ARM, but they are a completely different product category compared to this.

 

 

22 minutes ago, tech.guru said:

But these were at least honest and didn't try to replace all products, knowing the limitations and strengths of the ARM architecture.

Can you please, in your own words, describe what these limitations and strengths of the ARM architecture are?

Again, you are being very vague, and my suspicion is that you are vague because you don't know what you are talking about.

 

 

25 minutes ago, tech.guru said:

I suspect some people will quickly find out the hard way, once the product is released, what it's good for and what it isn't.

Got any examples you can share with us?

 

 

25 minutes ago, tech.guru said:

The fact that Photoshop isn't available on launch day should give everyone pause about which workloads will work well, or better, on this new platform for Apple's laptops.

Are you implying that the reason why Photoshop isn't available on launch day is because ARM is somehow unsuitable for Photoshop and Photoshop-like workloads?


14 minutes ago, tech.guru said:

What I said is: realistically, is anyone going to do AI on a 15 W ARM processor in a laptop?

It puts out a great number, but in real life you have a problem you're trying to solve, and AI can be complex.

 

So if it takes 1555 days versus 2000 days to finish, how important is that benchmark?

If a graphics card can complete it much faster, down to a week, would you ever run it on the ARM chip in the first place?

So let's go through your posts again.

First you said this was only good for watching YouTube videos and that people who wanted to do "actual work" would get a "real laptop".

You never specified what "actual work" was or what qualifies as a "real laptop", but I went ahead and posted a variety of different benchmarks I think people would classify as "real work", and where Apple's CPUs perform very well.

I think those benchmarks cover a wide range of different applications and can be used to make generalized statements regarding the M1, and most of those generalized statements are "this is a really good CPU, even compared to Zen3 and Skylake".

 

After this, you resorted to saying a couple of those benchmarks I posted would be better off being computed on a GPU, and therefore they don't count, I guess?

 

 

Again, I posted a wide variety of benchmarks because it helps us make generalized statements about products. That's what benchmark suites are for.

Nobody here is making the argument that the M1 is a great chip for AI because it performs well in a single "Go" benchmark. What people are saying is that if it performs this well in 12 very different benchmarks then it is safe to say that it will perform that way in most benchmarks and workloads.

 

 

If you want an analogy let's think of gaming graphics cards.

Let's compare the RTX 3070 and the RX 6800. If we compare both of those graphics cards in 12 different games and in all games they get more or less the same FPS, then it is safe to say they are equal in performance. That is, if you pick any game not included in those 12 tests, you will get similar results.

If two graphics cards performed the same in Battlefield, COD, Fallout and DOOM, then it is very likely that they will perform similarly in let's say Cyberpunk as well.

 

That is what is happening here.

We are using things like AI benchmarks not because we think someone will use the M1 chip to run a bunch of AI simulations. We included a couple of AI workloads to get a wide variety of workloads on which to base generalized statements and assumptions.

If processor X performs 10% better than processor Y in 12 different benchmarks which all have different characteristics, then it is safe to say processor X will perform around 10% better than processor Y in most programs, even those not included in those 12 benchmarks.
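That generalization argument can be made concrete with a geometric mean over per-benchmark ratios, which is how benchmark suites are commonly summarized. A minimal sketch; the ratios below are made up for illustration, not real M1 numbers:

```python
import math

# Per-benchmark score ratios of processor X over processor Y across
# 12 diverse workloads (illustrative values only).
ratios = [1.08, 1.12, 1.10, 1.09, 1.11, 1.10,
          1.07, 1.13, 1.10, 1.09, 1.11, 1.10]

# The geometric mean is the standard single-number summary: it treats a
# 2x win and a 2x loss as cancelling out, unlike the arithmetic mean.
geo_mean = math.prod(ratios) ** (1 / len(ratios))
print(f"X is ~{(geo_mean - 1) * 100:.0f}% faster than Y on average")
```

If the per-benchmark ratios cluster tightly (as here, all near 1.10), the summary number is a reasonable predictor for workloads outside the suite; if they were scattered, no single number would be.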


23 minutes ago, tech.guru said:

I suggest you educate yourself on the differences outside your bubble.

 

https://www.androidauthority.com/arm-vs-x86-key-differences-explained-568718/

1) I want your own words, not you parroting what you read someone else say.

2) If you are going to just post a link then at least quote the parts you think are relevant.

3) I have already read that article before, and I don't understand how it proves anything you have said, or disproves anything I have said.

4) You didn't answer any of my questions. I would like it if you did.


20 minutes ago, saltycaramel said:

So it's not just about the CPU, the ARM ISA, or whatever BLANKET assumption you have about "ARM". This is as fully custom as custom can get. Even the ISA itself probably has custom stuff added to it.

I would just like to add to your comment that Apple has not strayed too far away from the ISA.

The only change I am aware of is the inclusion of AMX instructions. That is not in ARMv8 but the A13 from Apple supports it (not sure about the A14 or M1).

But it's not widely used, and I do believe Apple has hidden it from developers.

 

 

Making changes to the ISA is a very bad idea for code portability. I don't think Apple would be willing to stray away from it, and I am not even sure ARM allows it since they would risk losing control of their own ISA if everyone started doing their own thing. The AMX instructions in the A13 (and maybe A14, M1) are probably some special deal or experiment.

 

 

Everything else about the chip is custom though. Especially the CPU cores which are not at all like the core designs from ARM used in other ARM devices.


56 minutes ago, LAwLz said:

And if you read more than the thread title, you will see that in literally the third sentence they say it is faster in single-core score.

The 16 core AMD 5950X gets over twice the multicore score (around 16,000 vs Apple's 7,500).

What's interesting about this is that a CPU with 300% more full-sized cores and a power draw of 200-ish watts (about 900% more) only gets a bit more than 110% better performance.

 

But for argument's sake, so as not to discount the little cores on the M1, let's say it only has 200% more cores.
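The perf-per-watt gap behind that observation is easy to sketch. All figures below are rough estimates pulled from the posts in this thread; the 5950X package power in particular is an assumption, not a measurement:

```python
# Back-of-the-envelope perf-per-watt from the numbers in this thread.
m1_score, m1_watts = 7_500, 15      # M1 multi-core score at ~15 W
amd_score, amd_watts = 16_000, 150  # 5950X multi-core score, assumed ~150 W

m1_ppw = m1_score / m1_watts        # 500 points per watt
amd_ppw = amd_score / amd_watts     # ~107 points per watt
print(f"M1 is ~{m1_ppw / amd_ppw:.1f}x the 5950X in points per watt")
```

Even if the 5950X's real package power under this load is lower than assumed, the efficiency gap stays a multiple, not a margin.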


16 minutes ago, Spindel said:

What's interesting about this is that a CPU with 300% more full-sized cores and a power draw of 200-ish watts (about 900% more) only gets a bit more than 110% better performance.

But for argument's sake, so as not to discount the little cores on the M1, let's say it only has 200% more cores.

Considering it is a big.LITTLE design and only 4 cores are working at a time, I would say calling it a quad-core would be accurate. Also, is that 300% including SMT, which the M1 doesn't do?



2 minutes ago, Lord Vile said:

Considering it is a big.LITTLE design and only 4 cores are working at a time, I would say calling it a quad-core would be accurate. Also, is that 300% including SMT, which the M1 doesn't do?

I didn't count fake cores (which SMT/HyperThreading is), just physical cores. I have no idea if AS has an equivalent. And I just assumed that as long as the temperature is OK the M1 could run all cores at once. I might be wrong on that. 


3 minutes ago, Spindel said:

I didn't count fake cores (which SMT/HyperThreading is), just physical cores. I have no idea if AS has an equivalent. And I just assumed that as long as the temperature is OK the M1 could run all cores at once. I might be wrong on that. 

The little cores are only for background tasks and light work 



12 minutes ago, Lord Vile said:

Considering it is a big.LITTLE design and only 4 cores are working at a time, I would say calling it a quad-core would be accurate. Also, is that 300% including SMT, which the M1 doesn't do?

Correct me if I am wrong, but I do believe Apple has "global task scheduling" in their chips, which means all cores can be mixed and matched however they want, including using all 8 at the same time.

 

The little cores are barely worth mentioning though because they are so slow. The big cores are like 4-5 times as powerful. So if all 8 cores are being used it's still only like having "5 big cores" in Apple's case, at best.

 

This is what Wikipedia has to say about the scheduling

Quote

The most powerful use model of big.LITTLE architecture is Heterogeneous Multi-Processing (HMP), which enables the use of all physical cores at the same time. Threads with high priority or computational intensity can in this case be allocated to the "big" cores while threads with less priority or less computational intensity, such as background tasks, can be performed by the "LITTLE" cores.[10][11]

 

This model has been implemented in the Samsung Exynos starting with the Exynos 5 Octa series (5420, 5422, 5430),[12][13] and Apple A series processors starting with the Apple A11.[14]

 

I think it's safe to say that Apple can address all 8 cores at once.
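The "all 8 cores at once, but it's really like ~5 big cores" point above can be sketched as a toy HMP throughput model. The little-core relative speed is an assumption based on the "4-5 times as powerful" figure quoted earlier, not a measured value:

```python
# Toy model of Heterogeneous Multi-Processing (HMP): all 8 cores can run
# simultaneously, but the little cores contribute far less throughput.
BIG_SPEED = 1.0      # one big (Firestorm) core as the unit of work
LITTLE_SPEED = 0.25  # assumed: a little core at ~1/4 of a big core

cores = ["big"] * 4 + ["little"] * 4
throughput = sum(BIG_SPEED if c == "big" else LITTLE_SPEED for c in cores)
print(f"Effective throughput: {throughput:.1f} big-core equivalents")
# → Effective throughput: 5.0 big-core equivalents
```

So counting the M1 as "roughly 5 big cores, at best" when all 8 are loaded follows directly from that assumed speed ratio.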


Geekbench is meaningless for ARM vs x86 comparisons; it heavily favors the former.


11 minutes ago, LAwLz said:

Correct me if I am wrong, but I do believe Apple has "global task scheduling" in their chips, which means all cores can be mixed and matched however they want, including using all 8 at the same time.

 

The little cores are barely worth mentioning though because they are so slow. The big cores are like 4-5 times as powerful. So if all 8 cores are being used it's still only like having "5 big cores" in Apple's case, at best.

 

This is what Wikipedia has to say about the scheduling

 

I think it's safe to say that Apple can address all 8 cores at once.

But the small cores aren't used for high computational loads, so they aren't assigned to them; they just chug along with background tasks.



5 hours ago, saltycaramel said:

 

You have a point about eGPUs.

I wonder if support will come back eventually or if we’ll see Apple eGPUs based on the upcoming Apple dGPU (codename: “Lifuka”).

Or even external specialized cards like the Afterburner in the MacPro. 

 

That said, we need to admit that while eGPUs may be important to some (and Apple itself sent a mixed message by introducing the feature in March 2018 and effectively beginning to kill it in November 2020 with these new Macs, only 2 years and a half later), they’re not THAT prevalent that the lack of them can be considered a non-starter for a platform. 

 

As for the software, I see it as a temporary nuisance. It's gonna be fine: a number of companies (including Adobe and MS) have committed to releasing native versions soon, and there will be more support than Windows on ARM could ever hope for. In the meantime there's the Rosetta 2 translation layer.

 

 

 

It's Apple so they'll probably force users to buy a proprietary eGPU if they need any GPU power.

While most people didn't have a use for an eGPU, there were enough people using them for eGPUs to even be supported; now those people are left with a GPU they can't use, so it's definitely a problem.

I don't see the benefit of dropping $1400 on a laptop and then having to run iPad apps, because support is going to be way more limited since everything will have to be approved by Apple; and there's no Boot Camp support, which is important for plenty of people who need to run 3D applications.

5 hours ago, saltycaramel said:

Do you people watch Linus just to know what's up today, or do you expect him to also have good and interesting insights about the near future?

 

You can't defend the "3 iPads" take, c'mon.

That’s like saying the Earth is flat because you can’t see beyond the horizon.

Linus is supposed to climb on the highest lighthouse and report to us that the Earth is, in fact, round. 

 

That’s a “fun and edgy” initial take, that’s ok, but that’s it.

I trust Linus more than some Apple-biased source drooling over a synthetic benchmark that doesn't mean anything for real-world use.

3 hours ago, saltycaramel said:

 

“tech.guru”

You have no counter-argument, so you attacked them instead.

