Base model Apple M2 MacBook Pro SSD Up To 50% slower than M1 MacBook Pro SSD | Half the NAND chips, half the speed

2 hours ago, 05032-Mendicant-Bias said:

Apple marketing material compared the M1 Ultra with the RTX 3090/6900 XT, but the comparison is invalid because the M1 Ultra can't do what the RTX 3090 can in most applications.

It can't do it in the applications you want to run, which aren't actually available or used in the Apple ecosystem. It's like judging the value of an F1 race car as a long-distance touring car. Leave the F1 car on the race track and it's the best thing out there 😉

 

Did you take Nvidia's claim that the RTX 3090 Ti is the fastest gaming GPU, benchmark it with an FP64 memory-intensive workload, find that an Intel Xeon CPU is faster, and then conclude Nvidia was lying? Or is the test methodology itself the flaw?

 

I hate Apple marketing, but anyone with even a small amount of common sense knows what the material is about: the workloads used were something like Final Cut Pro etc., probably even cited at the bottom of the image, since Apple did start doing that.

 

If you have an M1 SoC of any kind then you have a Mac, you are running Mac OS, and you will be using Mac OS-centric applications built on Mac OS APIs and software frameworks, so you will be achieving the performance claims Apple laid out. Situations where this is not the case are rare and very custom, e.g. data science (which might only require optimization to get significant performance increases, but that's still a very Nvidia world anyway).

 

And in case any of this is an issue, just remember the same applies in the other direction: no matter how good M1 SoCs are in their ecosystem, that doesn't mean the same performance will translate over to other platforms, workloads, software stacks, etc.

 

2 hours ago, 05032-Mendicant-Bias said:

If a gamer sees that chart and is in doubt between buying an RTX 3090 rig and an M1 Ultra, that's misleading.

A gamer would 100% never be misled by it; any gamer knows Mac OS is useless for gaming, so an M1 anything is not an option. On Mac OS, gaming is a side thing: nice if it works, and if it works well. That might change, but the scenario you put forward is itself misleading.

 

I hope you are aware that even in marketing it's not always necessary to state things that are obvious simply by the nature of the audience watching. There are things you don't have to say: if I were at an HPE server presentation, I would know everything is in the context of servers, server workloads, and that entire market sector; spelling it out would itself be insulting to the audience.

 

You should be self-aware and intelligent enough to figure out whether something is actually applicable to you, and most people are, even when they don't have much understanding of what is being presented or talked about.


18 minutes ago, leadeater said:

A gamer would 100% never be misled by it; any gamer knows Mac OS is useless for gaming, so an M1 anything is not an option. On Mac OS, gaming is a side thing: nice if it works, and if it works well. That might change, but the scenario you put forward is itself misleading.

Exactly this. I have a Mac, and I play games perfectly happily on my M1 Max. Would I recommend that somebody buy a Mac if I knew gaming was one of their primary use cases for buying a computer? Absolutely not.


7 hours ago, leadeater said:

Comparing the GPUs right now is just too problematic in my opinion. All the examples of Nvidia/AMD GPUs being significantly faster are applications/usages the Apple GPU simply cannot run, because it is an entirely different software platform. A Mac Pro with a Radeon GPU vs the M1s is really the only fair comparison, because you are comparing hardware on equal software footing: the same operating system and the same graphics and compute API layers.

 

Just imagine the amount of crap Nvidia, AMD, or a reviewer would get if they compared competing GPUs with one tested using DX12 only and the other Vulkan only, and that's not even "as bad" as the Apple situation regarding GPUs.

That's a fair point as well.

Although I think that is only really relevant for a strict hardware-vs-hardware comparison. If we are talking about real-world performance or the overall "experience", then I think it is fair to compare Tomb Raider running Metal vs Tomb Raider running DirectX or whatever.

But that is another reason why it is important to specify exactly what you are talking about when making statements regarding the M1, since "M1" can refer to the whole hardware package, the combined software and hardware experience, the GPU, the CPU, the memory, or any combination of the individual components that all make up the M1.

 

 

6 hours ago, 05032-Mendicant-Bias said:

I disagree. While I do not demand that a product be good in all situations, I do demand that marketing material accurately list the use cases where the claimed performance is achieved.

 

Apple marketing material compared the M1 Ultra with the RTX 3090/6900 XT, but the comparison is invalid because the M1 Ultra can't do what the RTX 3090 can in most applications. If a gamer sees that chart and is in doubt between buying an RTX 3090 rig and an M1 Ultra, that's misleading. I want buyers to buy the M1 Ultra for workflows where it's competitive with an AMD/Nvidia rig. I had the same problem when Intel suddenly claimed benchmarks are "not real world" when the disaster known as 11th gen was released.

Okay, I think I understand your point.

You dislike that Apple made a comparison against "highest-end discrete GPU" where the "relative performance" was the same at 200W less.

Personally, I don't give two craps about marketing slides from any company. They might be good as an initial estimate when you don't have anything else to go by, but as soon as third-party tests become available I just toss the first-party stuff out the window.

 

I have seen a lot of people in this thread bring up "but Apple's marketing says or implies..." and I really don't get why. Why put any weight on what a company says about their own product? Just ignore it.

I totally agree with you that Apple's chart is misleading and does not tell the full story, but at the same time I think Apple's chart should not be taken seriously by anyone.

Apple's claims are most likely valid, but only for the particular workload they tested. Once we start looking at other workloads the results will be different.

 

 

The M1 and M2 have really good GPUs though. The fact that they are even being compared to discrete graphics cards is, in my opinion, a feat in and of itself, especially in an area like gaming where Apple has typically been way behind. It's basically one of the worst-case scenarios and it still manages a great result. Granted, it is not a surprise that it is doing great, given the truckload of transistors Apple dedicated to it and the much better N5 process node (vs Samsung's 8 nm).

 

An integrated GPU that uses 1/3 the power of Nvidia's 3090 while offering 70-80% of its performance in games is amazing. Not as amazing as in the workload Apple ran their tests with, but still really good.

 

I don't really think the chart would fool any gamer into buying an M1 Ultra over an RTX 3090, however. Whenever you see a benchmark you should always try to think about what the numbers mean and how they might translate to other situations. There are a ton of ways to measure performance, and the results from one test are not necessarily applicable to another. If someone bases their entire purchasing decision on a generalized chart from a first-party source, then I don't really have that much sympathy for them.


5 hours ago, HenrySalayne said:

Did you watch the video in this post? I wouldn't say they blew the competition out of the water. I haven't found any detailed testing about it, but despite Apple's "more efficient" claims and the newer node, M2 seems to pull more power across the board compared to M1.

Which tests are you looking at?

 

Performance-wise, the M2 falls behind some of the higher core count parts from Intel and AMD, but you have to remember that the Apple part is the lowest-end model and higher core count parts will become available later. You should also factor in power consumption. Yes, the M2 is 7-8% slower than the i7-1260P in Cinebench multithreaded workloads, but the i7 also uses more than 3 times the power (29 watts vs 8 watts). The story is the same when looking at single-threaded performance. The 24 watt M2 is able to compete with the 54 watt i7 (which throttles down to 34 watts after a while, but still).

 

It is hard to compare these things because we have to agree on which settings we should normalize for.

Should we equalize them at a specific performance tier and then look at power consumption?

Should we equalize them at a specific power consumption and look at performance?

Should we run some particular workload and look at power consumption and performance?
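
For illustration, here's a minimal sketch (Python, entirely made-up numbers) of what the first normalization looks like in practice, and why the other two need more data:

```python
# Hypothetical benchmark results: score plus average package power (watts).
# All numbers here are made up purely to illustrate the normalization.
results = {
    "Chip A": {"score": 8700, "watts": 24},
    "Chip B": {"score": 9700, "watts": 33},
}

# Normalizing by efficiency: performance per watt.
for name, r in results.items():
    print(f"{name}: {r['score'] / r['watts']:.0f} points per watt")

# Equalizing at a power level or at a performance tier needs more data:
# you'd have to re-run each chip at several power limits to build a
# performance/power curve, since one data point can't answer those questions.
```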


8 minutes ago, LAwLz said:

Performance-wise, the M2 falls behind some of the higher core count parts from Intel and AMD, but you have to remember that the Apple part is the lowest-end model and higher core count parts will become available later. You should also factor in power consumption. Yes, the M2 is 7-8% slower than the i7-1260P in Cinebench multithreaded workloads, but the i7 also uses more than 3 times the power (29 watts vs 8 watts). The story is the same when looking at single-threaded performance. The 24 watt M2 is able to compete with the 54 watt i7 (which throttles down to 34 watts after a while, but still).

You are talking about single-threaded workloads; there is basically not much difference in power draw in MC workloads, and AMD is even slightly more efficient.

Funnily enough, the SC power draw of 8 W is one watt more than the M1 Pro in the same R23 SC test. That's probably down to measurement inaccuracy, but MC seems to show the same picture (for M2 vs M1 comparisons, not M2 vs M1 Pro; sadly I haven't found a test yet, only some rumours about the M2 running hotter and drawing more power than the M1).

 

https://www.techspot.com/review/2499-apple-m2/

12 minutes ago, LAwLz said:

It is hard to compare these things because we have to agree on which settings we should normalize for.

Should we equalize them at a specific performance tier and then look at power consumption?

Should we equalize them at a specific power consumption and look at performance?

Probably a given power consumption and the resulting performance. CPU TDP seems to be a rough differentiating factor for laptop classes.


1 hour ago, HenrySalayne said:

You are talking about single-threaded workloads; there is basically not much difference in power draw in MC workloads, and AMD is even slightly more efficient.

Which test are you talking about?

Here are the Cinebench MT results.

 

M2 - 8740 at 24 watts.

Ryzen 7 6800U - 8264 at 19-30 watts and 9695 at 29-37 watts.

 

You have to remember that when AMD labels their part "15 watts", it doesn't actually use 15 watts of power. It uses 19 to 30 watts. When AMD labels their processor 25 watts, it uses 29 to 37 watts, even though Hardware Unboxed still lists it as 15 watts and 25 watts respectively when showing the performance scores.

 

 

I don't know how quickly the 6800U throttles down to what Hardware Unboxed labels "sustained", but my guess is that it does a fair bit of the Cinebench run at the much higher 30 and 37 watt marks.

If we assume an average of about 24 watts for the 15 watt configuration, then the M2 is about 6% ahead in terms of performance at the same power level.

If we assume an average of about 33 watts for the 25 watt configuration then the Ryzen 6800U is about 11% faster but uses 37.5% more power.

 

 

And that's when we are comparing the lowest-end M2 against one of the best Ryzen 6000 series chips. And that's not factoring in single-threaded performance, where the difference in power consumption is even larger.

M2 - 1580 at 8 watts.

6800U - 1449 at 19 watts.

 

Lower performance but uses more than twice as much power.
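
As a sanity check, those percentages fall straight out of the quoted scores and the assumed average wattages. A quick sketch (Python; the wattages are the assumptions stated above, not measured figures):

```python
# Cinebench R23 MT scores with assumed average package power (watts),
# using the figures quoted above.
m2       = {"score": 8740, "watts": 24}
u6800_15 = {"score": 8264, "watts": 24}  # "15 W" config, assumed ~24 W average
u6800_25 = {"score": 9695, "watts": 33}  # "25 W" config, assumed ~33 W average

# At roughly equal power, the M2 leads by ~6%.
print(f"M2 vs 15 W 6800U: {100 * (m2['score'] / u6800_15['score'] - 1):.1f}% faster")

# The 25 W 6800U is ~11% faster while drawing ~37.5% more power.
print(f"25 W 6800U vs M2: {100 * (u6800_25['score'] / m2['score'] - 1):.1f}% faster, "
      f"{100 * (u6800_25['watts'] / m2['watts'] - 1):.1f}% more power")
```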

 

 

 

1 hour ago, HenrySalayne said:

Funnily enough, the SC power draw of 8 W is one watt more than the M1 Pro in the same R23 SC test. That's probably down to measurement inaccuracy, but MC seems to show the same picture (for M2 vs M1 comparisons, not M2 vs M1 Pro; sadly I haven't found a test yet, only some rumours about the M2 running hotter and drawing more power than the M1).

I don't really understand what you are trying to say.


Just Josh had a review of the XPS 13 Plus that pointed out something important: Intel’s chips, at least, can still have trouble delivering sustained performance. Apple oversells itself in some ways, but with the right apps (media editing, mainly) it appears to offer better real-world results.

 

 


2 hours ago, LAwLz said:

Yes, the M2 is 7-8% slower than the i7-1260P in Cinebench multithreaded workloads, but the i7 also uses more than 3 times the power (29 watts vs 8 watts).

Do you mean single-threaded? Both of those power figures are for single-threaded; there's no way the 1260P is only using 29 watts with all cores loaded when its default boost is 64 W and it's often configured higher than that anyway.


7 hours ago, leadeater said:

Do you mean single-threaded? Both of those power figures are for single-threaded; there's no way the 1260P is only using 29 watts with all cores loaded when its default boost is 64 W and it's often configured higher than that anyway.

Yes, those numbers are for single-threaded workloads.

 

For multithreaded, Hardware Unboxed reports 54 watts of peak power for the 1260P. Although it is worth mentioning they have subtracted the idle load as well.

 

 

Not sure why I put the single-threaded power numbers in when talking about the multithreaded performance.

Anyway, the point still stands: it gets about 92% of the performance at about 50% of the power.


1 hour ago, LAwLz said:

Although it is worth mentioning they have subtracted the idle load as well.

N00b question here. Why?


7 hours ago, Paul Thexton said:

N00b question here. Why?

My guess is that they are doing it to try to isolate the CPU power. Since they are measuring at the wall and are testing laptops, subtracting the idle power should remove variables such as one laptop having a much more power-hungry display.

 

 

If they got a laptop where the CPU uses 15 watts and the display uses 10 watts, and another laptop where the CPU uses 20 watts and the display 5 watts, then both would read "25 watts during Cinebench". But since they want to know CPU power usage, they subtract the display power from both.

 

 

The drawback is that you inevitably end up subtracting more than just things like the display and fans. This type of measurement ends up rewarding systems that have poor idle consumption and punishing those that have good idle consumption.

 

There are much better ways of measuring power consumption by the CPU, but they are also far more complicated to do. 
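
For anyone who wants the arithmetic spelled out, here's a minimal sketch of that idle-subtraction method, reusing the hypothetical wattages from the example above:

```python
# Wall-power readings (watts) for the two hypothetical laptops above:
# the same 25 W under load, but different display (i.e. idle) power.
laptops = {
    "Laptop A": {"idle": 10, "load": 25},  # 15 W CPU + 10 W display
    "Laptop B": {"idle": 5,  "load": 25},  # 20 W CPU + 5 W display
}

# Estimated CPU power = wall power under load minus wall power at idle.
# Caveat from the post above: the CPU's own idle draw is subtracted too,
# so chips that idle high look better and chips that idle low look worse.
for name, w in laptops.items():
    print(f"{name}: ~{w['load'] - w['idle']} W attributed to the CPU")
```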


2 hours ago, LAwLz said:

Since they are measuring at the wall and are testing laptops

Ah, I hadn’t picked up on that. Fair enough 👍🏻


6 hours ago, LAwLz said:

There are much better ways of measuring power consumption by the CPU, but they are also far more complicated to do. 

Not sure why the power consumption of a CPU that is not sold at retail is of any interest to consumers. Power consumption should be measured for the entire package.

Even for socketed desktop systems you really need to look at the entire package for power consumption, as CPU vendors could choose to move some power-costly IO functions to the chipset, so that in isolation the CPU draws 10 W less than a competitor's but in a real system ends up drawing more because the attached chipset pulls up to 15 W.

Measuring 'only the CPU power' is such a pointless metric. For laptops, the best approach is to open them up, detach the battery, close them back up, and then run them from the wall with the provided charger (since the charger is part of the efficiency equation).


9 hours ago, LAwLz said:

There are much better ways of measuring power consumption by the CPU, but they are also far more complicated to do. 

They are basically forced to do it this way on the Apple devices because SoC power isn't exposed, at least that's what I remember from the original M1 reviews. On Intel and AMD you can just look at the actual package power value, which makes things way easier.

 

Probing off the VRMs may not even be possible or practical either.


14 hours ago, LAwLz said:

The drawback is that you inevitably end up subtracting more than just things like the display and fans. This type of measurement ends up rewarding systems that have poor idle consumption and punishing those that have good idle consumption.

And it might be offset by power supplied by the battery. The battery is an integral part of the power delivery in MacBooks, and they will drain the battery for peak power needs. Hence, removing the battery will make a MacBook drastically slower*.

So this will just make a MacBook look bad:

9 hours ago, hishnash said:

Measuring 'only the CPU power' is such a pointless metric. For laptops, the best approach is to open them up, detach the battery, close them back up, and then run them from the wall with the provided charger (since the charger is part of the efficiency equation).



*At least on Intel MacBooks. But seeing the peak power draw for the M2 being 24 W while the average is 23 W, it's likely they're still tapping the battery.


47 minutes ago, HenrySalayne said:

*At least on Intel MacBooks. But seeing the peak power draw for the M2 being 24 W while the average is 23 W, it's likely they're still tapping the battery.

You can detach the battery (in fact you should do this on all laptops, since the system might be charging the battery or, as you suggest, might be drawing from it).

Many people opted to just benchmark the M1 Mac mini rather than do this to their laptops (for understandable reasons; the battery cables in laptops are delicate).


6 hours ago, leadeater said:

They are basically forced to do it this way on the Apple devices because SoC power isn't exposed, at least that's what I remember from the original M1 reviews. On Intel and AMD you can just look at the actual package power value, which makes things way easier.

 

Even on AMD and Intel systems, the number they provide for CPU power is really only useful for comparing different power stages of the same CPU when changing the clock. Even comparing different CPUs from the same vendor is a little off.

The M1 does expose a load of power draw metrics, many of them roughly equivalent to the 'metrics' that AMD and Intel expose, but these are not the total chip power. Apple do provide a total package power (which, like AMD's and Intel's, is more of an upper bound as reported by the power delivery), and unlike on Intel/AMD systems it includes power for the memory and many other bits and bobs.

The only right comparison is to compare real systems at the power they actually draw. After all, unless you're an OEM laptop vendor this is the only metric that matters, and even if you are an OEM laptop vendor it is still the only metric that matters when comparing to Apple, since you can't go and buy M1 chips from your suppliers.


7 hours ago, leadeater said:

They are basically forced to do it this way on the Apple devices because SoC power isn't exposed, at least that's what I remember from the original M1 reviews. On Intel and AMD you can just look at the actual package power value, which makes things way easier.

 

Probing off the VRMs may not even be possible or practical either.

Anandtech measured the M1's power at the SoC and even the core level. Not sure how exactly they did it though, or whether the same technique would work on Intel and AMD systems.


1 hour ago, hishnash said:

Even on AMD and Intel systems, the number they provide for CPU power is really only useful for comparing different power stages of the same CPU when changing the clock. Even comparing different CPUs from the same vendor is a little off.

No, it's quite accurate, in fact very accurate. What you are pointing at is really the issue of vendor and board parameters that affect the CPU power, like vcore voltage and LLC, but the reported value itself is very accurate to the actual power usage.

 

Any half-decent reviewer will go in and set these kinds of motherboard values manually, and when that's done the margin of difference is very minimal.

 

Edit:

Note that comparisons across vendors and boards are only valid in two situations: the same CPU is used, or the sample size is sufficiently large (nobody is going to achieve the latter as an independent reviewer).


16 minutes ago, LAwLz said:

Anandtech measured the M1 power at the SoC and even the core level. Not sure how exactly they did it though or if the same technique would work on the Intel and AMD systems. 

 

Quote

As we had access to the Mac mini rather than a Macbook, it meant that power measurement was rather simple on the device as we can just hook up a meter to the AC input of the device. It’s to be noted with a huge disclaimer that because we are measuring AC wall power here, the power figures aren’t directly comparable to that of battery-powered devices, as the Mac mini’s power supply will incur a efficiency loss greater than that of other mobile SoCs, as well as TDP figures contemporary vendors such as Intel or AMD publish.

https://www.anandtech.com/show/16252/mac-mini-apple-m1-tested

 

The above is what I remembered from the original, first-ever review of an M1 device; in the later Pro/Max review they figured out how to do it properly (I forgot, heh).

 

Quote

Last year when we reviewed the M1 inside the Mac mini, we did some rough power measurements based on the wall-power of the machine. Since then, we learned how to read out Apple’s individual CPU, GPU, NPU and memory controller power figures, as well as total advertised package power. We repeat the exercise here for the 16” MacBook Pro, focusing on chip package power, as well as AC active wall power, meaning device load power, minus idle power.

https://www.anandtech.com/show/17024/apple-m1-max-performance-review/3

 

And yeah, power down to the individual core level is reported on AMD and Intel systems.
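
For what it's worth, on Linux the Intel/AMD package power can be read from the RAPL energy counter exposed through the powercap interface. A minimal sketch, assuming the usual package-0 zone path (it varies per system, and reading it typically requires root):

```python
import time

# RAPL exposes a monotonically increasing energy counter in microjoules.
# Zone path assumed; check /sys/class/powercap on your machine.
RAPL_PATH = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_energy_uj():
    with open(RAPL_PATH) as f:
        return int(f.read())

# Average package power over a window = delta energy / delta time.
e0, t0 = read_energy_uj(), time.monotonic()
time.sleep(1.0)
e1, t1 = read_energy_uj(), time.monotonic()
print(f"Package power: {(e1 - e0) / 1e6 / (t1 - t0):.1f} W")
```

On Apple Silicon the rough equivalent is Apple's powermetrics tool (run as root), which is presumably where those per-block CPU/GPU/NPU figures in the later Anandtech review came from.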


The Verge confirmed with Apple that (as expected) the base 256GB M2 Air sports a 1x256GB flash storage configuration, just like the base M2 Pro. 

