
Intel caught fudging benchmarks in M1 vs Core i7 11th Gen comparison

Jet_ski
2 hours ago, leadeater said:

I'm not saying you did; your example was straight up bad and incorrect because of what you said, not the example itself. Everything I said after that is just pure explainer. So I'll make it simple: your Final Cut vs Premiere comment was wrong and was not like what Intel did. Intel wasn't swapping software in the performance tests; at no point did Intel compare X software on Mac OS to Y software on Windows. It was always X vs X.

 

All I did after saying your statement was wrong was explain how and why a Final Cut test would be better and why Intel didn't use it. You're complaining that Intel chose software that unfairly utilized their hardware features and not Apple's; well, of course they did, that was the objective. It's marketing.

 

 

No, it is not; you added on to that by saying Intel did something similar, which they did not.

 

 

There is only a single test where the software was changed: the battery life test, as using Chrome on Mac OS would definitely be unfair and would not generate the same power load as Chrome on Windows. That's why Safari was used instead, to make it more fair, not less.

 

You specifically pointed towards a criticism of something that did not actually happen. I'm all for criticizing a company for something they did wrong; I'm not, however, when it's something they didn't actually do. There's plenty of BS marketing here, but there isn't any funny business with changing software in "like for like" comparisons other than the battery test. Intel could potentially be butting up against laws if they did do that; marketing is often the art of truthful lying.

Again, you never really understood what I was trying to say.

 

Final Cut on Intel Macs was fast because it made use of Quick Sync. Premiere didn't at the time.

Topaz Labs' software makes use of an Intel accelerator; on the Mac it didn't make use of the NPU in the M1.

 

Do you get the parallels I was drawing? Sorry that I didn't have an example where the same software was nuked on one OS while working well on the other.

 

Literally nothing productive came out of this conversation, since you just assumed random things rather than asking me if I meant it was similar to comparing different software. Also, we had this conversation before, but you should really admit to being mistaken and take back your accusation that I made things up, when I clearly didn't.


@RedRound2 @leadeater
Your argument is a prime example of why we need benchmarks! As imperfect as benchmarks like Cinebench are, they are the fairest way to compare the raw performance of different devices. It's useless to compare app-specific performance when the apps aren't equally optimized for different devices.

 

Per CPU Monkey, for Cinebench R23 single-core performance we have:

  • Intel Core i7 1185G7 scored 1538
  • Apple M1 scored 1514
  • Intel Core i7 1165G7 scored 1504

For multi-core performance we have:

  • Apple M1 scored 7760
  • Intel Core i7 1185G7 scored 6264
  • Intel Core i7 1165G7 scored 6070

You could go back and forth on which apps are better optimized on which platform. These benchmarks show that Intel's claims of superior performance are laughable. And in terms of power efficiency, even their lower-powered chip loses.
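(For perspective, running the arithmetic on those scores: 1538 / 1514 ≈ 1.016, a ~1.6% single-core lead for the 1185G7, while 7760 / 6264 ≈ 1.24, a ~24% multi-core lead for the M1.)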

 

www.cpu-monkey.com/en/cpu-apple_m1-1804-amp


I actually use Topaz Labs' software on the regular, but...

[attached image: topaz.png]

 

Yeah... they just love using Topaz for their marketing. Not denying that there is a gain there, and as a user of that software suite, it is a consideration point for me. BUT, man, they just love to market the shit out of their accelerator performance.

The Workhorse (AMD-powered custom desktop)

CPU: AMD Ryzen 7 3700X | GPU: MSI X Trio GeForce RTX 2070S | RAM: XPG Spectrix D60G 32GB DDR4-3200 | Storage: 512GB XPG SX8200P + 2TB 7200RPM Seagate Barracuda Compute | OS: Microsoft Windows 10 Pro

 

The Portable Workstation (Apple MacBook Pro 16" 2021)

SoC: Apple M1 Max (8+2 core CPU w/ 32-core GPU) | RAM: 32GB unified LPDDR5 | Storage: 1TB PCIe Gen4 SSD | OS: macOS Monterey

 

The Communicator (Apple iPhone 13 Pro)

SoC: Apple A15 Bionic | RAM: 6GB LPDDR4X | Storage: 128GB internal w/ NVMe controller | Display: 6.1" 2532x1170 "Super Retina XDR" OLED with VRR at up to 120Hz | OS: iOS 15.1


1 hour ago, RedRound2 said:

Again, you never really understood what I was trying to say.

No, I understood perfectly fine; I objected solely to the part in bold where you said Intel had done something they had not done. I chose to talk about some extra things to do with Final Cut, but it seems you've missed the part I was addressing, which was, again: no, Intel didn't do what you said they did.


55 minutes ago, Jet_ski said:

Your argument is a prime example of why we need benchmarks! As imperfect as benchmarks like Cinebench are, they are the fairest way to compare the raw performance of different devices. It's useless to compare app-specific performance when the apps aren't equally optimized for different devices.

Cinebench is only a singular test based on actual software; it's not better than only running Topaz Labs or Final Cut or Premiere, save for the fact that we at least know it's more balanced than Topaz Labs is. The fairest way is to test as many applications as feasible and get an average across them; if you really have to, exclude extreme outliers, or at least explain why a sample point is one.
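To make that concrete, here is a minimal sketch of that kind of aggregation (the application names, scores, and the outlier cutoff are all made up for illustration). Normalizing each application's result to a ratio and taking the geometric mean is a common way to average relative benchmark scores, since it keeps any single application from dominating:

#include <cmath>
#include <iostream>
#include <string>
#include <vector>

// One application's score on each machine (hypothetical numbers).
struct AppResult {
    std::string name;
    double machineA;  // e.g. score on the M1
    double machineB;  // e.g. score on the i7
};

int main() {
    // Made-up scores purely for illustration.
    std::vector<AppResult> results = {
        {"Video export", 120.0, 100.0},
        {"Photo batch",   95.0, 100.0},
        {"Code compile", 110.0, 100.0},
        {"Outlier app",  400.0, 100.0},  // suspiciously accelerator-bound
    };

    // Geometric mean of the per-app ratios: exp(mean(log(ratio))).
    // Unlike an arithmetic mean, one extreme outlier can't swamp it,
    // but it's still good practice to flag or exclude such points.
    double log_sum = 0.0;
    int n = 0;
    for (const auto& r : results) {
        double ratio = r.machineA / r.machineB;
        if (ratio > 3.0) {  // crude outlier cut; always explain any exclusion
            std::cout << "Excluding " << r.name << " (ratio " << ratio << ")\n";
            continue;
        }
        log_sum += std::log(ratio);
        ++n;
    }
    std::cout << "Machine A is " << std::exp(log_sum / n)
              << "x machine B on average\n";
}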


6 hours ago, RedRound2 said:

OS optimizations play into RAM management, OS snappiness, launching apps, boot-up, etc. They play very little into brute-force applications that use math to do tasks.

 

Take a look at 1st gen Ryzen and Threadripper. When they first launched, Windows performance was utter dogshit compared to Linux - and that was in maths-based benchmarks - because the Windows scheduler wasn't optimised to deal with multi-CCX CPUs. And these problems stuck around for years - even 2nd gen Threadripper performed awfully on Win 10 vs Linux. So yes, this sort of OS-level optimisation for individual pieces of hardware can 100% make a difference to raw math compute performance.

 

Also, stop putting words into my mouth. I never said that the entire reason for the M1's performance was software. I said:

Quote

that tight integration between software and hardware goes a long way to boosting Apple's position

Which, last I checked, is not the same thing. The M1 is a very impressive chip, but in many ways so is Intel's offering.

CPU: i7 4790k, RAM: 16GB DDR3, GPU: GTX 1060 6GB


1 hour ago, tim0901 said:

The M1 is a very impressive chip, but in many ways so is Intel's offering.

The main takeaway, I feel, with all the current-generation low-wattage laptop processors is that they're all pretty incredible in some aspect or another.

 

AMD brought serious multi-core chops with the 4000U series, with strong overall gains over previous-generation APUs, and followed it up with the 5000U series.

 

Apple brought incredible performance per watt alongside a whole different instruction set (for desktop platforms), bringing with it major gains in efficiency and improvements in metrics like battery life and noise emissions. 

 

Intel didn't release a product quite as revolutionary, but the Xe iGPU is seriously impressive, alongside the updated media engine, other accelerators, and the new 10nm SuperFin node.

 

The only thing is that AMD and Apple grabbed a lot of the headlines, whilst the Intel part grabbed headlines mostly for its bad marketing.



14 hours ago, leadeater said:

No, I understood perfectly fine; I objected solely to the part in bold where you said Intel had done something they had not done. I chose to talk about some extra things to do with Final Cut, but it seems you've missed the part I was addressing, which was, again: no, Intel didn't do what you said they did.

No, you didn't. I made a clear distinction and drew parallels by bringing up an example: Final Cut used hardware acceleration, Premiere did not. That was the whole point. Whatever else you think I did is exactly that, what you think I said, and nothing more.

12 hours ago, tim0901 said:

Take a look at 1st gen Ryzen and Threadripper. When they first launched, Windows performance was utter dogshit compared to Linux - and that was in maths-based benchmarks - because the Windows scheduler wasn't optimised to deal with multi-CCX CPUs. And these problems stuck around for years - even 2nd gen Threadripper performed awfully on Win 10 vs Linux. So yes, this sort of OS-level optimisation for individual pieces of hardware can 100% make a difference to raw math compute performance.

So are we to assume that Windows hasn't been optimised for Intel? That Intel Macs were severely nuked in optimization on macOS (apart from thermals, which they did address in later generations and got under control)?

 

Yeah, Windows didn't know how to use CCXs initially, but it's pretty stupid to assume that Intel chips have so much more potential just locked away because Intel and Microsoft couldn't talk to each other. Give me a break. Especially when Intel is rapidly losing ground in market share.

12 hours ago, tim0901 said:

Also, stop putting words into my mouth. I never said that the entire reason for the M1's performance was software. I said:

Which, last I checked, is not the same thing. The M1 is a very impressive chip, but in many ways so is Intel's offering.

I don't need to requote you again. You went to great lengths talking about how software optimizations played a huge part in the M1's performance. I already quoted exactly what you said to leadeater, and he chose to ignore the response, as he always does when he's proven wrong.


2 hours ago, RedRound2 said:

So are we to assume that Windows hasn't been optimised for Intel?

To a degree, yes. Remember back when I mentioned that Apple's pièce de résistance was their tight hardware:software integration?

 

Even before the M1 days, that was the case: it was not uncommon for a Mac to perform quite a bit better in performance and battery life than a Windows machine with equivalent hardware. Unlike a lot of Windows vendors, Apple doesn't sell a bazillion computers with hugely varying performance classes; they only sell a few SKUs that belong in roughly the same group. Add to that the fact that macOS was never intended to be used outside Apple's own machines, and it is much easier to optimize the OS for the hardware it is going to run on. The M1 makes this even more so, because it is developed in-house.

 

Windows doesn't enjoy this same luxury, simply because it is expected to run on a hugely varied range of hardware, from the crappiest el-cheapo laptops running on Celerons to bastardized high-performance machines with 8 cores or more. This disparity will likely become more apparent when all of Apple's consumer and "prosumer" machines make the switch to Apple's own silicon.



10 hours ago, RedRound2 said:

So are we to assume that Windows hasn't been optimised for Intel? That Intel Macs were severely nuked in optimization on macOS (apart from thermals, which they did address in later generations and got under control)?

 

I'm fed up with this argument. I've written a whole rant on optimisation in the spoiler, but I don't expect it to change your mind.
 

Spoiler

 

You realize that "optimised" is a rather fluffy word, right?

 

Windows is as "optimised" for Intel as it can be. But that doesn't mean that Apple can't do better with their OS, because they don't have to think about stuff like legacy compatibility and a huge array of hardware configurations. Windows and MacOS have completely different design goals, which limits how much you can optimise for a given set of hardware.

 

Think of it like the difference between a for loop and some explicitly coded statements:

for (int i = 0; i < 3; i++) {
    do_something(i);
}

vs

do_something(0);
do_something(1);
do_something(2);

 

The for loop is a more flexible solution - you can easily make it deal with more iterations - but it has a measurable overhead compared to the explicit statements due to the branching involved. The explicit statements are faster, but can't deal with anything other than what's expected.

 

Windows is the flexible for loop here. It's all about running on pretty much any hardware configuration and prides itself on still being able to run your legacy programs from 2005. MacOS, on the other hand, is the explicit statements: it couldn't give a fuck about old software, it just wants modern software to run as fast as possible on the select few hardware configurations that they want you to use.

 

And this tradeoff has a performance penalty. Windows, because it needs to work on pretty much anything, will spend ages going through complex trees to dynamically figure out how to utilise your system's hardware, whilst MacOS can simply compare your device ID against a lookup table and instantly know what's best - but can't deal with unexpected hardware configurations as well.
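To make that contrast concrete, here's a toy version of the two approaches - every name, table entry, and heuristic below is invented for illustration, not how either OS is actually implemented:

#include <iostream>
#include <string>
#include <unordered_map>

struct TuningParams { int scheduler_hint; bool aggressive_boost; };

// The "MacOS" approach: the OS knows every machine it will ever run on,
// so tuning is a constant-time lookup keyed on a known device ID.
TuningParams lookup_tuning(const std::string& device_id) {
    static const std::unordered_map<std::string, TuningParams> table = {
        {"MacBookPro16,1", {1, true}},   // hypothetical entries
        {"MacBookAir10,1", {2, false}},
    };
    auto it = table.find(device_id);
    return it != table.end() ? it->second : TuningParams{0, false};
}

// The "Windows" approach: the hardware is unknown ahead of time, so the
// OS probes capabilities at runtime and derives a policy from them.
TuningParams probe_tuning(int core_count, bool has_smt, int cache_domains) {
    TuningParams p{0, false};
    if (cache_domains > 1) p.scheduler_hint = 3;        // e.g. multi-CCX handling
    if (core_count >= 8 && has_smt) p.aggressive_boost = true;
    return p;
}

int main() {
    TuningParams mac = lookup_tuning("MacBookPro16,1");  // instant, but brittle
    TuningParams win = probe_tuning(8, true, 2);         // flexible, but slower
    std::cout << mac.scheduler_hint << " vs " << win.scheduler_hint << "\n";
}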

 

This is just one simple example of course.

 

So instead I'm just going to show you proof of what I'm talking about.

 

Here's a Geekbench run for the 16-inch MacBook Pro, with an i9-9880H.

Here's another, with identical hardware, but now running on Windows through Bootcamp (which is not virtualisation).

 

As you can see, there's a 6% performance difference here - in both single-core and multi-core performance - ON THE EXACT SAME MACHINE. The only difference is that now it's running Windows rather than MacOS. If both OSes were optimised equally for the hardware, then we should see no performance difference. But we do, because MacOS is better optimised for that specific hardware configuration.

 

That 6% right there is the OS-level benefit I've been talking about, the one you've been claiming doesn't exist. If you're now trying to compare a new Windows laptop to the i9 in this Mac, which of the above Geekbench scores should you choose? The Bootcamp score is a fairer comparison as you're keeping the software suite equal, but it isn't how most people use their Macs. But if you make the comparison using the MacOS score, you're now clearly handing a ~6% performance benefit to the Mac by doing so. You have to make a compromise, but you should be open about the consequences of that decision. It's just like DLSS or other GPU technologies in that sense - you have to take their benefits into account when comparing hardware solutions.
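(Illustratively, with round numbers rather than the actual scores: if the i9 scores 1000 under Bootcamp, the same machine lands around 1060 under macOS. A Windows laptop scoring 1030 beats the Mac's Bootcamp number but loses to its macOS number, on identical silicon - which is exactly why you have to state which score you compared against.)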



14 hours ago, RedRound2 said:

No, you didn't. I made a clear distinction and drew parallels by bringing up an example: Final Cut used hardware acceleration, Premiere did not. That was the whole point. Whatever else you think I did is exactly that, what you think I said, and nothing more.

Um, sorry, but no. Go back to the quote and look at the bold part; you literally said it. It was wrong, simple as that. Your clear distinction was destroyed when you said Intel did something they didn't do. Like I said, I don't at all disagree with the comparison or example you talked about, that is true; it's just not true that Intel did anything similar.

 

Intel never swapped software in any of the performance tests; you claimed Intel did something similar, and no, they did not.


On 2/10/2021 at 12:49 PM, D13H4RD said:

To a degree, yes. Remember back when I mentioned that Apple's pièce de résistance was their tight hardware:software integration?

 

17 hours ago, tim0901 said:

I'm fed up with this argument. I've written a whole rant on optimisation in the spoiler, but I don't expect it to change your mind.

 

Both of you are saying the same thing, yet you either missed or ignored the roughly 5% performance overhead that may come from the degree of optimization of each OS, which I raised earlier (quoted myself below). Yet we don't see just a 5% difference in anything here; it's much more than that, which says something about the hardware, not optimizations. Even if Apple had full control of the hardware stack, they can only make things significantly faster by either using space-age technology in silicon (it's a joke - don't jump on me for this) or by using dedicated accelerators, which doesn't seem to be the case so far apart from Rosetta 2. But forget Apple; it's Intel here who is seemingly in the lead, and the original conclusion is that they cherry-picked and swapped laptops to make things look better for them - when in reality Apple's low-power, almost fanless M1 chip is competing handily with the i7

On 2/9/2021 at 11:15 AM, RedRound2 said:

And we don't really see a 5% difference in those benchmarks, do we? Those 5-10% could be due to OS overhead, but that clearly doesn't seem to be the case

Specifically for tim0901: you can rant all you want about some magical optimizations, but the OS doesn't really have much to do when you focus your tests on a dedicated program that's purely brute force. Maybe a maximum overhead of 5-10% on mature platforms like Windows and macOS.


14 hours ago, leadeater said:

Um, sorry, but no. Go back to the quote and look at the bold part; you literally said it. It was wrong, simple as that. Your clear distinction was destroyed when you said Intel did something they didn't do. Like I said, I don't at all disagree with the comparison or example you talked about, that is true; it's just not true that Intel did anything similar.

 

Intel never swapped software in any of the performance tests; you claimed Intel did something similar, and no, they did not.

Good god. In what context does "like" mean "the exact same"? I assume I don't have to quote the definition of the word "like". I drew a comparison, and by no means was it a perfect comparison, but it was enough to get the point across - and you yourself don't disagree with the example I gave, so what's the issue?

 

 You made an incorrect conclusion. Just deal with it and move on.


33 minutes ago, RedRound2 said:

Good god. In what context does "like" mean "the exact same"?

It doesn't, but your example is nothing like what Intel did. The problem with your example is that it has a very specific and defined issue in it; the issue is primarily the change in software, so nothing is like this example other than actually changing software. You cannot claim something is like something else if they are in no way similar. The usage of "like" and its definition isn't the issue; it's your belief that the two are similar in the first place.

 

If you honestly believe what Intel did is similar, then fine; that's not something I can likely change your mind on, I expect, so I won't attempt to.

 

However, switching software, had it been done, would be so much greater a degree of improper conduct than simply choosing software that is only optimized correctly for your hardware that I cannot justify saying one is "like" the other. It's a whole different level of bad.

 

33 minutes ago, RedRound2 said:

 You made an incorrect conclusion. Just deal with it and move on.

The only conclusion I made is that your usage of "like" was improper; if you have a difference of opinion on that, it doesn't make mine incorrect. If we cannot agree, then it's nothing more than a difference of opinion; nobody is really incorrect. I just believe the seriousness of your example to be so far beyond what actually took place that it's not proper to say one is like the other.


35 minutes ago, RedRound2 said:

Both of you are saying the same thing, yet you either missed or ignored the roughly 5% performance overhead that may come from the degree of optimization of each OS, which I raised earlier (quoted myself below). Yet we don't see just a 5% difference in anything here; it's much more than that, which says something about the hardware, not optimizations. Even if Apple had full control of the hardware stack, they can only make things significantly faster by either using space-age technology in silicon (it's a joke - don't jump on me for this) or by using dedicated accelerators, which doesn't seem to be the case so far apart from Rosetta 2.

Like we said, Apple's M1 Macs are fast not just because of supremely tight hardware:software integration but also because the chip itself is fast as heck. I thought that point would've been drilled into the ground so deeply that oil could be extracted by now. The chip alone is fast, but Apple's software engineering team has also done wonders in tuning their software to intelligently utilize all of the chip, in a way that allows its performance to be used in the most efficient manner and benefits overall day-to-day performance, battery life and such. This isn't discrediting Apple's silicon engineers at all, because the M1 chip alone is already well known to be hugely impressive. It's just that not enough credit is given to Apple's software team for intelligently utilizing the resources at hand, which lets the M1 Macs perform as well as they do - particularly the MacBook Air, which has become the ultimate ultraportable for its blend of great overall performance, stellar battery life, and silence thanks to not having a fan, without adverse thermal throttling outside of sustained heavy loads.

 

35 minutes ago, RedRound2 said:

it's Intel here who is seemingly in the lead, and the original conclusion is that they cherry-picked and swapped laptops to make things look better for them - when in reality Apple's low-power, almost fanless M1 chip is competing handily with the i7

I'm not sure what else you were expecting from a presentation with results done by Intel themselves. Them to just say "Yeah, our 45W+++ Core i7 10750H is handily beaten to a pulp by an ARM processor consuming at most 20-ish watts on a MacBook Pro"? Not that it matters at this point because the M1 has been out for more than long enough for independent benchmarks to weigh in.

 

Yes, I know the comparison is done with a 1185G7/1165G7, but you get my point, in the sense that it can compete with a higher-TDP processor in terms of performance.



3 hours ago, D13H4RD said:

I'm not sure what else you were expecting from a presentation with results done by Intel themselves. Them to just say "Yeah, our 45W+++ Core i7 10750H is handily beaten to a pulp by an ARM processor consuming at most 20-ish watts on a MacBook Pro"? Not that it matters at this point because the M1 has been out for more than long enough for independent benchmarks to weigh in.

Just for clarification it should be: "Yeah, our 45W+++ CPU in Core i7 10750H is handily beaten to a pulp by an ARM CPU and integrated GPU that when running full speed together consumes some 20-ish watts on a MacBook Pro"


1 hour ago, Spindel said:

Just for clarification it should be: "Yeah, our 45W+++ CPU in Core i7 10750H is handily beaten to a pulp by an ARM CPU and integrated GPU that when running full speed together consumes some 20-ish watts on a MacBook Pro"

Not that it actually needs to run at full-speed



1 minute ago, D13H4RD said:

Not that it actually needs to run at full-speed

True. 

But consider that when Anandtech measured the M1 Mini running full tilt, they got 35 W at the wall socket. So that's the entire system (including the ridiculous 150 W PSU that cannot be running very efficiently in the M1 Mini): fans, SSD, network card, BT, etc.


Just now, Spindel said:

True. 

But consider that when Anandtech measured the M1 Mini running full tilt, they got 35 W at the wall socket. So that's the entire system (including the ridiculous 150 W PSU that cannot be running very efficiently in the M1 Mini): fans, SSD, network card, BT, etc.

Yep, and that's a mini-desktop that has to be plugged into a wall. Nuts.

 

I still think the MacBook Air is the most impressive of them all. Fanless and thin, yet it runs sprightly, especially in bursts, whilst lasting a damn long while on a charge.



Intel fudging benchmarks in presentations? Say it ain't so!

 

Seriously, what did you expect?

 

On a side note, Apple did the exact same in their presentation of the M1 by boasting about multipliers with no reference, soooooooo... wait for real-world benchmarks, folks.

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*


On 2/11/2021 at 3:25 PM, D13H4RD said:

Like we said, Apple's M1 Macs are fast not just because of supremely tight hardware:software integration but also because the chip itself is fast as heck. I thought that point would've been drilled into the ground so deeply that oil could be extracted by now. The chip alone is fast, but Apple's software engineering team has also done wonders in tuning their software to intelligently utilize all of the chip, in a way that allows its performance to be used in the most efficient manner and benefits overall day-to-day performance, battery life and such. This isn't discrediting Apple's silicon engineers at all, because the M1 chip alone is already well known to be hugely impressive. It's just that not enough credit is given to Apple's software team for intelligently utilizing the resources at hand, which lets the M1 Macs perform as well as they do - particularly the MacBook Air, which has become the ultimate ultraportable for its blend of great overall performance, stellar battery life, and silence thanks to not having a fan, without adverse thermal throttling outside of sustained heavy loads.

This is a statement I agree with. I have no issues whatsoever with this argument. I originally replied to the guy who went on writing paragraphs about how Apple achieved what they did with software optimizations. As you said now and I said before, things like battery life, snappiness, and big.LITTLE core management can all be attributed to OS software optimizations, which do indeed make a significant difference. My point is that in hard brute-force tasks the effect of those optimizations is minimised, given that the OS is mature and has certain basics in place, like task priority and so on - which applies to both Windows and macOS.

On 2/11/2021 at 3:25 PM, D13H4RD said:

I'm not sure what else you were expecting from a presentation with results done by Intel themselves. Them to just say "Yeah, our 45W+++ Core i7 10750H is handily beaten to a pulp by an ARM processor consuming at most 20-ish watts on a MacBook Pro"? Not that it matters at this point because the M1 has been out for more than long enough for independent benchmarks to weigh in.

 

Yes, I know the comparison is done with a 1185G7/1165G7, but you get my point, in the sense that it can compete with a higher-TDP processor in terms of performance.

They should've continued to stay quiet instead of embarrassing themselves, was my point. For some reason they're more bitter about Apple and the M1 than about AMD.


4 minutes ago, RedRound2 said:

They should've continued to stay quiet instead of embarrassing themselves, was my point. For some reason they're more bitter about Apple and the M1 than about AMD.

For once, AMD's marketing department actually decided to do the smart thing and not directly attack the M1 in their PR and marketing statements. 

 

Which is about the smartest thing they've done in years given their track record. 



On 2/11/2021 at 3:16 PM, leadeater said:

It doesn't, but your example is nothing like what Intel did. The problem with your example is that it has a very specific and defined issue in it; the issue is primarily the change in software, so nothing is like this example other than actually changing software. You cannot claim something is like something else if they are in no way similar. The usage of "like" and its definition isn't the issue; it's your belief that the two are similar in the first place.

I am really not sure what is up with you, seemingly being unable to make any connection whatsoever.

 

I can eat oranges, but I feel nauseous when I eat grapes.

I can travel by train, but feel nauseous when I fly.

 

Does that mean grapes can fly? Or that a train is an orange? Oh, but wait, it's a wrong comparison, because I cannot compare an edible thing with travel? It's not the same. The airline never intended to sell me oranges.

 

I feel nauseous when I fly, like when I eat grapes.

 

This is literally what your argument is: making up random connections instead of the obvious one that I've pointed out to you at least 15 times in this thread by now.

 

A few years ago, Final Cut's superior rendering performance was due to Quick Sync acceleration, as opposed to Premiere, which did not take advantage of Quick Sync.

Topaz Labs' software on Windows made use of the ML accelerator on Intel, while it did not make use of the NPU in the M1.

A fair comparison would be either to take advantage of both accelerators, or to make the CPU itself do the work on both.

 

If you still don't get it, I'll dumb it down further for you. Imagine the accelerators being discrete components rather than being included on-die in the processor, like a graphics card. Along those lines, the Intel system made use of a dedicated graphics card to show superior performance in games, while the game ran on the CPU on the Mac.
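To spell that out in code, here's a toy sketch of why such a comparison is misleading - the function names and timings below are invented for illustration, not any real API:

#include <iostream>
#include <string>

// Hypothetical per-frame cost in milliseconds; the numbers are made up
// purely to illustrate the argument about acceleration paths.
double process_frame(bool use_accelerator) {
    return use_accelerator ? 2.0   // dedicated hardware path (e.g. Quick Sync, NPU)
                           : 20.0; // generic CPU fallback path
}

void benchmark(const std::string& machine, bool accelerator_enabled) {
    std::cout << machine << ": " << process_frame(accelerator_enabled)
              << " ms/frame\n";
}

int main() {
    // An "Intel vs M1" style comparison is only apples-to-apples if the
    // accelerator is enabled (or disabled) on BOTH sides:
    benchmark("Machine A (accelerator on)", true);    // 2 ms/frame
    benchmark("Machine B (accelerator off)", false);  // 20 ms/frame
    // A 10x "win" here measures which code path was exercised,
    // not which processor is faster.
}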

 

