
The M1 Benchmarks Continue - Emulated performance appears to *still* outperform any Intel-based Mac

Qub3d

Note to mods: I am aware of the existing post on the general M1 benchmarks. This post is about the performance of an M1 chip when emulating x86. I believe this is significant because this performance, not the general native benchmarks, is what is going to shape the user experience initially.

 

Summary

 

We've already started to see native benchmarks for Apple's new M1 chip pop up in various places, including on this forum.

However, some exciting benchmarks showing emulated performance were spotted today. The short of it, as noted on Hacker News: "Apple Silicon M1 Emulating x86 Is Still Faster Than Every Other Mac".


 

 

Quotes

Quote

Since this version of Geekbench is running through Apple's translation layer Rosetta 2, an impact on performance is to be expected. Rosetta 2 running x86 code appears to be achieving 78%-79% of the performance of native Apple Silicon code.

Despite the impact on performance, the single-core Rosetta 2 results still outperform any other Intel Mac, including the 2020 27-inch iMac with Intel Core i9-10910 @ 3.6GHz.

 

My thoughts

I cannot overstate how huge this is. Emulation performance is going to be critical in the initial year or two while developers work on porting their applications. If these benchmarks hold true, it means that, out of the box today, a new Mac with an M1 chip is going to outperform on virtually any application when compared to Intel-based Macs.
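To put rough numbers on that, here's a quick back-of-the-envelope in Python. The scores are the approximate Geekbench 5 single-core figures from the sources below, so treat them as assumptions rather than anything I measured myself:

# Approximate Geekbench 5 single-core scores (assumed from the linked sources)
m1_native  = 1687   # M1 MacBook Air, native arm64 build of Geekbench
m1_rosetta = 1313   # same machine, x86-64 Geekbench run through Rosetta 2
imac_i9    = 1250   # 2020 27-inch iMac, Core i9-10910 (roughly)

print(f"Rosetta vs native:  {m1_rosetta / m1_native:.0%}")   # ~78%
print(f"Rosetta vs i9 iMac: {m1_rosetta / imac_i9:.2f}x")    # still above 1x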

 

As a developer, this is what will be the difference between me asking my manager for an upgrade when my current laptop's life cycle is up, versus opting to wait. Clearly, Apple knows this is the case for lots of similar-minded people, who are not just interested in the newest thing but need something reliable to get their jobs done.

 

Sources

https://www.macrumors.com/2020/11/15/m1-chip-emulating-x86-benchmark/

https://www.cultofmac.com/727483/rosetta-m1-mac-emulate-intel-software-performance/

https://browser.geekbench.com/v5/cpu/4731213

Edited by Qub3d
added benchmark link

F#$k timezone programming. Use UTC! (See XKCD #1883)

PC Specs:

Ryzen 5900x, MSI 3070Ti, 2 x 1 TiB SSDs, 32 GB 3400 DDR4, Cooler Master NR200P

 

 


I feel sad for Intel, with the pressure on the Mac side (M1) and the PC side (Ryzen).

They are the underdog right now.

Ryzen 5700g @ 4.4ghz all cores | Asrock B550M Steel Legend | 3060 | 2x 16gb Micron E 2666 @ 4200mhz cl16 | 500gb WD SN750 | 12 TB HDD | Deepcool Gammax 400 w/ 2 delta 4000rpm push pull | Antec Neo Eco Zen 500w


8 minutes ago, Qub3d said:

when compared to Intel-based Macs.

Keyword here; although I think the performance is insane, the x86 emulation doesn't look as insane.

 

kudos to Apple/Mac though.

AMD blackout rig

 

cpu: ryzen 5 3600 @4.4ghz @1.35v

gpu: rx5700xt 2200mhz

ram: vengeance lpx c15 3200mhz

mobo: gigabyte b550 auros pro 

psu: cooler master mwe 650w

case: masterbox mbx520

fans:Noctua industrial 3000rpm x6

 

 


Just now, SupaKomputa said:

They are the underdog right now.

Can't be an underdog if you can't pull through with a better-performing processor.

AMD blackout rig

 

cpu: ryzen 5 3600 @4.4ghz @1.35v

gpu: rx5700xt 2200mhz

ram: vengeance lpx c15 3200mhz

mobo: gigabyte b550 auros pro 

psu: cooler master mwe 650w

case: masterbox mbx520

fans:Noctua industrial 3000rpm x6

 

 


15 minutes ago, Qub3d said:

 

 

As a developer, this is what will be the difference between me asking my manager for an upgrade when my current laptop's life cycle is up, versus opting to wait. Clearly, Apple knows this is the case for lots of similar-minded people, who are not just interested in the newest thing but need something reliable to get their jobs done.

 

A 20% performance cut versus native is nothing to sneeze at. However, if you look at most CPUs out there, a 20% improvement is the difference between an Intel Core i7-8557U @ 1.70GHz and an Intel Core i7-1165G7 @ 2.80GHz, which is a 40% clock increase.

 

Like, this is OK, but the comparison isn't realistic. A program that uses instructions that are harder to emulate is going to run much slower, if at all.
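Putting those percentages side by side (just arithmetic on the base clocks quoted above, nothing benchmarked):

old_clock = 1.70   # i7-8557U base clock, GHz
new_clock = 2.80   # i7-1165G7 base clock, GHz

print(f"clock gain relative to the new chip: {(new_clock - old_clock) / new_clock:.0%}")  # ~39%
print(f"clock gain relative to the old chip: {(new_clock - old_clock) / old_clock:.0%}")  # ~65%

# and the flip side of a ~20% translation penalty:
print(f"native is {1 / 0.80 - 1:.0%} faster than the translated code")                    # 25%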

 


19 minutes ago, Qub3d said:

However, some exciting benchmarks showing emulated performance were spotted today. The short of it, as noted on Hacker News: "Apple Silicon M1 Emulating x86 Is Still Faster Than Every Other Mac".

If only Windows on ARM could be like this! Won't Windows on ARM also improve? Because if the devs (3rd-party ones like Adobe) start to make their apps on ARM for Mac, they can do so for Windows on ARM too, right?


When will we see real reviews / benchmarks?

2020 is an exciting year for hardware.

Ryzen 5700g @ 4.4ghz all cores | Asrock B550M Steel Legend | 3060 | 2x 16gb Micron E 2666 @ 4200mhz cl16 | 500gb WD SN750 | 12 TB HDD | Deepcool Gammax 400 w/ 2 delta 4000rpm push pull | Antec Neo Eco Zen 500w


14 minutes ago, Kisai said:

A 20% performance cut versus native is nothing to sneeze at. However, if you look at most CPUs out there, a 20% improvement is the difference between an Intel Core i7-8557U @ 1.70GHz and an Intel Core i7-1165G7 @ 2.80GHz, which is a 40% clock increase.

Like, this is OK, but the comparison isn't realistic. A program that uses instructions that are harder to emulate is going to run much slower, if at all.

 

I suppose I should clarify what Rosetta 2 is doing. I use "emulate" only in the sense that a program built for x86 is running on ARM hardware. What Rosetta mostly does is "transpile" the x86-64 binary to ARM64 ahead of time, when apps are first installed, and only fall back to JIT emulation as a last resort. That helps a lot with difficult instructions -- although going from CISC to RISC is not an easy path.
 

Additionally, Rosetta can only support older instruction set extensions, as newer ones are still under patent. I have yet to see how Apple intends to handle this, but given that it mainly affects AVX-512, I'm not too worried for day-to-day stuff.
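For anyone who wants the shape of that in code, here's a minimal sketch of the "translate once at install/first launch, cache it, fall back to on-the-fly translation" flow. It's purely illustrative -- the function names and the cache are made up for this post, not Apple's actual implementation:

# Illustrative stand-ins for the real translator and for native execution.
translation_cache: dict[str, str] = {}

def translate_to_arm64(x86_block: str) -> str:
    # Stand-in for the real x86-64 -> arm64 binary translator.
    return f"arm64({x86_block})"

def execute(arm64_block: str) -> str:
    # Stand-in for running the translated code natively.
    return f"ran {arm64_block}"

def install(x86_blocks: list[str]) -> None:
    # "Install / first launch": translate every static code block once, up front.
    for block in x86_blocks:
        translation_cache[block] = translate_to_arm64(block)

def run(x86_block: str) -> str:
    translated = translation_cache.get(x86_block)
    if translated is None:
        # Fallback: code only discovered at run time (a JIT inside the app,
        # self-modifying code, etc.) has to be translated on the fly.
        translated = translate_to_arm64(x86_block)
    return execute(translated)

install(["block_A", "block_B"])
print(run("block_A"))      # fast path: already translated at install time
print(run("jit_block_C"))  # slow path: translated on demand

The real thing obviously works on machine code pages rather than strings, but the control flow is the point.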

F#$k timezone programming. Use UTC! (See XKCD #1883)

PC Specs:

Ryzen 5900x, MSI 3070Ti, 2 x 1 TiB SSDs, 32 GB 3400 DDR4, Cooler Master NR200P

 

 


10 minutes ago, Qub3d said:

-- although going from CISC to RISC is not an easy path.

x86 has been RISC internally since Pentium Pro. Apple probably just figured out a clever way to do a similar microcode translation.
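As a rough illustration of what "RISC internally" means (a made-up decomposition, not a dump from any real decoder), a single memory-operand x86 instruction gets broken into simpler micro-ops along these lines:

# Hypothetical example: one read-modify-write x86 instruction split into micro-ops
x86_instruction = "add [rbx], eax"   # one instruction: load, add, store
micro_ops = [
    "load  tmp <- [rbx]",            # read the memory operand
    "add   tmp <- tmp + eax",        # simple register-register ALU op
    "store [rbx] <- tmp",            # write the result back
]
print(x86_instruction, "->")
for uop in micro_ops:
    print("  ", uop)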
 

 

Anyway RIP x86, finally you’ll die 


17 minutes ago, Spindel said:

x86 has been RISC internally since Pentium Pro. Apple probably just figured out a clever way to do a similar microcode translation.
 

 

Anyway RIP x86, finally you’ll die 

I highly doubt that. We will see once we get official reviews, not one benchmark that we try to extrapolate from. Plus, there are still many areas where x86 has a strong lead that won't be made up. Trying to run games on an ARM system is not going to be a good experience anytime soon. Sure, there may be cases where it works just fine, but just like Linux gaming it will be pretty hit or miss.


7 minutes ago, Brooksie359 said:

I highly doubt that. We will see once we get official reviews, not one benchmark that we try to extrapolate from. Plus, there are still many areas where x86 has a strong lead that won't be made up. Trying to run games on an ARM system is not going to be a good experience anytime soon. Sure, there may be cases where it works just fine, but just like Linux gaming it will be pretty hit or miss.

Excuses.
 

I promise you my Nintendo Switch (an ARM system) runs games perfectly fine.

 

But to be honest, with the Mac I'm not going to be able to argue against this point, since the Mac has never been stellar in terms of available games (I do game from time to time on my current Mac (<3 Civ6)).

 

There are other uses for computers than games. And the lack of game support on both Linux and Macs is not determined by the CPU architecture.
 

Arguments like this against the M1's performance feel more like grasping at straws.


I really look forward to real reviews. I'm particularly interested in how well the GPU portion keeps up with low-end discrete cards and AMD APUs, since it appears it will trounce Intel iGPUs.


21 minutes ago, justpoet said:

I really look forward to real reviews. I'm particularly interested in how well the GPU portion keeps up with low-end discrete cards and AMD APUs, since it appears it will trounce Intel iGPUs.

For application performance the M1 will probably trounce AMD APUs as well; that's pretty much the purpose of the GPU in the M1, so I imagine it'll be optimized a lot for that. Either way, the raw compute of AMD APUs is slightly lower than the M1 GPU's, so you do have to step up to a dGPU to get something faster. But then you'll have the same problem: the M1 has been optimized for a purpose, while an RX 580 is much more general, so even though it's roughly 2x faster than the M1 GPU on paper, I won't be surprised to see the M1 GPU give better in-application performance in Final Cut Pro etc.
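Roughly, in peak FP32 terms (the numbers below are the commonly quoted approximations, so treat them as assumptions rather than measurements):

# Approximate peak FP32 throughput in TFLOPS (assumed figures)
tflops = {
    "Apple M1 (8-core GPU)":    2.6,
    "AMD Renoir APU (Vega 8)":  1.8,
    "AMD RX 580":               6.2,
}
m1 = tflops["Apple M1 (8-core GPU)"]
for name, t in tflops.items():
    print(f"{name}: {t:.1f} TFLOPS (~{t / m1:.1f}x the M1 GPU)")

Peak TFLOPS is a crude proxy, which is exactly why in-application results can go the other way.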


At least now we know why Apple has been running their Intel MacBooks at TJMax for the last few years.

Main Rig:-

Ryzen 7 3800X | Asus ROG Strix X570-F Gaming | 16GB Team Group Dark Pro 3600Mhz | Corsair MP600 1TB PCIe Gen 4 | Sapphire 5700 XT Pulse | Corsair H115i Platinum | WD Black 1TB | WD Green 4TB | EVGA SuperNOVA G3 650W | Asus TUF GT501 | Samsung C27HG70 1440p 144hz HDR FreeSync 2 | Ubuntu 20.04.2 LTS |

 

Server:-

Intel NUC running Server 2019 + Synology DSM218+ with 2 x 4TB Toshiba NAS Ready HDDs (RAID0)


1 hour ago, Spindel said:

Excuses.
 

I promise you my Nintendo Switch (an ARM system) runs games perfectly fine.

 

But to be honest, with the Mac I'm not going to be able to argue against this point, since the Mac has never been stellar in terms of available games (I do game from time to time on my current Mac (<3 Civ6)).

 

There are other uses for computers than games. And the lack of game support on both Linux and Macs is not determined by the CPU architecture.

Arguments like this against the M1's performance feel more like grasping at straws.

You said this was the death of x86 chips. I was saying that x86 chips are here to stay and that they still have their place and use cases. I think you are the one being single-minded about what the use case of a computer is if you honestly believe that ARM will take over and x86 chips will die out. Also, a Nintendo Switch is not the high-performance gaming that x86 processors are known for, and the Switch is still far from having as many titles as Windows 10 running on the x86 architecture.


This makes me wonder though, why are their higher-priced models still on Intel CPUs when the M1 is apparently much faster?

 

https://www.apple.com/shop/buy-mac/macbook-pro/13-inch

Gaming HTPC:

R5 5600X - Cryorig C7 - Asus ROG B350-i - EVGA RTX2060KO - 16gb G.Skill Ripjaws V 3333mhz - Corsair SF450 - 500gb 960 EVO - LianLi TU100B


Desktop PC:
R9 3900X - Peerless Assassin 120 SE - Asus Prime X570 Pro - Powercolor 7900XT - 32gb LPX 3200mhz - Corsair SF750 Platinum - 1TB WD SN850X - CoolerMaster NR200 White - Gigabyte M27Q-SA - Corsair K70 Rapidfire - Logitech MX518 Legendary - HyperXCloud Alpha wireless


Boss-NAS [Build Log]:
R5 2400G - Noctua NH-D14 - Asus Prime X370-Pro - 16gb G.Skill Aegis 3000mhz - Seasonic Focus Platinum 550W - Fractal Design R5 - 
250gb 970 Evo (OS) - 2x500gb 860 Evo (Raid0) - 6x4TB WD Red (RaidZ2)

Synology-NAS:
DS920+
2x4TB Ironwolf - 1x18TB Seagate Exos X20

 

Audio Gear:

Hifiman HE-400i - Kennerton Magister - Beyerdynamic DT880 250Ohm - AKG K7XX - Fostex TH-X00 - O2 Amp/DAC Combo - 
Klipsch RP280F - Klipsch RP160M - Klipsch RP440C - Yamaha RX-V479

 

Reviews and Stuff:

GTX 780 DCU2 // 8600GTS // Hifiman HE-400i // Kennerton Magister
Folding all the Proteins! // Boincerino

Useful Links:
Do you need an AMP/DAC? // Recommended Audio Gear // PSU Tier List 


Just now, FloRolf said:

This makes me wonder though, why are their higher-priced models still on Intel CPUs when the M1 is apparently much faster?

 

https://www.apple.com/shop/buy-mac/macbook-pro/13-inch

Because they started with switching out their "low-end" models. It just happens to be that they seem to outperform their high-end models with Intel CPUs.


2 hours ago, Letgomyleghoe said:

Keyword here; although I think the performance is insane, the x86 emulation doesn't look as insane.

Rosetta 2 is not an emulation layer; it translates the app at first run from x86-64 assembly to ARM64 assembly, then runs the app as a native ARM64 application.


10 minutes ago, FloRolf said:

This makes me wonder though, why are their higher-priced models still on Intel CPUs when the M1 is apparently much faster?

 

https://www.apple.com/shop/buy-mac/macbook-pro/13-inch

I/O: the M1 only supports two USB 4 ports. That is why the four-port model is still Intel.

The two-port MBP should never have had the MBP label; it was always just an MBA in an MBP case. I hope that changes when they do the 14" refresh.


Oh, and remember that a substantial amount of the score is from the AES portion of GB, which probably punishes the M1 a bit when running through Rosetta :P


9 minutes ago, Brooksie359 said:

You said this was the death of x86 chips. I was saying that x86 chips are here to stay and that they still have their place and use cases. I think you are the one being single-minded about what the use case of a computer is if you honestly believe that ARM will take over and x86 chips will die out. Also, a Nintendo Switch is not the high-performance gaming that x86 processors are known for, and the Switch is still far from having as many titles as Windows 10 running on the x86 architecture.

Of course x86 won't die because of this. For example, Windows OEMs only have access to bad ARM chips, so they will be stuck with x86 for the foreseeable future.

For Apple, however, I think this demonstrates that there is no reason for them to keep using x86 in upcoming products.

 

 

2 minutes ago, FloRolf said:

This makes me wonder though, why are their higher-priced models still on Intel CPUs when the M1 is apparently much faster?

 

https://www.apple.com/shop/buy-mac/macbook-pro/13-inch

Because you don't replace your entire product lineup straight out of the gate when migrating to a new architecture.

You want to "slowly" (if you can call a couple of years slow) migrate over to it in order to make the transition smooth and iron out any bugs along the way. There is also the Osborne effect to take into consideration. Just replacing all their computers at once would make their recent customers feel neglected, and it would also make people not want to buy any Intel-based Mac.

Replacing all your products at once would be suicide. Imagine if there is some issue and they have to do recalls. You don't want to recall all your products at once; you want to recall a small portion of them.

 

So Apple basically has to keep selling Intel Macs, not only for the reasons stated above but also because there is no true high-end ARM option yet. The Intel MacBooks, at least at the highest end, probably still edge out the M1 in terms of performance: slightly in CPU, and more so in GPU.

Plus, the M1 does not support all the connectivity and features the Intel-based Macs support. You get fewer ports and no eGPU support, for example. The M1 Macs support other things instead, but it makes perfect sense for Apple to offer both solutions until they reach better feature parity on the M1 Macs.

 

So I think those are all more than enough justifications for Apple to keep selling the Intel-based Macs despite the M1 Macs being overall better.

As for why the Intel-based Macs are more expensive:

1) It's only a 100 dollar difference once you make the M1 Mac match the Intel Mac in terms of storage and RAM (the M1 starts off with half the RAM and half the storage).

2) The price is probably based on the cost of making the machines, not a reflection of the performance you get.

 

The M1 is probably cheaper (in terms of BoM) than the Intel Mac to build. The Intel Mac requires far more components and a lot of the prices are set by Intel rather than Apple themselves. Buying a bunch of stuff from Intel is most likely more expensive than Apple building it themselves. Right?

Plus, Apple probably wants some margin on their stuff. So if it costs them 200 dollars to buy an Intel CPU, they probably want to sell it for 220 dollars (a 10% markup). If their own SoC costs 100 dollars to make, they might sell it for 110 dollars (the same 10% markup). The markup is the same, but the price difference to customers is 110 dollars.

 

 

If Apple had (for some reason) lowered the price of the Intel Mac just because their own, cheaper-to-make M1 Mac offered higher performance, then Apple would be:

1) Making less money on the Intel Mac without gaining anything, straight up lowering their margins just because they have released a better product.

2) Incentivizing people to keep buying Intel Macs, which are more expensive for Apple to make and go against their ARM transition plan.

 

You have to think of this from the POV of Apple as a business.

It's easy for us as customers to say "why are they charging 100 dollars more for a worse product? Why would I want to buy that?" but once you think of it as "why would we lower the prices of our competitor's product for no reason?" it makes more sense.

 

 

I don't think Apple selling Intel Macs at a slight price premium is the same as them saying "our chips aren't as good as Intel's so we can't charge as much for them".

I think it's far more logical and reasonable to assume them selling Intel Macs at a premium is them saying "our chips are cheaper despite performing better than Intel's, so please buy them instead".


7 minutes ago, Spindel said:

Oh, and remember that a substantial amount of the score is from the AES portion of GB, which probably punishes the M1 a bit when running through Rosetta :P

From AnandTech's breakdown, the Firestorm cores have 4 times the floating-point instruction throughput per cycle of the best x86 CPUs on the market! So given that Rosetta 2 is not an emulation layer but rather a binary translation layer that converts the x86-64 assembly into ARM64 assembly when you first run/install the application, I'm not that surprised.

 


Apple never makes a move like this unless they are sure.

When they moved from IBM's PowerPC chips to Intel, it was because a G5 PowerBook was impossible to power; IBM couldn't make their chip draw less power for more processing power.

When Apple doesn't get what they want, they walk.

 

Intel, after years of development, has finally hit that same point: 10nm++++++++++++++++++++++++++++++++++++++

x86 has been stuck at roughly the same GHz for years, with mild IPC increases.

 

Here's hoping what Apple does with the M1 will do to the desktop space what Apple did to the mobile space.

I am not an Apple fanboy, but when the competition is forced to innovate to survive, we the consumers really benefit.

 

If AMD hadn't released Ryzen and Threadripper, do you think you'd have more than 4-core/8-thread CPUs at what is now i9 pricing?

No, you wouldn't have any of that; you would have a 20% increase in clock speed with no extra anything, like we've had for 15 years.

 

 

CPU | AMD Ryzen 7 7700X | GPU | ASUS TUF RTX3080 | PSU | Corsair RM850i | RAM 2x16GB X5 6000Mhz CL32 MOTHERBOARD | Asus TUF Gaming X670E-PLUS WIFI | 
STORAGE 
| 2x Samsung Evo 970 256GB NVME  | COOLING 
| Hard Line Custom Loop O11XL Dynamic + EK Distro + EK Velocity  | MONITOR | Samsung G9 Neo


3 hours ago, leadeater said:

For application performance the M1 will probably trounce AMD APUs as well; that's pretty much the purpose of the GPU in the M1, so I imagine it'll be optimized a lot for that. Either way, the raw compute of AMD APUs is slightly lower than the M1 GPU's, so you do have to step up to a dGPU to get something faster. But then you'll have the same problem: the M1 has been optimized for a purpose, while an RX 580 is much more general, so even though it's roughly 2x faster than the M1 GPU on paper, I won't be surprised to see the M1 GPU give better in-application performance in Final Cut Pro etc.

If there's one advantage Intel-based Macs (or any laptop right now) have over the M1 Macs, it's that the former can drive two 4K displays; the M1 can only drive a single 4K display. Which makes me wonder if Apple wasn't able to add more PCIe lanes to the M1 chip. This is probably the reason why the new Mac mini only has two TB ports while the older one with an 8th-gen i5/i7 has four TB ports. Not to mention that the older Mac mini has a 10Gb Ethernet option, while the new one is limited to 1Gb.

 

Edited by like_ooh_ahh
I read it again, my bad 🙃 https://appleinsider.com/articles/20/11/11/how-apple-silicon-on-a-m1-mac-changes-monitor-support-and-what-you-can-connect

There is more that meets the eye
I see the soul that is inside

 

 

