
Apple M1 Ultra - 2nd highest multicore score, lost to 64-core AMD Threadripper.

TheReal1980

Summary

Looks like we have a rocket on our hands! Second highest multicore score of any CPU, not bad.

 

[Image: M1 Ultra Geekbench result]

 

Quotes


Labeled Mac13,2, the Mac Studio with 20-core M1 Ultra that was benchmarked earned a single-core score of 1793 and a multi-core score of 24055.

Comparatively, the highest-end Mac Pro with 28-core Intel Xeon W chip has a single-core score of 1152 and a multi-core score of 19951, so the M1 Ultra is 21 percent faster in this particular benchmark comparison when it comes to multi-core performance. As for single-core performance, the M1 Ultra is 56 percent faster than the 28-core Mac Pro.

 

My thoughts

This thing is a rocket, it seems. Higher score than the Intel 12900 and almost as fast in multi-core as the 64-core AMD Threadripper 3990X (which is #1). Apple keeps moving forward; I'm looking forward to an octa-CPU Mac Pro in the near future.

 

Sources

https://forums.macrumors.com/threads/m1-ultra-outperforms-28-core-intel-mac-pro-in-first-leaked-benchmark.2337039/

If it ain't broke don't try to break it.


Of course, take it with a huge grain of salt as well: Geekbench is a specific set of instructions and, as shown with the regular M1 and its current variants, it is not a reflection of real-world performance.

 

No doubt the M1 is going to perform well. Just gotta keep in mind that you aren't buying one of the best CPUs of the current day in a tiny, relatively cheap box for the performance it seems to produce.


Wow.

Just wow.

 

"A high ideal missed by a little, is far better than low ideal that is achievable, yet far less effective"

 

If you think I'm wrong, correct me. If I've offended you in some way, tell me what it is and how I can correct it. I want to learn, and along the way one can make mistakes; being wrong helps you learn what's right.


12 minutes ago, jaslion said:

Of course, take it with a huge grain of salt as well: Geekbench is a specific set of instructions and, as shown with the regular M1 and its current variants, it is not a reflection of real-world performance.

No doubt the M1 is going to perform well. Just gotta keep in mind that you aren't buying one of the best CPUs of the current day in a tiny, relatively cheap box for the performance it seems to produce.

Please explain how GB is not an indication of real-world performance.
 

Enlighten me please.


3 minutes ago, Spindel said:

Please explain how GB is not an indication of real-world performance.
 

Enlighten me please.

They test a limited set of CPU benchmarks, things even basic CPUs can excel at, which is not representative of real-world performance unless those simple calculations are the only thing ever happening. Then, yeah, this thing smashes, but that is such an extreme outlier of a use case that it's not representative.

 

I mean, basically every benchmark is a best-case scenario. Load up a real-world task and, boom, it's usually an entirely different story. The M1 is a perfect example: it absolutely dominates in Geekbench, but load up video editing, gaming, rendering, ... and there you go, an entirely different story. CPUs it obliterated now do the opposite.


10 minutes ago, Spindel said:

Please explain how GB is not an indication of real-world performance.

Geekbench is just a single benchmark, so it doesn't necessarily represent all possible workloads. I've also read several times that Geekbench seems to favor macOS and that the performance difference you see isn't always mirrored by other benchmarks.

 

Not trying to diss the M1, though. I have an M1 Pro in my MBP at work and it is a nice little machine 🙂

Remember to either quote or @mention others, so they are notified of your reply


23 minutes ago, jaslion said:

Just gotta keep in mind that you aren't buying one of the best CPUs of the current day in a tiny, relatively cheap box for the performance it seems to produce.

Well, when 60% of that box is fan and heatsink, that's quite a bit of cooling capacity. Plus, Apple no longer has to worry about battery life or cooling limitations; they can push their chips as far as they want to go. And the M1 base chip was and is one of the best CPUs of the current day, at the very least in terms of single-core performance, something that is verified with multiple different benchmarking suites as well as real-world performance. It is not a stretch to assume decent scaling, especially considering how well the M1 Pro scales compared to the base M1. So yes, you are buying one of the best CPUs of the current day.


48 minutes ago, Spindel said:

At a power draw of 65-70 W on the CPU

It'll be more than that. The M1 Max can draw 110 W at full load, with a normal boost of around 60 W in applications. Should be roughly double that, plus the internal PSU and other components that draw power. Expect 250-300 W.


Pinch of salt indeed.

 

My MacBook Pro scores higher than my 9900K and almost as high as my 5950X, but in real-world desktop use both of those "feel" faster; they're just more responsive in general.

 

For example, macOS is terrible when dealing with network drives; I feel like I'm back in the '90s with how long it can take. Linux and Windows will open the drive in less than a second; macOS can take minutes, and I've applied every single tweak I can find online to both macOS and Samba. It's not Wi-Fi either: my wired Mac mini M1 takes just as long, and hard-wiring the MacBook Pro makes zero difference either.

Router: Intel N100 (pfSense) WiFi6: Zyxel NWA210AX (1.7 Gbit peak at 160 MHz)
WiFi5: Ubiquiti NanoHD OpenWRT (~500 Mbit at 80 MHz) Switches: Netgear MS510TXUP, MS510TXPP, GS110EMX
ISPs: Zen Full Fibre 900 (~930 Mbit down, 115 Mbit up) + Three 5G (~800 Mbit down, 115 Mbit up)
Upgrading Laptop/Desktop CNVIo WiFi 5 cards to PCIe WiFi6e/7


15 minutes ago, Spindel said:

Please explain how GB is not an indication of real-world performance.
 

Enlighten me please.

Worry not, Past MageTank happens to be an expert on this subject:

 

He even cites sources, runs the tests and compares with others as well. I like that guy.

 

For those uninterested in clicking through the ramblings of a madman, the short of it is this: Geekbench grabs DMI/WMI information and reports it instead of probing actual clock speeds. Any clock speed listed in the test results is impossible to use for comparison's sake because there is simply no way to verify that the chip is in fact running at that clock speed. It would be like querying WMIC MemoryChip and confusing Configured Clock Speed with "Speed": both report different values, and grabbing the wrong one results in inaccuracy.

 

I can configure two identical systems, run Geekbench on both, and you'll never know I modified one system over the other. The listed information will be identical. I've done this in the past; it's just as easy to do it now.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


26 minutes ago, MageTank said:

For those uninterested in clicking through the ramblings of a madman, the short of it is this: Geekbench grabs DMI/WMI information and reports it instead of probing actual clock speeds. Any clock speed listed in the test results is impossible to use for comparison's sake because there is simply no way to verify that the chip is in fact running at that clock speed. It would be like querying WMIC MemoryChip and confusing Configured Clock Speed with "Speed": both report different values, and grabbing the wrong one results in inaccuracy.

 

I can configure two identical systems, run Geekbench on both, and you'll never know I modified one system over the other. The listed information will be identical. I've done this in the past; it's just as easy to do it now.

While that does indeed make GHz-to-GHz comparisons invalid, it doesn't inherently invalidate the results if you're comparing whether one chip can do those operations faster than another, assuming both chips are hitting their rated clock speeds.

Of course, the previous reasons why it's not necessarily helpful, since the results won't necessarily reflect your own real-world usage, are still valid.

The only truly valid comparison is comparing different chips doing the actual tasks you will be performing. For example, ProRes is great on Apple Silicon, but if you want the best quality/space efficiency, AFAIK that still means using software encoders for the final render, and Apple Silicon, if I recall correctly, is not a winner there, as they're expecting you to ONLY use the media engine.

There seems to constantly be so much focus on how fast Apple Silicon is at rendering video, but no focus at all on the space efficiency of the end result.
https://developer.apple.com/forums/thread/678210

https://adobe-video.uservoice.com/forums/911233-premiere-pro/suggestions/44368392-issue-poor-hardware-encoding-quality-mac-m1-pr

So Apple Silicon machines are great as editing boxes, but they're not the perfect all-in-one solution.
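To show the kind of comparison I mean, here's a rough sketch (assuming an ffmpeg build with both libx265 and VideoToolbox support; the file name and settings are just placeholders, not from the linked threads): encode the same clip with a software encoder and with the hardware media engine, then compare the resulting file sizes at similar perceived quality.

```python
# Rough sketch: encode the same clip once with a software encoder (libx265) and
# once with Apple's hardware media engine (hevc_videotoolbox), then compare sizes.
# Assumes ffmpeg is on PATH and built with both encoders; "input.mov" is a placeholder.
import subprocess
from pathlib import Path

SRC = "input.mov"  # placeholder source clip

jobs = {
    "software_x265.mp4": ["-c:v", "libx265", "-crf", "20", "-preset", "slow"],
    "hardware_videotoolbox.mp4": ["-c:v", "hevc_videotoolbox", "-b:v", "10M"],
}

for out_name, codec_args in jobs.items():
    subprocess.run(["ffmpeg", "-y", "-i", SRC, *codec_args, out_name], check=True)

for out_name in jobs:
    size_mb = Path(out_name).stat().st_size / 1_000_000
    print(f"{out_name}: {size_mb:.1f} MB")  # judge quality per MB by eye or with VMAF
```

The usual outcome is that the software encode takes far longer but delivers better quality for the same file size, which is exactly the quality/space-efficiency trade-off being argued about here.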



13 minutes ago, Imbadatnames said:

It'll be more than that. The M1 Max can draw 110 W at full load, with a normal boost of around 60 W in applications. Should be roughly double that, plus the internal PSU and other components that draw power. Expect 250-300 W.

No. The M1 Max SoC draws around 90-100 W when both the GPU and CPU run at full load.

The M1 Max CPU alone is around 30 W.


5 minutes ago, Spindel said:

No. The M1 Max SoC draws around 90-100 W when both the GPU and CPU run at full load.

The M1 Max CPU alone is around 30 W.

It's an SoC, dude; there's no such thing as an "M1 Max CPU", and the "CPU" alone is more like 40 W.


47 minutes ago, Alex Atkin UK said:

While that does indeed make GHz-to-GHz comparisons invalid, it doesn't inherently invalidate the results if you're comparing whether one chip can do those operations faster than another, assuming both chips are hitting their rated clock speeds.

This is entirely the problem, though. You simply do not know what clock speeds they are operating at. You cannot compare two test results, even if the hardware is identical, because GB reports values obtained through DMI/WMIC. You can have a CPU listed at 4.7 GHz, but manually OC it to 5.3 GHz and GB will still report it as 4.7 GHz. @Lays and I both proved this back in the day:

It's not just CPU frequencies either. Memory performance plays a huge part in this, and I can adjust both my operating frequency and timings (primary, secondary, tertiary) and dramatically inflate my scores while it still looks like JEDEC speeds, because GB decided to query WMIC MemoryChip Get ConfiguredClockSpeed instead of WMIC MemoryChip Get Speed. Even then, they don't know your timings either, and we all know at this point that latency is king.
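For anyone who wants to see the difference themselves, here's a rough sketch (Windows only, assuming the legacy wmic tool is still available; the parsing is deliberately simplistic) that prints both memory-speed fields side by side:

```python
# Rough sketch: query the two WMI memory-speed fields mentioned above.
# On a system running memory outside its stock profile the two values can
# disagree, so a tool that reads only one of them can mislabel the configuration.
import subprocess

for field in ("ConfiguredClockSpeed", "Speed"):
    result = subprocess.run(
        ["wmic", "memorychip", "get", field],
        capture_output=True, text=True, check=True,
    )
    # Output is a header line followed by one value (in MHz) per DIMM.
    values = result.stdout.split()[1:]
    print(f"{field}: {values}")
```

Note that neither field says anything about timings, which is the other half of the point above.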

 

So yeah... it does inherently invalidate the results even when comparing identical chips. There just isn't any value in using this software for comparisons, or any synthetic benchmark for that matter.



It's long been known that Geekbench is biased towards Apple Silicon (going by A-series chip comparisons between iOS devices and computers).

 

When a reputable benchmark provider shows it outperforming the R9, i9, and TR (non-Pro), then I'll believe it.

Judge a product on its own merits AND the company that made it.

How to setup MSI Afterburner OSD | How to make your AMD Radeon GPU more efficient with Radeon Chill | (Probably) Why LMG Merch shipping to the EU is expensive

Oneplus 6 (Early 2023 to present) | HP Envy 15" x360 R7 5700U (Mid 2021 to present) | Steam Deck (Late 2022 to present)

 

Mid 2023 AlTech Desktop Refresh - AMD R7 5800X (Mid 2023), XFX Radeon RX 6700XT MBA (Mid 2021), MSI X370 Gaming Pro Carbon (Early 2018), 32GB DDR4-3200 (16GB x2) (Mid 2022)

Noctua NH-D15 (Early 2021), Corsair MP510 1.92TB NVMe SSD (Mid 2020), beQuiet Pure Wings 2 140mm x2 & 120mm x1 (Mid 2023),


1 hour ago, Imbadatnames said:

It's an SoC, dude; there's no such thing as an "M1 Max CPU"


What 🤨

Of course there’s measurable power draw in CPU-only workloads and that’s what should be used for somewhat apples-to-apples comparisons with Intel/AMD CPUs.

Quoting “110W” (CPU+GPU workload) is misleading if we’re comparing CPUs. 


1 hour ago, Imbadatnames said:

It's an SoC, dude; there's no such thing as an "M1 Max CPU", and the "CPU" alone is more like 40 W.

This is nonsense. OK, if you want to compare the full SoC power draw to the CPU+GPU power draw of competitors, it is not completely fair, since the SoC includes more things like the memory and the Neural Engine. But even if we ignore that, just a 12900K + 3090 would use around 700 W, compared to the M1 Ultra's ~200 W.

 

That's 250% more power draw for less performance, or the same performance at some tasks.
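For the record, the arithmetic behind that percentage (taking the ~700 W and ~200 W estimates above at face value):

\[
\frac{700\,\mathrm{W} - 200\,\mathrm{W}}{200\,\mathrm{W}} = 2.5 \quad\Rightarrow\quad 250\%\ \text{more power draw}
\]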

 

I don't think anyone can argue against Apple's efficiency advantage.


33 minutes ago, HRD said:

This is nonsense. OK, if you want to compare the full SoC power draw to the CPU+GPU power draw of competitors, it is not completely fair, since the SoC includes more things like the memory and the Neural Engine. But even if we ignore that, just a 12900K + 3090 would use around 700 W, compared to the M1 Ultra's ~200 W.

That's 250% more power draw for less performance, or the same performance at some tasks.

I don't think anyone can argue against Apple's efficiency advantage.

Most workloads should leverage both the CPU and GPU if written correctly.


41 minutes ago, saltycaramel said:


What 🤨

Of course there’s measurable power draw in CPU-only workloads and that’s what should be used for somewhat apples-to-apples comparisons with Intel/AMD CPUs.

Quoting “110W” (CPU+GPU workload) is misleading if we’re comparing CPUs. 

You're not measuring just the CPU regardless, though. There's always load in other areas of the system.


If it's anywhere close, $4000-$5000 for a whole system with RTX 3080/3090-level graphics, versus $4000 for just the Threadripper chip alone without any system to support it, it looks like Apple built a bargain. (Not for all use cases, but for many.)


2 hours ago, Spindel said:

Please explain how GB is not an indication of real-world performance.
 

Enlighten me please.

Long story short, general-compute chips really can't be benchmarked for every single common scenario with one benchmark. Things like Geekbench or Cinebench do the best they can, but there are so many different scenarios out there that it is just not feasible for a benchmark to be accurate beyond a certain degree. Geekbench, in particular, is geared more toward mobile devices than desktops. That's why when LTT does videos on various processors or GPUs, they don't just show Cinebench scores but show performance across a multitude of applications (usually games). The other thing to keep in mind is that the M1-series chips are SoCs, which is unlike most anything in the PC world, so typical PC benchmarking doesn't necessarily translate well. If you want to know real-world performance, you need to go out there and, well... see the real-world performance.

 

Personally, as someone who owns a 13" M1 MacBook Pro and has seen how well the M1 performs in real life, I can say that the M1 SoC is more than powerful enough for the vast majority of people. The M1 Pro, Max, and Ultra are far more powerful than most people need and are much better suited to truly heavy-hitting applications.


15 minutes ago, gjsman said:

If it's anywhere close, $4000-$5000 for a whole system with RTX 3080/3090-level graphics, versus $4000 for just the Threadripper chip alone without any system to support it, it looks like Apple built a bargain. (Not for all use cases, but for many.)

I am very suspicious of the claim that the M1 Ultra can go toe-to-toe with an RTX 3080/3090 in GPU performance.


4 minutes ago, TheSage79 said:

I am very suspicious of the claim that the M1 Ultra can go toe-to-toe with an RTX 3080/3090 in GPU performance.

I actually think it may be at least believable. Consider when AnandTech did their tests:

 

[AnandTech GPU benchmark charts for the M1 Max]

 

Take the M1 Max score, increase it a bit because the M1 Max at the time was tested inside of a laptop chassis without desktop-level cooling, and then double it. 3090 territory? 

 


13 minutes ago, TheSage79 said:

I am very suspicious of the claim that the M1 Ultra can go toe-to-toe with an RTX 3080/3090 in GPU performance.

If you take Apple's word for it, the M1 Ultra's graphics are 80% faster than a W6900X, which is a 6900 XT, and thus faster than the 3090. 🤣


