
Peak Peek - Apple officially announces new iPhone SE, new iPad Air, M1 Ultra, Mac Studio and Studio Display

Lightwreather
Solved by DeeKay86

Summary

My comprehensive notes from today's Apple Special Event.

 

Here we go...

 

Apple Special Event - 8th March 2022

 

— Tim Cook on stage.

  • Starting with AppleTV+.
  • Talking about movies on AppleTV+ getting nominated for awards etc.
  • Apple Original Films: Coda, The Tragedy of Macbeth, Spirited, Luck, Argylle - Seems they have recruited every huge actor for these new Original movies. Pretty impressive!
  • “Something Exciting to share” - Friday Night Baseball - coming to AppleTV+. 2 games that you can only see on AppleTV+.

 

— Let's talk iPhone!

  • World's most advanced mobile operating system. A15 processor etc.
  • Introducing 2 new finishes for the iPhone 13/Pro… the leak was true. It’s GREEN. I’m not a huge fan.
  • Pre-Order the new Green this Friday. Launch March 18th.

 

— Let's talk Apple Silicon - iPhone SE!

  • Talking about how the current A15 is best in class etc.
  • A15 Bionic is coming to the NEW iPhone SE (The leak was true… again).
  • More new users have switched to iPhone than in any previous generation (pew pew pew Android lol).

— Francesca Sweet on screen now:

  • Talking about the iPhone SE.
  • Talking about the A15 Bionic and how it makes the iPhone SE awesome…
  • I don’t need to take notes on what the A15 Bionic can do… the processor is what we already have in our current iPhone 13/Pro/MAX models.
  • iPhone SE is in 3 colours - Midnight, Starlight and Product Red.
  • Toughest Glass in a smartphone, ever - Front and Back.
  • Same glass as iPhone 13/Pro/MAX.
  • It has TouchID.
  • Better battery life than before.
  • 5G.
  • 12MP Camera (single lens on back).
  • Now she is talking about iOS15. We know all this. Stop wasting my time lol.
  • Smart HDR4.
  • $429 - Pre-Order this Friday. Launch March 18th! Wow that’s cheap!!

 

— iPad:

  • iPad AIR update (yay even this was leaked. Is nothing secret anymore?)

— Angellina on screen now:

  • Performance: M1 is coming to iPad AIR!! Wow ok nice - this was only on the new iPad Pro before! Badass.
  • Faster than the fastest competitor.
  • 2x faster than the best selling laptop.
  • She’s talking a lot about the M1 processor now. We know all this already, the M1 is not new (still mega impressive though, crushes the competition).
  • 500 nits.
  • Front camera is 12MP Ultra-Wide! Supports Centre Stage! Nice!!
  • Connectivity - 5G.
  • USB-C port is now 2x faster.
  • Compatible with the Smart Keyboard folio etc.
  • Also supports Apple Pencil 2.0.
  • iPadOS 15 - we already have this so I won’t go in to it too much.
  • New release of iMovie on iPad - looks pretty cool - coming next month.
  • 100% recycled aluminium. 100% recycled rare earth elements etc.
  • Crazy how this new iPad Air is faster than my iPad Pro, and now has the same family of processor as my MacBook Pro lol. Ah, what a time to be alive.
  • Space Grey, Starlight, Pink, Purple, Blue.
  • From $599 - 64GB/256GB (damn that’s cheap for an M1 tablet!)
  • Pre-Order Friday, Launch March 18th!

 

— Mac!

  • Talking about how M1/M1Pro/M1Max MacBooks have no equal. He’s right you know lol.

— John Ternus on screen now talking about Mac.

  • M1 ULTRA announced! - Jeez man, I just bought the M1 Max MacBook Pro, for god's sake.
  • This is “for the desktop” - phew!!
  • Starts with M1Max… ok let’s see where this is going!
  • M1Max “has a secret!”
  • Wow, M1Max has a die-to-die connectivity feature!! They can connect 2 M1Max chips to make an ULTRA Processor!
  • UltraFusion Architecture!
  • 2.5TB/s!
  • More than 4x performance of competition!
  • Massive bandwidth and efficiency!
  • 114 billion transistors - the most ever in a personal computer chip.
  • 800GB/s - 10x faster than the latest PC chip!
  • 128GB Unified Memory!!!! Holy crap.
  • 20-core CPU. 16 High Performance and 4 High Efficiency.
  • 64-Core GPU. 8x faster than M1.
  • 32-Core Neural Engine cores.
  • 2x capable Media Engine (wow).
  • Industry leading performance per watt.
  • Uses 65% less power for more performance compared to PC equivalent!
  • This is game changing.
  • Industry leading security.
  • Software sees the M1 Ultra as a single piece of silicon!
  • Developers are now talking about the M1Ultra. They all love it. Surprised? Nope.

 

— Here we go - Talking about “The Studio”. The leak is obviously real lol.

  • Introducing and announcing… the MAC STUDIO!! - and STUDIO DISPLAY.
  • Ok that is pretty stunning. Wow.
  • Uses M1Max and M1ULTRA.

— Colleen Novielli on screen now.

  • Design: Exterior is 7.7 inches square, 3.7 inches high. Single piece of aluminium.
  • Sucks in air at the base.
  • 2 fans at the top.
  • Rear exhaust.
  • It is super efficient and quiet. You will barely ever hear it.
  • Connectivity: 4 Thunderbolt 4 ports.
  • 10Gb Ethernet (nice!!)
  • 2x USB-A.
  • HDMI.
  • Pro-Audio 3.5mm jack.
  • WiFi 6.
  • Bluetooth 5.
  • SDXC-Card Reader at front.
  • 2x USB-C 10Gbps at front (Thunderbolt 4 on the M1 Ultra model).
  • 4x Pro Display XDR support + 4K TV.
  • Performance: 50% faster than Mac Pro with Xeon - Ouch to everyone who bought that one.
  • 90% faster than 16-core Mac Pro with Xeon and 60% faster than the Mac Pro with 28-core Xeon! Wtf seriously wow.
  • Ok we get it, M1Ultra is INSANE lol - this is just embarrassing the competition at this point. Wow.
  • 48GB Video Memory! WTF.
  • Up to 128GB Unified Memory.
  • 7.4GB/s SSD - up to 8TB!
  • M1Ultra 800GB/s memory bandwidth. (400GB/s for M1Max version).
  • 18 streams of 8K ProRes 422 video. No other computer in the world can do this. Holy hell.
  • M1Max Mac Studio is 3.4x faster than the fastest iMac.
  • M1Ultra Mac Studio is 80% faster!
  • I have to say, the Mac Studio is a stunning bit of kit. It’s so damn sexy! I’m happy with my M1Max MacBook Pro though! Woop!
  • Uses far less energy than competitors. 100% recycled rare earth elements etc.

— Talking Studio Display: Nicole on screen now.

  • Design: All screen design - but it has bezels - (pretty big bezels…!)
  • Slim profile.
  • 30 degrees of tilt.
  • You can add a tilt/height stand if you like (extra add-on).
  • VESA adapter option (extra add-on lol).
  • 27-inch 5K Retina! 14.7 million pixels, 218 PPI.
  • 600 nits.
  • P3 wide colour gamut.
  • TrueTone.
  • Anti-Reflective coating.
  • Nano-texture glass option (add-on haha).
  • A13 Bionic is BUILT IN to the display - surprising. Pretty cool!
  • Camera and Audio system. 12MP Ultra-Wide (same as iPad).
  • Supports Centre Stage (on Mac for the first time).
  • 3-mic "studio quality" array.
  • 6 speaker sound system: 4 subwoofers/2 high performance tweeters.
  • Multichannel surround sound with spatial audio / Dolby Atmos.
  • By far the highest fidelity speakers ever in a Mac.
  • Best combo of Camera and Audio ever in a desktop display.
  • 3x USB-C 10Gbps.
  • 1x Thunderbolt 4 port that provides 96W of power for charging a MacBook.
  • Connect 3 Studio Displays to MacBook Pro.
  • New Silver and Black colour options for Magic Keyboard and Trackpad etc.
  • 100% recycled rare earth elements etc.
  • Pair it with Any Mac - like MacBook Pro, AIR, Mini and Studio.
  • Showing off the Mac Studio and Studio Display with a very cool little video lol.
  • M1Max Mac Studio STARTS FROM $1999 / M1Ultra Mac Studio STARTS FROM $3999.
  • Studio Display starts at $1599 - configure it up from there.
  • Pre-Order NOW - Available on March 18th.
  • Mac PRO - Coming SOON - but they will NOT show it today! (Seriously, how much better can it get? This is insanity! The “Pro” is going to be disgustingly powerful).

 

— Tim Cook back on stage.

  • Show close.

 

Notes by: DeeKay86. I hope you enjoyed my notes! I am a bit of a freak and I do this for pretty much every large event many companies put on. I won't advertise my socials here as I think it's not allowed lol.

10 minutes ago, RedRound2 said:

Their own deadline for the transition is October or something of this year. So it has to come before that. Plus, since they basically teased it, they probably have it very close to ready.

And they again specifically said this is the last M1 chip, so that's why I think it may be M2 (or maybe something completely different)

 

Their original deadline was “2 years”, spoken at WWDC 2020, so that would make it WWDC 2022.

 

Maybe they will tease the new MacPro at WWDC 2022 with actual availability in late 2022? They did exactly that at WWDC 2019 and WWDC 2013 for those then new MacPros: available “later this year” or “this fall”.

 

Yes, I too believe it would be M2-based and offered in a 40-core, 4-chiplet configuration. M1 can't do that.


7 minutes ago, HenrySalayne said:

Ehh - no. It's a lazy approach.

Which doesn't mean it's not practical and Apple probably saves a little bit on the tooling costs.

But it's not the approach Apple is renowned for. I'm quite sure they will not stick to this design for a considerable amount of time.

Apple is known for not being all over the place with designs of Macs, not sure where you got the notion that they’re renowned for changing designs for the sake of it. 

They've kept the same Mac Mini design for 12 years now.

And now the MacStudio sports that same design language.

Sounds super Apple-y to me. 


16 minutes ago, saltycaramel said:

 

They rebrand the iPhone SoC every year and that’s an even more consumer-oriented product, pretty sure they will brand the new architecture on Macs too.

 

The iPhone is a self-contained product which has functionality added to it every year. The only reason they don't keep calling it A-series is because their CPUs are only one configuration. The difference between the regular and X configuration was an additional GPU core for the iPad models. I expect all iPads going forward to be M1 parts.

 

The M1s, however, have more than one configuration, so there's no need to make an "M2" if the only difference between Pro/Max/Ultra is core/GPU counts.

 

The A series are fundamentally the same since the A11 (2P, 4E), with pretty much all standard features being in the A12 going forward. For all intents and purposes there's no significant change between the A12, A13, A14 and A15, even though the A15 has twice as many transistors as the A12. The A12Z was also the "M1" in the transition-kit Mac Mini.

 

 

Hell, the A8 is where Apple started using ARMv8 in the first place. However, every upgrade since has suffixed that with .0, .1, .2, etc.

https://developer.arm.com/documentation/102378/latest

Quote
Armv8.1-A
  • Atomic memory access instructions (AArch64)
  • Limited Order regions (AArch64)
  • Increased Virtual Machine Identifier (VMID) size, and Virtualization Host Extensions (AArch64)
  • Privileged Access Never (PAN) (AArch32 and AArch64)
Armv8.2-A
  • Support for 52-bit addresses (AArch64)
  • The ability for PEs to share Translation Lookaside Buffer (TLB) entries (AArch32 and AArch64)
  • FP16 data processing instructions (AArch32 and AArch64)
  • Statistical profiling (AArch64)
  • Reliability Availability Serviceabilty (RAS) support becomes mandatory (AArch32 and AArch64)
Armv8.3-A
  • Pointer authentication (AArch64)
  • Nested virtualization (AArch64)
  • Advanced Single Instruction Multiple Data (SIMD) complex number support (AArch32 and AArch64)
  • Improved JavaScript data type conversion support (AArch32 and AArch64)
  • A change to the memory consistency model (AArch64)
  • ID mechanism support for larger system-visible caches (AArch32 and AArch64)
Armv8.4-A
  • Secure virtualization (AArch64)
  • Nested virtualization enhancements (AArch64)
  • Small translation table support (AArch64)
  • Relaxed alignment restrictions (AArch32 and AArch64)
  • Memory Partitioning and Monitoring (MPAM) (AArch32 and AArch64)
  • Additional crypto support (AArch32 and AArch64)
  • Generic counter scaling (AArch32 and AArch64)
  • Instructions to accelerate SHA
Armv8.5-A and Armv9.0-A
  • Memory Tagging (AArch64)
  • Branch Target Identification (AArch64)
  • Random Number Generator instructions (AArch64)
  • Cache Clean to Point of Deep Persistence (AArch64)
Armv8.6-A and Armv9.1-A
  • General Matrix Multiply (GEMM) instructions (AArch64)
  • Fine grained traps for virtualization (AArch64)
  • High precision Generic Timer
  • Data Gathering Hint (AArch64)

So if we're going by that pattern, the next features in the A16 will be GEMM and the high-precision Generic Timer.

Quote
Armv8.7-A and Armv9.2-A
  • Enhanced support for PCIe hot plug (AArch64)
  • Atomic 64-byte load and stores to accelerators (AArch64)
  • Wait For Instruction (WFI) and Wait For Event (WFE) with timeout (AArch64)
  • Branch-Record recording (Armv9.2 only)
Armv8.8-A and Armv9.3-A
  • Non-maskable interrupts (AArch64)
  • Instructions to optimize memcpy() and memset() style operations (AArch64)
  • Enhancements to PAC (AArch64)
  • Hinted conditional branches (AArch64)

Then the A17 will add PCIe Hotplug... something you'd want in a desktop with PCIe slots. So that's where I'd expect an "M2" CPU. The M1 is Armv8.4, same as the A13.

 

The new iPad Airs? M1. The new iPhone 13/13 Pro/SE? A15.

 

The phones have smaller GPUs because they have less screen real estate and no means of adding any, since the Lightning connector can't drive anything bigger.
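The version-to-feature pattern the post relies on can be condensed into a small lookup; a minimal sketch, with feature names abridged from the ARM documentation quoted above and the "M1 is Armv8.4" mapping taken from the post itself:

```python
# Headline features per ARM architecture revision, abridged from the ARM
# documentation quoted above.
ARM_FEATURES = {
    "8.1": ["Atomic memory access instructions", "Privileged Access Never"],
    "8.2": ["52-bit addresses", "FP16 data processing", "Statistical profiling"],
    "8.3": ["Pointer authentication", "Nested virtualization"],
    "8.4": ["Secure virtualization", "MPAM", "SHA acceleration"],
    "8.5": ["Memory Tagging", "Branch Target Identification", "RNG instructions"],
    "8.6": ["GEMM instructions", "High precision Generic Timer"],
    "8.7": ["PCIe hot plug", "Atomic 64-byte loads/stores"],
}

def cumulative_features(version: str) -> list:
    """Everything a chip at `version` must support (revisions are cumulative)."""
    feats = []
    for v in sorted(ARM_FEATURES):  # lexicographic == numeric for 8.1-8.7
        if v <= version:
            feats += ARM_FEATURES[v]
    return feats

# Per the post, the M1 (like the A13) sits at Armv8.4, so GEMM isn't required:
print("GEMM instructions" in cumulative_features("8.4"))  # → False
```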

 

 


1 hour ago, Kisai said:

Honestly, the lack of AV1 encoding is the reason why I'm not buying new hardware right now.

Out of interest, what leads you to AV1 over H.265?


1 hour ago, hishnash said:

For sure, but worth noting the GPU in the M1 Ultra is already faster than any single GPU they sell for the Mac Pro (it's not faster than the Duo, but those are multi-GPU). So as a drop-in for WWDC this year they could use the M1 Ultra with M1 Ultra-based add-in GPUs (for those that need more than the GPU on the SoC), and then next year, when they have an M2 Ultra (or Extreme), release that. Rather than waiting another year and missing their deadline.

Well, I bet it'll eat the Duo, but hey - the proof of the pudding is coming soon.


30 minutes ago, HenrySalayne said:

It's not the same design language. Proportions matter. This is a 2:2:1 cuboid, the Mac Mini is a 5:5:1 cuboid.

Take a look at every single product they made in the past decade and I doubt you'll find anything else that's this chunky.

Seen from the top it literally looks exactly like the 2010 Mac Mini, and it's not the same design language?

It’s unprecedentedly chunky because this is an unprecedented product category for Apple in the last 20 years. Doesn’t happen every day. This is the first really “new” Mac in a while. 


31 minutes ago, Kisai said:

Then the A17 will add PCIe Hotplug... something you'd want in a desktop with PCIe slots. So that's where I'd expect an "M2" CPU. The M1 is Armv8.4, same as the A13.

 

M1 has PCIe hot plug - it's a required feature to support Thunderbolt, since Thunderbolt supports PCIe hot plug! The ARM instruction set version number is really a minimum spec of supported features: you can have a chip based on Armv8.1 that supports a load of instructions from Armv9, but unless it supports all the required features of Armv9 you can't call it v9. Apple's ARM CPUs contain a lot of these features plucked from future ARM versions, plus their own additions as well.
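The "version number is a minimum spec" point can actually be checked per feature on macOS, which exposes individual ARM feature flags via sysctl (the `hw.optional.arm.FEAT_*` namespace on recent macOS releases). A minimal sketch of parsing that output; the sample values below are illustrative, not a dump from a real chip:

```python
def parse_feature_flags(sysctl_output: str) -> dict:
    """Parse `sysctl hw.optional.arm`-style output into {FEAT_name: bool}."""
    flags = {}
    for line in sysctl_output.splitlines():
        key, sep, value = line.partition(":")
        leaf = key.strip().rsplit(".", 1)[-1]  # hw.optional.arm.FEAT_LSE -> FEAT_LSE
        if sep and leaf.startswith("FEAT_"):
            flags[leaf] = value.strip() == "1"
    return flags

# On an Apple Silicon Mac you would feed it real output, e.g.:
#   subprocess.run(["sysctl", "hw.optional.arm"], capture_output=True, text=True).stdout
# Sample lines for illustration only:
sample = ("hw.optional.arm.FEAT_LSE: 1\n"
          "hw.optional.arm.FEAT_PAuth: 1\n"
          "hw.optional.arm.FEAT_SVE: 0\n")
print(parse_feature_flags(sample))
```

The per-feature flags are exactly why the single "Armv8.x" label undersells what a given chip supports.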


 


39 minutes ago, Kisai said:

The M1s, however, have more than one configuration, so there's no need to make an "M2" if the only difference between Pro/Max/Ultra is core/GPU counts.

 

There’s no need? There’s super need to stress the architectural change. 

 

The industry has been plagued for years by people strolling through a mall shopping for a laptop who are pushed to believe that an Arrandale i7 is better than an Alder Lake i3.


13 minutes ago, hishnash said:

Out of interest, what leads you to AV1 over H.265?

Because OBS won't enable it, and transcoding H.264 to H.265 and then uploading to YouTube only to have it turned into VP9 is too many layers of lossy compression for my tastes. Basically everything you upload to YouTube from mid-2022 should be AV1 to avoid multiple layers of transcoding. So when hardware supports AV1 encoding, that removes several intermediate transcoding phases.

 

I expect Twitch and YouTube to switch to AV1 streaming once there is enough decoding hardware out there.
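The "too many layers of lossy compression" argument can be sketched with a toy model - the per-generation retention figure below is an invented illustration, not a measured number:

```python
def cascade_quality(encodes: int, retention: float = 0.95) -> float:
    """Toy model: each lossy encode keeps a fixed fraction of fidelity.
    retention=0.95 is an assumed figure for illustration only."""
    return retention ** encodes

# H.264 capture -> H.265 re-encode -> YouTube VP9 transcode: 3 lossy passes.
# AV1 straight from a hardware encoder -> YouTube transcode: 2 lossy passes.
print(cascade_quality(3), cascade_quality(2))
```

Whatever the real per-pass loss is, the model shows why dropping one transcode from the chain always helps.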

 

 


LMG coverage of the event and the Mac Studio: it’s not a Steam Deck so let’s generally be debbie downers. 

 

The x86 PC industry (and its well-connected tech influencers, like LMG) will not be able to keep up the facade that this isn't a shift of power in the making for much longer... I warned about this back when the M1 was dismissed as "it's only about efficiency" and a "glorified iPad", and giga-dies like the M1 Max and M1 Ultra were just fantasies and far-fetched theories... now the time of reckoning is coming for x86 space heaters.


8 minutes ago, Kisai said:

Because OBS won't enable it,

Even with the recent patches to OBS that provide much better macOS screen capture using the new APIs in 12.3, one of the features of these APIs is the ability to pipe the captured frames directly into Video Toolbox to get a stream out.

I'm sure YouTube decodes and re-encodes the AV1 you send them; there's no way they just stream it out byte for byte. It would be such a big security risk if someone figured out a vulnerability in a device's AV1 decoding path (this has happened with multiple other codecs in the past). It would be irresponsible not to decode it out to raw frames and then re-encode with a known-safe encoder.


 


5 minutes ago, hishnash said:

Even with the recent patches to OBS that provide much better macOS screen capture using the new APIs in 12.3, one of the features of these APIs is the ability to pipe the captured frames directly into Video Toolbox to get a stream out.

I'm sure YouTube decodes and re-encodes the AV1 you send them; there's no way they just stream it out byte for byte. It would be such a big security risk if someone figured out a vulnerability in a device's AV1 decoding path (this has happened with multiple other codecs in the past). It would be irresponsible not to decode it out to raw frames and then re-encode with a known-safe encoder.


 

Compression is always GIGO: the higher the quality you can upload, the higher the quality of the result. Also, I'm pretty sure transcoding at Google is nothing more than ffmpeg, because when bugs have cropped up, they've been the exact bugs in ffmpeg.

 

Here's what I'd like youtube to fix to make things less of a pain in the ass:

1. Let me upload video in the exact form I want it streamed. Run it through a demuxer if you have to, to make it fit the streaming format, but otherwise leave it alone.

2. When the viewer picks anything other than Source, then push that through the real time transcoder with a stereo audio mix, because I know that's what they do right now. 

3. Let me upload one 12-hour video and then let me say where to cut/split it before processing. There are times when I've uploaded something straight from a stream VOD and, because it was 15 seconds too long, I couldn't edit it, or it was rejected entirely for being "too long". If it's over 4 hours, just offer a "split at hh:mm:ss.ttt" or "automatically split this video at the first silent break after every X timestamp" option.

4. When I upload video with multiple audio tracks and ContentID finds something to complain about, run the same check on the other audio tracks at that same point and give me the option to use the track without the ContentID match. Especially for content originally streamed on Twitch, where track 1 is the stream audio and track 2 is the VOD audio. I also have tracks for the subject audio, my audio, other people's audio, and sound effects used on stream.

 

YouTube clearly does stream the videos from something they have stored already, but you can test this yourself by uploading non-compliant video (e.g. losslessly compressed, 4:4:4 video) and seeing what YouTube does to it.

 

So what I want is a hardware AV1 encoder so that I can remove all the middle steps between OBS and Twitch/YouTube. One of the things that happens at present is that things done on the input side of OBS don't translate to the output side.

 

On a Mac, at least, a lot of that hardware encoding stuff is straightforward and available to everything, whereas on Windows you pretty much have to purchase decode codec support for the OS piecemeal, and then you still have individual applications that want their pound of flesh to be able to encode it too.

 

 


I'm actually quite pleased Apple have split the higher-spec Mac Mini out into its own distinct product line. The vast majority of consumers honestly wouldn't need anything more than the M1 Mac Mini anyway (self-repair / modular upgrades aside, which is a separate conversation).

 

Nobbling the cooling solution to cram the Max/Ultra into the existing Mac Mini would result in many people buying or window-shopping something they don't even need and then complaining about the price.

 

My only complaint about the SoC naming convention is the use of Pro and Max in the name, because saying "M1 Max MacBook Pro" or "M1 Pro MacBook Pro" just sounds stupid. Otherwise I have no real issue with the "{generation} {performance suffix}" approach, because honestly I hate the way Intel do it, especially when you also throw the three types of motherboard chipsets into the mix.

 

M1 Ultra looks really intriguing, I’m not even tempted to window shop it though because I simply have no need for that level of claimed performance, I have a 14” M1Max and it’s already more than I need in reality.

 

I’m a bit disappointed in the price of the studio display, that’s the kind of product I do want as a software engineer who works from home, but not at that price (unless I can convince work to pay for it)


GPU performance charts IMO need to be treated with healthy scepticism. If they're based in reality at all, it'll be a specific workflow with specific software written correctly against the Metal API; since Apple have zero transparency over what tests produce those graphs, they're essentially meaningless.

 

One thing is for sure, they’re not talking game engine rasterisation via Unity or Unreal.


2 hours ago, Kisai said:

The M1s, however, have more than one configuration, so there's no need to make an "M2" if the only difference between Pro/Max/Ultra is core/GPU counts.

 

The A series are fundamentally the same since the A11 (2P, 4E), with pretty much all standard features being in the A12 going forward. For all intents and purposes there's no significant change between the A12, A13, A14 and A15, even though the A15 has twice as many transistors as the A12. The A12Z was also the "M1" in the transition-kit Mac Mini.

Are you making the argument that since Sandy Bridge Intel should not have incremented the product numbers for each successive architecture, and everything should have been an Intel i7 2700K? Because it's sounding a lot like that, a lot.


2 hours ago, saltycaramel said:

LMG coverage of the event and the Mac Studio: it’s not a Steam Deck so let’s generally be debbie downers. 

 

The x86 PC industry (and its well-connected tech influencers, like LMG) will not be able to keep up the facade that this isn't a shift of power in the making for much longer... I warned about this back when the M1 was dismissed as "it's only about efficiency" and a "glorified iPad", and giga-dies like the M1 Max and M1 Ultra were just fantasies and far-fetched theories... now the time of reckoning is coming for x86 space heaters.

Apple's larger SoCs are not that far ahead in power draw, might want to be careful throwing around space heater quips when they are using similar amounts of power 😉

 

Not every x86 CPU is a 5950X or 10900K/11900K/12900K with the power limits removed.

 

5nm M1 Pro/Max is not so far ahead of 7nm Zen3/Zen3+ in CPU performance or power at the same power levels, aka mobile.

 

x86 is not a huge limiting factor for most things - certainly not enough to account for large differences due specifically and solely to that.

 

Apple's significant leads are from careful planning and ecosystem integration with hardware designs that create an overall product solution. An M1 SoC running Linux is quite different to an M1 SoC actually running Mac OS.

 

[meme images: Windows vs. Mac OS]


2 hours ago, hishnash said:

I'm sure YouTube decodes and re-encodes the AV1 you send them

You're correct, they do.

It does not really matter what format you upload to YouTube, they will convert it. Format isn't even that important to video quality. You can have awful-looking AV1 files and great-looking H.264 files. Hell, H.264 can be lossless if you want (and yes, it is compliant with the H.264 spec).

AV1 to AV1 transcoding is not really any different from H.264 to AV1 transcoding. 

 

My guess is that the first batch of consumer-grade AV1 hardware encoders won't even be that good, so it will be questionable whether you even want to use them. It took quite some time for H.264 hardware encoders to reach decent quality, and I suspect it will be the same for AV1.
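On the "H.264 can be lossless" point: libx264's lossless mode is triggered with `-qp 0` (the output uses the High 4:4:4 Predictive profile). A minimal sketch of building such a command; the file names are placeholders:

```python
import shlex

# -qp 0 puts libx264 into mathematically lossless mode.
# "input.mov" / "output.mkv" are placeholder file names.
cmd = ["ffmpeg", "-i", "input.mov",
       "-c:v", "libx264", "-qp", "0", "-preset", "veryslow",
       "output.mkv"]
print(shlex.join(cmd))
```

This is the kind of "non-compliant" upload mentioned above that YouTube will always re-encode rather than serve as-is.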


27 minutes ago, leadeater said:

5nm M1 Pro/Max is not so far ahead of 7nm Zen3/Zen3+ in CPU performance or power at the same power levels, aka mobile.

That does not match what I have seen. Do you have any source on that?


49 minutes ago, LAwLz said:

That does not match what I have seen. Do you have any source on that?

Handbrake, Blender (pure CPU), Matlab, Excel (eh, MS), 7zip (it leads and lags). These are all what I would class as "non-Apple software": software that isn't explicitly following Apple's guidance and staying current by utilizing all possible parts of the SoC, so it relies on more rudimentary CPU and memory instructions and access methods. Adobe in a way falls into this bucket too, but it does utilize the GPU in the SoC, so it's highly competitive in performance while using a lot less power than a classic CPU + dGPU.

 

 

 

Without Mac OS in the mix with compliant software I find the comparisons between the two opposing ecosystems rather fruitless; they bear little meaning, and neither ecosystem will really impact the other in reality - aside from the willingness to move from PC/Windows to Apple/macOS.

 

https://browser.geekbench.com/v5/cpu/12990416

https://browser.geekbench.com/v5/cpu/13280556

https://browser.geekbench.com/v5/cpu/13327959

https://browser.geekbench.com/macs/macbook-pro-16-inch-2021-apple-m1-max

 

Edit:

Overall the biggest and most significant difference is single-thread power draw: AMD allows MUCH higher power draw than what Apple needs and uses. Total package power for sustained load is nigh identical, and the performance difference isn't worth squabbling about. There are more benefits to a Mac as a whole device than these targeted, isolated comparisons show.


4 minutes ago, leadeater said:

Are you making the argument that since Sandy Bridge Intel should not have incremented the product numbers for each successive architecture, and everything should have been an Intel i7 2700K? Because it's sounding a lot like that, a lot.

No you silly. They should have just called every part i3, i5 or i7 and short of an actual change in the feature set, dispense with the XXXXX numbers and letters.

The biggest problem Intel has is that the numbers mean nothing, just like with AMD's CPU/GPU's and NVIDIA's GPU's.

 

Here, I'll even spell it out.

 

2010(2nd gen):

i1 = Dual Core, no hyper threading, no boosting (eg what's currently Pentium, Celeron and Atom, and formerly Core Solo/Core 2 Duo)

i3 = Dual Core, hyper threading, no boosting

i5 = Quad Core, no hyper threading, no boosting

i7 = Quad Core, hyper threading, no boosting

i9 = Quad Core, hyper threading, boosting

Xeon E3 = same as i7 part, but ECC memory

 

2017 (8th gen):

i1 = Dual Core (Y/U and Atom/Celeron/Pentium brands)

i3 = Quad Core, no hyper threading (former i5 line)

i5 = Six Core, no hyper threading

i7 = Six Core, hyper threading

i9 = Six Core, hyper threading, boosting

 

2021 (12th gen):

i1 = Quad Core

i3 = Six Core

i5 = Eight Core 

i7 = Eight Core + Hyperthreading

i9 = Eight Core + Hyperthreading + Boosting

 

Nobody needs to own an i9 part. Intel already calls things "nth gen", so that's all they ever needed to use in marketing. Apple could have been calling the iPhone parts A1 and the iPad/Mac parts M1, the entire time and only distinguishing them with "generation" the same way. AMD and NVidia likewise.

 

The problem I'm pointing out here is that those multi-digit numbers and alphabet soups mean absolutely nothing, and when customers are confronted with "this number-letter is better than that number-letter", they just buy the wrong part because the salespeople don't even know.

 

So, I'd fix nvidia and AMD's GPU's the same way. Just say what generation it is, and then use something like below:

NV9 (x90 parts)

NV8 (x80 parts)

NV7 (x70 parts)

NV6 (x60 parts)

NV5 (x50 parts)

NV3 (x30 parts)

NV0 (for parts that have no video output and have no use to consumers)

 

If you want a NV8 part, you can replace with a current generation or next generation NV8 part and expect the same-or-better experience. That's all I'm asking. It also makes minimum requirements for software actually make sense by spelling out what tier of part is necessary.

 

If you're a nitpicky nerd and want a specifically binned model, then yes, go find a store that went to the effort of listing the full feature set. But we're basically looking for things like "K" and "F" and all that to stop being a thing to look for. If you want a K or F feature set on that chip, you order your CPU from a store that specializes in the binned models. But how is everyone else supposed to know whether an i5-12400, i5-12600 or i7-12700 has any meaningful difference? Because I assure you they can't; those model numbers tell you ZERO about the CPU.

(The i7-12700 is an 8+4 core, 20 thread, UHD 770 part; the i5-12600 is a 6+0, 12 thread, UHD 770; the i5-12400 is a 6+0, 12 thread, UHD 730.)

And then we gave Intel shit for this: the 10th-gen i7-1065G7 SKU.

https://www.intel.ca/content/www/ca/en/processors/processor-numbers.html

 

That ALSO doesn't mean anything, and is, if anything, even more confusing.

Quote

SKU Numeric Digits

For the majority of Intel® processors, the final three digits of the product number are the SKU. SKUs are generally assigned in the order in which processors in that generation and product line are developed. A higher SKU within otherwise-identical processor brands and generations will generally have more features. However, SKU numbers are not recommended for comparison across different generations or product lines.

 

Even Intel says they don't mean anything.

 

Suffix: Meaning

G1-G7: Graphics level (processors with new integrated graphics technology only)

E: Embedded

F: Requires discrete graphics

G: Includes discrete graphics on package

H: High performance optimized for mobile

HK: High performance optimized for mobile, unlocked

HQ: High performance optimized for mobile, quad core

K: Unlocked

S: Special edition

T: Power-optimized lifestyle

U: Mobile power efficient

Y: Mobile extremely low power

X/XE: Unlocked, high end

B: Ball Grid Array (BGA)
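Encoding the chart above as a lookup table makes the point concrete: all the meaning lives in a table you have to memorize, none of it in the model number itself. A quick sketch (the `decode` helper and its output format are hypothetical, the suffix meanings are the ones quoted above):

```python
import re

# Intel suffix meanings as listed in the quoted chart above.
SUFFIXES = {
    "E": "Embedded",
    "F": "Requires discrete graphics",
    "G": "Includes discrete graphics on package",
    "H": "High performance optimized for mobile",
    "HK": "High performance optimized for mobile, unlocked",
    "HQ": "High performance optimized for mobile, quad core",
    "K": "Unlocked",
    "S": "Special edition",
    "T": "Power-optimized lifestyle",
    "U": "Mobile power efficient",
    "Y": "Mobile extremely low power",
    "X": "Unlocked, high end",
    "XE": "Unlocked, high end",
    "B": "Ball Grid Array (BGA)",
}

def decode(model):
    """Strip the trailing letters off e.g. 'i9-12900KF' and explain each."""
    match = re.search(r"([A-Z]+)$", model)
    if not match:
        return ["(no suffix)"]
    # Match known two-letter suffixes first, then single letters.
    s, out = match.group(1), []
    while s:
        for length in (2, 1):
            if s[:length] in SUFFIXES:
                out.append(f"{s[:length]}: {SUFFIXES[s[:length]]}")
                s = s[length:]
                break
        else:
            out.append(f"{s[0]}: (unknown)")
            s = s[1:]
    return out

print(decode("i9-12900KF"))
```

Without the table, "KF" is noise; with it, it's "unlocked, needs a discrete GPU" — which is exactly the problem: the customer standing at the shelf doesn't have the table.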

 

The letters don't mean anything at all to someone unless they've been told to get or avoid a specific one.

 

Even if I wanted something more complicated, out of a nerdy compulsion to put things into boxes, I'd settle for this:

 

Intel (brand) - C(generation)[P(performance cores)E(efficiency cores)T(total threads)G(GPU performance level; see the Nvidia tiers above)]

Intel i9-C12[P8E8T24G0] for what would otherwise be an i9-12900F

Intel i5-C12[P6E0T12G0] for what would otherwise be an i5-12400F

But that number-and-alphabet soup needs to mean something I can immediately translate to "yes, P8E8T24G0 is better than P8E0T12G0."
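And the proposed format really is mechanically comparable. Here's a minimal sketch; the `C..[P..E..T..G..]` format is this post's hypothetical proposal, not any real Intel scheme:

```python
import re

def parse(model):
    """Parse the proposed 'C<gen>[P<p>E<e>T<t>G<g>]' spec string into a dict."""
    m = re.search(r"C(\d+)\[P(\d+)E(\d+)T(\d+)G(\d+)\]", model)
    if not m:
        raise ValueError(f"not in C..[P..E..T..G..] form: {model}")
    gen, p, e, t, g = map(int, m.groups())
    return {"gen": gen, "pcores": p, "ecores": e, "threads": t, "gpu": g}

def at_least_as_good(a, b):
    """True if spec string a matches or beats b on every field."""
    pa, pb = parse(a), parse(b)
    return all(pa[k] >= pb[k] for k in pa)

# P8E8T24G0 beats P6E0T12G0 on every field, and you can see it at a glance.
print(at_least_as_good("Intel i9-C12[P8E8T24G0]", "Intel i5-C12[P6E0T12G0]"))  # True
```

A shopper can do the same comparison in their head; that's the whole pitch, and it's something no amount of squinting at "12600 vs 12700" gets you.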

 

The part on the left is the marketing; the part on the right is for nerds. Apple makes no distinction about why any part is better than another, because you don't get to choose. Max, Pro and Ultra are meaningful marketing-wise, but they don't really help you understand why.

If Apple sold CPUs as more than a BYO option, then yes, you'd have something like this:

M1[P4E4G7M16]

M1[P4E4G8M16]

M1[P6E2G14M16] (Pro)

M1[P8E2G16M16] (Pro)

M1[P8E2G24M16] (Max)

M1[P8E2G32M16] (Max)

M1[P16E4G64M32](Ultra)
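Written in that hypothetical notation, the lineup even ranks itself; no spec sheet or tier names needed. A sketch (the notation and the `key` helper are this post's invention):

```python
import re

# The M1 lineup written in the hypothetical P/E/G/M notation above.
lineup = [
    "M1[P4E4G7M16]",
    "M1[P4E4G8M16]",
    "M1[P6E2G14M16]",   # Pro
    "M1[P8E2G16M16]",   # Pro
    "M1[P8E2G24M16]",   # Max
    "M1[P8E2G32M16]",   # Max
    "M1[P16E4G64M32]",  # Ultra
]

def key(spec):
    """Sort key: (performance cores, GPU cores) pulled straight from the name."""
    p, e, g, mem = map(int, re.search(r"P(\d+)E(\d+)G(\d+)M(\d+)", spec).groups())
    return (p, g)

# The top of the sorted list is the Ultra, read directly from the string.
print(sorted(lineup, key=key, reverse=True)[0])
```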

 

But Apple doesn't need to make that distinction because each BYO device only has like two options.

[screenshot: Apple's build-to-order chip options]

The only thing they don't distinguish is the performance/efficiency core split.

 

Meanwhile, Dell:

[screenshot: a Dell product listing showing the iGPU model but no core count]

No core count, but for some reason the iGPU part matters?

[screenshot: two Dell listings, one with an i3-10105 and one with an i3-12100]

Both of these are i3s, but the CPU number means absolutely nothing here.

An i3-10105 is a 4-core, 8-thread CPU.

An i3-12100 is a 4-performance-core, 0-E-core, 8-thread CPU.

For all intents and purposes these may as well be the same CPU; is that worth a $20 discount (ignoring the HDD)?

Yet the benchmarks say no; the 12th gen is about 60% faster:

[benchmark chart comparing the i3-10105 and i3-12100]

 

 

If it were up to me, we would find a benchmark that can't be cheated and put the "minimum achievable" performance right on the box, so when you see a dozen of these things side by side on the shelf, you can compare them, even against a previous-generation model.

 

But I digress. Marketing people want big numbers; nerds want every last feature spelled out in the model number. The right middle ground has to be something for people who walk into a retail store and can just compare the two most significant numbers, without spending an hour researching before they go to the store.

 

Apple does all that for you. You get two choices.

 


I wonder if the M1 Ultra has its own hidden UltraFusion™ connectors to link 2 of them side by side, giving us M1 Voltron. (I'm only joking about the name... I wonder how much this tech can do)

🖥️ Motherboard: MSI A320M PRO-VH PLUS  ** Processor: AMD Ryzen 2600 3.4 GHz ** Video Card: Nvidia GeForce 1070 TI 8GB Zotac 1070ti 🖥️
🖥️ Memory: 32GB DDR4 2400  ** Power Supply: 650 Watts Power Supply Thermaltake +80 Bronze Thermaltake PSU 🖥️

🍎 2012 iMac i7 27";  2007 MBP 2.2 GHZ; Power Mac G5 Dual 2GHZ; B&W G3; Quadra 650; Mac SE 🍎

🍎 iPad Air2; iPhone SE 2020; iPhone 5s; AppleTV 4k 🍎


12 minutes ago, Kisai said:

No you silly. They should have just called every part i3, i5 or i7 and short of an actual change in the feature set, dispense with the XXXXX numbers and letters.

The biggest problem Intel has is that the numbers mean nothing, just like with AMD's CPUs/GPUs and Nvidia's GPUs.

 

Here, I'll even spell it out.

 

2010(2nd gen):

i1 = Dual Core, no hyper-threading, no boosting (e.g. what's currently Pentium, Celeron and Atom, and formerly Core Solo/Core 2 Duo)

i3 = Dual Core, hyper threading, no boosting

i5 = Quad Core, no hyper threading, no boosting

i7 = Quad Core, hyper threading, no boosting

i9 = Quad Core, hyper threading, boosting

Xeon E3 = same as i7 part, but ECC memory

 

2017 (8th gen):

i1 = Dual Core (Y/U and Atom/Celeron/Pentium brands)

i3 = Quad Core, no hyper threading (former i5 line)

i5 = Six Core, no hyper threading

i7 = Six Core, hyper threading

i9 = Six Core, hyper threading, boosting

 

2021 (12th gen):

i1 = Quad Core

i3 = Six Core

i5 = Eight Core 

i7 = Eight Core + Hyperthreading

i9 = Eight Core + Hyperthreading + Boosting

Where and how exactly do you put the generation or architecture in the product model number? Also, it is literally impossible to whittle the product lineup down to that few models, ergo you need more than just simple iX numbering.

 

The numbers in fact do mean a lot; if you don't know them, Intel literally offers a document/guide on how to read them. Not that you need to, it's extremely obvious.

 

Anyone not willing to figure it out or look at them can and will default to "i7/i9 is best" and just buy the current one, they aren't going to lose out in any way.


15 minutes ago, Video Beagle said:

I wonder if the M1 Ultra has its own hidden UltraFusion™ connectors to link 2 of them side by side, giving us M1 Voltron. (I'm only joking about the name... I wonder how much this tech can do)

There is no "M1 Ultra" die; it's two M1 Max dies on an interposer using the known but not officially confirmed chip interconnects. Their existence has been known for quite some time. There are no more interconnects on the M1 Max die, so the Ultra is the current maximum using M1 Max dies.

 

Also M1 Plaid



27 minutes ago, leadeater said:

Handbrake, Blender (pure CPU), Matlab, Excel (eh MS), 7zip (it leads and lags).

The issue with comparing between architectures with these tools is that they are extremely well optimised for x86, making extensive use of different CPU feature sets and private vendor-specific extensions (AVX-512 etc.). But on ARM they do not make use of much, if any, of the equivalent optional features.

 


3 minutes ago, hishnash said:

The issue with comparing between architectures with these tools is that they are extremely well optimised for x86, making extensive use of different CPU feature sets and private vendor-specific extensions (AVX-512 etc.). But on ARM they do not make use of much, if any, of the equivalent optional features.

Yep, that's why I said they are "non-Apple software".

 

However, Geekbench and CB illustrate the same point. Comparing unconstrained desktop CPUs made to HIT THE NUMBERS against constrained CPUs/SoCs is probably the most flawed thing to do.

