October 18th Apple Event - Unleashed - Apple Silicon, MacBook Pro upgrades, HomePod mini, AirPods 3rd Generation

BondiBlue

Summary

The Apple Unleashed event is over! Here are the new products that were announced:

  • AirPods
    • New AirPods 3rd Generation: MagSafe wireless charging, Adaptive EQ, and longer battery life
  • HomePod mini
    • In addition to Space Gray and White, HomePod mini now comes in Blue, Yellow, and Orange
  • Apple Music
    • New Voice Plan starts at $4.99/month and allows access to Apple Music through Siri, including new custom playlists
  • And yes, new Macs and Apple Silicon
    • The M1 chip is now part of a lineup of three SoC designs, including the M1, M1 Pro, and M1 Max
    • The MacBook Pro has been redesigned, bringing back more ports, MagSafe charging, better battery life, and more
      • The 14" MacBook Pro starts at $1999, and the 16" starts at $2499. The 13" M1 MBP is now the base model
      • Support for up to 64GB of unified memory and 8TB of flash storage
      • M1 Pro and Max both have up to 10 CPU cores, and M1 Max can have up to 32 GPU cores
      • Fast charging has been added to the MacBook Pro, allowing for up to 50% charge in only 30 minutes

 

My thoughts

I'm really excited for the new MacBook Pros. I plan on upgrading to a new 16" MacBook Pro within the next couple months, and I can't wait. 

 

Sources

Apple Events

The Verge

The combination of a supposed Mac mini “Pro” with M1 Pro/Max and a supposed (for the first time since the 2011 Thunderbolt Display) reasonably priced Apple-branded display sounds so good that the question would be: where’s the catch?

 

The question is motivated by the fact that Apple for the longest time has gatekept the “ultimate” desktop experience to iMacs. The closest thing lately would have been a 2018 Intel Coffee Lake Mac mini + 64GB SODIMM RAM + AMD 5700 XT in an eGPU enclosure + LG UltraFine 5K (2nd hw revision with DisplayPort 1.4). But it would still lag behind iMacs in terms of CPU. That would change with an M1 Pro/Max Mini.

 

Maybe the catch will be that the 27” Apple Thunderbolt Display 2022 is a miniLED 60Hz whereas the 27” iMac Pro has a miniLED ProMotion 120Hz. Not even sure 5.5K 120Hz would be possible for the external display over a single cable with current ports.

 

On the other hand, lately Macs feel like the only catch is that there’s no catch (except the fact you have to pay upfront for any RAM/SSD upgrade). So maybe this new “have your cake and eat it” “it’s thicker, so what” Apple won’t mind if some Mini configurations eat into iMac sales.


Well, the review many people have been waiting for is finally here:

 

 

Phobos: AMD Ryzen 7 2700, 16GB 3000MHz DDR4, ASRock B450 Steel Legend, 8GB Nvidia GeForce RTX 2070, 2GB Nvidia GeForce GT 1030, 1TB Samsung SSD 980, 450W Corsair CXM, Corsair Carbide 175R, Windows 10 Pro

 

Polaris: Intel Xeon E5-2697 v2, 32GB 1600MHz DDR3, ASRock X79 Extreme6, 12GB Nvidia GeForce RTX 3080, 6GB Nvidia GeForce GTX 1660 Ti, 1TB Crucial MX500, 750W Corsair RM750, Antec SX635, Windows 10 Pro

 

Pluto: Intel Core i7-2600, 32GB 1600MHz DDR3, ASUS P8Z68-V, 4GB XFX AMD Radeon RX 570, 8GB ASUS AMD Radeon RX 570, 1TB Samsung 860 EVO, 3TB Seagate BarraCuda, 750W EVGA BQ, Fractal Design Focus G, Windows 10 Pro for Workstations

 

York (NAS): Intel Core i5-2400, 16GB 1600MHz DDR3, HP Compaq OEM, 240GB Kingston V300 (boot), 3x2TB Seagate BarraCuda, 320W HP PSU, HP Compaq 6200 Pro, TrueNAS CORE (12.0)


Just a thing:

Apple has been using benchmarks that favour memory bandwidth. specviewperf8_fp is still impressive, but Apple's slides use specviewperf8_int, which favours memory bandwidth. So NUMA-optimised workloads (basically all PC ports) won't perform as well as UMA workloads. Nothing that affects the normal user, though.


On 11/4/2021 at 10:23 AM, saltycaramel said:

Maybe the catch will be that the 27” Apple Thunderbolt Display 2022 is a miniLED 60Hz whereas the 27” iMac Pro has a miniLED ProMotion 120Hz. Not even sure 5.5K 120Hz would be possible for the external display over a single cable with current ports.

Not using standard protocols, no. Apple could, if they wanted, be very Apple and build their own (higher-compression) display stream compression (after all, the M1 Pro/Max have ProRes encoders). But if they just used standard DisplayPort over TB4, then no, you can't run a 5.5K 120Hz display, and you absolutely can't run it if you also want that display to have some USB-C ports on it that users can use for things (something that is very likely).
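As a rough back-of-the-envelope check, here's a sketch assuming (for concreteness) a 5K (5120×2880) panel and 10-bit colour, and ignoring blanking intervals; a true "5.5K" panel would need even more. The function and figures below are illustrative, not an Apple or VESA spec:

```swift
import Foundation

// Rough uncompressed video bandwidth in Gbps (ignores blanking intervals,
// so real link requirements are somewhat higher).
func rawBandwidthGbps(width: Int, height: Int, refreshHz: Int, bitsPerPixel: Int) -> Double {
    Double(width * height * refreshHz * bitsPerPixel) / 1e9
}

// Assumed panel: 5K (5120x2880) at 10-bit RGB (30 bpp).
let at60  = rawBandwidthGbps(width: 5120, height: 2880, refreshHz: 60,  bitsPerPixel: 30)
let at120 = rawBandwidthGbps(width: 5120, height: 2880, refreshHz: 120, bitsPerPixel: 30)

// DisplayPort 1.4 HBR3: 4 lanes x 8.1 Gbps raw, ~80% usable after 8b/10b coding.
let dp14Payload = 4.0 * 8.1 * 0.8

print(String(format: "5K @ 60Hz   ~ %.1f Gbps", at60))        // ~26.5 Gbps
print(String(format: "5K @ 120Hz  ~ %.1f Gbps", at120))       // ~53.1 Gbps
print(String(format: "DP 1.4 payload ~ %.1f Gbps", dp14Payload)) // ~25.9 Gbps
```

Even 5K at 60Hz barely fits in a single uncompressed DP 1.4 stream, and 120Hz is roughly double the available payload, which is why DSC (or something proprietary) would be required.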

If Apple did build their own protocol, the screen would still support DisplayPort over TB so that other Macs can display a 60Hz image. But getting other devices (non-Macs) to work with it will be just as hard as getting a Pro Display XDR working on other devices. Very few TB ports on other devices are wired up to support the highest-end DisplayPort tunnelling (that is within the DP spec).


On 11/4/2021 at 10:23 AM, saltycaramel said:

Apple won’t mind if some Mini configurations eat into iMac sales.

I don't think Apple really cares if one of their products eats another product's sales; this has been their go-to thing for years, just look at how the iPhone ate up the iPod. They know it is much better to cannibalise your own product than to hold back and let your competition have a shot.


Now I'm rather confused. I wanted to recommend the M1 Pro for 3D rendering and Unreal. However, it turns out that, despite what reviewers say, it's not all that good for that workflow, even with the increased GPU power on the M1 Max. I watched that video and it seems that it just struggles with ray tracing due to the platform itself:

I would really like to have LTT or a big channel look into this. These are the results of the OctaneRender benchmark:

 

[Attached screenshot: OctaneRender benchmark results]


10 hours ago, IAmAndre said:

Now I'm rather confused. I wanted to recommend the M1 Pro for 3D rendering and Unreal. However, it turns out that, despite what reviewers say, it's not all that good for that workflow, even with the increased GPU power on the M1 Max. I watched that video and it seems that it just struggles with ray tracing due to the platform itself:

The M1 Pro or the Max was never going to be the best at every single task; there's no way a GPU with general compute performance slightly below an RTX 3060 (M1 Max) or similar to a GTX 1660 Ti (M1 Pro) was going to be. Similarly, the Nvidia GPUs have more supported computation data types: FP32, FP16, BF16, INT32, Tensor FP16, Tensor FP16 accumulate FP32, Tensor BF16 accumulate FP32, Tensor TF32, Tensor INT8, Tensor INT4, as well as a mature RT hardware acceleration path.

 

It really should not be that much of a shock that a company that has been in the GPU business for decades, with top-class architectures and software tools, offering more than 3 times the peak raw performance just in FP32 alone, is faster when used correctly. It also shouldn't be a surprise that an SoC/GPU designed for professional applications is exceedingly good at that too.
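As a rough illustration of that "3 times" figure, here's a quick peak-FP32 estimate using the usual ALUs × 2 (FMA) × clock formula. The ALU counts and clocks below are the commonly reported figures, not measured numbers, so treat the result as a ballpark:

```swift
import Foundation

// Peak FP32 throughput in TFLOPS: ALU count x 2 ops per FMA x clock (GHz) / 1000.
func peakFP32TFLOPS(alus: Int, clockGHz: Double) -> Double {
    Double(alus) * 2.0 * clockGHz / 1000.0
}

// Commonly reported figures (assumptions, not official in every case):
let m1Max   = peakFP32TFLOPS(alus: 32 * 128, clockGHz: 1.3)  // 32-core GPU, ~10.6 TFLOPS (Apple quotes 10.4)
let rtx3090 = peakFP32TFLOPS(alus: 10496,    clockGHz: 1.7)  // ~35.7 TFLOPS

print(String(format: "M1 Max   ~ %.1f TFLOPS FP32", m1Max))
print(String(format: "RTX 3090 ~ %.1f TFLOPS FP32 (~%.1fx)", rtx3090, rtx3090 / m1Max))
```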

 

Different things for different purposes; choice and options are great.


1 hour ago, leadeater said:

Similarly, the Nvidia GPUs have more supported computation data types: FP32, FP16, BF16, INT32, Tensor FP16, Tensor FP16 accumulate FP32, Tensor BF16 accumulate FP32, Tensor TF32, Tensor INT8, Tensor INT4, as well as a mature RT hardware acceleration path.

For sure. Apple of course have opted to do many of these things in other units of the SoC, and since these all have shared RW access to the memory, as long as you can rewrite your applications to make use of the NPU, AMX, etc., the entire SoC does provide very good support for these data types. But that does require quite a rewrite of your code base.

The biggest missing data type on the GPU side is FP64; of course, the AMX unit does provide very good FP64 support and even FP80 support. But it would be much simpler to have good FP64 scaling GPU-side, so you can mix that with other GPU compute without the massive added complexity of dispatching back to the CPU to then issue Accelerate commands to leverage the AMX.

One aspect many benchmarks will not really expose is the massive amount of memory that the GPU has access to. Of course this depends on what you are doing, but even in ray tracing the M1 Max will outperform a desktop 3090 if (and only if) your scene is so large that it can't fit in the 24GB of the 3090 but can fit within the 64GB of the M1 Max. Many of these benchmarks don't even go over 500MB of VRAM usage; they are tuned to have a small VRAM load so that you can compare over a large number of devices and so that the download size is small and manageable.
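For a sense of what that CPU-side path looks like, here's a minimal sketch of an FP64 matrix multiply going through Apple's BLAS in Accelerate, which is reportedly what gets dispatched to the AMX units on M1-family chips. The matrix sizes are placeholders:

```swift
import Accelerate

// FP64 matrix multiply C = A * B via Apple's BLAS (cblas_dgemm).
// This CPU-side path is reportedly backed by the AMX coprocessor on M1-family
// chips, which is why FP64-heavy work stays off the Metal GPU (no FP64 there).
let m = 512, n = 512, k = 512                      // placeholder sizes
let a = [Double](repeating: 1.0, count: m * k)
let b = [Double](repeating: 2.0, count: k * n)
var c = [Double](repeating: 0.0, count: m * n)

cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
            Int32(m), Int32(n), Int32(k),
            1.0, a, Int32(k),
            b, Int32(n),
            0.0, &c, Int32(n))

print(c[0])  // 1024.0 for these inputs (512 terms of 1.0 * 2.0)
```

The point being: the fast FP64 path lives behind a CPU library call, not a Metal kernel, so mixing it with GPU compute means bouncing work between the two.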


1 hour ago, leadeater said:

really should not be that much of a shock that a company that has been in the GPU business for decades, with top-class architectures and software tools, offering more than 3 times the peak raw performance just in FP32 alone, is faster when used correctly. It also shouldn't be a surprise that an SoC/GPU designed for professional applications is exceedingly good at that too.

That's not the point. There are many reviews out there comparing the GPU's gaming and productivity performance to Nvidia offerings, including 3D rendering with Blender, but there's no mention of workflows like this where it trails a long way behind Nvidia GPUs.


1 minute ago, hishnash said:

For sure. Apple of course have opted to do many of these things in other units of the SoC, and since these all have shared RW access to the memory, as long as you can rewrite your applications to make use of the NPU, AMX, etc., the entire SoC does provide very good support for these data types. But that does require quite a rewrite of your code base.

The biggest missing data type on the GPU side is FP64; of course, the AMX unit does provide very good FP64 support and even FP80 support. But it would be much simpler to have good FP64 scaling GPU-side, so you can mix that with other GPU compute without the massive added complexity of dispatching back to the CPU to then issue Accelerate commands to leverage the AMX.

.... That's why I prefer when channels like LTT/TechQuickie explain stuff 


3 minutes ago, IAmAndre said:

That's not the point. There are many reviews out there comparing the GPU's gaming and productivity performance to Nvidia offerings, including 3D rendering with Blender, but there's no mention of workflows like this where it trails a long way behind Nvidia GPUs.

I have seen a load of people benchmark Blender claiming they are using the GPU, but it is clear they are using a version of Blender new enough not to have any GPU support on Apple devices, and they are not technical enough to have built the 3.1 branch that has GPU support, so all their metrics are in fact CPU rendering. (There is a bug in Blender that lets you select GPU on macOS but then uses the CPU anyway.)

 


Just now, hishnash said:

I have seen a load of people benchmark Blender claiming they are using the GPU, but it is clear they are using a version of Blender new enough not to have any GPU support on Apple devices, and they are not technical enough to have built the 3.1 branch that has GPU support, so all their metrics are in fact CPU rendering. (There is a bug in Blender that lets you select GPU on macOS but then uses the CPU anyway.)

Yep, Blender ain't using Apple GPUs yet. Will be interesting once it does.


3 minutes ago, IAmAndre said:

.... That's why I prefer when channels like LTT/TechQuickie explain stuff 

Videos that explain stuff take a long time; I do not expect LTT to put in the effort to really explain this stuff for these SoCs, as they are so far away from what they are used to in the x86 space. How you make the most out of these SoCs is very, very different from how you make the most out of x86 + a dedicated GPU.

For example, Apple have been doing hardware-accelerated GPU scheduling on their GPUs for 10 years already, and they currently do a lot more than what is proposed in the Windows DX space for this.

 


2 minutes ago, leadeater said:

Yep, Blender ain't using Apple GPUs yet. Will be interesting once it does.

It is not even using the AMX units (that would provide a massive perf boost for CPU tasks); it's just vanilla C/C++ loops. I wonder if it even has SIMD/NEON support or only AVX (which is not used on ARM). The compiler might be able to make up for this if the loops are easy enough.


11 minutes ago, hishnash said:

For sure. Apple of course have opted to do many of these things in other units of the SoC, and since these all have shared RW access to the memory, as long as you can rewrite your applications to make use of the NPU, AMX, etc., the entire SoC does provide very good support for these data types. But that does require quite a rewrite of your code base.

The Apple SoCs can do a lot of these but not all, or some not at the increased throughput you'd expect when you reduce the data type size. Similar case with older-generation Nvidia archs: they had support for many of these, however not all went any faster.

 

Also, meaningful FP64 only exists on GA100-based products, so basically the A100 family. FP64 on GA102-GA106 is very slow as there are very few FP64 hardware units in them, so you may find FP64 on the M1 Pro/Max is actually faster going through the CPU.


1 minute ago, leadeater said:

Also, meaningful FP64 only exists on GA100-based products, so basically the A100 family. FP64 on GA102-GA106 is very slow as there are very few FP64 hardware units in them, so you may find FP64 on the M1 Pro/Max is actually faster going through the CPU.

It will be faster through the CPU on the M1 Pro/Max for sure, since Metal does not support the FP64 data type, so you would need to use a C++ shim framework to do this on the GPU (that is the benefit of Metal being C++14-based: you can more-or-less pull off-the-shelf frameworks to help with these things).

 


3 minutes ago, leadeater said:

The Apple SoCs can do a lot of these but not all, or some not at the increased throughput you'd expect when you reduce the data type size.

The NPU can execute FP16 and INT16 (and lower) operations at very high throughput, and throughput does double as you reduce precision.


34 minutes ago, IAmAndre said:

That's not the point. There are many reviews out there comparing the GPU's gaming and productivity performance to Nvidia offerings, including 3D rendering with Blender, but there's no mention of workflows like this where it trails a long way behind Nvidia GPUs.

Well, that is because you have to compare against things that run on macOS, and Nvidia GPUs are not supported on macOS, so the comparisons you'll see are the more self-interested ones, as in "these are what I use, so that is what I'll compare".

 

Going into all the computational data science stuff sort of isn't all that relevant, or even largely done on Apple devices, since that entire industry is Nvidia/CUDA and you'll be using remote systems to do it from the Apple device. That might change a bit with the existence of the M1 Pro/Max, but not any time soon, because it's largely CUDA or bust, programming and tools wise.

 

Similar story for real-time game rendering and development: everything is tailored to a particular system architecture type and wouldn't currently utilize or inherit many of the differences/benefits of the M1 Pro/Max. Unreal and Unity have not had the time nor the push to tailor themselves to this type of architecture.


26 minutes ago, hishnash said:

It will be faster through the CPU on the M1 Pro/Max for sure, since Metal does not support the FP64 data type, so you would need to use a C++ shim framework to do this on the GPU (that is the benefit of Metal being C++14-based: you can more-or-less pull off-the-shelf frameworks to help with these things).

 

I mean faster on M1 Pro/Max CPU than RTX 3090 GPU for FP64 (556 GFLOPs).

 

Edit:

A100 is FP64 9.7 TFLOPs btw, for reference.
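That 556 GFLOPS figure is consistent with GA102's 1:64 FP64:FP32 rate; a quick sanity check, assuming roughly 35.6 TFLOPS peak FP32 for the 3090:

```swift
// GA102 (RTX 3090) runs FP64 at 1/64 of its FP32 rate.
let rtx3090FP32TFLOPS = 35.6                        // assumed peak FP32
let fp64GFLOPS = rtx3090FP32TFLOPS / 64.0 * 1000.0  // ~556 GFLOPS

print("RTX 3090 FP64 ~ \(Int(fp64GFLOPS)) GFLOPS")
```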

 

24 minutes ago, hishnash said:

The NPU can execute FP16 and INT16 (and lower) operations at very high throughput, and throughput does double as you reduce precision.

Yes, but can it go all the way down to INT4, can it do accumulate to FP32, etc.? That's what I mean by Nvidia supporting more. The fact that Apple can do some of these isn't the point.

 

Edit:

Also an RTX 3080 is 500 times faster at inferencing, A100 1000 times faster (with options up to 80GB VRAM).


1 hour ago, saltycaramel said:

A must-watch account about how these came together 

Haven't finished watching it yet, literally just started, but damn that's some strong ass sharpening on the video sections with them speaking into their cameras.


  • 2 weeks later...
On 11/3/2021 at 12:49 PM, leadeater said:

I'd really like to see it happen, even if it requires a thick (slightly) boy Mac Mini to do it. I've always liked the Mini because of how small it is while actually still being good.

I just had a thought, and I wanted to get your opinion on something. If it meant that the Mac mini could become a real powerhouse of a machine with a very high-end Apple Silicon SoC along with excellent cooling, do you think Apple would consider moving back to using external power supplies for the mini?



2 hours ago, BondiBlue said:

I just had a thought, and I wanted to get your opinion on something. If it meant that the Mac mini could become a real powerhouse of a machine with a very high-end Apple Silicon SoC along with excellent cooling, do you think Apple would consider moving back to using external power supplies for the mini?

 

Supposedly, that’s what they’re gonna do.

 

The new mini will sport a round magnetic connector for DC power like the 24” M1 iMac.

 

Wonder if it will also have two gigabit NICs (one for the gigabit port on the Mini and one for the gigabit port on the power brick) or if it won’t support that brick with Ethernet. Keep in mind that the Ethernet power brick on the M1 iMac is just an extension cord for the Ethernet pins; the actual NIC is inside the machine.


3 hours ago, BondiBlue said:

I just had a thought, and I wanted to get your opinion on something. If it meant that the Mac mini could become a real powerhouse of a machine with a very high-end Apple Silicon SoC along with excellent cooling, do you think Apple would consider moving back to using external power supplies for the mini?

Can't see why they wouldn't, they did for the iMac.

