
Apple's M1 Max Benchmarked in Adobe Premiere Pro

29 minutes ago, Kisai said:

Usually popular anime is based on a manga or web-novel.

It almost exclusively is: manga and light novels. Original anime is extremely rare, and most of it is movies, not TV series.


13 minutes ago, leadeater said:

It almost exclusively is: manga and light novels. Original anime is extremely rare, and most of it is movies, not TV series.

Pretty much all isekai and tangentially similar anime from the last 5 years have originated as web novels, because the genre was trendy around 2018. That's also why they have very long titles.

 

Tensei shitara Slime Datta Ken 2nd Season Part 2

Known to Westerners as the "Slime anime": That Time I Got Reincarnated as a Slime.

Web novel 2013-2016; manga 2015, licensed in English in 2016.

Anime 2018-present.

 

I can name off a dozen other titles like this, but we're going off-topic.

 

Suffice it to say, with a few exceptions (mainly Netflix), no other service actually airs anime in any significant or relevant market. What does Apple TV have? As far as I know, it just integrates Funimation (which acquired AnimeLab, which is where Australia/NZ got their anime from). People basically have a choice between Netflix, "nothing", and "Sony" (which owns Funimation and Crunchyroll).

 


I've mostly been using Netflix when looking at what anime is around. I did sign up for a (free) Crunchyroll account but didn't really know where to start on there.


42 minutes ago, Kisai said:

Pretty much all isekai and tangentially similar anime from the last 5 years have originated as web novels, because the genre was trendy around 2018. That's also why they have very long titles.

No, they are not; that's only a very recent thing, and only for a few like Tower of God. Slime, the anime, is based on the manga, not the web novel. Web novels and web publications are just a new, and now sort of accepted, way of getting noticed as a new artist or getting original ideas seen. The old way was through a lot of writers' competitions and other similar schemes.

 

Most manga going back a long time had an original source, so there's not much point in drawing that distinction. Since we're talking about anime adaptations, then like I said, these are mostly from manga and light novels.

 

42 minutes ago, Kisai said:

As far as I know, it just integrates Funimation (which acquired AnimeLab, which is where Australia/NZ got their anime from).

This particular situation was quite annoying. I was an original Funimation NOW user, and at the time I wasn't sure if I wanted that or AnimeLab. After a year or so of using it, Funimation decided to no longer allow AUS/NZ access to Funimation NOW and migrated us to AnimeLab; the problem was that not all the content from Funimation NOW was on AnimeLab, so I literally got cut off from things I was watching. Now, after another few years of using AnimeLab, I'm getting kicked back to Funimation NOW again, again losing access to content. AHHHHHHHHHHHHHHhhh!! (I also pay for CR.)


8 hours ago, Forbidden Wafer said:

The question for people developing with Metal instead of Vulkan is: why? Why do that? Less info, fewer samples, fewer tools, fewer developers, more work, etc. All of that on a platform that is not afraid of dropping its APIs out of nowhere, for no good reason, leaving you behind.

Less info, fewer samples, and fewer devs, for sure. Tooling is not so clear-cut, though: Metal has some very good tooling in place, as good as VK's or even better (at least on Apple GPUs; the debug support and low-level tracing ability is very extensive, as Apple has clearly put silicon effort in to make this possible).

Apple is not going to drop Metal, and when Apple does drop APIs developers tend to get 7 to 10 years' notice. If they were ever going to drop Metal it would more likely be 20 years' notice, since the entire OS depends on it from the boot screen onwards.


https://www.anandtech.com/show/17024/apple-m1-max-performance-review/6

So if anyone needed confirmation of the M1 Max being near enough to RTX 3060 performance, here you go.

[Chart: GFXBench 5.0 Aztec Ruins High 1440p Offscreen]

Cool... but...

[Chart: Shadow of the Tomb Raider - Enthusiast]

The caveat has to be stated that these games aren't native Apple Silicon on macOS, because they were released in 2019. https://applesilicongames.com/games/FDG34wr6pF7NiRwmzQFMxT/shadow-of-the-tomb-raider

 

So presumably, a native port to Apple Silicon would perform at least as well as a desktop RTX 3060.

 

 


2 hours ago, Kisai said:

The caveat has to be stated that these games aren't native Apple Silicon on macOS, because they were released in 2019. https://applesilicongames.com/games/FDG34wr6pF7NiRwmzQFMxT/shadow-of-the-tomb-raider

 

So presumably, a native port to Apple Silicon would perform at least as well as a desktop RTX 3060.

 

 

It's interesting to see how well Apple silicon performs even when the CPU is limited by translation processes. I do wonder what kinds of architecture differences allow the Apple GPU to be so great at certain professional workloads while being significantly behind in games compared to similar AMD/Nvidia GPUs when not CPU limited. 


On 10/22/2021 at 10:50 AM, SignatureSigner said:

Seems like Apple is breathing new life into the Macs. MagSafe, good performance, decent keyboard, great trackpad; what's next, upgradable RAM?

ha ha lol never -apple

I hope for RGB keyboards 😉


 


50 minutes ago, thechinchinsong said:

I do wonder what kinds of architecture differences allow the Apple GPU to be so great at certain professional workloads while being significantly behind in games compared to similar AMD/Nvidia GPUs when not CPU limited. 

Poor implementation on Metal is likely to be part of it. I forget which game it is right now, but there's one game, written in Vulkan and using MoltenVK, that performs very well; the developers originally struggled with it and somehow ended up having Apple's Metal development team help them fix it.


1 hour ago, thechinchinsong said:

It's interesting to see how well Apple silicon performs even when the CPU is limited by translation processes. I do wonder what kinds of architecture differences allow the Apple GPU to be so great at certain professional workloads while being significantly behind in games compared to similar AMD/Nvidia GPUs when not CPU limited. 

It has hardware encoders and decoders for specific formats, as well as really good bandwidth-to-latency characteristics and high amounts of cache. It's not just one thing. However, you can see what happens when you start to fall outside the design targets of the SoC: just look at the RED video format, where performance is way worse than it is for H.264 and ProRes.
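To make the hardware encode/decode point concrete, here's a minimal sketch (my illustration, to show the idea) of how an app can ask macOS whether the silicon will decode a given codec in hardware. VTIsHardwareDecodeSupported is a real VideoToolbox call; the framing around it is just for demonstration:

```cpp
// Sketch: query hardware decode support on macOS via VideoToolbox.
// Build with: clang++ probe.cpp -framework VideoToolbox -framework CoreMedia
#include <VideoToolbox/VideoToolbox.h>
#include <cstdio>

int main() {
    printf("H.264 hardware decode:  %s\n",
           VTIsHardwareDecodeSupported(kCMVideoCodecType_H264) ? "yes" : "no");
    printf("HEVC hardware decode:   %s\n",
           VTIsHardwareDecodeSupported(kCMVideoCodecType_HEVC) ? "yes" : "no");
    printf("ProRes hardware decode: %s\n",
           VTIsHardwareDecodeSupported(kCMVideoCodecType_AppleProRes422) ? "yes" : "no");
    // Codecs outside the SoC's design targets (e.g. RED's format) have no
    // fixed-function path and fall back to CPU/GPU compute.
    return 0;
}
```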

 

Raw theoretical computation is about that of an RTX 3060, which is why you see it perform around this level for more general compute applications and, of course, games. When you get the workload balance right, with good optimization, and also utilize hardware encode/decode (for video editing at least), you get very good overall utilization of the hardware, which translates to high application performance.

 

Nvidia and AMD GPUs don't have a very good ratio of compute performance to application performance. For example, in Blender's BMW scene often seen in benchmarks, an AMD 3990X CPU (1.57 TFLOPS) will complete the render in ~35 seconds, whereas an RTX 3090 (35.6 TFLOPS) with OptiX will do the same scene in 10 seconds; that's only around 3.5 times faster for nearly 23 times the compute power.

 

For the M1 Pro/Max with Blender on the same BMW scene, the CPU render time is 3 minutes 21 seconds; however, the PCMag CPU/GPU results seem off compared to others'. I don't know exactly what they are doing, but you can at least compare results within their test.

 

Also note that it won't be until Blender 3.1 that more proper optimization for Apple lands in Blender.

 

[Attached: PCMag Blender benchmark charts]


A question I have... and maybe it'd be better served as a separate thread... or video... is... what are Vulkan, Metal, DX12, etc.? Like, I know they're graphics things... but why are there different ones? What do they do differently? Is one better than the others? If so, why doesn't everyone just use the better one? If Metal is causing so many issues, why does Apple use it and not something else?



23 minutes ago, Video Beagle said:

If Metal is causing so many issues, why does Apple use it and not something else?

The easy answer is "because it's Apple and they want to control everything"

 

One reason might be (and I don't know for sure whether this is true) that the Metal API allows Apple to integrate more tightly with their thread scheduler (Grand Central Dispatch), so that better decisions can be made about which jobs get scheduled onto which threads, with what performance priority.
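For what that hypothesis would look like from an app's side, here's a speculative sketch (not documented Metal internals) pairing Metal command encoding with a GCD quality-of-service class, using Apple's metal-cpp C++ wrapper:

```cpp
// Speculative sketch: encode GPU work from a high-QoS GCD queue so the
// scheduler has a priority hint for the CPU side of each frame.
#define NS_PRIVATE_IMPLEMENTATION
#define MTL_PRIVATE_IMPLEMENTATION
#include <Metal/Metal.hpp>      // metal-cpp wrapper
#include <dispatch/dispatch.h>

static void encodeFrame(void* ctx) {
    auto* queue = static_cast<MTL::CommandQueue*>(ctx);
    MTL::CommandBuffer* cmd = queue->commandBuffer();
    // ... encode render/compute passes here ...
    cmd->commit();              // submitted from a user-interactive QoS thread
}

int main() {
    MTL::Device*       device = MTL::CreateSystemDefaultDevice();
    MTL::CommandQueue* queue  = device->newCommandQueue();

    // QOS_CLASS_USER_INTERACTIVE lets the scheduler favour performance
    // cores for this work on Apple Silicon.
    dispatch_queue_t q = dispatch_get_global_queue(QOS_CLASS_USER_INTERACTIVE, 0);
    dispatch_sync_f(q, queue, encodeFrame);
    return 0;
}
```

Whether Metal and GCD actually coordinate more deeply than this under the hood is exactly the part I can't verify.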


19 hours ago, Video Beagle said:

A question I have... and maybe it'd be better served as a separate thread... or video... is... what are Vulkan, Metal, DX12, etc.? Like, I know they're graphics things... but why are there different ones? What do they do differently? Is one better than the others? If so, why doesn't everyone just use the better one? If Metal is causing so many issues, why does Apple use it and not something else?

Let me try to provide a quick summary:

OpenGL, OpenCL, Vulkan, Metal, DX12, and CUDA are APIs (and programming languages) that let developers access the compute and/or 3D display functions exposed by GPUs.

CUDA is a compute-only API that is proprietary to Nvidia; code that runs on the GPU is written in a subset of C++. CUDA is not used for displaying 3D assets (such as games).

OpenGL is a rasterisation-focused display API that provides high-level abstractions. It can be used for some compute tasks, but due to the high-level abstraction it is not very good at getting everything out of your hardware; in exchange it is much more portable. (It has its own shading language that devs need to use, and it is an open standard that any platform/GPU vendor can support if they want.)

OpenCL is a compute-focused API built on top of the C programming language; it is an open standard that anyone can use.

DX1 through DX11 are like OpenGL in that they are high-level, rasterisation-focused APIs, again with their own shading language (however, they are proprietary and owned by Microsoft).

DX12 is a lower-level rasterisation-focused API; this lets devs get more out of their hardware than the older DX11/10 or OpenGL (proprietary and owned by Microsoft).

Vulkan is a lower-level rasterisation-focused API that is, like OpenGL, an open standard that anyone can use.

Metal is a low-level compute-and-display-focused API; think of it as CUDA + Vulkan. Like CUDA it uses a variation of C++; it is owned by Apple and thus only available on Apple devices.


So, on to your question of why Apple uses Metal and not another option.

Firstly, the fact that Metal is a mixed compute-and-rasterisation API is quite different. While you can do compute in Vulkan, Metal does a much better job of it, exposing much more of what you would expect in a compute environment, such as the freedom to play with memory pointers; and, being based on C++, it is much simpler for developers to port their C/C++ (OpenCL/CUDA) projects to Metal than it would be to port them to Vulkan. In fact, if you look at the branch of Blender that is adding Metal support, almost all of the code changes are in the glue code that talks to the GPU (sending data, getting data back, etc.). The code that runs on the GPU needed almost no changes to work as a Metal compute shader, something like the sketch below.
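As a rough illustration of that claim (my sketch, not code from the Blender branch), here is what a trivial compute kernel looks like in the Metal Shading Language; the body is plain C++, which is why CUDA/OpenCL kernels port over almost mechanically:

```cpp
// Metal Shading Language (a C++ dialect): classic SAXPY, one thread per element.
#include <metal_stdlib>
using namespace metal;

kernel void saxpy(device float*       y [[buffer(0)]],
                  device const float* x [[buffer(1)]],
                  constant float&     a [[buffer(2)]],
                  uint i [[thread_position_in_grid]])
{
    y[i] = a * x[i] + y[i];
}

// The equivalent CUDA kernel, for comparison: only the attribute syntax
// and the thread-index query differ.
//
//   __global__ void saxpy(float* y, const float* x, float a) {
//       int i = blockIdx.x * blockDim.x + threadIdx.x;
//       y[i] = a * x[i] + y[i];
//   }
```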

But this is not the entire story. The rest of it comes down to being able to shape the API to match the hardware, and to match that to what is optimal to do in hardware to improve perf/W. Since Vulkan is a designed-by-committee API, with every GPU vendor pushing for their own little bits, it ends up being a lowest common denominator of them all. This would effectively force Apple (who would not have that much control) to build GPUs that are very much the same as AMD's, NV's, Intel's, etc.; not only is that hard from a patent perspective, it is also bad from a perf/W perspective. Some of the stuff Apple has been doing deeply builds on this ability to have their own API that matches the hardware's functions much better.

This is very close to what Sony does with the PlayStation: even though they are using RDNA2 GPUs, they are not using off-the-shelf Vulkan but rather their own API, which exposes every last bit of performance from the hardware but would never be acceptable in mainstream Vulkan, as it would be impossible for Intel/NV to support those things.

 



 

 


22 hours ago, Video Beagle said:

A question I have... and maybe it'd be better served as a separate thread... or video... is... what are Vulkan, Metal, DX12, etc.? Like, I know they're graphics things... but why are there different ones? What do they do differently? Is one better than the others? If so, why doesn't everyone just use the better one? If Metal is causing so many issues, why does Apple use it and not something else?

It's mostly a hardware featureset thing.

 

Prior to DirectX, every GPU vendor had their own proprietary libraries that each did only certain things, such as Glide for 3dfx and S3's own library for the ViRGE.

 

Basically, none of these first-generation GPUs did DirectX, or did it well, requiring DirectX to emulate a lot of things in software. The first "actual" 3D API was OpenGL, developed by SGI and now maintained by its successor, the Khronos Group.

 

OpenGL and DirectX both started out as immediate-mode APIs.

 

You know what the real kicker is? OpenGL's predecessor, IRIS GL, was more DirectX than DirectX, as it had windowing and input support. Likewise, when Microsoft first adopted OpenGL it was for Windows NT 4; Windows 95 did not have it, and it wasn't until Windows 98 that the consumer line got OpenGL support, which corresponds to Windows 2000 (NT 5).

 

Now, this is where things get kinda "overtly stupid".

 

DirectX was horribly single-threaded all the way through DirectX 9, which most games were still developed under until Windows 10, because of the holdouts still using Windows XP and 7. Likewise, DX8, released in 2000, is when programmable shaders were introduced, generalizing the fixed-function "hardware transform and lighting" you might have heard of.

 

OpenGL didn't get programmable shaders until 1.4, two years later. That's how OpenGL lost the lead.

 

Vulkan succeeds OpenGL, which is why deprecating OpenGL is also a problem. If an OS removes the library and doesn't provide that functionality through another means (e.g. OpenGL on top of Vulkan), then all OpenGL-based software and games immediately become unusable. On macOS this is a problem, as before Metal there was only OpenGL. This is one of the many reasons why games never get developed for macOS: Apple had no equivalent to DirectX, and many games only have DX backends, not OpenGL ones, because OpenGL and DX do not have feature parity.

 

Metal changes that on macOS: a Vulkan layer can literally be written on top of Metal, because Metal exposes far more of the GPU than OpenGL ever did. The same is true of Vulkan on Windows. Now, for once, a game developed against Vulkan can work exactly the same, at least as far as the GPU goes, on all platforms. As for DX12: a developer could just develop their game for both Vulkan and DX12 (there are games built this way, and it's becoming more common than expected). A sketch of what the MoltenVK route looks like is below.
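From the application's side, "Vulkan on top of Metal" needs almost nothing special. A minimal sketch (error handling omitted; the extension and flag below come from the real VK_KHR_portability_enumeration mechanism that recent Vulkan SDKs use to expose MoltenVK):

```cpp
// Sketch: create a Vulkan instance on macOS that is backed by Metal via MoltenVK.
#include <vulkan/vulkan.h>
#include <cstdio>

int main() {
    // The loader hides "portability" (not fully conformant) implementations
    // like MoltenVK unless the app opts in to enumerating them.
    const char* exts[] = { VK_KHR_PORTABILITY_ENUMERATION_EXTENSION_NAME };

    VkInstanceCreateInfo ci{};
    ci.sType                   = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ci.flags                   = VK_INSTANCE_CREATE_ENUMERATE_PORTABILITY_BIT_KHR;
    ci.enabledExtensionCount   = 1;
    ci.ppEnabledExtensionNames = exts;

    VkInstance instance = VK_NULL_HANDLE;
    if (vkCreateInstance(&ci, nullptr, &instance) == VK_SUCCESS) {
        puts("Vulkan instance created; calls from here are translated to Metal.");
        vkDestroyInstance(instance, nullptr);
    }
    return 0;
}
```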

 

However, we do need to point at the elephant in the room.

 

ANGLE, the graphics "backend" for Chromium, Firefox, and various other projects that use Chromium as a webview. ANGLE is basically "WebGL on top of OpenGL ES, on top of OpenGL/DirectX/Vulkan/Metal".

 

The problem with ANGLE is that it risks perpetuating a "least common denominator" feature set for software using it. If you've seen the number of bugs/workarounds in it (take a peek in your preferred browser, e.g. chrome://gpu/), you'd realize that using ANGLE outside of WebGL/OpenGL ES mobile web-game ports is a serious mistake, as any future bug or hardware has the potential to make the game unusable.

 

If you're developing a game using the native APIs of an OS and you want portability, the only option is Vulkan, as both mobile and desktop OSes support it or can be adapted to it (MoltenVK); but this still leaves the audio processing and input completely absent.

 


I know Apple stopped updating their OpenGL a bunch of years ago... the REALITY render engine uses it, and the developer, who had been primarily a Mac developer in the past, was looking at abandoning the Mac side. But that was a bunch of years ago; I lost touch and don't know what eventually happened.

 



On 10/22/2021 at 6:12 PM, leadeater said:

The difference between Live Playback and Export is a little disappointing; I wonder if that has to do with long-term power limits, or total package power limiting the combined CPU and GPU performance when a full workload utilizes both.

I think it's more likely that playback can make use of some dedicated hardware whereas rendering is a more generic workload that relies on raw power.



43 minutes ago, Kisai said:

and you want portability, the only option is Vulkan, as both mobile and desktop OSes support it or can be adapted to it (MoltenVK)

You still pay quite a large performance hit and a least-common-feature-set penalty by doing this, in particular when it comes to running on Apple's PowerVR-inspired GPUs, which expose a completely different class of optimisations if developers embrace them.
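One concrete example of that class of optimisations (my illustration): on Apple's tile-based GPUs, Metal lets you mark a render target as memoryless, so e.g. a depth buffer lives purely in on-chip tile memory and never gets allocated in DRAM, something desktop-style immediate-mode renderers and portable Vulkan code can't generally express. A sketch using Apple's metal-cpp wrapper:

```cpp
#include <Metal/Metal.hpp>  // metal-cpp (implementation defines go in one TU)

// Create a depth attachment that exists only in tile memory on TBDR GPUs.
MTL::Texture* makeMemorylessDepth(MTL::Device* device,
                                  uint32_t width, uint32_t height) {
    MTL::TextureDescriptor* desc = MTL::TextureDescriptor::texture2DDescriptor(
        MTL::PixelFormatDepth32Float, width, height, /*mipmapped=*/false);
    desc->setStorageMode(MTL::StorageModeMemoryless); // no DRAM backing at all
    desc->setUsage(MTL::TextureUsageRenderTarget);    // render-pass use only
    return device->newTexture(desc);
}
```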

Supporting Metal is not that hard for any given developer; the real issues are the market size and getting talent with the experience to really optimise for it well. Both of these are somewhat chicken-and-egg problems that other vendors (consoles, which have even stranger APIs) solve with $$$ to developers and a closed-market economy with explicit price fixing to ensure game prices stay high.


1 hour ago, hishnash said:

You still pay quite a large performance hit and a least-common-feature-set penalty by doing this, in particular when it comes to running on Apple's PowerVR-inspired GPUs, which expose a completely different class of optimisations if developers embrace them.

Supporting Metal is not that hard for any given developer; the real issues are the market size and getting talent with the experience to really optimise for it well. Both of these are somewhat chicken-and-egg problems that other vendors (consoles, which have even stranger APIs) solve with $$$ to developers and a closed-market economy with explicit price fixing to ensure game prices stay high.

My assumption is that the console APIs are basically just Vulkan (PS4/PS5/Switch) or DX11/DX12 (Xbox) anyway, and since the Switch is a lot closer to Android hardware, it probably supports OpenGL ES in hardware as well. The PS4/PS5 and Xboxes use AMD GPU hardware, and it's unlikely that AMD developed five separate drivers for this otherwise identical-between-generations GPU hardware. So, just as with Apple using Intel/AMD GPU hardware, it's likely AMD only wrote a reference driver, and the console vendors/Apple write their APIs directly on top of it (Mantle, which became Vulkan).

 

https://web.archive.org/web/20160513034700/http://develop.scee.net/wp-content/uploads/2014/11/ParisGC2013Final.pdf

 

Interesting point of reference: the PS4 has 176 GB/sec of memory bandwidth. That's a bit less than the M1 Pro. Jump down to page 32 to see the same kind of language you now see with Vulkan and Metal.

 

At any rate, game devs do not like having to build multiple backends, because that costs development time and inevitably results in delays. That's why many off-the-shelf game engines just charge you to use the PS, Xbox, or Nintendo SDKs: those parts are modular. That doesn't mean you can just ship the same binary, but it does mean you may be forbidden from doing certain things on certain platforms, and perhaps can't touch the GPU at all, with the platform transcoding the shaders directly to whatever the underlying hardware expects (unlike PC games today, which have to compile their shaders on first launch).

 


3 hours ago, Kisai said:

My assumption is that the console APIs are basically just Vulkan (PS4/PS5/Switch)

Nope, not for PS4 and PS5; this is Sony, they do w/e the hell they want lol. Prior PlayStation consoles used a Sony API based off OpenGL ES.

 

Sony's GNM/GNMX pre-dates Vulkan by 4 years. It even pre-dates AMD's Mantle, the base foundation of Vulkan, by a year.

 

Quote

The PlayStation 4 features two graphics APIs, a low-level API named Gnm and a high-level API named Gnmx. Most developers start with Gnmx, which wraps around Gnm, which in turn manages the more esoteric GPU details. This can be a familiar way to work if the developers are used to platforms like Direct3D 11.

 

Another key area of the game is its programmable pixel shaders.[8] Sony's own PlayStation Shader Language (PSSL) was introduced to the PlayStation 4.[9] It has been suggested that the PlayStation Shader Language is very similar to the HLSL standard in DirectX 11, with just subtle differences that could be eliminated for the most part through preprocessor macros.[8]

 

https://ubm-twvideo01.s3.amazonaws.com/o1/vault/gdceurope2013/Presentations/825424RichardStenson.pdf
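To make the quote's "preprocessor macros" point concrete, a shim of the kind being described might look like the following. This is hypothetical: the PSSL spellings below are invented placeholders (the real identifiers are behind Sony's NDA); only the pattern matters.

```cpp
// shader_compat.h: hypothetical HLSL/PSSL compatibility shim, included at
// the top of every shader so one source file compiles under both toolchains.
// All PSSL-side names here are invented placeholders for illustration.
#if defined(__PSSL__)                         // placeholder "compiling as PSSL"
    #define cbuffer      ConstantBuffer       // buffer-declaration keyword
    #define SV_Position  S_POSITION_PLACEHOLDER  // system-value semantics
    #define SV_Target    S_TARGET_PLACEHOLDER
#endif
// HLSL builds see the standard names; PSSL builds see the aliased ones.
```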


Interesting thread on the M1 GPU actually being taken advantage of: https://twitter.com/andysomerfield/status/1451859111843356676

 

 

Quote

In Photo, an ideal GPU would do three different things well: 1.) High compute performance 2.) Fast on-chip bandwidth 3.) Fast transfer on and off the GPU.

Way back in 2009, no GPU did all three things well - but we thought that eventually the industry would get there, so we took a risk and designed the entire architecture based on that assumption. Things didn’t go entirely to plan.

We shipped Photo in 2015 - six years after the design phase - without GPU compute support 😞

A GPU which did all the things we needed simply didn’t exist. We wondered if we had backed the wrong horse. Happily, a short while later it did exist - but it was in an iPad 😬!

…..

The #M1Max is the fastest GPU we have ever measured in the @affinitybyserif Photo benchmark. It outperforms the W6900X — a $6000, 300W desktop part — because it has immense compute performance, immense on-chip bandwidth and immediate transfer of data on and off the GPU (UMA).

 


42 minutes ago, Obioban said:

Interesting thread on the M1 GPU actually being taken advantage of: https://twitter.com/andysomerfield/status/1451859111843356676

Yeah, I saw that, and I'm now seriously considering switching to their software to replace Lightroom for my photography. I used to use Aperture; then Apple decided not to do that anymore, and Photos.app doesn't do what I want, so I ultimately, reluctantly, ended up paying for a monthly Creative Cloud subscription.

 

A buy-once piece of software for post-processing my photos is exactly what I want, and a software firm that's prepared to talk publicly about its software's design in that way is just the icing on the cake.


On 10/26/2021 at 12:17 AM, Paul Thexton said:

Poor implementation on Metal is likely to be part of it. I forget which game it is right now, but there's one game, written in Vulkan and using MoltenVK, that performs very well; the developers originally struggled with it and somehow ended up having Apple's Metal development team help them fix it.

The game I was thinking of was Metro Exodus. If anybody wants to geek out about API usage, they discussed it in one of this year's WWDC videos: https://developer.apple.com/videos/play/wwdc2021/10148/


On 10/27/2021 at 6:22 PM, Paul Thexton said:

Yeah, I saw that, and I'm now seriously considering switching to their software to replace Lightroom for my photography. I used to use Aperture; then Apple decided not to do that anymore, and Photos.app doesn't do what I want, so I ultimately, reluctantly, ended up paying for a monthly Creative Cloud subscription.

A buy-once piece of software for post-processing my photos is exactly what I want, and a software firm that's prepared to talk publicly about its software's design in that way is just the icing on the cake.

If you are a Mac user, do make the switch to Affinity. Even for me, who does only limited photo editing in my spare time, it is a no-brainer, especially since it (currently, at least) is a one-time purchase and not a subscription model. And at around 600 SEK (incl. VAT), about $70 US, it won't break your bank account.

 

The hardest part initially for me has been finding the different functions I want to use, but that's just part of switching software; you'll figure it out fairly quickly.

