
October 18th Apple Event - Unleashed - Apple Silicon, MacBook Pro upgrades, HomePod mini, AirPods 3rd Generation

BondiBlue

Summary

The Apple Unleashed event is over! Here are the new products that were announced:

  • AirPods
    • New AirPods 3rd Generation: MagSafe wireless charging, Adaptive EQ, and longer battery life
  • HomePod mini
    • In addition to Space Gray and White, HomePod mini now comes in Blue, Yellow, and Orange
  • Apple Music
    • New Voice Plan starts at $4.99/month and provides Apple Music access through Siri, including new custom playlists
  • And yes, new Macs and Apple Silicon
    • The M1 chip is now part of a lineup of three SoC designs, including the M1, M1 Pro, and M1 Max
    • The MacBook Pro has been redesigned, bringing back more ports, MagSafe charging, better battery life, and more
      • The 14" MacBook Pro starts at $1999, and the 16" starts at $2499. The 13" M1 MBP is now the base model
      • Support for up to 64GB of unified memory and 8TB of flash storage
      • M1 Pro and M1 Max both have up to 10 CPU cores, and the M1 Max can have up to 32 GPU cores
      • Fast charging has been added to the MacBook Pro, allowing for up to 50% charge in only 30 minutes

 

My thoughts

I'm really excited for the new MacBook Pros. I plan on upgrading to a new 16" MacBook Pro within the next couple of months, and I can't wait.

 

Sources

Apple Events

The Verge

4 minutes ago, WolframaticAlpha said:

GST on electronics is 18%, and we have import duties/sanctions on China (IIRC).

Hmm, taking the USD price, converting to NZD, and adding 15% comes to $6,273.60 NZD. So, as you mention, this doesn't include any other import taxes or shipping costs. An extra $500 is more than I'd expect, but I don't think Apple is just chucking in extra costs unwarranted.
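
If anyone wants to sanity-check that maths, here's a rough sketch; the US list price and exchange rate below are placeholder assumptions, not figures from this thread:

```python
# Back-of-the-envelope NZ pricing check. The USD price and the FX rate
# here are illustrative assumptions, not values quoted in this thread.
usd_price = 3900.00   # hypothetical US list price for the config in question
usd_to_nzd = 1.40     # hypothetical USD -> NZD exchange rate at the time
nz_gst = 0.15         # New Zealand GST

nzd_before_tax = usd_price * usd_to_nzd
nzd_with_gst = nzd_before_tax * (1 + nz_gst)

# Still excludes import duties and shipping, which is where any extra cost
# on top of this estimate would come from.
print(f"Estimated price: ${nzd_with_gst:,.2f} NZD")
```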


Hear me out here.

 

I think the 14" MacBook Pro is going to be the new meta for on-the-go photographers/videographers who need a powerful machine to handle all of their high-resolution photo/video files and still be portable. Considering that you can spec the 14" in the same way as the 16" with the M1 Max, 64GB of unified memory and up to an overkill 8TB of storage, I think it's a no-brainer for those kinds of people who can justify the cost. As a photographer myself, the 14" is extremely appealing to me for that reason, and since I am going to upgrade to a higher-resolution camera in the future and doing a lot of ultra-high-res panoramas, that sort of power in a compact package is tempting.

 

And seriously, I think people are caring way too much about the notch. Yes, it looks hella goofy, especially in the product renders. But you also need to remember:

  • In macOS, the area around the notch is specifically reserved for the menu bar. macOS was always designed to have a persistent menu bar at the top of the screen. With the notch, they made the menu bar slightly taller than the notch so that it doesn't cut into content.
  • Speaking of cutting into content, all of the apps shown in fullscreen mode had the area around the notch completely blanked out and made black, effectively turning it into a bezel. So it's not going to be an issue in regard to cutting into content.

They wanted slim bezels, but they also wanted to fit in a webcam with a larger image sensor and a lens with a faster maximum aperture, basically a webcam that doesn't suck. If they didn't want a notch but still wanted the thin bezels and the better webcam, the options were to stick it on the bottom bezel (which would have the camera look up your snout) or eliminate it entirely and make it a separate accessory (which ASUS got flak for).

 

So they had to go with the unfortunate compromise of a notch, but unlike iPhone X, they seem to be trying to make it much less of a visual hindrance than on the iPhone.

The Workhorse (AMD-powered custom desktop)

CPU: AMD Ryzen 7 3700X | GPU: MSI X Trio GeForce RTX 2070S | RAM: XPG Spectrix D60G 32GB DDR4-3200 | Storage: 512GB XPG SX8200P + 2TB 7200RPM Seagate Barracuda Compute | OS: Microsoft Windows 10 Pro

 

The Portable Workstation (Apple MacBook Pro 16" 2021)

SoC: Apple M1 Max (8+2 core CPU w/ 32-core GPU) | RAM: 32GB unified LPDDR5 | Storage: 1TB PCIe Gen4 SSD | OS: macOS Monterey

 

The Communicator (Apple iPhone 13 Pro)

SoC: Apple A15 Bionic | RAM: 6GB LPDDR4X | Storage: 128GB internal w/ NVMe controller | Display: 6.1" 2532x1170 "Super Retina XDR" OLED with VRR at up to 120Hz | OS: iOS 15.1


5 hours ago, LAwLz said:

[Citation needed]

I find that claim very hard to believe since the M1 wipes the floor with the 1300X in terms of raw CPU performance.

 

[Image: M1-1300x.jpg]


Looking at the MacAddress video from LTT, I was able to extract the GPU performance graphs, and yes, the comparison notebooks for the M1 Max are the 105W RTX 3080 laptop by Razer, several MSI laptops, and a Lenovo Legion laptop's iGPU.

 

For the M1 Pro, the comparison notebooks were the MSI laptop's iGPU and the Lenovo Legion 5's 3050 Ti.

 

[Image: GPU performance comparison graph]

 

This is the Legion 5 3050Ti comparison.

[Image: Legion 5 3050 Ti comparison graph]

 

This is presumably a 115-130W RTX 3080 laptop GPU.

[Image: RTX 3080 laptop GPU comparison graph]

 

This is the 80-105W RTX 3080 Razer laptop comparison.

[Image: Razer RTX 3080 laptop comparison graph]

 

 

Judge a product on its own merits AND the company that made it.

How to setup MSI Afterburner OSD | How to make your AMD Radeon GPU more efficient with Radeon Chill | (Probably) Why LMG Merch shipping to the EU is expensive

Oneplus 6 (Early 2023 to present) | HP Envy 15" x360 R7 5700U (Mid 2021 to present) | Steam Deck (Late 2022 to present)

 

Mid 2023 AlTech Desktop Refresh - AMD R7 5800X (Mid 2023), XFX Radeon RX 6700XT MBA (Mid 2021), MSI X370 Gaming Pro Carbon (Early 2018), 32GB DDR4-3200 (16GB x2) (Mid 2022)

Noctua NH-D15 (Early 2021), Corsair MP510 1.92TB NVMe SSD (Mid 2020), beQuiet Pure Wings 2 140mm x2 & 120mm x1 (Mid 2023),


17 minutes ago, D13H4RD said:

Hear me out here.

[GIF: not-listening-no.gif]

 

20 minutes ago, D13H4RD said:

I think the 14" MacBook Pro is going to be the new meta for on-the-go photographers/videographers who need a powerful machine to handle all of their high-resolution photo/video files and still be portable.

Nah, it's not Windows, so it's trash. I mean, who doesn't use Windows for video and photo editing? Is that even possible on a Mac?

 

Anyway, the extra battery life on the 16" is tempting though, if you plan on keeping it for a really long time and degradation starts becoming a factor.


5 hours ago, LAwLz said:

Just wait for the machines to end up in the hands of people like Andrei at Anandtech, and then you will get what you want.

Apple's press events are not aimed at people who want 20 minutes of graphs comparing it against the competitors in a bunch of different scenarios. Their events are aimed at the average Joe who wants something quick and easy to digest while still getting the overall picture across.

Personally, I'm more interested in what Puget has to say. Also, for something being marketed as "pro" and intended for professionals, I would expect more information than that. The website gave me a little more insight, and I'm still on the fence, especially on the 16", where they were comparing it to last year's Radeon Pro 5600M, which is pretty much a throwaway card. I get that everyone's excited, especially with the potential of the memory bandwidth. Like I said before, I hope it competes. The rendering market needs more competition, but I'm not holding my breath.


18 minutes ago, leadeater said:

Anyway, the extra battery life on the 16" is tempting though, if you plan on keeping it for a really long time and degradation starts becoming a factor.

It should be pretty strong. The last 16" MacBook Pro had pretty good battery life for its class, so the new one should be even better, theoretically at least.

 

But I personally love the fact that they didn't gimp the 14" by much and that it can be specced in much the same way as its bigger counterpart, so you don't have to give up much if portability is more of a factor.



46 minutes ago, AluminiumTech said:

-snip-

It might be that Apple is using tasks that were accelerated by the media engine and not pure GPU. Sometimes their GPU performance claims are only met by the ProRes transcode benchmarks.

 

Again, I may be incorrect, but the 13x claim seems a bit too cherry-picked.


11 hours ago, BillTheThrill said:

I only watched the part specific to the Max and Pro, and I am not impressed at all. It was just a bunch of graphs with ambiguous number claims. I do want competition. I want it to do well. Really well! But I want to see real-world metrics, based on real professional workflows that span multiple software packages, file formats, and APIs, especially when it comes to GPU claims. As a 3D artist I'm locked into CUDA or OptiX by force. So no one wants to see real competition more than me. I really want to be wrong, but it smells and sounds like a Hyposaurus Rex to me.

Sheesh, there's no way of pleasing you guys… In all fairness, Apple's first M1 performance graphs were a complete, unlabelled mess, but you can't fault them for not listening or not fixing their mistakes (much like they did with the machines themselves).

 

This time, there was nothing ambiguous about Johny Srouji's presentation or claims, and the graphs presented now had not only properly labelled axes, but also a fine-print mention of the PC against which they compared their own machines (and it's not like it had to be any bigger, as anyone can pause the already published event video – by the way, they did so around 10 minutes later, as it was prerecorded! – and check it out for themselves).

 

Yeah, yeah, maybe those curves were smoothed out a bit, but those still complaining clearly don't understand the whole point of a keynote presentation. This was, by far, the best possible balance between simplification and accuracy (no, Apple will never do Intel- or LTT-style graphs during presentations, fuggedaboutit). Also, for a bit of historical context (and I've seen every single one of their presentations since the return of Steve Jobs to the company), I hadn't seen such a detailed segment from them on system performance since the infamous “Megahertz Myth” bit during the 2001 Macworld Expo keynote. It actually felt a bit un-Apple-like, but this was one of those rare occasions where they had the goods to show for it and enough reasons to gloat about their achievements, and it really made me think that the extended graph segment was aimed squarely at the LTT fan/gamer/PC user crowd. Hence my comment.


3 hours ago, BillTheThrill said:

 

[Image: M1-1300x.jpg]

Ah, I see. The problem is that you are comparing apples to oranges.

You can't just go to OpenBenchmarking, search for a processor, and look at the median score. You also have to filter for the scene you want. The 1300X pretty much only has test results for the rather simple bmw27 scene, so that's what its median score is based on.

The M1, on the other hand, has far more results from various tests, most of which are far more complex than bmw27.

 

What you need to do is also filter by the specific benchmark, in order to get a fair apples-to-apples comparison.

 

The 1300X seems to complete the bmw27 scene in about 750 seconds.

The M1 seems to complete the bmw27 scene in about 350 seconds.

 

The M1 is roughly twice as fast as the AMD 1300X in Blender, which is what I'd expect.

 

 

The only other scene the 1300X has been benchmarked in is "classroom". For that scene, the 1300X gets an average finish time of 2709 seconds (although there is a very big spread).

The M1 finishes that scene in around 1000 seconds, so it is somewhere between 2x and 3x as fast. Although, since one of those 1300X tests got a really bad result, I'd expect the real difference to be closer to 2x than 3x.
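
A minimal sketch of that per-scene comparison, using the approximate times quoted above rather than fresh OpenBenchmarking data:

```python
# Compare Blender completion times per scene, as described above. Lower is
# better, so the speedup is the ratio of the two completion times.
from statistics import median

times_s = {
    ("bmw27", "Ryzen 3 1300X"):     [750],   # ~750 s, from the post
    ("bmw27", "Apple M1"):          [350],   # ~350 s
    ("classroom", "Ryzen 3 1300X"): [2709],  # average quoted above
    ("classroom", "Apple M1"):      [1000],  # ~1000 s
}

for scene in ("bmw27", "classroom"):
    amd = median(times_s[(scene, "Ryzen 3 1300X")])
    m1 = median(times_s[(scene, "Apple M1")])
    print(f"{scene}: M1 is ~{amd / m1:.1f}x faster")
```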


7 minutes ago, Mainyehc said:

Sheesh, there's no way of pleasing you guys… In all fairness, Apple's first M1 performance graphs were a complete, unlabelled mess, but you can't fault them for not listening or not fixing their mistakes (much like they did with the machines themselves).

 

This time, there was nothing ambiguous about Johny Srouji's presentation or claims, and the graphs presented now had not only properly labelled axes, but also a fine-print mention of the PC against which they compared their own machines (and it's not like it had to be any bigger, as anyone can pause the already published event video – by the way, they did so around 10 minutes later, as it was prerecorded! – and check it out for themselves).

 

Yeah, yeah, maybe those curves were smoothed out a bit, but those still complaining clearly don't understand the whole point of a keynote presentation. This was, by far, the best possible balance between simplification and accuracy. Also, I hadn't seen such a detailed presentation from them when it came to performance since the infamous “MHz Myth” segment. It actually felt a bit un-Apple-like, but this was one of those rare occasions where they had the goods to show for it and enough reasons to gloat about their achievements, and it really made me think that the extended graph segment was aimed squarely at the LTT fan/gamer/PC user crowd. Hence my comment.

The y-axis was measuring what? What is 'relative performance'? What is performance per watt? That is kind of a major point.


I'm pretty interested in the M1 Max in the 16" laptop.  However, thus far, my usual 10% discount doesn't apply to it.  Heh.  😉

 

Kidding aside, I'm both interested in and concerned about what this does for the follow-up Mac Pro.  Putting rumors and speculation aside, because they hold no weight for me, the unified architecture is concerning.  Concerning because it starts us backsliding towards the trashcan Mac Pro, which is what made me abandon the Mac Pro as a viable platform back in '13.  The fully modular and infinitely configurable '19 Mac Pro brought me back.  My PCI-E slots are a wee bit busy:

 

[Image: PCI-E slot configuration diagram]

And I'd kind of hope for a similar amount of flexibility and modularity with the AS version.

 

The GPU and RAM are the killers, of course.  I'm less concerned about being able to swap the CPU, although that is handy as well.  But the former two need to be user-swappable.  HAVE TO BE.  GPU tech leapfrogs CPU tech on a fairly regular cadence, and having the ability to swap one GPU out for another (I've already done it on my current Pro) is very useful in extending its life.  And RAM should be self-explanatory.  If I need more RAM later, I need to be able to add more later.  Not at system purchase time.

 

I was kicking around the whole design philosophy of the MPX modules as they apply to GPUs specifically.  As you can see from the diagram, an Apple-provided MPX module uses two PCI-E slots in series.  One of them is for the usual PCI-E interface with the CPU and system.  The other helps provide additional power (no MPX modules have auxiliary power cables) along with video output to the chassis' other Thunderbolt ports, meaning I can connect a display to any of the system's Thunderbolt ports, even the ones not on the GPU.  Pretty neat.

 

Then I thought: What if both of those were PCI-E 5?  That's a Tbit/sec of bandwidth.  Maybe that...oh... no.. wait... that's just 128GB/sec which is way slower than what the M1* is doing.  So that still wouldn't work.
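
For reference, a quick sketch of that bandwidth math, assuming two full PCIe 5.0 x16 links and taking Apple's quoted 400GB/s memory bandwidth figure for the M1 Max:

```python
# PCIe 5.0 signals at 32 GT/s per lane with 128b/130b encoding, so each
# lane carries roughly 32 * (128 / 130) / 8 ≈ 3.94 GB/s per direction.
lane_gbs = 32 * (128 / 130) / 8
x16_gbs = lane_gbs * 16        # ~63 GB/s for one x16 slot
two_slots_gbs = x16_gbs * 2    # ~126 GB/s if an MPX module used both slots

m1_max_memory_gbs = 400        # Apple's quoted unified memory bandwidth

print(f"Two PCIe 5.0 x16 links: ~{two_slots_gbs:.0f} GB/s")
print(f"M1 Max unified memory:  {m1_max_memory_gbs} GB/s")
```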

 

Hm.

Editing Rig: Mac Pro 7,1

System Specs: 3.2GHz 16-core Xeon | 96GB ECC DDR4 | AMD Radeon Pro W6800X Duo | Lots of SSD and NVMe storage |

Audio: Universal Audio Apollo Thunderbolt-3 Interface |

Displays: 3 x LG 32UL950-W displays |

 

Gaming Rig: PC

System Specs:  Asus ROG Crosshair X670E Extreme | AMD 7800X3D | 64GB G.Skill Trident Z5 NEO 6000MHz RAM | NVidia 4090 FE card (OC'd) | Corsair AX1500i power supply | CaseLabs Magnum THW10 case (RIP CaseLabs ) |

Audio:  Sound Blaster AE-9 card | Mackie DL32R Mixer | Sennheiser HDV820 amp | Sennheiser HD820 phones | Rode Broadcaster mic |

Display: Asus PG32UQX 4K/144Hz display | BenQ EW3280U display

Cooling:  2 x EK 140 Revo D5 Pump/Res | EK Quantum Magnitude CPU block | EK 4090FE waterblock | AlphaCool 480mm x 60mm rad | AlphaCool 560mm x 60mm rad | 13 x Noctua 120mm fans | 8 x Noctua 140mm fans | 2 x Aquaero 6XT fan controllers |


5 hours ago, maplepants said:

You are totally right about waiting for the reviews. Watching the announcements, I'm pretty excited to see reviews. The CPUs are so different and interesting that you get a wide variety of reviews. One of the most interesting M1 reviews I saw was this one, comparing machine learning performance on the M1 to the then top-of-the-line 16" MBP.

Machine learning on a non-CUDA GPU? The number of tracebacks that thing will throw will be extremely annoying. Nvidia has that market almost on lockdown. The only way I can see Apple making waves is by creating and backing an open standard that is better than CUDA, or by reverse-engineering CUDA and putting a wrapper on top.

 

 

The ML functionality seems extremely exciting, and while it won't kill off Nvidia+CUDA, it is still a very good competitor. We just need to see support for it mature.


8 minutes ago, WolframaticAlpha said:

The y-axis was measuring what? What is 'relative performance'? What is performance per watt? That is kind of a major point.

Relative to the peak performance of the M1. The axis is labeled from 0 to 100 with 50 in between, meaning 0% to 100% (and above, in the subsequent graphs) of the M1's peak performance.

 

You can't fault Apple for using their own heretofore flagship processor as the baseline, now, can you? That's what their customers, influencers, etc. are used to. While the complexity of the segment may low-key aim to convince non-Apple-users of how great the M1 Pro and M1 Max are, they are still going to take into consideration, first and foremost, their current users.

[Image: screenshot of Apple's relative performance graph]
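
To make that scale concrete, here's a trivial sketch of the normalization; the M1 Pro/Max scores below are invented placeholders, not Apple's data:

```python
# "Relative performance" as a percentage of the M1's peak score.
# All scores here are made up purely to illustrate the normalization.
m1_peak = 1000.0  # hypothetical M1 peak benchmark score (the baseline)

peak_scores = {"M1": 1000.0, "M1 Pro": 1700.0, "M1 Max": 2000.0}

for chip, score in peak_scores.items():
    relative = score / m1_peak * 100  # 100 = M1's peak; values above 100 exceed it
    print(f"{chip}: {relative:.0f}")
```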


1 minute ago, Mainyehc said:

Relative to the peak performance of the M1. The axis is labeled from 0 to 100 with 50 in between, meaning 0% to 100% of the M1's peak performance.

 

You can't fault Apple for using their own heretofore flagship processor as the baseline, now, can you? That's what their customers, influencers, etc. are used to. While the complexity of the segment may low-key aim to convince non-Apple-users of how great the M1 Pro and M1 Max are, they are still going to take into consideration, first and foremost, their current users.

[Image: screenshot of Apple's relative performance graph]

Regarding the benchmarks in question:

 

[Image] What are these "industry standard benchmarks"?


5 minutes ago, WolframaticAlpha said:

Regarding the benchmarks in question:

 

[Image] What are these "industry standard benchmarks"?

I'm guessing Geekbench. AFAIK, it's the most popular cross-platform benchmarking solution currently on the market. They can still label their graphs however the hell they like after the fact (and hey, at least they finally did). It's a short video presentation still mostly aimed at Apple customers, during which they also have to cram in a lot of product announcements; it's not an LTT video.

 

Yes, I understand you'd rather see them point out exactly which benchmarks they used. Well, that's what LTT et al. are here for, to reproduce that testing with anything they can throw at it and check whether those claims are valid or not.

 

I'm betting that whatever they use, unoptimised as it may be for Apple Silicon, that PC will still take a beating. I mean, look at the damn thing (including that massive cooling system), take the M1 machines and extrapolate from there. That's where the whole “relative performance” schtick came from.


7 minutes ago, Mainyehc said:

I'm guessing Geekbench. AFAIK, it's the most popular cross-platform benchmarking solution currently on the market. They can still label their graphs however the hell they like after the fact (and hey, at least they finally did). It's a short video presentation still mostly aimed at Apple customers, during which they also have to cram in a lot of product announcements; it's not an LTT video.

Geekbench is not a very reliable benchmark. I would reserve my judgement until actual benchmarks from AnandTech, GN, or LTT are out.

 

It will still be great even if Apple only makes a minor improvement over the M1. Don't take this as me trashing them, just as doubt.


9 minutes ago, WolframaticAlpha said:

Geekbench is not a very reliable benchmark. I would reserve my judgement until actual benchmarks from AnandTech, GN, or LTT are out.

Obviously, I never called that into question. But is Cinebench good enough for you, by the way? I'd also bet a kidney on them having used it as well, all with their focus on graphics and whatnot. The most likely explanation is that they ran a battery of different benchmarks and just lumped them together as “industry-standard benchmarks” (the plural is theirs, not mine). You may like it or not, but it seems a very Apple-like thing to me.

Slightly opaque, yes, but just because it is, that doesn't mean it's wrong/false/misleading. Also, they know full well these things will be benchmarked to hell and back by a battalion of tech reviewers, so making unfounded claims would be guaranteed PR suicide. That's why I'm fully trusting their claims (but obviously awaiting independent confirmation, as one does).


3 minutes ago, Mainyehc said:

Obviously, I never called that into question. But is Cinebench good enough for you, by the way? I'd also bet a kidney on them having used it as well, all with their focus on graphics and whatnot. The most likely explanation is that they ran a battery of different benchmarks and just lumped them together as “industry-standard benchmarks” (the plural is theirs, not mine). You may like it or not, but it seems a very Apple-like thing to me.

Until I know what the industry standard is, and which companies are using that industry standard, I am not going to say anything.


4 minutes ago, Mainyehc said:

Obviously, I never called that into question. But is Cinebench good enough for you, by the way? I'd also bet a kidney on them having used it as well, all with their focus on graphics and whatnot. The most likely explanation is that they ran a battery of different benchmarks and just lumped them together as “industry-standard benchmarks” (the plural is theirs, not mine). You may like it or not, but it seems a very Apple-like thing to me.

Slightly opaque, yes, but just because it is, that doesn't mean it's wrong. Also, they know full well these things will be benchmarked to hell and back; making unfounded claims would be PR suicide. That's why I'm fully trusting their claims (but obviously awaiting independent confirmation, as one does).

Cinebench is trash. 
 

At least if you look at my kind of work. For my use case, GB is closer to the way a computer uses its resources, meaning short bursty loads (mainly ST).
 

Not that Cinebench is making my M1 Mini throttle; it's just that that kind of long sustained load is nothing like how I use my machine.


15 minutes ago, WolframaticAlpha said:

Until I know what the industry standard is, and which companies are using that industry standard, I am not going to say anything.

Sure, that's your – scientifically sound – prerogative. I'm just using some basic logic here, and I'm willing to eat my own shoe if their claims don't pan out, regardless of whatever “industry-standard benchmarks” they used (there aren't that many, by the way, and it really sounds like Apple-speak for “whatever most tech reviewers are using these days”, as in, they expect them to use the same solutions and come to about the same numbers…).

 

Come to think of it, it almost feels as if they are being deliberately opaque not just to keep their disclaimers simpler, but also to generate buzz. This way they are basically forcing every single reviewer to buy new MacBook Pros by the boatload (like some guys from a competing YouTube channel just did yesterday) and check their bold – but vaguely specified – claims, STAT. Cunning, if you ask me. Y'all are being played here… 😉


3 minutes ago, Spindel said:

Cinebench is trash. 
 

At least if you look at my kind of work. For my use case, GB is closer to the way a computer uses its resources, meaning short bursty loads (mainly ST).
 

Not that Cinebench is making my M1 Mini throttle; it's just that that kind of long sustained load is nothing like how I use my machine.

Please do enlighten me (I'm not questioning your claims or whatever, I just want a bit more context), but for people who do renders (i.e. sustained workloads) it's not trash, now, is it? Or is there some technical quirk/limitation that makes it so?

And… if these machines are truly the beasts they seem, maybe this could finally push some companies to port their 3D packages to the Mac? Heck, even by making use of Metal?


4 minutes ago, Mainyehc said:

Please do enlighten me (I'm not questioning your claims or whatever, I just want a bit more context), but for people who do renders (i.e. sustained workloads) it's not trash, now, is it? Or is there some technical quirk/limitation that makes it so?

And… if these machines are truly the beasts they seem, maybe this could finally push some companies to port their 3D packages to the Mac? Heck, even by making use of Metal?

I was being hyperbolic 🙂

 

But in general, in my opinion, sites/YT channels linger way too long on stuff like Cinebench. The market needing that kind of rendering performance is minuscule compared to the market with workloads more akin to GB.
 

I myself work in engineering and mainly use standard Office apps (mostly Excel) and AutoCAD. For me, the M1 from last year is more than sufficient. I still wouldn't mind an M1 Max, but I really don't need it.
 

But, for the vast majority of people using computers, a real-life workload is more akin to a GB test than a Cinebench test, meaning short bursty workloads that you just want to be snappy.
 

I have my own benchmark: my Excel sheet from hell. It takes 2 minutes to recalculate on my M1, and the computer is still responsive. It takes 7 minutes to recalculate on my work-issued Windows laptop, and the computer is unusable during that time 😉

 

I wish I could share that file here, but it's work-related data, so it's a no-go as a general LTT benchmark.
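
In lieu of that file, here's a toy sketch of the bursty-versus-sustained distinction I mean; the workload function is just a stand-in, not a real benchmark:

```python
# Toy contrast between a sustained, Cinebench-style run and short bursts with
# idle gaps (closer to interactive Office/CAD use). Illustrative only.
import time

def busy_work(n):
    # Stand-in CPU-bound task; swap in a real workload to measure anything useful.
    return sum(i * i for i in range(n))

# Sustained: one long run keeps the CPU pinned, so thermals and sustained
# clocks dominate the result.
start = time.perf_counter()
busy_work(20_000_000)
print(f"sustained: {time.perf_counter() - start:.2f}s")

# Bursty: the same total work in short chunks with idle gaps, so peak (boost)
# clocks and responsiveness dominate instead.
start = time.perf_counter()
for _ in range(10):
    busy_work(2_000_000)
    time.sleep(0.1)  # idle gap between bursts
print(f"bursty (incl. idle): {time.perf_counter() - start:.2f}s")
```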


1 minute ago, WolframaticAlpha said:

Average Joe on the internet, 2021, colorized

I hate how true that is

"The most important step a man can take. It’s not the first one, is it?
It’s the next one. Always the next step, Dalinar."
–Chapter 118, Oathbringer, Stormlight Archive #3 by Brandon Sanderson

 

 

Older stuff:


"A high ideal missed by a little, is far better than low ideal that is achievable, yet far less effective"

 

If you think I'm wrong, correct me. If I've offended you in some way, tell me what it is and how I can correct it. I want to learn, and along the way one can make mistakes; being wrong helps you learn what's right.

 

