
Geekbench running on Rosetta 2 in Apple's DTK outperforms the Surface Pro X with native ARM64 Geekbench

captain_to_fire
43 minutes ago, justpoet said:

Even most AAA games don't push the CPUs much unless you artificially drop resolution and visual quality to try and take the GPU out of the speed equation.

They do; it's just that current PCs tend to have 4 cores and 8 threads, so you see games top out at 12.5% CPU usage (one busy thread out of eight) because they're very single-threaded.

 

The performance under Rosetta 2 is probably fine, since it has about the same single-core performance as the 2012 Mac mini; it would still come out ahead if GPU performance scales better, since games should be using Metal either way. So if you're running something that isn't going to be recompiled for macOS on ARM, it's probably something built in a version of Unity from before 2020.


13 hours ago, justpoet said:

Even most AAA games don't push the CPUs much unless you artificially drop resolution and visual quality to try and take the GPU out of the speed equation.

Ehh, my SB2 is almost always CPU-bound in games like MHW, for example. (It's less common on desktops, but losing 40% of single-thread performance is huge. That's back to using a Bulldozer CPU.)

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DIY FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


1 hour ago, Curufinwe_wins said:

Ehh, my SB2 is almost always CPU-bound in games like MHW, for example. (It's less common on desktops, but losing 40% of single-thread performance is huge. That's back to using a Bulldozer CPU.)

You're mistaking what the performance penalty percentage is taken off of, though. The 40% comes off of native Apple Silicon speeds when running under Rosetta 2, not off of current x86 speeds.
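To put rough numbers on it (treating the thread's ~40% figure as an assumption, not a measurement):

translated speed ≈ 0.6 × native Apple Silicon speed
translated vs. current x86 ≈ 0.6 × (native Apple Silicon ÷ current x86)

So how translated code compares to x86 depends entirely on how native Apple Silicon stacks up against x86, not on the 0.6 factor alone.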


56 minutes ago, justpoet said:

You're mistaking what the performance penalty percentage is taken off of, though. The 40% comes off of native Apple Silicon speeds when running under Rosetta 2, not off of current x86 speeds.

And native Apple Silicon isn't actually faster than current x86 platforms... so it's still a pretty fair comparison.

 

Also, that's x86, not x86-64, which WILL carry a much higher performance penalty. (At least until SVE2 comes around, and likely even then.)



2 hours ago, Curufinwe_wins said:

And native Apple Silicon isn't actually faster than current x86 platforms... so it's still a pretty fair comparison.

 

Also, that's x86, not x86-64, which WILL carry a much higher performance penalty. (At least until SVE2 comes around, and likely even then.)

What native Apple Silicon are you using to compare to current x86? The dev kit, which is built on Apple silicon from 2018: not their current generation, not using desktop power/heat budgets, not using likely desktop core counts, and not what the shipping Macs will be based on. You really can't make that comparison and then claim the new stuff will be 40% behind in performance.

 

It is easy to KNOW that Apple Silicon performance in released Macs will be BETTER than the dev kit's performance (their current generation phone silicon already is almost 20% faster in single thread performance…in a phone). The real question will be how much. If leaked early Geekbench scores are to be believed (too early to be finalized, I think), the A14 in the next phone will be over 45% faster in single-threaded workloads than the A12Z being used in the dev kit.

https://www.gizchina.com/2020/03/15/iphone-12-apple-a14-bionic-chip-first-benchmark-scores-appears-online-beats-sd-865/

So…really, comparing anything to the Dev Kit and saying Apple Silicon Macs will be too slow as a result…is kinda ridiculous.

 


31 minutes ago, gabrielcarvfer said:

If AMD hadn't changed everything, that would certainly be the case. As things stand right now, I wouldn't count on it. Performance per watt is pretty much the same.

AMD chips have always run a little too hot for laptops, which is probably why Apple never used them; the same applies to the Mac mini and iMac. It would be kinda cool to see a Threadripper Mac Pro, but I don't think we're going to see an AMD chip, let alone an ARM chip, in the Mac Pro anytime soon.

 

Also, Rosetta 2 isn't likely to last more than three OS cycles, which is what Rosetta 1 got (10.4, 10.5, and optional in 10.6). Since Apple has the App Store as a reference point for what has migrated to ARM, Apple can decide to pull Rosetta 2 once enough developers have ARM binaries.

 

Or developers might resist this and it might stick around longer, much like the 8 years between 64-bit support being announced and 32-bit support being pulled.


14 hours ago, justpoet said:

(their current generation phone silicon already is almost 20% faster in single thread performance…in a phone

Geekbench is a pretty atrocious tool for getting to know how well a processor will actually perform, though. Until I see some real side-by-sides of applications, I'm not even going to bother guessing how it's going to perform or how it compares to current x86.


5 hours ago, Kisai said:

AMD chips have always run a little too hot for laptops, which is probably why Apple never used them; the same applies to the Mac mini and iMac

Not anymore with the current Zen 2 mobile processors, but if Apple were to use them they wouldn't show up in products until the start of next year, and Apple is already down the ARM path, so it's a latecomer issue too. Apple is still going to refresh products with Intel, but those could just as easily have been AMD if the timing had worked out better. Even in the 15W U series they are getting shockingly close to desktop levels of performance at genuinely low power.


8 minutes ago, leadeater said:

Geekbench is a pretty atrocious tool for getting to know how well a processor will actually perform, though. Until I see some real side-by-sides of applications, I'm not even going to bother guessing how it's going to perform or how it compares to current x86.

I hope Cinebench R20 gets recompiled for Universal Binary 2. I know that devs who received a DTK decided to run benchmarks first instead of recompiling their applications, but I wonder why they decided to run Geekbench and not Cinebench R20, since both would just run under Rosetta 2.

There is more that meets the eye
I see the soul that is inside

 

 


2 minutes ago, captain_to_fire said:

I hope Cinebench R20 gets recompiled for Universal Binary 2. I know that devs who received a DTK decided to run benchmarks first instead of recompiling their applications, but I wonder why they decided to run Geekbench and not Cinebench R20, since both would just run under Rosetta 2.

Personally I'd rather see native vs native, even if it's not the same application; for example, Final Cut Pro vs Adobe doing the same things. And since Final Cut Pro runs on both, you can compare it directly. It's that native application performance I'd like to see.


8 minutes ago, leadeater said:

Apple is still going to refresh products with Intel, but those could just as easily have been AMD if the timing had worked out better.

Apple using AMD might have been possible if the Zen architecture had come earlier. Back when ultrabooks were just starting to get popular (2010-2012), Intel was better than AMD, which is why Apple used Intel's Core 2, Core i, and Core M.

4 minutes ago, leadeater said:

Personally I'd rather see native vs native, even if it's not the same application; for example, Final Cut Pro vs Adobe doing the same things. And since Final Cut Pro runs on both, you can compare it directly. It's that native application performance I'd like to see.

Well, it seems that Adobe might bring the entire Creative Cloud suite to Macs with Apple Silicon. I wonder how Premiere will perform, considering it's not a Mac-optimized program.



22 minutes ago, captain_to_fire said:

Apple using AMD might have been possible if the Zen architecture had come earlier. Back when ultrabooks were just starting to get popular (2010-2012), Intel was better than AMD, which is why Apple used Intel's Core 2, Core i, and Core M.

I don't see how that would have really made any difference. Zen 2 mobile 12 months ago would have made it an option for the refreshes coming up now, and switching from Intel to AMD isn't that big a deal. It's not like Apple is sticking with Intel for any reason other than it being the only viable and best option at the time of product design. Nobody was expecting Zen 2 mobile to be as good as it is, so there was no reason to even consider it until it proved itself. The same reasoning applies to ARM: there's no reason to consider it until it shows the performance and power efficiency to be worth the bother. The only difference is that Apple is actively using ARM in its mobile/tablet products, so it isn't flying blind there.


On 7/2/2020 at 3:31 AM, Jotoco said:

Intel got what it deserved for slacking.

 

Now do Qualcomm. 

 

Hope now more R&D goes into other arm platforms so there is actually competition. 

What you have is an oligopoly making billions through silent price fixing, unless the people in charge start to act like they did in the good old days of consumerism. There will be no real competition if patents are used to keep all competition out. CPUs and GPUs need the same kind of patent licensing that killed Nokia and Ericsson. Too bad the US-based giants will never be treated the same way.


2 hours ago, gabrielcarvfer said:

The question is: why would you buy a Mac with ARM if the AMD alternative has the same performance/watt and is way more flexible? It just doesn't make sense.

 

You can't even legitimately virtualize Mac OS to develop stuff on other platforms, and I can guarantee most devs won't buy a Mac just to please a small audience. That is so obvious that they're now pushing phone/tablet apps to the desktop.

Do we know for sure the performance/watt comparison when both chips run their native binaries?

 

The sneak peeks of a couple of actual apps running in Rosetta (Maya and Shadow of the Tomb Raider) were both quite impressive to see, and they provide confidence that performance won't be an issue going forward, especially if Apple implements vector instructions in their first consumer iterations. But we won't get to see the results for a while yet.
 

3 hours ago, leadeater said:

Geekbench is a pretty atrocious tool for getting to know how well a processor will actually perform, though. Until I see some real side-by-sides of applications, I'm not even going to bother guessing how it's going to perform or how it compares to current x86.

I’d probably give these comparisons a tad more credibility given that the Apple silicon is also running the x86 Mac version to compare against. Literally identical code. Though if Geekbench doesn't use AVX, then I would agree that it isn't a great comparison, as it isn't using the x86 CPU to its full potential.

My eyes see the past…

My camera lens sees the present…


1 hour ago, Zodiark1593 said:

I’d probably give these comparisons a tad more credibility given that the Apple silicon is also running the x86 Mac version to compare against. Literally identical code. Though if Geekbench doesn't use AVX, then I would agree that it isn't a great comparison, as it isn't using the x86 CPU to its full potential.

Geekbench's main selling point is that it's calibrated against a known CPU, so a score of 1000 means it performs at roughly the level of that baseline CPU.
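If I remember right, Geekbench 5's baseline is a Core i3-8100 pegged at 1000, and scores scale roughly linearly with throughput:

score ≈ 1000 × (device throughput on the workload mix ÷ baseline throughput)

so a 2000 means roughly twice the baseline's throughput on that mix, not just "at least as fast".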

 

However, if you actually look at the per-CPU drill-downs, you'll probably find what Rosetta 2 isn't good at and where it gets marked down. What I hope Geekbench does, at least for benchmarking, is release two versions for macOS, one for Intel and one for ARM, so that we don't get blindsided by skewed results due to small changes in Rosetta 2.


On 6/30/2020 at 7:54 PM, DrMacintosh said:

Someone at Apple allegedly said, "Look what we can accomplish without even trying"

 

So just think about what Apple can do if they increase the power budget. 

This. Full desktop-power optimized ARM CPUs are the future of computing. If performance lives up, I could actually see myself buying Apple hardware, but hopefully someone else gets into the hardware game before it comes to that *cough* AMD *cough* ARM Threadripper *cough*

 

(And hopefully the transition will lead to increased open source adoption as FOSS will get rebuilt the fastest :) )

Resident Mozilla Shill.   Typed on my Ortholinear JJ40 custom keyboard
               __     I am the ASCIIDino.
              / _)
     _.----._/ /      If you can see me you 
    /         /       must put me in your 
 __/ (  | (  |        signature for 24 hours.
/__.-'|_|--|_|        

On 7/1/2020 at 6:46 AM, captain_to_fire said:

That’s why I think Microsoft should just do it themselves by obtaining an ARM license just like Apple, Samsung and Qualcomm. 

Way too expensive and difficult.

 

On 7/1/2020 at 6:49 AM, Nowak said:

Honestly? Yeah.

 

I wish Qualcomm didn't have such a stranglehold on the mobile ARM SoC market because their SoCs are actual trash.

It's a bit more complicated than that.

Qualcomm makes really good chips. The problem right now is that they have become lazy and just use stock (well, very slightly modified) ARM cores, and ARM traditionally hasn't bothered with high performance designs.

That is about to change though.

 

For those interested, ARM has CPU core designs that you can license and implement into your own SoCs, and they also allow licensees to modify them. Before, Qualcomm just bought a license for the instruction set and went "we can design a CPU architecture that supports these instructions", and so they did, from the ground up. Then, around the shift to 64-bit cores, they went "you know what, the stock cores ARM sells are about as good as our in-house ones, so let's just use those and save money on CPU architects".

Apple still only buys a license for the instruction set and designs their CPUs completely in-house, not relying on ARM's stock designs.

 

The problem with using stock ARM cores is that ARM has traditionally not focused on performance. They have always tried to balance performance, power, and area (PPA). That is to say, whenever ARM has made a decision for their CPU cores, they have always asked "does this increase performance? Does this increase power? Does this make the chip bigger?" and then factored in all those things when deciding if a change is worth implementing or not. This is because smartphones are not ARM's only business. The CPU cores used in our phones are also used in lots and lots of other stuff, such as industrial equipment and the like. For those things, size (and thus cost) might be far more important than performance. As a result, ARM has left a lot of performance-improving ideas on the drawing board just because they don't want their core designs to be too large (die-size wise) or too power hungry.

A few months ago ARM announced the "Cortex-X1" design, which is their first core where they said "okay, area and power efficiency aren't as important; performance is our goal". Current estimates put it at around Apple A13-tier performance.

 

In order for Microsoft to actually be able to compete with Apple in terms of performance, they would have had to buy a license for the ARM instruction set and then start designing cores from the ground up. Not even Samsung, which has waaaaay more experience than Microsoft at building SoCs, could beat the stock ARM cores with its custom designs. Qualcomm gave up because they couldn't compete either.

I don't think Microsoft could do what Apple has done. It's just way too big of an undertaking, and would take many many years of development.


On 7/1/2020 at 6:20 AM, Zodiark1593 said:

Emulated x86 performance appears to be quite similar to ~3.0 GHz Haswell territory. Yikes!

On 7/1/2020 at 8:31 AM, Kisai said:

I posted in the last thread on this subject what the Geekbench numbers are for CPUs that people actually have, and the "under Rosetta 2" scores actually put this closer to a Sandy Bridge CPU (2012 Mac mini), which means it's not that unreasonable.

Another way of looking at this is that when running x86 in "emulation mode", the A12Z at 3 GHz gets ~90% of the performance of a Zen 1 CPU running at 3.4 GHz (the 1700X gets a single-core score of around 950).

In other words, the A12Z gets better clock-for-clock performance than AMD's Zen 1 architecture, even when running translated x86 code. At least in Geekbench 5, which I think will translate at least decently to everyday usage as well. That's fucking insane.
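Rough per-clock math from those figures (the ~90% and ~950 numbers are this thread's, not official results):

A12Z under Rosetta 2: ~0.9 × 950 ≈ 855 points at 3.0 GHz, so ≈ 285 points per GHz
Ryzen 1700X (Zen 1): ~950 points at 3.4 GHz, so ≈ 279 points per GHz

So even translated, the A12Z comes out slightly ahead per clock.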

 

 

The processor that actually gets released in Mac computers will probably beat both Intel and AMD in terms of raw performance, and absolutely crush them in efficiency. At least if we're looking at laptop and lower-end mainstream desktop parts (R3 and maybe R5).

 

 

  

5 hours ago, leadeater said:

Geekbench is a pretty atrocious tool for getting to know how well a processor will actually perform, though. Until I see some real side-by-sides of applications, I'm not even going to bother guessing how it's going to perform or how it compares to current x86.

Geekbench 5 is much better than Geekbench 4 in that regard. Also, the results fairly closely mirror SPEC, so I am willing to believe that this is actually very applicable to real-world performance.

 

 

  

5 hours ago, captain_to_fire said:

I hope Cinebench R20 gets recompiled for Universal Binary 2. I know that devs who received a DTK decided to run benchmarks first instead of recompiling their applications, but I wonder why they decided to run Geekbench and not Cinebench R20, since both would just run under Rosetta 2.

Cinebench will not run well on Rosetta 2.

It relies very heavily on AVX instructions, which I don't think Rosetta 2 can translate nicely to ARM instructions.

 

 

 

 

 

  

5 hours ago, leadeater said:

Personally I'd rather see native vs native, even if it's not the same application; for example, Final Cut Pro vs Adobe doing the same things. And since Final Cut Pro runs on both, you can compare it directly. It's that native application performance I'd like to see.

We already have native vs native SPEC results.

 

 

SPEC integer (single-threaded)

A13 (Lightning, 2.66 GHz) - 52.82

Intel 9900K (Skylake, 5 GHz) - 54.28

Ryzen 3900X (Zen 2, 4.6 GHz) - 49.02

 

SPEC floating point (single-threaded)

A13 (Lightning, 2.66 GHz) - 65.27

Intel 9900K (Skylake, 5 GHz) - 75.15

Ryzen 3900X (Zen 2, 4.6 GHz) - 73.66

 

 

The A13 is on par with Zen 2 and Skylake for single-core performance, even when running at a way lower frequency. The IPC of Apple's ARM cores is much better than anything AMD or Intel has to offer.
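Normalizing those single-threaded SPECint numbers per GHz makes the IPC gap concrete (straight division of the scores above by clock speed):

A13: 52.82 / 2.66 GHz ≈ 19.9 per GHz
9900K: 54.28 / 5.0 GHz ≈ 10.9 per GHz
3900X: 49.02 / 4.6 GHz ≈ 10.7 per GHz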


3 hours ago, LAwLz said:

Snippity 

Apple’s A13 chip has support for custom instructions called AMX. From what I’d read, it’s for accelerating machine learning, though from what I understand, AVX is also used for this. Is AMX sort of Apple’s take on SVE/AVX, or is it something different entirely? Based on what I know, I’ve no idea. Just some guesses. 
 

https://www.realworldtech.com/forum/?threadid=187087&curpostid=187092



30 minutes ago, Zodiark1593 said:

Apple’s A13 chip has support for custom instructions called AMX. From what I’d read, it’s for accelerating machine learning, though from what I understand, AVX is also used for this. Is AMX sort of Apple’s take on SVE/AVX, or is it something different entirely?

Had to look AMX up because I had completely forgotten about it and don't know much about it.

But it does not seem like it's a replacement for AVX. Sure, both AMX and AVX can be used for machine learning, but that might be like saying both a car and a bicycle can be used for transportation.

Apple doesn't even expose AMX to developers. Maybe they will in the future but yeah, doesn't seem like it will be used as a replacement for AVX.

 

 

When I said "It relies very heavily on AVX instructions, which I don't think Rosetta 2 can translate nicely to ARM instructions." I said that because Apple's developer documentation states that Rosetta can not translate AVX instructions. So it's flat out not supported.

The part about them "not translating nicely to ARM instructions" was speculation on my part about why Rosetta doesn't support it.

 

Quote from Apple's "About the Rosetta Translation Environment":


Rosetta translates all x86_64 instructions, but it doesn’t support the execution of some newer instruction sets and processor features, such as AVX, AVX2, and AVX512 vector instructions.

 

I don't know if that's a hardware or software limitation that might change in the future, but all we know for now is that if your code relies on AVX, it will not be supported in Rosetta 2.
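For anyone who wants to handle that in their own code: Apple documents a sysctl key, "sysctl.proc_translated", that an x86_64 process can query to tell whether it's running under Rosetta (and therefore should steer clear of AVX code paths). A minimal C sketch along those lines, with error handling kept deliberately simple:

#include <errno.h>
#include <stdio.h>
#include <sys/sysctl.h>

/* Returns 1 when running under Rosetta translation, 0 when native, -1 on error. */
static int process_is_translated(void) {
    int ret = 0;
    size_t size = sizeof(ret);
    if (sysctlbyname("sysctl.proc_translated", &ret, &size, NULL, 0) == -1)
        return (errno == ENOENT) ? 0 : -1;  /* key absent: native x86_64 on an Intel Mac */
    return ret;
}

int main(void) {
    printf("running under Rosetta: %d\n", process_is_translated());
    return 0;
}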

 

 

 

As for Apple's "take on SVE/AVX", my guess is that Apple will just use SVE2, which will be introduced with ARMv9. Developers targeting the A12Z will have to stick with NEON for now (and SVE "1" lacks quite a bit of integer support, so it isn't a full replacement for NEON anyway; SVE2 fixes this, hence why it's a big deal).
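If/when SVE2 does show up in Apple's cores, the usual pattern would be compile-time dispatch on the ACLE feature macros with a NEON fallback. A rough, illustrative C sketch (the add_i32 helper is hypothetical, not any particular library's API):

#include <stdint.h>

#if defined(__ARM_FEATURE_SVE2)
#include <arm_sve.h>
/* SVE/SVE2 is vector-length agnostic: the predicate covers the loop tail. */
void add_i32(int32_t *out, const int32_t *a, const int32_t *b, int n) {
    for (int i = 0; i < n; i += svcntw()) {
        svbool_t pg = svwhilelt_b32(i, n);
        svst1(pg, &out[i], svadd_x(pg, svld1(pg, &a[i]), svld1(pg, &b[i])));
    }
}
#elif defined(__ARM_NEON)
#include <arm_neon.h>
/* NEON uses fixed 128-bit vectors, so the tail is handled with a scalar loop. */
void add_i32(int32_t *out, const int32_t *a, const int32_t *b, int n) {
    int i = 0;
    for (; i + 4 <= n; i += 4)
        vst1q_s32(&out[i], vaddq_s32(vld1q_s32(&a[i]), vld1q_s32(&b[i])));
    for (; i < n; i++)
        out[i] = a[i] + b[i];
}
#else
/* Plain scalar fallback for everything else. */
void add_i32(int32_t *out, const int32_t *a, const int32_t *b, int n) {
    for (int i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}
#endif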

 

I don't think you are allowed to create your own ARM instructions (although clearly Apple has, at least for internal use), but even if you are, I find it unlikely that Apple would go through all the trouble of inventing something when ARM already has a perfectly suitable technology for them to use (SVE2). There are just so many drawbacks to making your own instructions that I don't think it's worth it. Apple would still have to implement SVE2 support for ISA compatibility reasons, even if they rolled their own instructions alongside it. It would just eat up die space for no reason (that I can think of).


4 hours ago, LAwLz said:

Geekbench 5 is much better than Geekbench 4 in that regard. Also, the results fairly closely mirror SPEC, so I am willing to believe that this is actually very applicable to real-world performance.

The issue isn't whether or not what is being measured is accurate; it's how well real applications actually end up working. Showing that the integer or floating-point performance of the CPU is good still doesn't tell me anything about how real applications are going to perform when we're talking about a fundamental architecture switch.

 

Like, you can show SPEC results of EPYC vs Xeon and see back-and-forth results depending on what you are looking at, but then when it comes to actual applications the performance differences are much greater.

 

Benchmark software suffers from the cleanliness problem; real applications, and the users actually using the device, aren't that clean. So I'd like to see direct native vs native application performance before assuming anything.

 

I also don't expect Apple to perfectly optimize their applications first time out of the gate either, so it'll be interesting to look at 6-month and 12-month revisits as well.


1 minute ago, leadeater said:

The issue isn't whether or not what is being measured is accurate; it's how well real applications actually end up working. Showing that the integer or floating-point performance of the CPU is good still doesn't tell me anything about how real applications are going to perform when we're talking about a fundamental architecture switch.

 

Like, you can show SPEC results of EPYC vs Xeon and see back-and-forth results depending on what you are looking at, but then when it comes to actual applications the performance differences are much greater.

 

Benchmark software suffers from the cleanliness problem; real applications, and the users actually using the device, aren't that clean. So I'd like to see direct native vs native application performance before assuming anything.

Fair point. We probably won't be getting those types of benchmarks for a couple of years though, so this is the best we've got for the foreseeable future. I do, however, think that these types of benchmarks are a very good indicator, possibly the best indicator, of the raw performance of the hardware.

As it turns out (not that surprising), Apple's ARM processors are amazing. As it also turns out, their translation seems to be really good too.

 

 

6 minutes ago, leadeater said:

I also don't expect Apple to perfectly optimize their applications first time out of the gate either, so it'll be interesting to look at 6-month and 12-month revisits as well.

I kinda do expect them to be very well optimized right out of the gate.

You have to remember that neither Apple nor third-party developers have been sitting and writing their OS or applications in assembly (other than some small parts, maybe), not even for x86. So what matters is the compiler and how well it can optimize the code. Well, as it turns out, Apple has about 13 years of experience making and compiling ARM code. They are the authors and maintainers of some small little projects you might have heard of called Clang and LLVM.

If any company out there knows how to write well optimized ARM code, it's Apple.


38 minutes ago, leadeater said:

Like, you can show SPEC results of EPYC vs Xeon and see back-and-forth results depending on what you are looking at, but then when it comes to actual applications the performance differences are much greater.

Itanium is the future!


1 hour ago, LAwLz said:

I kinda do expect them to be very well optimized right out of the gate.

You have to remember that neither Apple nor third-party developers have been sitting and writing their OS or applications in assembly (other than some small parts, maybe), not even for x86. So what matters is the compiler and how well it can optimize the code. Well, as it turns out, Apple has about 13 years of experience making and compiling ARM code. They are the authors and maintainers of some small little projects you might have heard of called Clang and LLVM.

If any company out there knows how to write well optimized ARM code, it's Apple.

I still expect there are going to be cases where some optimization needs to be done to address odd issues, which might just be things like data caches getting thrashed or exceeded in more edge cases. Even Intel suffers from the same thing with the Intel compiler and their own architectures, and they barely change anything between each one, though to be fair, Skylake-SP was a large change in memory architecture and the move away from the ring bus. You can have all the engineers and decades of expertise but you'll still have bits and pieces that need fine-tuning.

 

It's not like I think it's going to hugely matter though. I'm not saying an export from Final Cut is going to go from 1 hour to 30 minutes, but I wouldn't be surprised if there's a couple of percent to be gained there, and more importantly little improvements to timeline interactions and applying/adjusting effects; just small optimizations that get rid of slight hitches and improve the user experience.

 

Further to that, generally speaking computing power is already much greater than most people need, so it's really not going to matter even if the first generation of Apple silicon is 20% slower than Intel or AMD; it would still be more than enough that it wouldn't impede people's ability to use the device or feel like substandard performance. Most people genuinely aren't going to care about that sort of difference at the end of the day, and I already think actual Mac users don't care about Geekbench or Cinebench numbers. They're already saving more time with the greater usability and simplification on offer from things like Final Cut compared to other options, and they choose Mac/Apple for those reasons, not for bleeding-edge hardware performance. If they happen to end up with both, then all the better, but I still doubt Apple is going to release CPUs with as broad a performance and application flexibility as Intel and AMD for a good while yet; it's more going to be the art of marrying the hardware and software together to get the best out of both, which is already what Apple does.


1 hour ago, justpoet said:

Itanium is the future!

Well, that Russian Elbrus CPU seems poised to be the future over there.


