
Geekbench running on Rosetta 2 in Apple's DTK outperforms the Surface Pro X with native ARM64 Geekbench

captain_to_fire

I'd be interested to see a hybrid x86/ARM CPU. Best of both worlds?

"If a Lobster is a fish because it moves by jumping, then a kangaroo is a bird" - Admiral Paulo de Castro Moreira da Silva

"There is nothing more difficult than fixing something that isn't all the way broken yet." - Author Unknown


Intel Core i7-3960X @ 4.6 GHz - Asus P9X79WS/IPMI - 12GB DDR3-1600 quad-channel - EVGA GTX 1080ti SC - Fractal Design Define R5 - 500GB Crucial MX200 - NH-D15 - Logitech G710+ - Mionix Naos 7000 - Sennheiser PC350 w/Topping VX-1


8 hours ago, justpoet said:

Itanium is the future!

I know this was a sarcastic post, but with Itanium Intel at least tried. It was still fairly widely adopted in the corporate server space. I am a strong believer that if it had edged out x86 (as was the aim, AFAIK), we would be in a better place than we are now with x86.


1 hour ago, Spindel said:

I know this was a sarcastic post, but with Itanium Intel at least tried. It was still fairly widely adopted in the corporate server space. I am a strong believer that if it had edged out x86 (as was the aim, AFAIK), we would be in a better place than we are now with x86.

Intel has had this kind of failure three times now; Itanium was only the second. StrongARM/XScale was the third, and there might still be a fourth in there with the MIPS/ARM cores that are part of their Altera FPGA SoCs.

 

Intel's first was the i860, which it was developing for Windows NT, and that went poof.

 

Intel sold their ARM business to Marvell in 2006, but it was originally acquired from DEC in 1998. Intel still has an ARM license.

 

Itanium was the first serious attempt at replacing x86, and AMD snatched the market from under them by releasing its own 64-bit x86-compatible chips. Itanium was introduced in 2001 and discontinued in 2021, with HP basically the sole customer (for HP-UX). Ironically, Itanium is immune to Spectre and Meltdown.

 

It looks like Intel may finally have to ditch its own instruction set if other OEMs decide that the only way to keep getting faster systems is to use ARM parts, even with Windows. So if there's ever a good time to jump back into making ARM parts, Intel can do it now.


On 7/3/2020 at 6:10 PM, Zodiark1593 said:

Apple’s A13 chip has support for custom instructions called AMX. From what I’d read, it’s for accelerating machine learning, though from what I understand, AVX is also used for this. Is AMX sort of Apple’s take on SVE/AVX, or is it something different entirely? Based on what I know, I’ve no idea. Just some guesses. 
 

https://www.realworldtech.com/forum/?threadid=187087&curpostid=187092

AVX in general is meant for SIMD instructions. Those can help ML somewhat, but you can't compare them to AMX (even though some Intel CPUs have a DL Boost extension to AVX). AMX is more like the tensor cores found in newer Nvidia cards.
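
To make the contrast concrete, here's a minimal sketch of the kind of data-parallel work AVX does: one instruction operating on eight floats at once. (Hypothetical example for illustration only; x86-64, compiled with something like cc -mavx.)

// Minimal AVX SIMD sketch: add two float arrays 8 lanes at a time.
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float c[8];

    __m256 va = _mm256_loadu_ps(a);     // load 8 floats into one 256-bit register
    __m256 vb = _mm256_loadu_ps(b);
    __m256 vc = _mm256_add_ps(va, vb);  // one instruction, 8 additions
    _mm256_storeu_ps(c, vc);

    for (int i = 0; i < 8; i++)
        printf("%.0f ", c[i]);          // prints: 9 9 9 9 9 9 9 9
    return 0;
}

Matrix units like AMX or tensor cores instead consume whole tiles of a matrix per instruction, which is why the two aren't really comparable.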

 

On 7/3/2020 at 6:41 PM, LAwLz said:

Had to look AMX up because I had completely forgotten about it and don't know much about it.

But it does not seem like it's a replacement for AVX. Sure, both AMX and AVX can be used for machine learning, but that might be like saying both a car and a bicycle can be used for transportation.

Apple doesn't even expose AMX to developers. Maybe they will in the future, but yeah, it doesn't seem like it will be used as a replacement for AVX.

When I said "It relies very heavily on AVX instructions, which I don't think Rosetta 2 can translate nicely to ARM instructions," I said that because Apple's developer documentation states that Rosetta cannot translate AVX instructions. So it's flat-out not supported.

The part about them "not translating nicely to ARM instructions" was speculation on my part about why Rosetta doesn't support it.

 

Quote from Apple's "About the Rosetta Translation Environment":

"Rosetta translates all x86_64 instructions, but it doesn't support the execution of some newer instruction sets and processor features, such as AVX, AVX2, and AVX512 vector instructions."

I don't know whether that's a hardware or software limitation that might change in the future, but all we know for now is that if your code relies on AVX, it will not be supported under Rosetta 2.
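
In practice that mostly bites software that assumes AVX unconditionally. x86-64 binaries are supposed to check for AVX at runtime before using it anyway, and under Rosetta 2 the emulated CPU simply doesn't advertise AVX, so well-behaved code falls back to its SSE path. A minimal sketch of such a runtime guard (hypothetical example using the GCC/Clang __builtin_cpu_supports builtin; kernel_avx and kernel_fallback are made-up stand-ins for real code paths):

// Runtime feature detection: under Rosetta 2 the translated CPUID reports
// no AVX, so the fallback branch is taken instead of crashing.
#include <stdio.h>

static void kernel_avx(void)      { puts("using AVX path"); }
static void kernel_fallback(void) { puts("using SSE/scalar fallback"); }

int main(void) {
    __builtin_cpu_init();  // initialize CPU-feature detection (required on GCC)
    if (__builtin_cpu_supports("avx"))
        kernel_avx();
    else
        kernel_fallback(); // this branch runs under Rosetta 2
    return 0;
}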

 

 

 

As for Apple's "take on SVE/AVX", my guess is that Apple will just use SVE2, which will be introduced with ARMv9. Developers using the A12Z will have to target NEON for now (SVE "1" lacks quite a bit of integer support and therefore isn't a full replacement for NEON; SVE2 fixes this, hence why it's a big deal).
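
For reference, the NEON that A12Z developers target today looks like this: fixed 128-bit vectors, i.e. 4 floats per operation versus 8 for AVX. (Minimal hypothetical sketch; AArch64, where NEON is mandatory.)

// Minimal NEON sketch: add two float arrays 4 lanes at a time.
#include <arm_neon.h>
#include <stdio.h>

int main(void) {
    float a[4] = {1, 2, 3, 4};
    float b[4] = {4, 3, 2, 1};
    float c[4];

    float32x4_t va = vld1q_f32(a);      // load 4 floats into a 128-bit register
    float32x4_t vb = vld1q_f32(b);
    float32x4_t vc = vaddq_f32(va, vb); // 4 additions in one instruction
    vst1q_f32(c, vc);

    for (int i = 0; i < 4; i++)
        printf("%.0f ", c[i]);          // prints: 5 5 5 5
    return 0;
}

SVE/SVE2, by contrast, is vector-length agnostic: the same binary runs with whatever vector width (128 to 2048 bits) the hardware implements, which is part of why it's attractive going forward.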

 

I don't think you are allowed to create your own ARM instructions (although clearly Apple has, at least for internal use), but even if you are, I find it unlikely that Apple would go through all the trouble of inventing something when ARM already has a perfectly suitable technology for them to use (SVE2). There are just so many drawbacks to making your own instructions that I don't think it's worth it. Apple would still have to implement SVE2 support for ISA-compatibility reasons, even if they rolled their own instructions alongside it. It would just eat up die space for no reason (that I can think of).

AVX and newer extensions are still protected under patents, so there's that.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


4 hours ago, igormp said:

AVX in general is meant for SIMD instructions. Those can help ML somewhat, but you can't compare them to AMX (even though some Intel CPUs have a DL Boost extension to AVX). AMX is more like the tensor cores found in newer Nvidia cards.

 

AVX and newer extensions are still protected under patents, so there's that.

It appears Intel is implementing AMX (Advanced Matrix Extensions) instructions in upcoming designs as well. 
 

Beyond machine learning, I wonder if the Matrix Extensions will find a use in productivity tasks (3D rendering, video encoding, etc.), or if the precision just isn't there.
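
Intel has already published the intrinsics, and they give a feel for the programming model: you describe a set of "tile" registers up front, load sub-matrices into them, and issue one matrix-multiply-accumulate per instruction. A rough sketch based on that documentation (hypothetical; assumes AMX-capable hardware and a compiler with -mamx-tile/-mamx-int8 support):

// Intel AMX sketch: 16x64 int8 tiles multiplied into a 16x16 int32 tile.
#include <immintrin.h>
#include <stdint.h>

// 64-byte tile-configuration layout from Intel's documentation.
typedef struct {
    uint8_t  palette_id;
    uint8_t  start_row;
    uint8_t  reserved[14];
    uint16_t colsb[16];  // bytes per row of each tile
    uint8_t  rows[16];   // number of rows of each tile
} __attribute__((packed)) tile_config;

int main(void) {
    tile_config cfg = {0};
    cfg.palette_id = 1;
    cfg.rows[0] = 16; cfg.colsb[0] = 64;  // tmm0: 16x16 int32 accumulator
    cfg.rows[1] = 16; cfg.colsb[1] = 64;  // tmm1: 16x64 int8 operand A
    cfg.rows[2] = 16; cfg.colsb[2] = 64;  // tmm2: int8 operand B (VNNI-packed)
    _tile_loadconfig(&cfg);

    static int8_t  a[16][64], b[16][64];  // int8 inputs
    static int32_t c[16][16];             // int32 results

    _tile_loadd(1, a, 64);   // load operands, 64-byte row stride
    _tile_loadd(2, b, 64);
    _tile_zero(0);           // clear the accumulator tile
    _tile_dpbssd(0, 1, 2);   // c += a * b: int8 dot products, int32 accumulate
    _tile_stored(0, c, 64);  // write the accumulator back to memory
    _tile_release();         // free the tile state
    return 0;
}

As for precision: the initial AMX spec only covers int8 and bfloat16, so it looks aimed squarely at machine learning rather than at FP32-heavy rendering or encoding work.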

My eyes see the past…

My camera lens sees the present…

