
Thoughts on where Intel and AMD will go to compete with Apple silicon?

Intel has already gone hybrid architecture, but Apple clearly has the advantage in performance per watt. It's interesting to think about what they may do.


Unless they build a monolithic die with RAM built into the SoC and a really large bus width like the M1, the best they can do is match the M1 once they reach 5nm, but by then Apple will be on 3nm and will still have the lead.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


7 minutes ago, SuperShermanTanker said:

Intel has already gone hybrid architecture, but Apple clearly has the advantage in performance per watt. It's interesting to think about what they may do.

do they need to compete anymore tho?

 

check your cables!
samsung s10e | android enthusiast | Tech blogger | XC Athlete | Surface pro 4

 


1 hour ago, igormp said:

Unless they build a monolithic die with RAM built into the SoC and a really large bus width like the M1, the best they can do is match the M1 once they reach 5nm, but by then Apple will be on 3nm and will still have the lead.

I wonder how far Intel and AMD can tune their mobile chips for efficiency to compete with Apple. I'm actually really curious how well the Intel E-cores work.


21 minutes ago, SuperShermanTanker said:

I wonder how far Intel and AMD can tune their mobile chips for efficiency to compete with Apple. I'm actually really curious how well the Intel E-cores work.

The CPU isn't the only problem; there's plenty of wasted energy elsewhere. There are also limitations apart from the microarch, such as the node the devices are fabbed on.

 

10 minutes ago, TrigrH said:

x86 vs ARM

 

ARM will always be more efficient, whilst x86 is more versatile.

That makes no sense; the ISA doesn't strictly define how power-efficient something is, especially given that it's just a facade over the underlying µarch.

One can also add as many extensions as they want to their ARM CPU, and do a ton of macro-op fusion, pretty much like modern x86 CPUs do.
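The "facade" point can be sketched in a few lines: a front end decodes ISA instructions into internal micro-ops, and macro-op fusion merges adjacent instructions (classically a compare plus a dependent branch) into one. A toy Python model, purely illustrative; the instruction tuples and the fusion rule here are made up and don't reflect any real decoder:

```python
# Toy front-end model: the ISA is just the input format; the core executes
# internal micro-ops. "Macro-op fusion" merges adjacent instructions (here a
# cmp followed by a dependent branch) into a single micro-op before issue.

def fuse(instructions):
    """Fuse (cmp, branch) pairs into one 'cmp+br' micro-op."""
    micro_ops = []
    i = 0
    while i < len(instructions):
        if (i + 1 < len(instructions)
                and instructions[i][0] == "cmp"
                and instructions[i + 1][0] == "br"):
            # Merge the compare and the branch into one internal op
            micro_ops.append(("cmp+br",) + instructions[i][1:] + instructions[i + 1][1:])
            i += 2
        else:
            micro_ops.append(instructions[i])
            i += 1
    return micro_ops

program = [
    ("add", "r0", "r1"),
    ("cmp", "r0", "#0"),
    ("br", "loop"),
    ("mov", "r2", "#1"),
]

fused = fuse(program)
print(fused)  # 4 ISA instructions become 3 micro-ops
```

The point being: what the scheduler actually sees is the fused stream, so two ISAs with different surface encodings can end up issuing near-identical internal ops.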



5 hours ago, TrigrH said:

x86 vs ARM

 

ARM will always be more efficient, whilst x86 is more versatile.

ARM is exactly as versatile as x86. What x86 has is backwards compatibility.
 

But what Apple has shown is that you might as well fix the BC with software. Ten-year-old (or older) unsupported software can be run in a translation layer and it will still run as well as, or better than, it did the day it was released.
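The translation-layer argument rests on translating once and then running the translated code natively afterwards. A toy sketch in Python with a made-up three-op guest "ISA"; nothing here reflects how Rosetta 2 actually works, it only shows the translate-once/run-many shape:

```python
# Toy "translate once, run many times" sketch of binary translation:
# guest instructions are converted ahead of time into host operations,
# so each run pays native dispatch cost, not per-instruction decoding.

import operator

HOST_OPS = {"add": operator.add, "sub": operator.sub, "mul": operator.mul}

def translate(guest_program):
    """Translate guest (op, operand) pairs into host closures, once."""
    translated = []
    for op, operand in guest_program:
        fn = HOST_OPS[op]
        # Bind fn/operand via defaults to avoid Python's late-binding trap
        translated.append(lambda acc, fn=fn, k=operand: fn(acc, k))
    return translated

def run(translated, acc=0):
    for step in translated:
        acc = step(acc)
    return acc

guest = [("add", 5), ("mul", 3), ("sub", 1)]  # (0 + 5) * 3 - 1
native = translate(guest)                      # one-time translation cost
print(run(native))                             # prints 14
```

Repeated runs reuse `native`, which is the same reason ahead-of-time translation can stay close to native speed for old binaries.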


Intel and AMD aren't competing with Apple Silicon; they're now in different spaces.

 

Want macOS? You get Apple Silicon.

Need programs that aren't on macOS, or just don't want to use macOS? You can't get Apple Silicon anyway.



Oh boy, would I love to get a single-board computer with an M1 or better slapped onto it. A distant pipe dream, but insane potential.
 

As far as Apple Silicon vs the x86 duo, it's no secret that Apple pursues very wide CPU architectures to achieve a high degree of performance at a given clock speed. Wider architectures are said to be more difficult to fully utilize from a single thread, though. SMT/Hyper-Threading addresses this for x86, and those designs are still narrower than Apple's. Apple, however, pretty much only allows code that has gone through its own compilers to run on its platforms. So I wonder if this difference in control makes it easier for Apple to push for, and properly utilize, higher-efficiency designs? Surely if it were just a matter of adding more decode and arithmetic units to the pipeline, plus some cache for it all, Intel and AMD would be keeping pace.
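The width-versus-single-thread point can be made concrete with a back-of-the-envelope issue model: a wider core only finishes independent work faster, while a fully dependent chain issues one op per cycle no matter the width, and that leftover issue capacity is exactly what SMT tries to fill with a second thread. A sketch, with arbitrary op counts and widths:

```python
# Back-of-the-envelope issue model: how many cycles to issue num_ops
# on a core that can issue issue_width micro-ops per cycle.
import math

def cycles(num_ops, issue_width, dependent):
    """A fully dependent chain issues one op per cycle regardless of
    width; fully independent ops fill every issue slot."""
    if dependent:
        return num_ops
    return math.ceil(num_ops / issue_width)

ops = 80
for width in (4, 8):
    indep = cycles(ops, width, dependent=False)
    chain = cycles(ops, width, dependent=True)
    print(f"width {width}: independent={indep} cycles, chain={chain} cycles")

# Independent work scales with width (20 -> 10 cycles); the dependency
# chain stays at 80 cycles either way. Those empty slots are the idle
# capacity SMT fills with another thread's instructions.
```

Real cores sit between the two extremes, which is why how much ILP the compiled code exposes matters so much for a wide design.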

My eyes see the past…

My camera lens sees the present…


Apple controls the entire stack, from OS to circuit board; AMD and Intel don't. While they both tend to bend over for Windows, Apple will always have the more efficient platform for its apps. Have you seen the benchmarks of Final Cut on M1 versus Premiere on x86? It's scary.

 

Intel doesn't want to compete with Apple; it's not their market. If anything, Microsoft will start designing its own chips.

 

Ideally I want to see platforms where RISC and CISC (x86) cores reside side by side and apps can be assigned to the processor they are optimally coded for. This actually exists in quasi form today, in that some apps use the GPU instead of the CPU for some workloads.

 

For the record, I ran NT on RISC hardware (Alpha) back in the day. It screamed.

 

 


They can't. Apple has complete vertical integration, which Intel and AMD cannot hope to match. There are optimisations they simply cannot do, and the gap will only widen when Apple drops x86 chip support.


They are not in the same market, so it doesn't matter whether they can "compete" with Apple Silicon. And I doubt Apple will ever sell its chips for other makers to use.

CPU: AMD Ryzen 3700x / GPU: Asus Radeon RX 6750XT OC 12GB / RAM: Corsair Vengeance LPX 2x8GB DDR4-3200
MOBO: MSI B450m Gaming Plus / NVME: Corsair MP510 240GB / Case: TT Core v21 / PSU: Seasonic 750W / OS: Win 10 Pro


Apple doesn't produce x86 chips (yet), so in my view they aren't in direct competition with Intel/AMD.

 

Comparing the x86 architecture with ARM is like comparing apples and oranges.

 

MAIN SYSTEM: Intel i9 10850K | 32GB Corsair Vengeance Pro DDR4-3600C16 | RTX 3070 FE | MSI Z490 Gaming Carbon WIFI | Corsair H100i Pro 240mm AIO | 500GB Samsung 850 Evo + 500GB Samsung 970 Evo Plus SSDs | EVGA SuperNova 850 P2 | Fractal Design Meshify C | Razer Cynosa V2 | Corsair Scimitar Elite | Gigabyte G27Q

 

Other Devices: iPhone 12 128GB | Nintendo Switch | Surface Pro 7+ (work device)


1 hour ago, Alcarin said:

Apple doesn't produce x86 chips (yet) so in my view they aren't in direct competition with Intel/AMD.

 

Comparing x86 architecture vs ARM is like comparing apples and oranges.

 

I dislike the apples-and-oranges comparison because, at the end of the day, the end flavor, or output, is literally the same: calculate 4 x 4 and you're going to get 16, no matter the ISA. It would be like making pies out of both fruits and having the pies taste identical. And honestly, even bringing the ISA into the discussion isn't worth much, as a very wide array of different CPU designs can be built to be compatible with a given ISA. Apple's ARM-compatible designs tend to mop the floor with ARM's own reference designs.
 

I want to know where and how Apple is able to achieve such a performance/efficiency advantage, and why Intel/AMD haven't pursued such aggressive designs themselves. Not that the x86 contemporaries are bad designs at all. If software is a big part of effectively using wider architectures (and with Apple's extensive control, this is very likely), then it could simply be that Intel/AMD already have exceptionally efficient designs for their use case; that use case just doesn't lend itself to feeding wider designs, limiting the options for performance improvements. (The employment of SMT/Hyper-Threading suggests this, while being notably absent on Apple's hardware.)

 

I feel these answers are important, as they can tell us the direction computing may move in the future. If Intel/AMD are struggling to bring more performance to the table, perhaps the use case itself needs to change to facilitate it?



I don't think Intel or AMD are going to be on the bleeding edge again in regard to efficiency for a long time.

 

My M1 Max laptop is virtually as fast as my 5900X + 5700XT desktop workstation at my office when using Capture One and Photoshop. The M1 Max is actually faster in C1 when processing out JPEGs and TIFFs... all while pulling maybe 60W of total power with the workload completely maxed out.

Work Rigs - 2015 15" MBP | 2019 15" MBP | 2021 16" M1 Max MBP | Lenovo ThinkPad T490 |

 

AMD Ryzen 9 5900X  |  MSI B550 Gaming Plus  |  64GB G.SKILL 3200 CL16 4x8GB |  AMD Reference RX 6800  |  WD Black SN750 1TB NVMe  |  Corsair RM750  |  Corsair H115i RGB Pro XT  |  Corsair 4000D  |  Dell S2721DGF  |
 

Fun Rig - AMD Ryzen 5 5600X  |  MSI B550 Tomahawk  |  32GB G.SKILL 3600 CL16 4x8GB |  AMD Reference 6800XT  | Creative Sound Blaster Z  |  WD Black SN850 500GB NVMe  |  WD Black SN750 2TB NVMe  |  WD Blue 1TB SATA SSD  |  Corsair RM850x  |  Corsair H100i RGB Pro XT  |  Corsair 4000D  |  LG 27GP850  |


Didn't AMD send themselves into the red by designing their own ARM CPU at one time?


How much do you guys think software code optimization would help here? My feeling is that it would be a significant boost for the x86 world to set some software/code optimization targets. I have no idea how; it's certainly not as easy as setting laptop specs like Intel's "Ultrabook" program, but it's just a thought.

 

Also, it probably isn't a goal for Intel or AMD to set, but maybe for Microsoft as the dominant OS developer. Linux is probably already well optimized as an OS, but I have no idea whether, for example, any big video editors are available or optimized on that platform.

 

I don't know anything about code optimization, or at what level of the stack optimization is most needed/effective (firmware, drivers, OS, runtime environments, software...). It's just that whenever I hear that some new piece of software has been optimized for the M1, I find myself wondering whether it was ever really optimized for x86, or whether the developers just went with a "there's plenty of power, so don't care" attitude...


On 3/12/2022 at 6:31 PM, Alcarin said:

Apple doesn't produce x86 chips (yet) so in my view they aren't in direct competition with Intel/AMD.

 

 

 

Don't think they will, as they now make their own!

I'm a happy owner of the M1 Pro chip and it's lightning fast, faster than my Windows PC.

Downside? A limited selection of games, and some software needs to be emulated or run through Wine.

 

MSI B450 Pro Gaming Pro Carbon AC | AMD Ryzen 2700x  | NZXT  Kraken X52  MSI GeForce RTX2070 Armour | Corsair Vengeance LPX 32GB (4*8) 3200MhZ | Samsung 970 evo M.2nvme 500GB Boot  / Samsung 860 evo 500GB SSD | Corsair RM550X (2018) | Fractal Design Meshify C white | Logitech G pro WirelessGigabyte Aurus AD27QD 


On 3/11/2022 at 3:51 AM, igormp said:

once they reach 5nm, but by then Apple will be on 3nm and will still have the lead.

Correct me if I'm wrong, but I remember reading somewhere that Intel and Apple have bought all of TSMC's 3nm fab space. So wouldn't that put them on equal footing in terms of node advantage?

The obvious exception would be if Intel uses that 3nm fab space solely for its GPU lineup.

"A high ideal missed by a little, is far better than low ideal that is achievable, yet far less effective"

 

If you think I'm wrong, correct me. If I've offended you in some way tell me what it is and how I can correct it. I want to learn, and along the way one can make mistakes; Being wrong helps you learn what's right.


On 3/12/2022 at 8:45 PM, Action_Johnson said:

I don't think Intel or AMD are going to be on the bleeding edge again in regard to efficiency for a long time.

 

My M1 Max laptop is virtually as fast as my 5900X + 5700XT desktop workstation at my office when using Capture One and Photoshop. The M1 Max is actually faster in C1 when processing out JPEGs and TIFFs... all while pulling maybe 60W of total power with the workload completely maxed out.

That is partially because AMD and Intel aren't really interested in efficiency on the desktop side, and even on the laptop side their focus is usually on performance. The 5900X can achieve 90%+ of its stock multi-core performance in ECO mode (~85W), but AMD ships it way past its most efficient point.

Maybe I'm wrong, but isn't image processing usually single-threaded or limited to a few cores? That would explain why the M1 Max is faster than the 5900X, since in multi-core work the 5900X should be quite a bit faster than the M1 Pro/Max in most tasks.
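The ECO-mode observation follows from dynamic power scaling roughly with V²·f: backing the clock off a little lets the voltage drop too, so power falls much faster than performance. A rough illustration with hypothetical operating points, not measured 5900X figures:

```python
# Crude dynamic-power comparison: P scales roughly with V^2 * f
# (capacitance folded into the constant, static power ignored).

def dynamic_power(freq_ghz, volts):
    return volts ** 2 * freq_ghz

# Hypothetical operating points for an all-core workload:
stock = dynamic_power(4.6, 1.35)  # chasing the last few hundred MHz
eco   = dynamic_power(3.8, 1.05)  # backed off to the efficient knee

print(f"perf kept : {3.8 / 4.6:.0%}")   # ~83% of the clock speed...
print(f"power kept: {eco / stock:.0%}") # ...for ~50% of the power
```

The exact numbers are made up, but the shape of the curve is why a chip tuned for the efficient knee (or capped by ECO mode) gives up so little performance for so much power.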


I have a feeling it's going to end up like a milder version of Qualcomm chips vs Apple A-series chips. Because Apple doesn't sell its tech, the "competition" isn't really a competition. They can just go with "it is what it is" and not put in any real effort to catch up to anyone.

 

And I say milder because there is still the AMD vs Intel competition, and in the PC space they can't slack off too much, because even though it runs on different platforms, a lot of software is available on both.


20 minutes ago, KaitouX said:

That is partially because AMD and Intel aren't really interested in efficiency on the desktop side

The fact is they should be. The higher-end you go, the more important perf/W becomes, since you start to hit the limits of power delivery to the chip (how to get power distributed through the silicon and supporting substrate, and how to cool it).

If you have a chip architecture that performs the same but draws half the power, then, if you're willing to spend the transistors given the same limits (something like a Mac Pro), you can produce a chip that is much more powerful than the others.
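That transistor-budget argument is just arithmetic: at a fixed socket power, halving power per core roughly doubles the cores you can feed. A sketch with made-up numbers, ignoring static power, interconnect overhead, and Amdahl's law:

```python
# The post's argument in numbers: same per-core performance, different
# efficiency, same socket power budget.

power_budget_w = 200
watts_per_core_efficient = 10  # hypothetical efficient design
watts_per_core_hungry    = 20  # same per-core perf, twice the power

cores_efficient = power_budget_w // watts_per_core_efficient
cores_hungry    = power_budget_w // watts_per_core_hungry

print(cores_efficient, "vs", cores_hungry, "cores in the same budget")
```

So at the high end, perf/W stops being a laptop nicety and becomes the ceiling on total throughput.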
 

23 minutes ago, KaitouX said:

Maybe I'm wrong but isn't image processing usually single-threaded or limited to few cores? 

It depends a lot on the application. Image processing is in fact a task that is very well suited to multi-threaded workloads, and even more suited to large vector operations (AVX etc.). Adobe uses this a lot on x86 systems but is yet to properly make use of the similar AMX units in the M1 architecture; the perf benefit the M1 has for these tasks comes from the chip's much larger bandwidth and its cache and internal fabric.

Remember, the M1 Max CPU under full load (CPU and AMX units) does not draw more than 30W, including memory.
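The claim that image processing parallelizes well comes down to pixels (or rows) being independent: the image can be split into chunks with no coordination between workers. A toy Python sketch of the decomposition itself, with a hypothetical gamma adjustment standing in for a real filter (no actual threading or SIMD here, just the chunking logic):

```python
# Each row of an image can be processed independently, so the work
# splits cleanly across workers; here we only demonstrate that any
# chunking reproduces the serial result exactly.

def adjust(row, gamma=0.5):
    """Toy per-pixel gamma adjustment on 8-bit grayscale values."""
    return [round(255 * (p / 255) ** gamma) for p in row]

def process_chunked(image, workers=4):
    """Deal rows out to per-worker chunks, process, and reinterleave."""
    chunks = [image[w::workers] for w in range(workers)]  # independent work
    done = [[adjust(row) for row in chunk] for chunk in chunks]
    out = [None] * len(image)
    for w, chunk in enumerate(done):
        for j, row in enumerate(chunk):
            out[w + j * workers] = row  # restore original row order
    return out

image = [[i, 64, 128, 255] for i in range(8)]
serial = [adjust(row) for row in image]
assert process_chunked(image) == serial  # same result, any chunking
```

A real pipeline would hand each chunk to a thread and use vector units (AVX, NEON, AMX) within the chunk, which is exactly why both core count and per-core width pay off here.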


 

 

1 hour ago, J-from-Nucleon said:

Correct me if I'm wrong but I remember reading somewhere that Intel and Apple have bought all of TSMCs 3nm fab space. So wouldn't that put them on equal footing in terms of node advantage?

 

Intel purchased some 3nm capacity, but they are a lower-priority customer than Apple; Apple funded the building of the 3nm fab, so it will have priority access, and Intel likely gets access after that. TSMC does not issue future node orders with fixed dates and unit counts; the higher-priority customer you are, the higher on the list you sit when capacity becomes available. This is how Apple has been able to maintain 5nm volumes, pushing back other vendors' orders that were already in place, since Apple is a higher-priority customer to TSMC.


2 hours ago, Tigerleap said:

 

 

I don't know anything about code optimization, or at what level of the stack optimization is most needed/effective (firmware, drivers, OS, runtime environments, software...). It's just that whenever I hear that some new piece of software has been optimized for the M1, I find myself wondering whether it was ever really optimized for x86, or whether the developers just went with a "there's plenty of power, so don't care" attitude...

When people say "optimised for M1", all they usually mean is that it runs on the M1 without Rosetta 2 (the x86 translation tool); very few apps are truly optimised for the M1. If you look at apps like Photoshop, the code running on the M1 is the vanilla, non-optimised pathway, just classical ARM loops, compared to x86, where there are years and years of hand-crafted optimisation; I would not be at all surprised if there is hand-written assembly in the code base.

I am yet to see an app announce being "optimised" for the M1 in the sense of the developers having done more than click compile on vanilla C/C++ and shipped it. It is clear the Adobe apps are not making good use of the hardware at all, e.g. not using the AMX (matrix coprocessor); using it would provide a massive perf boost, as it provides up to 2 TFLOPS of matrix performance on the M1 and 4 TFLOPS on the M1 Pro/Max. This is a lot, given that it runs inline within the CPU and shares the L2 cache, so it does not have the latency penalty of calling out to the GPU. I expect that even in 10 years' time very few third-party apps will truly be considered "optimised" for Apple's chips, unless the market share changes a lot. The x86 code bases will continue to have their hand-crafted pathways; for big vendors these are commonly written by Intel engineers loaned to companies like Adobe to help them hand-craft their code (Apple does not do this, and Apple really does not want devs writing assembly-level code).


On 3/13/2022 at 7:29 AM, Zodiark1593 said:

I want to know where and how Apple is able to achieve such a performance/efficiency advantage, and why Intel/AMD haven't pursued such aggressive designs themselves.

 

Years and years (over 10) of only considering a new solution valid if it improves perf/W. You don't get there overnight; it has been a fixed goal of the silicon team at Apple to disregard any approach that hurts perf/W even a little bit.

 

 

On 3/13/2022 at 7:29 AM, Zodiark1593 said:

If software is a big part of being able to effectively use wider architectures (and with Apple's extensive control, this is very likely)

It's not software; just look at Linux-on-M1 performance. Even though they are still lacking a load of custom power-management support (can't control core boost, can't put the CPU into deep sleep states, etc.), they are getting very good perf; many Unix apps run faster on Linux on the M1 than on macOS on the M1! If anything, Apple's software is getting in the way of the perf, not improving it!

