Thoughts on where Intel and AMD will go to compete with Apple silicon?

I agree with the couple of you who said that Microsoft is more of a competitor to Apple than either Intel or AMD. You could also look at it the other way around: Intel and AMD make far more than two types of CPU, whereas Apple makes just the Bionic, and all its variants, for its mobile devices, and the M1, with all its variants, for its laptops and desktops. Given the broad range of x86 and RISC chips AMD and Intel make, why would they worry about (a) Apple's Bionic lineup, since they're not in that space anyway, or (b) the M1, which appears in just a handful of Apple's own products and is never going to make a dent in the Windows market?

 With all the Trolls, Try Hards, Noobs and Weirdos around here you'd think I'd find SOMEWHERE to fit in!


2 hours ago, J-from-Nucleon said:

Correct me if I'm wrong, but I remember reading somewhere that Intel and Apple have bought all of TSMC's 3nm fab space. So wouldn't that put them on equal footing in terms of node advantage?

The obvious exception would be if Intel uses that 3nm fab space solely for its GPU lineup.

Fair enough, I've just seen it, and it sounds like there are also CPUs planned for those 3nm nodes. We shall see what comes out of it and how competitive it is.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


On process: last time I checked, TSMC 3nm was due to start production later this year, though volume only comes in 2023. Intel doesn't expect its own fab processes to be competitive with TSMC until 2025, if things go to plan. So Intel will be partly relying on TSMC for leading-edge parts at least until then, and TSMC's hope is that it can do such a good job that Intel sticks with them even once Intel's competing tech is ready. 3nm will come at a premium to Intel, so they're more likely to prioritise it for high-value products, not the consumer tier, unless they want to make a show of what they can do design-wise.

 

The Apple silicon approach is open to AMD and Intel, but the question is: at what cost? Intel have already demonstrated extreme pick-and-mix with Ponte Vecchio. Not all parts need to be on the same process, and you can glue together whatever you want. But it won't be cheap; nor is any of the decent Apple stuff. I've paid more for an iPad than for full laptops.

 

The nearest thing we have in SoC terms might be the current-gen console APUs from AMD, but those are built down to a price. Would people want a high-end equivalent? I'm not sure it would work on a large desktop, since you lose expandability, as you do with most Apple stuff. High-performance laptops are a maybe, if that isn't too niche in itself. Newer PCIe buses reduce the gap between CPU and GPU. RAM still leaves significant room for improvement, and I've long wished for the higher-end consumer tier (not HEDT) to move to more memory channels as standard. And if you want expandability, you have connectivity to consider.
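
To put rough numbers on the memory channel point: peak bandwidth is just channels x transfer rate x bytes per transfer. A quick sketch (all figures are illustrative ballpark values, not official specs):

```python
# Rough theoretical peak memory bandwidth: channels * transfers/s * bytes/transfer.
# All figures below are illustrative ballpark values, not official specs.

def peak_bandwidth_gbs(channels: int, mt_per_s: int, bus_width_bits: int = 64) -> float:
    """Theoretical peak bandwidth in GB/s for a given channel count and data rate."""
    return channels * mt_per_s * (bus_width_bits // 8) / 1000

print(peak_bandwidth_gbs(2, 4800))  # ~76.8 GB/s: typical dual-channel DDR5-4800 desktop
print(peak_bandwidth_gbs(4, 4800))  # ~153.6 GB/s: hypothetical quad-channel consumer board
print(peak_bandwidth_gbs(8, 6400))  # ~409.6 GB/s: M1 Max-class 512-bit LPDDR5 as 8 x 64-bit
```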

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


28 minutes ago, porina said:

and TSMC's hope is that it can do such a good job that Intel sticks with them even once Intel's competing tech is ready.

Which is possible, given that Intel is opening its fabs to other companies, so they don't necessarily need to build everything in-house.

 

29 minutes ago, porina said:

3nm will come at a premium to Intel, so they're more likely to prioritise it for high-value products, not the consumer tier, unless they want to make a show of what they can do design-wise.

If they manage to build a design akin to AMD's, where the same base die can be repurposed across many different products, I can see consumer parts becoming a thing (as long as enterprise demand isn't high enough to swallow the whole lineup).

 

31 minutes ago, porina said:

The nearest thing we have in SoC terms might be the current-gen console APUs from AMD, but those are built down to a price. Would people want a high-end equivalent? I'm not sure it would work on a large desktop, since you lose expandability, as you do with most Apple stuff. High-performance laptops are a maybe, if that isn't too niche in itself. Newer PCIe buses reduce the gap between CPU and GPU. RAM still leaves significant room for improvement, and I've long wished for the higher-end consumer tier (not HEDT) to move to more memory channels as standard. And if you want expandability, you have connectivity to consider.

I'd pay heavy money for an x86 equivalent of a current MacBook. It wouldn't even need to be high performance; M1-like performance with the same battery life would be enough for my mobile use. Just give me that ridiculously wide memory bus in a single, efficient package.

 

As for a desktop, however, since I do need PCIe (and Nvidia GPUs, to be more specific), I'd have to give up the unified memory architecture (or maybe not; I wouldn't mind having both and paying a premium). But I could live with non-upgradeable RAM in a fast SoC that could chew through compile times quickly, as long as it was enough for me (128-256 GB).

 

I'm seriously thinking about buying an M1 MacBook as soon as Asahi gets proper power management and GPU acceleration.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


On 3/10/2022 at 7:48 PM, SuperShermanTanker said:

This is an interesting thing to think about: what they may do.

I'm fairly sure that both Intel and AMD are working on some ARM CPUs. But if you're talking about x86 competing with Apple silicon, to be honest, does it matter? Apple will NOT sell its CPUs to third parties. Apple also doesn't expect to sell machines in the numbers that Dell, HP, and Lenovo do. Many organizations still rely on Windows, and while an ARM version of Windows exists, it kinda sucks. Because of backwards compatibility, Microsoft will not be able to drop x86 support. So in reality Intel and AMD compete between themselves, and Apple is off on its own island, much like Nintendo doing its own thing.

 

The big question, however, is whether Apple will be able to grow its market share in a significant way. If that happens, then Microsoft could be in trouble.

I just want to sit back and watch the world burn. 


What's really funny about it all is that they're not in competition with each other, despite the fact that Apple silicon is technically a CPU made to be used in a computer. Apple will NEVER let anyone else make use of it, nor will they ever engineer a processor for another company.


On 3/22/2022 at 2:31 PM, wolverine104 said:

What's really funny about it all is that they're not in competition with each other, despite the fact that Apple silicon is technically a CPU made to be used in a computer. Apple will NEVER let anyone else make use of it, nor will they ever engineer a processor for another company.

I don't know where the idea that AMD and Intel will try to compete with Apple by producing high-end SoCs came from in the first place. But you are right.


The only way AMD, Intel, and/or Nvidia would compete with Apple at the monolithic, energy-efficient CPU+GPU+accelerator SoC game is if they unified on chiplets. Systems wouldn't be configurable anymore; you would just buy a chiplet combination on an interposer and drop it into a system with some DRAM.

 

Also, I wouldn't say Apple is really that much better at hardware design than the hardware majors: AMD, Intel, and Nvidia.

 

The GA102 GPU is 628.4 mm², and the RTX 3090 costs $1,500 for a part that doesn't even have all its cores enabled, on a more mature process.

The M1 Ultra is roughly 840 mm² of 5nm silicon (two M1 Max dies fused together); half the new Mac Studio's BOM cost must be the silicon itself, since yields can't be great at that size.
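
For intuition on why that die size hurts, a crude Poisson yield model shows how quickly the defect-free fraction falls off with area. The defect density used here is an assumed illustrative figure, not a published TSMC number:

```python
import math

# Crude Poisson die-yield model: P(defect-free) = exp(-D0 * A).
# D0 = 0.07 defects/cm^2 is an assumed illustrative figure, not a published number.

def poisson_yield(area_mm2: float, d0_per_cm2: float = 0.07) -> float:
    """Fraction of dies expected to be defect-free under a simple Poisson model."""
    return math.exp(-d0_per_cm2 * area_mm2 / 100.0)

for name, area in [("GA102, ~628 mm^2", 628.4),
                   ("M1 Max, ~430 mm^2", 430.0),
                   ("hypothetical monolithic 840 mm^2 die", 840.0)]:
    print(f"{name}: ~{poisson_yield(area):.0%} defect-free")
```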

 

Apple doesn't need a huge profit margin on its hardware since it has a software ecosystem, whereas the hardware majors have to make their profits off the hardware itself, for the most part. So Apple can afford to throw die area at designs in a way the hardware majors can't. Plus, with their ecosystem lock-in, they can easily get consumers to justify the high price tags.

 

Additionally, if you can justify wider, more parallel architectures despite the area cost and yield hit at the same performance, you can improve efficiency by running at lower clock speeds, which means lower voltages, which translates to less energy for the same operations. Dynamic power = activity x capacitance x voltage² x frequency; the voltage term is quadratic, so lower voltages are much more efficient to run at. Apple's massive area budgets are partly a result of designing chips for very thermally limited products, so they build wider cores and run them a bit slower for similar, if not better, performance. But they have different economics that allow them to make such huge chips.
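
As a toy example of that width-versus-clock trade (all numbers are made up for illustration, not measurements of any real chip):

```python
# Dynamic power: P = activity * C * V^2 * f. Energy per operation scales with C * V^2,
# so a wider design clocked lower at reduced voltage does the same work for less power.
# All values are made-up illustrative numbers, not measurements of any real chip.

def dynamic_power(activity: float, cap: float, volts: float, freq_ghz: float) -> float:
    """Relative dynamic power in arbitrary units."""
    return activity * cap * volts**2 * freq_ghz

# Narrow, fast core: baseline switched capacitance, 5.0 GHz at 1.2 V.
narrow = dynamic_power(1.0, 1.0, 1.2, 5.0)

# Wide, slow core: 2x the switched capacitance, 2.5 GHz at 0.8 V.
# Same notional throughput: 2 units of width * 2.5 GHz == 1 unit * 5.0 GHz.
wide = dynamic_power(1.0, 2.0, 0.8, 2.5)

print(f"narrow/fast power: {narrow:.2f}")  # 7.20
print(f"wide/slow power:   {wide:.2f}")    # 3.20
print(f"wide design draws ~{wide / narrow:.0%} of the power at equal throughput")
```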

 

AMD, Intel, and Nvidia don't care about your power bill, so they're very happy to shrink area and run faster at higher voltages than the M1, optimizing performance per unit of area (and hence yield): once they make the sale, there's no profit incentive for them to care about power on your end. The 350W RTX 3090 is an example. This is also why datacenter CPUs run slower; power feeds into datacenter TCO, so those parts are designed with a bit more area, more cores, slower clocks, and lower voltages. The only power restrictions on Intel/AMD/Nvidia are the limits system integrators impose.

