Next year's Apple-designed chips might rely on TSMC's 5nm+ process, with a 4nm process arriving in 2022

Summary

According to this rumor, Apple will still have to rely on TSMC's 5nm+ process (a "tock"-style refinement rather than a full node shrink) for its 2021 devices.

 

Quotes

Quote

As a general rule of thumb, a smaller process means higher efficiency and better performance when it comes to processors. “Looking ahead to 2021, in addition to Apple’s 5nm+ wafer input for the A15 Bionic SoC,” the report said regarding Apple’s switch to the 5nm+ process next year, “based on current data, Apple is highly likely to continue manufacturing its A16 SoCs with the 4nm process technology,” it added. In addition to the company leveraging TSMC’s smaller nodes, trial production for a test batch of AMD’s upcoming 5nm-based Zen 4 CPUs might also begin next year.

 

However, it is also quite likely that Apple will rely on TSMC’s 5nm+ and 4nm processes for manufacturing newer in-house silicon for future Macs, after creating the 5nm-based Apple M1 chip that has already made its way to the new MacBook Air, MacBook Pro and Mac mini.

 

Apple A14

So basically it's a tock cycle for Apple devices, meaning the main way Apple can increase performance of both the A- and M-series chips is to add more cores. There are even rumors that the M chip for the 2021 16" MacBook Pro will have an additional two or even four cores, which would make it considerably faster; I forgot where I read it, so take it with a grain of salt. Makes me wonder how the likes of Intel will respond. I just hope that next year's M chips will have enough PCIe lanes, or better yet, 10 Gbit Ethernet. It's likely that the refreshed iMac will get the new M chip with an iPad Pro-like design. Things are only about to get better, since developers are now starting to ship universal binary versions of their applications. Basically, the only reasons to get an Intel-based Mac now are if the performance penalty under Rosetta 2 is too big or if you need Boot Camp.

 

TSMC will likely deliver 4nm by 2022, unlike Intel, which got stuck in a tock-cycle loop, aka 14nm+++++.
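The "smaller node = better" rule of thumb from the quoted report can be put in rough numbers. Here is a hedged sketch of ideal area scaling; note that modern node names like "5nm" are largely marketing, so treat this as an optimistic upper bound rather than a measured figure:

```python
# Back-of-the-envelope only: under ideal scaling, transistor density
# grows with the inverse square of the feature size. Real "nm" node
# names no longer track physical dimensions, so these are upper bounds.

def ideal_density_gain(old_nm: float, new_nm: float) -> float:
    """Ideal transistor-density multiplier when shrinking old_nm -> new_nm."""
    return (old_nm / new_nm) ** 2

for old, new in [(7, 5), (5, 4), (14, 10)]:
    print(f"{old}nm -> {new}nm: ~{ideal_density_gain(old, new):.2f}x density")
```

In practice each real node delivers well under this ideal, which is part of why a "+" refinement of the same node (a tock) brings far smaller gains than a true shrink.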

 

Sources

Pocketnow, Trendforce

There is more that meets the eye
I see the soul that is inside

 

Making Windows Defender as good or even better than paid options


I think we've gone back nearly 25 years...I'm loving the competition springing back up in the CPU manufacturing space (RIP 14nm Skylake derivatives).

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL

15 minutes ago, like_ooh_ahh said:

an additional two or even four cores, which would make it considerably faster

Imagine that...more cores to do more things.

Spoiler

Innovative 

Spoiler

Brave

Spoiler

Apple

 

 

 

9 minutes ago, TempestCatto said:

Imagine that...more cores to do more things.

 

I don’t know where the sarcasm comes from but more cores doesn’t always mean better performance. It’s like saying that the FX-9590 with 8c is better than an i7-6700K. 🙄
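The "more cores doesn't always mean better performance" point is basically Amdahl's law: if part of a workload is serial, extra cores stop helping quickly. A minimal sketch with made-up workload numbers, not benchmarks:

```python
# Amdahl's law: the speedup from n cores when only a fraction p of the
# workload can run in parallel. The 0.6 figure below is illustrative.

def amdahl_speedup(p: float, n: int) -> float:
    """Upper bound on speedup with parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# A workload that's 60% parallel barely benefits past a few cores:
for n in (2, 4, 8, 16):
    print(f"{n:>2} cores: {amdahl_speedup(0.6, n):.2f}x")
```

Which is exactly why an 8-core FX-9590 could lose to a 4-core i7-6700K: per-core performance dominates whenever the serial fraction is significant.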

11 minutes ago, like_ooh_ahh said:
 

I don’t know where the sarcasm comes from but more cores doesn’t always mean better performance. It’s like saying that the FX-9590 with 8c is better than an i7-6700K. 🙄

In this case Apple's cores are kicking ass (much like Intel's cores were better than PowerPC cores when Apple switched). With the FX series, it has been ruled that they didn't have the stated core count (especially going by every other CPU AMD had been manufacturing since 1992-1993), so that's really 4 cores, 8 ALUs.
 

The 10900K 10c is better than an R7-5800X...

13 minutes ago, Dabombinable said:

so that's really 4 cores, 8 ALUs.

It was more like 8 cores with only 4 FPUs.

 

EDIT: Checked the block diagram again and you're right. They do share frontends.

3 minutes ago, gabrielcarvfer said:

It was more like 8 cores with only 4 FPUs.

 

EDIT: Checked the block diagram again and you're right. They do share frontends.

The FX series was really AMD's strangest and most unique line of CPUs. It should have stayed in the server space, since AMD had known since the late '90s that the FPU was important for gaming and even some basic tasks; they created 3DNow! for that very reason (solving their weak-FPU problem with cleverness instead of brute force).

6 hours ago, Dabombinable said:

The FX series was really AMD's strangest and most unique line of CPUs. It should have stayed in the server space, since AMD had known since the late '90s that the FPU was important for gaming and even some basic tasks; they created 3DNow! for that very reason (solving their weak-FPU problem with cleverness instead of brute force).

Shared INT might have made more sense, more than shared FPU anyway. I think..... 🤷‍♂️


Well, it's interesting that they're always using the latest process node, especially for an SoC like this, which packs a lot of different tech into a die that isn't huge. It will be interesting to see what they come up with for high-performance chips where power and thermals are no issue.

Also, we'll definitely see others make similar offerings in time: not just a basic APU, but a full SoC on a single die. Hybrid processors too, like Intel x86 cores paired with ARM ones, along with their GPU and other blocks. Nvidia could do its own with its GPU and ARM cores. There's also AMD, which makes both CPUs and GPUs and already has a potent SoC in the new consoles; it could always add ARM into the equation, which it has mentioned. Those dies are a decent size for what they are. Imagine a scaled-up version for desktop, a massive single SoC die on a TR socket; I'd really like to see something like that. The only issue I can see is yields, though that depends on what the defect rate would be. If new tech allows for cleaner manufacturing, who knows. No doubt we'll continue to see multi-chip approaches, and SoCs where they fit.

Ryzen 7 3800X | X570 Aorus Elite | G.Skill 16GB 3200MHz C16 | Radeon RX 5700 XT | Samsung 850 PRO 256GB | Mouse: Zowie S1 | OS: Windows 10


Apple be like

 

 

My Folding Stats - Join the fight against COVID-19 with FOLDING! - If someone has helped you out on the forum don't forget to give them a reaction to say thank you!

 

The only true wisdom is in knowing you know nothing. - Socrates
 

Please put as much effort into your question as you expect me to put into answering it. 

 

  • CPU
    Ryzen 7 1700 3GHz 8-Core Processor @ 4Ghz
  • Motherboard
    GA-AX370-GAMING 5
  • RAM
    DOMINATOR Platinum 16GB (2 x 8GB) @ 3400mhz
  • GPU
    Aorus GTX 1080 Waterforce
  • Case
    Cooler Master - MasterCase H500P
  • Storage
    Western Digital Black 250GB, Seagate BarraCuda 1TB x2
  • PSU
    EVGA Supernova 1000w 
  • Display(s)
    BenQ - XL2430(144hz), Dell 24" portrait
  • Cooling
    MasterLiquid Lite 240
11 hours ago, Dabombinable said:

I think we've gone back nearly 25 years...I'm loving the competition springing back up in the CPU manufacturing space (RIP 14nm Skylake derivatives).

It's interesting how Intel resting on its laurels sent it crashing down in the CPU space, just because it couldn't shrink its process node. Intel can clearly hit really high clocks; shrinking the node solves some things, but better chip design is the other part. Just imagine if they had actually bothered to work on IPC at all while keeping those clocks. AMD doesn't have a clock advantage, but it worked hard on the IPC side and made up so much ground that it beats Intel in basically everything. I just wonder what Intel will do and how things will look when it introduces its first desktop chips with bigger IPC improvements...

 

Hopefully, Apple making their own chips will push both AMD and Intel to make larger strides in performance and efficiency. I'm not expecting sudden changes as they aren't really direct competitors, but you don't want to be losing x86 users to Apple laptops either. So it's still a competition in a way.

AMD Ryzen 7 5800X | ASUS Strix X570-E | G.Skill 32GB 3600MHz CL16 | PALIT RTX 3080 10GB GamingPro | Samsung 850 Pro 2TB | Seagate Barracuda 8TB | Sound Blaster AE-9 MUSES

12 hours ago, Doobeedoo said:

Also we'll definitely see others have similar offerings in time. Not just a basic APU but SoC on a single die. Hybrid processors too, like Intel x86 cores with ARM ones, along with their GPU and other. Then Nvidia can do their own with their GPU and ARM core too.

Looks like Microsoft will have to take matters into its own hands and force its chip partners, starting with hardware-based security. Perhaps @leadeater can simplify it for me. It took Microsoft long enough to realize that the security controller has to be inside the CPU; no wonder the likes of VeraCrypt opted out of TPM support.

 

Quote

The Pluton design removes the potential for that communication channel to be attacked by building security directly into the CPU. Windows PCs using the Pluton architecture will first emulate a TPM that works with the existing TPM specifications and APIs, which will allow customers to immediately benefit from enhanced security for Windows features that rely on TPMs like BitLocker and System Guard. Windows devices with Pluton will use the Pluton security processor to protect credentials, user identities, encryption keys, and personal data. None of this information can be removed from Pluton even if an attacker has installed malware or has complete physical possession of the PC.

This is accomplished by storing sensitive data like encryption keys securely within the Pluton processor, which is isolated from the rest of the system, helping to ensure that emerging attack techniques, like speculative execution, cannot access key material. Pluton also provides the unique Secure Hardware Cryptography Key (SHACK) technology that helps ensure keys are never exposed outside of the protected hardware, even to the Pluton firmware itself, providing an unprecedented level of security for Windows customers.

The Pluton security processor complements work Microsoft has done with the community, including Project Cerberus, by providing a secure identity for the CPU that can be attested by Cerberus, thus enhancing the security of the overall platform.

[Graphic: the Microsoft Pluton security processor]

One of the other major security problems solved by Pluton is keeping the system firmware up to date across the entire PC ecosystem.

 

Source: https://www.microsoft.com/security/blog/2020/11/17/meet-the-microsoft-pluton-processor-the-security-chip-designed-for-the-future-of-windows-pcs/
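The core idea in the quote (callers get cryptographic operations, never the key material itself) can be sketched as a toy vault interface. To be clear, this is my own illustration of the concept, not Pluton's or any TPM's actual API:

```python
# Toy sketch of "keys never leave the hardware": callers can ask the
# vault to MAC and verify data, but no API returns the key itself.
# My own illustration of the concept, not Microsoft's Pluton API.
import hmac
import hashlib
import secrets

class ToyKeyVault:
    def __init__(self):
        # Key is generated inside the "vault" and never exported.
        self.__key = secrets.token_bytes(32)

    def mac(self, data: bytes) -> bytes:
        """Return an HMAC-SHA256 tag over data; the key stays inside."""
        return hmac.new(self.__key, data, hashlib.sha256).digest()

    def verify(self, data: bytes, tag: bytes) -> bool:
        """Constant-time check that tag matches data."""
        return hmac.compare_digest(self.mac(data), tag)

vault = ToyKeyVault()
tag = vault.mac(b"disk-sector-0")
print(vault.verify(b"disk-sector-0", tag))  # True
print(vault.verify(b"tampered", tag))       # False
```

The real hardware does this with dedicated silicon and attestation, of course; the point of the sketch is just the API shape, where malware that can call `mac()` still never sees the key bytes.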

On 11/19/2020 at 10:22 PM, like_ooh_ahh said:

tock cycle loop aka 14 nm+++++

 

I laughed audibly. The sad part is that it's probably true.

Dominik W.

A child that wants really good computers, but can't afford them.


 

15 minutes ago, Dominik W. said:

I laughed audibly. The sad part is that it's probably true.

 

It is true. Intel normally does an architecture refresh followed by a node shrink; when 10nm failed to mature on time, they just held off on the next architecture update and kept tweaking the existing one. Not that an architecture improvement would have helped, IMO. They're doing an architecture refresh next year on desktop, and based on current info the performance per watt and maximum multi-core performance isn't actually moving; it may even regress slightly. It's a pure single-core performance boost.

On 11/20/2020 at 5:22 AM, like_ooh_ahh said:

Summary

According to this rumor, Apple will still have to rely on TSMC's 5nm+ process (a "tock"-style refinement rather than a full node shrink) for its 2021 devices.

 

Quotes

So basically it's a tock cycle for Apple devices, meaning the main way Apple can increase performance of both the A- and M-series chips is to add more cores. There are even rumors that the M chip for the 2021 16" MacBook Pro will have an additional two or even four cores, which would make it considerably faster; I forgot where I read it, so take it with a grain of salt.

 

A12 was on 7nm

A13 was on 7nm+

 

This didn’t stop the A13 from being 20% faster than the A12 CPU-wise and 20-50% faster GPU-wise (sustained).

 

No cores added, just everything bigger and better:

https://www.anandtech.com/show/14892/the-apple-iphone-11-pro-and-max-review/2

 

 

As for the 16” MBP, it’s already in advanced preparation for mass production at Quanta, and it’s on the 5nm node, not the 5nm+ node.

But yes, it will have more cores (double the performance cores, i.e. 8 instead of the 4 in the M1).

2 hours ago, CarlBar said:

 

It is true. Intel normally does an architecture refresh followed by a node shrink; when 10nm failed to mature on time, they just held off on the next architecture update and kept tweaking the existing one. Not that an architecture improvement would have helped, IMO. They're doing an architecture refresh next year on desktop, and based on current info the performance per watt and maximum multi-core performance isn't actually moving; it may even regress slightly. It's a pure single-core performance boost.

AMD made a better CPU on the same process node by improving IPC through an architecture change. I don't believe for a second that Intel is incapable of doing the same. They were simply too focused on fixing 10nm and didn't want to change architectures before the node shrink, to leave more room for architecture changes later. If they had made architecture changes earlier, it could have held up their gaming-performance lead against AMD, but now even that is lost.

3 hours ago, Brooksie359 said:

AMD made a better CPU on the same process node by improving IPC through an architecture change. I don't believe for a second that Intel is incapable of doing the same. They were simply too focused on fixing 10nm and didn't want to change architectures before the node shrink, to leave more room for architecture changes later. If they had made architecture changes earlier, it could have held up their gaming-performance lead against AMD, but now even that is lost.

 

Intel and AMD have never used the same process node on anything vaguely recent.

11 minutes ago, CarlBar said:

 

Intel and AMD have never used the same process node on anything vaguely recent.

Let's be honest here: the differences between the process nodes Intel has been using since Kaby Lake are minimal. It's not as if their 14nm process has produced any significant changes in clock speeds or power efficiency since Kaby Lake. The same could be said about AMD and the 7nm process used for their newest architecture: AMD even said itself that there weren't any significant changes in the process between the 3000 series and the 5000 series, and that the main reason for the IPC increase was the change in architecture.

17 minutes ago, Brooksie359 said:

Let's be honest here: the differences between the process nodes Intel has been using since Kaby Lake are minimal. It's not as if their 14nm process has produced any significant changes in clock speeds or power efficiency since Kaby Lake. The same could be said about AMD and the 7nm process used for their newest architecture: AMD even said itself that there weren't any significant changes in the process between the 3000 series and the 5000 series, and that the main reason for the IPC increase was the change in architecture.

 

Neither of those claims about process-node changes is the least bit accurate. Both have seen noticeable power-efficiency (and, just as importantly, clock-speed) uplifts. We like to meme about Intel's pluses, but according to Intel's own statements, 14nm is now significantly ahead of where 10nm was supposed to be in power efficiency and peak clock speed. Where Intel is suffering is that their maximum yieldable die size (at an acceptable production cost and volume) puts a hard cap on the number of transistors they can squeeze onto a die, and that limits what they can do. 10nm, once they get it online, will let them squeeze a lot more transistors onto a mainstream chip, with all the benefits that brings.

2 hours ago, CarlBar said:

 

Neither of those claims about process-node changes is the least bit accurate. Both have seen noticeable power-efficiency (and, just as importantly, clock-speed) uplifts. We like to meme about Intel's pluses, but according to Intel's own statements, 14nm is now significantly ahead of where 10nm was supposed to be in power efficiency and peak clock speed. Where Intel is suffering is that their maximum yieldable die size (at an acceptable production cost and volume) puts a hard cap on the number of transistors they can squeeze onto a die, and that limits what they can do. 10nm, once they get it online, will let them squeeze a lot more transistors onto a mainstream chip, with all the benefits that brings.

Intel has certainly been crap for power efficiency for a while now. As for clock speeds, they have been stagnant as well: sure, you can maybe get a few hundred MHz faster than Kaby Lake, but overall it's still around that 5GHz mark, as it has been for quite some time. Again, they should have made architecture changes a long time ago instead of simple reiterations of Skylake.

4 hours ago, Brooksie359 said:

Intel has certainly been crap for power efficiency for a while now. As for clock speeds, they have been stagnant as well: sure, you can maybe get a few hundred MHz faster than Kaby Lake, but overall it's still around that 5GHz mark, as it has been for quite some time. Again, they should have made architecture changes a long time ago instead of simple reiterations of Skylake.

 

Crap for power efficiency, yes; not improving, no. And no, frequency has not been stagnant: at the start of 14nm they were at 4.2GHz, and now they're up to 5.3GHz.

 

Could some 6700Ks reach 5GHz+ overclocked? Sure, but don't mistake what some processors could do for what every processor can do. That's the difference between a 6700K and a 10900K: every single 10900K that rolls out of the factory is guaranteed to hit its 5.3GHz boost.

 

Also, again, improving the architecture while they're stuck on 14nm isn't going to help them in anything but single-core, and will probably hurt them in multi-core. This upcoming architecture update isn't something Intel pulled out of their backsides; it's the architecture we were supposed to get after the 10nm shrink. If things had gone according to plan, we would have gotten 10nm with the 8000 series and the new architecture on 10nm with the 9000 series. They've definitely (and self-admittedly) tweaked it some more from the 9000-series baseline we were supposed to get, but it's really a minor update to that (the planned post-10nm-shrink architecture) at best.

5 minutes ago, CarlBar said:

 

Crap for power efficiency, yes; not improving, no. And no, frequency has not been stagnant: at the start of 14nm they were at 4.2GHz, and now they're up to 5.3GHz.

 

Could some 6700Ks reach 5GHz+ overclocked? Sure, but don't mistake what some processors could do for what every processor can do. That's the difference between a 6700K and a 10900K: every single 10900K that rolls out of the factory is guaranteed to hit its 5.3GHz boost.

 

Also, again, improving the architecture while they're stuck on 14nm isn't going to help them in anything but single-core, and will probably hurt them in multi-core. This upcoming architecture update isn't something Intel pulled out of their backsides; it's the architecture we were supposed to get after the 10nm shrink. If things had gone according to plan, we would have gotten 10nm with the 8000 series and the new architecture on 10nm with the 9000 series. They've definitely (and self-admittedly) tweaked it some more from the 9000-series baseline we were supposed to get, but it's really a minor update to that (the planned post-10nm-shrink architecture) at best.

I said Kaby Lake, not Skylake. After Kaby Lake the frequency didn't change much, and what you could overclock to didn't change much either. Sure, the single-core boost changed, but I'm not even sure that's an architectural difference rather than a binning difference, plus the fact that with more cores you're more likely to have a single core that can hit a higher frequency. The fact is that since Kaby Lake, what you could expect out of an all-core overclock was about 5GHz.

11 hours ago, Brooksie359 said:

I said Kaby Lake, not Skylake. After Kaby Lake the frequency didn't change much, and what you could overclock to didn't change much either. Sure, the single-core boost changed, but I'm not even sure that's an architectural difference rather than a binning difference, plus the fact that with more cores you're more likely to have a single core that can hit a higher frequency. The fact is that since Kaby Lake, what you could expect out of an all-core overclock was about 5GHz.

 

Again, what some chips could do is not the same as what all chips could do. They weren't common, but even some 9900Ks wouldn't do 5GHz all-core. Claiming every chip that came out of the factory would do 5GHz on all cores is simply flat-out wrong. Would a goodly percentage do it? Sure, but not remotely all.

 

Also, yes, the frequency climb is a binning thing. That's what improving an existing process node does: it raises the average binning level of the silicon produced on it, meaning either power efficiency at a given clock speed or peak achievable clock speed will improve. And more cores makes high clocks both easier and harder. It makes them easier on a single core (though remember Intel's boost behaviour has changed a lot over time: at one point only a single core could go above base frequency, whereas now clocks tend to drop toward base only after, I believe, 4 cores are loaded). But achieving higher base clocks and higher multi-core boosts gets harder: more silicon means more total defects, which means higher odds of getting a chip out of the fab that won't meet the same minimum spec as a smaller chip with fewer cores. Again, this is Binning 101.
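The "more silicon means more total defects" point maps onto the classic Poisson yield model, where the fraction of defect-free dies falls exponentially with die area. A quick sketch; the defect density below is an assumed illustrative value, not any fab's published number:

```python
# Classic Poisson yield model: the fraction of defect-free dies is
# Y = exp(-D * A), where D is defect density (defects/cm^2) and A is
# die area (cm^2). D = 0.2 here is an assumption for illustration.
import math

def poisson_yield(defect_density: float, die_area_cm2: float) -> float:
    """Expected fraction of dies with zero defects."""
    return math.exp(-defect_density * die_area_cm2)

D = 0.2  # assumed defects per cm^2
for area in (0.5, 1.0, 2.0, 4.0):
    print(f"{area:>3} cm^2 die: {poisson_yield(D, area):.1%} defect-free")
```

Doubling die area squares the yield fraction, which is why big many-core dies are so much more likely to come off the line failing their top bin.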
