TSMC Could Rollout 2nm by 2023

On 11/17/2020 at 5:23 AM, PianoPlayer88Key said:

I was actually the one who made that pic (somehow the quote name tags must have glitched), but maybe that could be somewhat remedied with stacking dice?  (Maybe that would also allow RTX 3090-quantity core counts on Socket AM5, assuming it's the same physical size as and compatible with coolers from AM4, at like 2nm or 1.4nm? :))

 

🥁🥁🥁Let the CPU vs GPU core wars begin! 🔦🔥 (Come on, when I look for "torch" in emoji database, I want a stick with fire, not a "flashlight".)

 

You will need an interposer to increase the CCD count on AM4, as they are already out of pins on the bottom of the IO die (shown by how each die only has half the write bandwidth), and they are already at the maximum number of pins per mm².
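A quick back-of-envelope on that write-bandwidth point (a sketch: the 32B-read/16B-write per FCLK cycle figures are the commonly reported Zen 2 IFOP widths, and 1600 MHz FCLK is just an example, not a spec):

```python
# Rough Infinity Fabric on-package (IFOP) bandwidth estimate for one CCD.
# Assumed figures: 32 B/cycle read, 16 B/cycle write (commonly cited for
# Zen 2); an FCLK of 1600 MHz is an example (DDR4-3200 in 1:1 mode).
fclk_hz = 1600e6
read_bw_gbs = 32 * fclk_hz / 1e9   # bytes/cycle * cycles/s -> GB/s
write_bw_gbs = 16 * fclk_hz / 1e9

print(f"read:  {read_bw_gbs:.1f} GB/s")   # 51.2 GB/s
print(f"write: {write_bw_gbs:.1f} GB/s")  # 25.6 GB/s: half the read width,
# i.e. fewer write lanes were provisioned, consistent with a pin-budget limit.
```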

 

53 minutes ago, cj09beira said:

You will need an interposer to increase the CCD count on AM4, as they are already out of pins on the bottom of the IO die (shown by how each die only has half the write bandwidth), and they are already at the maximum number of pins per mm².

 

 

AM4 is end of life though, so it's not really an issue.

1 hour ago, CarlBar said:

 

AM4 is end of life though, so it's not really an issue.

An AM5 substrate would have similar issues unless they increase the size of the IO die significantly.

6 hours ago, cj09beira said:

An AM5 substrate would have similar issues unless they increase the size of the IO die significantly.

 

*facepalm* I completely misread your first post I quoted and thought you were talking about socket pins. Please ignore. I blame typing that not long before I went to bed.

On 11/18/2020 at 3:09 AM, cj09beira said:

You will need an interposer to increase the CCD count on AM4, as they are already out of pins on the bottom of the IO die (shown by how each die only has half the write bandwidth), and they are already at the maximum number of pins per mm².

 

I wonder if at some point we might be forced to have some/all of the RAM become part of the CPU to conserve pins on the motherboard and increase performance. In effect, one channel of 32/16GB of RAM would become an L4 cache, with the other channel being the off-die RAM. Then additional PCIe lanes could be on the CPU package.

 

As far as "smaller nm" goes, I think the stated limit was 2nm/1nm because the actual atoms are ~0.2nm, so it starts getting difficult to actually produce something that has any structural strength to it. https://qz.com/852770/theres-a-limit-to-how-small-we-can-make-transistors-but-the-solution-is-photonic-chips/

 

The link suggests photonic chips, which can be 20x faster, and that might end up being where we go at some point, but they'll likely also cost a lot when something commercially viable reaches consumers.
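To put the atom-size argument in numbers (a sketch: ~0.2nm per silicon atom is the rough figure from the quoted article):

```python
# How many silicon atoms span a given feature width?
si_atom_diameter_nm = 0.2   # approximate, per the article

for feature_nm in (7, 5, 2, 1):
    atoms = feature_nm / si_atom_diameter_nm
    print(f"{feature_nm} nm feature ≈ {atoms:.0f} atoms wide")
# A "2 nm" line would be only ~10 atoms across, leaving little margin for
# a structurally and electrically reliable shape.
```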

 

2 minutes ago, Kisai said:

I wonder if at some point we might be forced to have some/all of the RAM become part of the CPU to conserve pins on the motherboard and increase performance. In effect, one channel of 32/16GB of RAM would become an L4 cache, with the other channel being the off-die RAM. Then additional PCIe lanes could be on the CPU package.

 

As far as "smaller nm" goes, I think the stated limit was 2nm/1nm because the actual atoms are ~0.2nm, so it starts getting difficult to actually produce something that has any structural strength to it. https://qz.com/852770/theres-a-limit-to-how-small-we-can-make-transistors-but-the-solution-is-photonic-chips/

 

The link suggests photonic chips, which can be 20x faster, and that might end up being where we go at some point, but they'll likely also cost a lot when something commercially viable reaches consumers.

 

That may very well happen, but not any time soon. DDR5 modules are already in production, so it won't be seen for the next few years at minimum. Honestly, 15 years ago some old timers speculated it would arrive by 2020, but from this point in time it looks like 2022.

 

Interesting article I found about AI designing AI processors, a technique AMD and Intel could potentially use in the near future. We've seen AI design cars that were lighter, stronger, and better suited to their purpose; perhaps this could be ideal for hardware beyond what humans can fathom.

 

https://www.pcgamer.com/google-is-using-ai-to-design-ai-processors-much-faster-than-humans-can/

- If it ain't broken, don't fix it! -

- Your post codes and beep codes in the drop down below.


9 hours ago, Kisai said:

I wonder if at some point we might be forced to have some/all of the RAM become part of the CPU to conserve pins on the motherboard and increase performance. In effect, one channel of 32/16GB of RAM would become an L4 cache, with the other channel being the off-die RAM. Then additional PCIe lanes could be on the CPU package.

 

As far as "smaller nm" goes, I think the stated limit was 2nm/1nm because the actual atoms are ~0.2nm, so it starts getting difficult to actually produce something that has any structural strength to it. https://qz.com/852770/theres-a-limit-to-how-small-we-can-make-transistors-but-the-solution-is-photonic-chips/

 

The link suggests photonic chips, which can be 20x faster, and that might end up being where we go at some point, but they'll likely also cost a lot when something commercially viable reaches consumers.

 

I half expect AMD to drop a stack of HBM into the AM5 socket, to work as a sort of L4/RAM.

10 hours ago, Kisai said:

I wonder if at some point we might be forced to have some/all of the RAM become part of the CPU to conserve pins on the motherboard and increase performance. In effect, one channel of 32/16GB of RAM would become an L4 cache, with the other channel being the off-die RAM. Then additional PCIe lanes could be on the CPU package.

 

I could see perhaps an SoC package that includes RAM as a chiplet, but not on the same die as the CPU. That's because of yield rates. The greater the die size, the greater the chance of a fault rendering the chip either neutered in functionality or outright trashed. In fact, that's precisely why AMD went the chiplet route: they couldn't guarantee a large volume of monolithic dies at a low cost; the yield just wasn't there.

 

Edit: Also, the Pentium II was made in a cartridge format so it could hold off-package cache, due to yield issues as well. But the cache ran at half the core frequency, if I recall.
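The die-size/yield tradeoff is easy to see with the classic Poisson die-yield model (the defect density and die areas below are illustrative, hypothetical figures, not AMD's actual numbers):

```python
import math

def die_yield(area_mm2: float, defects_per_cm2: float) -> float:
    """Poisson yield model: Y = exp(-D * A)."""
    return math.exp(-defects_per_cm2 * area_mm2 / 100.0)  # mm^2 -> cm^2

D0 = 0.2  # defects/cm^2, assumed for illustration

mono = die_yield(600, D0)      # one big monolithic die
chiplet = die_yield(75, D0)    # one small chiplet
print(f"600 mm^2 monolithic yield: {mono:.1%}")
print(f" 75 mm^2 chiplet yield:    {chiplet:.1%}")
# Eight good 75 mm^2 chiplets are far easier to harvest than one good
# 600 mm^2 die, even though the total silicon area is the same.
```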

 

Cheaper, better, faster - pick 2.

16 hours ago, cj09beira said:

I half expect AMD to drop a stack of HBM into the AM5 socket, to work as a sort of L4/RAM.

 

Assuming it can work through LGA connections, I don't see why they couldn't. CPU on one substrate, HBM on another. Honestly, long term I'm kind of expecting chiplet tech to go that way: IO die on one substrate, then various other substrates holding CPU cores, GPU cores, HBM, etc., all put into a single LGA-type socket. Effectively allowing people to build their own custom-SKU CPU out of a range of baseline parts.

  • 3 weeks later...
On 11/21/2020 at 5:12 AM, CarlBar said:

 

Assuming it can work through LGA connections, I don't see why they couldn't. CPU on one substrate, HBM on another. Honestly, long term I'm kind of expecting chiplet tech to go that way: IO die on one substrate, then various other substrates holding CPU cores, GPU cores, HBM, etc., all put into a single LGA-type socket. Effectively allowing people to build their own custom-SKU CPU out of a range of baseline parts.

Sadly, it's more likely that the opposite happens and they put all those different dies on a single substrate, leaving you with a fixed amount of things like RAM capacity, though I expect DDR will still be available to complement what you have on the CPU substrate.


Can't wait for -0.1nm

I don't have money (I am not asking for money please don't make me change my signature)

 

Dream PC:

CPU: Ryzen 9 5900X

Motherboard: Asus Prime X570-Pro AM4

GPU: RTX 3090 Founders Edition

RAM: Corsair Vengeance RGB Pro 32GB 3200MHz C14

SSD: Sabrent Rocket PCIe 4.0 2TB

HDD: Seagate Barracuda Pro 14TB

Cooler: ASUS ROG Strix LC 360 RGB AIO

PSU: ASUS Rog Thor 1200 RGB

Case: Corsair iCUE 220T RGB Airflow

Case Fans: Corsair LL120 RGB x6

On 11/17/2020 at 6:10 PM, Vishera said:

Sorry to disappoint you, but this layout will have very high latency, and so will the others you made.

You want the path to the IO die, and between dies, to be as short as possible; chips that are far from the controller will have latency issues, which I expect to be the bottleneck of this layout.

 

Well, the bigger problem I see is that many chips just wouldn't even be possible. There's no way to actually achieve connecting them all. It's already been brought up, but die stacking would have to be used to increase the total number, or alternatively each chiplet would need vastly fewer interconnect traces (wayyyy fewer). Downside to die stacking: heat. There is already some tech to try and solve that, but combining through-die cooling and die stacking on a CPU any time soon would result in the world's most expensive CPU ever lol.

 

Spoiler

just-do-it.jpg

 

 

Also, @cj09beira

81637213.jpg

 

Sorry had to make this joke 😉
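The trace-count pressure mentioned above can be sketched with a toy count for a star topology where every CCD has its own link to the IO die (the ~80 signals per link is a made-up illustrative figure, not AMD's spec):

```python
# Toy interconnect-trace count for a star (hub-and-spoke) chiplet layout.
SIGNALS_PER_LINK = 80  # hypothetical traces per CCD<->IOD link

for ccds in (2, 8, 14, 32):
    traces = ccds * SIGNALS_PER_LINK
    print(f"{ccds:2d} CCDs -> {traces:4d} traces to escape from under the IOD")
# Every trace must escape through a limited number of substrate layers and a
# minimum bump pitch, which is why an interposer (or fewer traces per
# chiplet) comes up as the workaround.
```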

On 11/16/2020 at 3:35 PM, Letgomyleghoe said:

So when do the returns become little to nil? You can't just keep shrinking forever, so when are we going to get new technology?

We're already VERY close to it. 

 

Dennard scaling is dead. Clock speed scaling is dead. Cost-per-transistor scaling is dead.

 

At this point the main benefit remaining is performance/watt and even that is decelerating. 

We'll still have advances but we're REALLY at the point where there needs to be something new and big. 
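For reference, here is what the now-dead Dennard scaling promised: shrink every dimension and the voltage by a factor s, raise frequency by 1/s, and power density stays constant. A quick check with s = 0.7 (roughly one full node):

```python
# Classic Dennard scaling: dimensions and voltage scale by s, frequency by 1/s.
# Dynamic power per transistor: P ~ C * V^2 * f, with capacitance C scaling by s.
s = 0.7

area = s * s                    # transistor area shrinks by s^2
power = s * (s * s) * (1 / s)   # C*V^2*f -> s * s^2 * (1/s) = s^2
power_density = power / area    # ~1.0 under ideal Dennard scaling

print(f"area x{area:.2f}, power x{power:.2f}, density x{power_density:.2f}")
# Once voltage stopped scaling (mid-2000s), power density rose every node
# instead, which is why clock-speed scaling died with it.
```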

R9 3900x; 64GB RAM | RTX 2080 | 1.5TB Optane P4800x

1TB ADATA XPG Pro 8200 SSD | 2TB Micron 1100 SSD
HD800 + SCHIIT VALI | Topre Realforce Keyboard

1 minute ago, comander said:

We're already VERY close to it. 

 

Dennard scaling is dead. Clock speed scaling is dead. Cost-per-transistor scaling is dead.

 

At this point the main benefit remaining is performance/watt and even that is decelerating. 

We'll still have advances but we're REALLY at the point where there needs to be something new and big. 

I mean, if you want to consider ARM close, sure, but as for moving off of silicon or x86/64, we're still very far away.

Quote me for a reply, React if I was helpful, informative, or funny

 

AMD blackout rig

 

cpu: ryzen 5 3600 @4.4ghz @1.35v

gpu: rx5700xt 2200mhz

ram: vengeance lpx c15 3200mhz

mobo: gigabyte b550 pro 

psu: cooler master mwe 650w

case: masterbox mbx520

fans:Noctua industrial 3000rpm x6

 

 

1 minute ago, leadeater said:

Well, the bigger problem I see is that many chips just wouldn't even be possible. There's no way to actually achieve connecting them all. It's already been brought up, but die stacking would have to be used to increase the total number, or alternatively each chiplet would need vastly fewer interconnect traces (wayyyy fewer). Downside to die stacking: heat. There is already some tech to try and solve that, but combining through-die cooling and die stacking on a CPU any time soon would result in the world's most expensive CPU ever lol.

 

Spoiler

just-do-it.jpg

 

If you combine my layout with AMD's Epyc layout, you will be able to get 14 CCDs plus the IO die on a single package without any of those problems (in theory).

My layout:

  

On 11/17/2020 at 5:44 AM, Vishera said:

Zen4.png.e5ef77363fb73f4de08c73eeb08f61de.png

 

AMD Epyc layout:

AMD-Epyc-Rome-7nm-64-nucleos.jpg

A PC Enthusiast since 2011
AMD Ryzen 5 2600@4GHz | GIGABYTE GTX 1660 GAMING OC @ Core 2040MHz Memory 5000MHz
Cinebench R15: 1382cb | Unigine Superposition 1080p Extreme: 3439
3 minutes ago, Vishera said:

If you combine my layout with AMD's Epyc layout, you will be able to get 14 CCDs plus the IO die on a single package without any of those problems (in theory).

You still have the trace routing and layering issue to connect the dies to the IOD though, and the IOD would be much larger, or they'd have to find a way to shrink it a lot. But like, I still want to see 16+ CCDs on a single package soon just because lol.

Just now, leadeater said:

You still have the trace routing and layering issue to connect the dies to the IOD though, and the IOD would be much larger, or they'd have to find a way to shrink it a lot. But like, I still want to see 16+ CCDs on a single package soon just because lol.

Considering those IO dies are 14nm, AMD can have TSMC make 7nm or even 5nm ones.

On 11/16/2020 at 5:58 PM, Deli said:

Meanwhile, Intel will still be using 14nm+++ in 2023.

Really more like 14nm+.

Cool tech tip inside:

Spoiler (nested 27 deep)

elephants
Just now, ragnarok0273 said:

Really more like 14nm+.

GlobalFoundries as well (formerly AMD's fabs :P)

2 minutes ago, Vishera said:

Considering those IO dies are 14nm, AMD can have TSMC make 7nm or even 5nm ones.

Maybe; they are 14nm and 12nm, btw. AMD chose to stay on GloFo for contractual reasons, but also I/O doesn't respond to node shrinks as well as logic does, so I'm not sure how much smaller it would get when also doubling the PCIe/IF connections. Personally, I think they would most likely move to something EUV-based for the next IOD node, as I think that helps a lot here, though I'm not 100% on that.

 

I can't remember when the GloFo contract expires though.

59 minutes ago, Letgomyleghoe said:

I mean, if you want to consider ARM close, sure, but as for moving off of silicon or x86/64, we're still very far away.

Depends on how you define "very far away".
 
At this point I don't consider 10 years "very far away". It's very very possible we'll be looking at exotic materials in the 10-15 year range. In all likelihood we'll have SOME types of processors on different types of processes depending on tradeoffs and use case. 


6 hours ago, Vishera said:

Considering those IO dies are 14nm, AMD can have TSMC make 7nm or even 5nm ones.

Yes, but the main problem is how many connections you can make at the bottom of the die. Ryzen, for example, only has half the write throughput to the IO die because of this lack of pins. We could go denser, but it would mean going for a silicon substrate between the organic substrate and the individual dies.

7 hours ago, leadeater said:

Well, the bigger problem I see is that many chips just wouldn't even be possible. There's no way to actually achieve connecting them all. It's already been brought up, but die stacking would have to be used to increase the total number, or alternatively each chiplet would need vastly fewer interconnect traces (wayyyy fewer). Downside to die stacking: heat. There is already some tech to try and solve that, but combining through-die cooling and die stacking on a CPU any time soon would result in the world's most expensive CPU ever lol.

 

Spoiler

just-do-it.jpg

 

 

Also, @cj09beira

81637213.jpg

 

Sorry had to make this joke 😉

It's possible with an interposer, it just isn't cheap. Die cooling has me worried; it seems like it will be extremely easy to plug. Btw, have you seen the disassembly of the IBM CPU that David @Eevblog did? Just amazing.

PS: don't be sorry, I loved it 🤣



Few thoughts from someone who knows fabs and processes. 

 

1. As mentioned before, calling a process node by a size doesn't really mean anything.  While you might have a 2nm line somewhere in the design, the rest could be 7nm and it still might get called 2nm.  Even trying to figure out where to measure is up in the air.  Don't put much stock into what a company calls its node.

2. Making the lines is easy.  It's not hard to make a 2nm line; the registration is the tricky part.  Sure, you can print a thousand lines at 2nm.  Now process the wafer (etch, strip, diffusion, implant, etc.) and run it through the photo process again to put contacts on those lines after the wafer has changed size.  The math behind getting things to line up can be mind-boggling.

3. Single-electron transistors have been around for a long time.  The hard part of shrinks, besides the registration, is keeping a charge from escaping.  Current thinking is that you need about a 3-molecule-thick film to act as an effective insulative layer.

4. You can get around the need for EUV for quite a while by being creative with your process stack.

5. How many pins can you have?  Depends on how you are communicating.  There are many signaling methods which require fewer pins to convey the data.

6. Anytime you think "why don't they do XYZ", there is a reason.  Even things which seem like they would be simple are scrutinized a thousand ways to try and make them better.  If the industry isn't doing something, it is because of cost, because it causes other problems, because it isn't reliable (remember, it has to run on a lot of machines 24x7 with a high percentage of success), or because it adds complexity without any real-world benefit.  There are other reasons as well, but that gives a general idea.

7. Never count the industry out.  I remember hearing how we couldn't make chips that would go faster than 1GHz, couldn't do i-line processing at X size, etc.  No one really knows how far this tech can go.

 

For those that don't work in the industry, take the articles you see with a HUGE grain of salt.  It is hard to believe how many seemingly reputable places try to piece together bits and pieces and get it totally wrong or are very, very off base.  I am sorry, but anyone who is in college in a semiconductor class is probably getting worse information than the internet surfers; 98% of the time the information is out of date or isn't how it really works in the real world.
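Point 2's registration problem can be made concrete with a toy overlay budget (the quarter-of-a-feature rule of thumb and the 0.1 K drift are illustrative choices, not any fab's real spec; silicon's thermal expansion coefficient of ~2.6e-6/K is a standard value):

```python
# Toy overlay budget: how much a 300 mm wafer grows with temperature,
# versus the alignment tolerance a fine-pitch layer could afford.
wafer_diameter_nm = 300e6      # 300 mm wafer, in nm
si_expansion_per_k = 2.6e-6    # silicon's thermal expansion coefficient
delta_t_k = 0.1                # a mere 0.1 K drift between exposures

growth_nm = wafer_diameter_nm * si_expansion_per_k * delta_t_k
overlay_budget_nm = 2.0 / 4    # rule of thumb: ~1/4 of a 2 nm feature

print(f"edge-to-edge wafer growth: {growth_nm:.0f} nm")
print(f"overlay budget:            {overlay_budget_nm:.1f} nm")
# Even a 0.1 K change moves the wafer edge by tens of nm, orders of
# magnitude beyond the sub-nm overlay a "2 nm" layer could tolerate,
# which is why steppers must model and correct wafer distortion field
# by field.
```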

 

 

 

5 hours ago, cj09beira said:

It's possible with an interposer, it just isn't cheap. Die cooling has me worried; it seems like it will be extremely easy to plug. Btw, have you seen the disassembly of the IBM CPU that David @Eevblog did? Just amazing.

PS: don't be sorry, I loved it 🤣

I saw it; it looks like IBM put half of the wafer on the package.

Considering how much a wafer costs, a single unit is probably tens of thousands of dollars.

