
TSMC Could Roll Out 2nm by 2023

1 hour ago, ShrimpBrime said:

Then I was hoping maybe Zen 4 (I thought it'd be the 5000 series, hence the filename) would have gone to, say, 88 cores on 5nm...

[Image: mockup of an AMD Ryzen 5000 package with a 7nm I/O die and 11 5nm chiplets]

Sorry to disappoint you, but this layout, and the others you made, will have very high latency.

You want the paths to the I/O die and between dies to be as short as possible, so chiplets that are far from the controller will have latency issues, which I expect to be the bottleneck of this layout.

 

4 minutes ago, Vishera said:

Sorry to disappoint you, but this layout, and the others you made, will have very high latency.

You want the paths to the I/O die and between dies to be as short as possible, so chiplets that are far from the controller will have latency issues, which I expect to be the bottleneck of this layout.

 

I was actually the one who made that pic (somehow the quote name tags must have glitched), but maybe that could be somewhat remedied with stacking dice? (Maybe that would also allow RTX 3090-level core counts on Socket AM5 - assuming it's the same physical size as AM4 and compatible with its coolers - at something like 2nm or 1.4nm? :))

 

🥁🥁🥁 Let the CPU vs GPU core wars begin! 🔦🔥 (Come on, when I look for "torch" in the emoji database, I want a stick with fire, not a flashlight.)

 


I've liked the use of the term "2nm equivalent". One way this was explained to me is that since the advent of FinFETs, gate-all-around transistors, and atomic layer deposition, cutting-edge fab development has essentially become 3D-printing transistors out of atoms.

 

Thus it's more effective to think of each process node as being named after the size a 2D process node would have to be to fit the same number of transistors into a given unit of area.

 

This means the features of these process nodes are almost certainly much larger than seven nanometers on average; we can just use height much more effectively than before.
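To make that naming convention concrete, here's a back-of-envelope sketch in Python (the density figures are illustrative round numbers I'm assuming, not published foundry data): since 2D density scales with 1/size², the "equivalent" name scales with the square root of the density ratio.

import math

# Back-of-envelope: name a node by the 2D shrink that would give the same
# transistor density. All densities here are made-up round numbers.
REFERENCE_NAME_NM = 7.0     # assumed "7nm-class" baseline name
REFERENCE_DENSITY = 100.0   # assumed baseline density, MTr/mm^2

def equivalent_node_name(density_mtr_mm2: float) -> float:
    # Density scales with 1/size^2, so the equivalent name scales with
    # sqrt(reference_density / density).
    return REFERENCE_NAME_NM * math.sqrt(REFERENCE_DENSITY / density_mtr_mm2)

for density in (100.0, 400.0, 1200.0):
    print(f"{density:6.0f} MTr/mm^2 -> '{equivalent_node_name(density):.1f}nm-equivalent'")

So in this toy model, roughly 12x the baseline density earns a "2nm" name, regardless of how big the actual transistors are.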

 

In fact - I can't find it now, but I believe it was a Veritasium video - a physicist stated outright that it may be theoretically impossible to build a 2D transistor below about 7 nanometers due to quantum effects.

 

-----

 

Now, I'm of the opinion that quantum computing is not going to magically replace classical architectures anytime soon, if ever. That said, our current understanding of how to build transistors may very well change and become more "quantum", in that they could rely on constructive and destructive interference to produce effectively the same behavior as a transistor.

14 minutes ago, PianoPlayer88Key said:

but maybe that could be somewhat remedied with stacking dice?

Die stacking is limited, but it's a much better solution.

Here is a better layout (I know that some of the traces are disconnected, I was too lazy to fix that :D)

[Image: Zen 4 stacked-die layout mockup]

2 hours ago, Brooksie359 said:

I was under the impression that smaller transistors result in lower power consumption, so even if two processes have similar density, the one with the smaller transistors would be more power efficient.

Density and transistor size are pretty much two ways of measuring the same thing. If you have smaller transistors, you can fit more into the same area, so your transistor density is higher. This is why nodes of a similar density will perform similarly: to achieve the same density, their transistors must be pretty much the same size. There may be some variance here, but that will also depend on other parameters of node design, such as the design of the individual transistors.
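As a toy model of that relationship (a sketch in Python; the pitch values are illustrative, not real node parameters - real density metrics also involve cell heights, track counts, and so on):

def density_mtr_per_mm2(gate_pitch_nm: float, metal_pitch_nm: float) -> float:
    # Toy model: one transistor occupies gate_pitch x metal_pitch of area,
    # so density ~ 1 / (gate_pitch * metal_pitch).
    cell_area_nm2 = gate_pitch_nm * metal_pitch_nm
    return 1e12 / cell_area_nm2 / 1e6   # 1 mm^2 = 1e12 nm^2, reported in millions

print(density_mtr_per_mm2(54, 40))   # ~463 MTr/mm^2
print(density_mtr_per_mm2(27, 20))   # pitches halved -> exactly 4x the density

Equal density in this model necessarily means an equal cell footprint, which is the point above.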

14 hours ago, Letgomyleghoe said:

you can just keep shrinking forever

By name yes, but technically no. 

 

https://en.m.wikipedia.org/wiki/Uncertainty_principle

 

And that's one of the main reasons why that is not possible. 

 

 

 


Fun times. Scaling down further with silicon - can't wait to see the final limits. Probably another decade to go.


We're close to the endgame here. At some point we can't keep shrinking further, so I wonder what's going to come next: either architecture changes instead of transistor changes, or a move to a whole different computing method.

15 hours ago, poochyena said:

I thought we'd need new materials to get much smaller than current processors.

I believe the limit for silicon is actually around 1nm or so; the size of a silicon atom is about 0.2nm. So if 2nm is actually 2nm, then we're getting close to the limits of silicon.
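A quick sanity check of those numbers (a sketch, using the ~0.2nm atomic size quoted above):

SI_ATOM_NM = 0.2   # approximate size of a silicon atom, as quoted above

for feature_nm in (7.0, 2.0, 1.0):
    print(f"A {feature_nm}nm feature is only ~{feature_nm / SI_ATOM_NM:.0f} atoms wide")

At a true 1nm you'd be building structures about five atoms across.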

5 hours ago, AndreiArgeanu said:

I believe the limit for silicon is actually around 1nm or so; the size of a silicon atom is about 0.2nm. So if 2nm is actually 2nm, then we're getting close to the limits of silicon.

Split the atom


boom?

 

yeet!

9 hours ago, Silentprototipe said:

We're close to the endgame here. At some point we can't keep shrinking further, so I wonder what's going to come next: either architecture changes instead of transistor changes, or a move to a whole different computing method.

Efficiency and architecture changes to squeeze way more performance out of existing limitations - I guess the Apple M1 is maybe a forerunner of this. If ARM and good optimization can deliver such improvements, then that's another solid decade of performance gains.

 

After that... yeah, I have no idea either.

11 hours ago, Mark Kaine said:

By name yes, but technically no. 

 

https://en.m.wikipedia.org/wiki/Uncertainty_principle

 

And that's one of the main reasons why that is not possible.

Eh... ish?

 

Heisenberg's uncertainty principle asserts that there's a fundamental limit to how accurately we can measure two linked quantities - in our case the position and momentum - of a given particle. That's all it says. And that fundamental limit is tiny: the uncertainties of the two measurements, when multiplied together, must be larger than ħ/2 ≈ 5×10⁻³⁵ J·s. For reference, 1nm is 1×10⁻⁹ m. The uncertainty of pretty much everything humanity has ever created has been many orders of magnitude above that limit. Sure, in theory we could construct a transistor so small that the accuracy required to do so would be butting up against that limit, but it would never actually work as a transistor - there are other quantum effects that come into play before then.
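As a sanity check on that number, here's a quick sketch using standard constants (scipy's CODATA values; the 1nm confinement is just an example of mine):

from scipy.constants import hbar   # reduced Planck constant, in J*s

# Heisenberg: delta_x * delta_p >= hbar / 2
print(hbar / 2)              # ~5.27e-35 J*s - the 5x10^-35 quoted above

# Minimum momentum uncertainty for an electron confined to 1 nm:
delta_x = 1e-9               # metres
print(hbar / (2 * delta_x))  # ~5.3e-26 kg*m/s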

 

Generally speaking, quantum tunnelling is considered the lower limit on how small transistors can be made. The uncertainty principle can be used to describe this mechanism, but it can also be explained more generally using the properties of the wavefunction itself - how quantum states (and by extension particles) are described in quantum physics. Wavefunctions describe quantum states as probability distributions: the particle described is not in a determinate place; rather, we have a probability distribution saying where it is likely to be. You can think of it as a bell curve. This curve can overlap an obstruction, and can continue through to the other side of it if the obstruction is narrow enough. If it does so, then there exists a probability that the particle can be found on either side of the barrier - aka the particle can pass through the barrier.

 

So if that barrier is a transistor gate and the particle an electron, the electron can skip past the gate if the gate is narrow enough, causing current leakage (bad). Too much of this and you don't have a transistor anymore, as the difference between an open and a closed transistor becomes too small. This occurs in transistors today, but isn't a huge deal. By the time we get to transistors with features of ~4nm, though, we basically reach a limit. (Feature size != node name!! The name of the node is far smaller than any feature of the transistors it uses!)
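To see how sharply that leakage rises as the barrier narrows, here's a rough sketch using the standard rectangular-barrier approximation (the 1 eV barrier height is an assumed round number of mine, not a real device parameter):

import math
from scipy.constants import hbar, m_e, e   # J*s, kg, C

def tunneling_probability(width_nm: float, barrier_ev: float = 1.0) -> float:
    # Transmission through a rectangular barrier, T ~ exp(-2 * kappa * L),
    # where kappa = sqrt(2 * m * (V - E)) / hbar is the decay constant.
    kappa = math.sqrt(2 * m_e * barrier_ev * e) / hbar   # 1/m
    return math.exp(-2 * kappa * width_nm * 1e-9)

for width in (4.0, 2.0, 1.0, 0.5):
    print(f"{width} nm barrier -> T ~ {tunneling_probability(width):.1e}")

Each halving of the barrier width takes the square root of the transmission probability (~1e-18 at 4nm, ~1e-9 at 2nm, ~4e-5 at 1nm), so leakage explodes exponentially as gates shrink.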

 

Quote

 

We're close to the endgame here. At some point we can't keep shrinking further, so I wonder what's going to come next: either architecture changes instead of transistor changes, or a move to a whole different computing method.

Not necessarily. One idea is to use Tunnel Field-Effect Transistors (TFETs), which use quantum tunneling, rather than heat, to open and close the transistor gates. This could potentially reduce the operating voltage to ~0.1V and reduce power consumption by ~100x. Unfortunately, they're still in the design phase, but progress is being made.
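The ~100x figure follows from how CMOS dynamic power scales with the square of the supply voltage (a back-of-envelope sketch; the 1.0V baseline is my assumption for round numbers):

# CMOS dynamic power: P_dyn = alpha * C * V^2 * f, so power scales with V^2.
V_CLASSIC = 1.0   # volts, assumed conventional supply voltage
V_TFET = 0.1      # volts, the ~0.1V TFET figure quoted above

print(f"Dynamic power reduction: ~{(V_CLASSIC / V_TFET) ** 2:.0f}x")   # ~100x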

2 minutes ago, tim0901 said:

Not necessarily. One idea is to use Tunnel Field-Effect Transistors (TFETs), which use quantum tunneling, rather than heat, to open and close the transistor gates. This could potentially reduce the operating voltage to ~0.1V and reduce power consumption by ~100x. Unfortunately, they're still in the design phase, but progress is being made.

🤨 Ok, but how cold would that theoretical chip have to run? -200°C or something? If so, that's going to annihilate your power savings.

On 11/16/2020 at 5:41 PM, Letgomyleghoe said:

*cough cough* ARM *cough cough* 

I mean, ARM isn't a competitor; AFAIK they don't have their own fabs. They're more of a customer.

10 minutes ago, tim0901 said:

Eh... ish?

TL;DR: it is a limiting factor, and just because the theoretical limit is smaller doesn't mean we can reach it with conventional (less than ideal and, if we're honest, outdated) tech.

 

Is what you wanted to say, I believe? ;)

 

 

2 minutes ago, spartaman64 said:

I mean, ARM isn't a competitor; AFAIK they don't have their own fabs. They're more of a customer.

I wasn't saying they are a competitor, just bringing up the point that ARM could enter the consumer space at some point.

 

And AMD doesn't have their own fabs either.

3 minutes ago, StDragon said:

🤨 Ok, but how cold would that theoretical chip have to run? -200°C or something? If so, that's going to annihilate your power savings.

Unlikely - they aren't taking advantage of superconductivity or anything. In a paper I found, they're testing their device between 223 and 323K (-50°C to +50°C), which suggests it should work at room temperature. But as I mentioned, these are still very much in the research and development stage, so a lot could still change.

3 minutes ago, Mark Kaine said:

TL;DR: it is a limiting factor, and just because the theoretical limit is smaller doesn't mean we can reach it with conventional (less than ideal and, if we're honest, outdated) tech.

 

Is what you wanted to say, I believe? ;)

I believe what I meant to say is that everyone should spend way more money on science, so we can figure out alternatives and never have to reach this limit. 😜

No, I am totally not biased in this opinion at all...

20 hours ago, Vishera said:

Die stacking is limited, but it's a much better solution.

Here is a better layout (I know that some of the traces are disconnected, I was too lazy to fix that :D)

 

The issue I can see with that is: how are you going to cool it? Essentially, the dies on the bottom are not making contact with the IHS and cooler at all.

Maybe you could have some weird CPU layout with cores on both sides, and you just need to get two CPU coolers LUL

Just now, spartaman64 said:

The issue I can see with that is: how are you going to cool it? Essentially, the dies on the bottom are not making contact with the IHS and cooler at all.

Maybe you could have some weird CPU layout with cores on both sides, and you just need to get two CPU coolers LUL

I used a Threadripper package for it so the CPU should have no problems with a TRX40 cooler.

1 minute ago, Vishera said:

I used a Threadripper package for it so the CPU should have no problems with a TRX40 cooler.

Not the point. If you stack multiple sets of CPU cores on top of each other, then the cores at the bottom aren't touching the IHS. Instead they're covered by another layer of blisteringly hot cores - they have nowhere for the heat they're generating to go.

 

Die stacking so far (e.g. Intel's Foveros) puts lower-powered components, such as PCIe and USB controllers, at the bottom, with the hot CPU cores on top to maximise their ability to be cooled effectively. You can't just go stacking CPU cores on top of one another without releasing the magic blue smoke.

1 hour ago, tim0901 said:

Not the point. If you stack multiple sets of CPU cores on top of each other, then the cores at the bottom aren't touching the IHS. Instead they're covered by another layer of blisteringly hot cores - they have nowhere for the heat they're generating to go.

 

Die stacking so far (e.g. Intel's Foveros) puts lower-powered components, such as PCIe and USB controllers, at the bottom, with the hot CPU cores on top to maximise their ability to be cooled effectively. You can't just go stacking CPU cores on top of one another without releasing the magic blue smoke.

You are right, I didn't think about that.

So instead, AMD will need higher core density inside the CCDs.

3 hours ago, Vishera said:

So instead, AMD will need higher core density inside the CCDs.

 

More core density means more heat. Basically, you've got an upper limit on how many watts of power something with a given surface area can dissipate, and the thicker it is (i.e. how high it sticks up off the substrate), the lower that limit is.
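As a rough illustration of why stacking makes this worse (all of these numbers are hypothetical):

# Hypothetical numbers: heat flux escaping through the top of the package.
DIE_AREA_MM2 = 80.0      # assumed footprint of one compute die
POWER_PER_DIE_W = 60.0   # assumed power per compute die

for layers in (1, 2, 4):
    flux = layers * POWER_PER_DIE_W / DIE_AREA_MM2   # W/mm^2 through the same footprint
    print(f"{layers} stacked layer(s): {flux:.2f} W/mm^2 through the top die")

Every extra layer pushes more watts through the same cooled surface area, while the lower layers also sit further from the cooler.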

On 11/16/2020 at 5:32 PM, Random_Person1234 said:

It looks like we're approaching picometers. So much for 7nm/5nm being the limit.

TSMC's own people say that the node name doesn't actually have anything to do with the physical size. In fact, by then the actual size will be at its widest spread from the claimed size since Moore first idealized IC growth patterns.

Manufacturers are using node size as a marketing term, with a very loose connection to transistor density. However, node names keep trucking down while actual transistor sizes are starting to plateau.

There are plenty of links to reputable tech news organizations that also say this. Here are a few:

 

 

6 hours ago, Vishera said:

You are right, I didn't think about that.

So instead, AMD will need higher core density inside the CCDs.

One thing AMD could possibly do eventually is place some core chips on top of the I/O chip.

I did read somewhere that someone suggested the possibility of having L3 (or L4) cache in the I/O chip - making the I/O die larger than it is now and stacking the core chips on top of it.

Personally, I don't have the knowledge to know whether that is possible or not. Would doing that make the cache in the I/O chip too slow to serve as L3?

