
Ramblings on the future of transistor technology

I'm putting some thoughts out there to see if anyone has a different take, and I'm assuming anyone reading this already understands how chips are made and the associated terminology.

 

It's readily apparent that advancing to further nodes will require putting some more exotic technologies into practice. Intel's 22nm node saw the commercial introduction of FinFETs, which TSMC and Samsung then adopted for their 16nm/14nm nodes.

Samsung is planning GAA for 3nm, with TSMC and Intel to follow at 2nm/20A. That's just ~6 nodes before moving into a new transistor geometry, versus the tens of nodes built on planar geometries. Advancements in materials science, lithography, mask creation, and other fields are slowing down. Brute-forcing density is quickly becoming uneconomical: the cost per transistor is now rising at advanced nodes versus previous ones, a reversal of decades of cheaper transistors with every node advancement.

 

Planar -> FinFET -> GAA is a pretty natural progression, as each step increases gate control over the transistor channel in more dimensions. What comes after? Most speculation points to packaging as the area the semi majors will (and already do) innovate in. GAA refinement will fizzle out in 3-4 nodes, and switching to a new semiconductor material isn't currently in the cards as far as the industry can see. Most semiconductors aside from silicon are either too expensive or not as balanced as silicon in carrier mobilities (GaN has excellent n-type mobility but poor p-type) for making CMOS (complementary) logic circuits. Logic families that can work on non-CMOS semiconductors, such as dynamic/domino logic, are less energy efficient. Solutions to these problems are nowhere near commercial viability versus what silicon will still be able to do by the time we need a better semiconductor. Lastly, it's hard to grow other semiconductors into wafers large enough, and cheap enough, for existing equipment.

 

What kind of packaging will the semi majors look into? We already see the rapid expansion of multi-die systems on interposers: AMD with chiplets, Intel with tiles, and Apple's M1 Ultra, which bonds two M1 Max dies together, similar to Intel's tiles. This paradigm would see advancements in interposer assembly to fit more wires into the interposer, so more functions can communicate across it.

AMD is also looking at die stacking with 3D V-Cache, but thermal limitations are a problem. Heat escapes out of the bulk side of a die, and bonding two dies together means one of them has to face toward the motherboard/PCB/etc. So the motherboard-facing die probably has to generate minimal heat, or systems need to be redesigned so thermal solutions can mount heatsinks to both sides of a PCB. Die stacking also reduces yields, since two passing dies become failed dies if the bond fails because of pad misalignment. And as far as I know, aligning pads is an issue, so pads can't be too small, which also limits I/O; though for some applications you can get away with that.

 

Something no fab company has taken a stab at yet is borrowing from NAND flash processes to create dies with multiple transistor layers. But while this works for NAND flash, it has many issues for logic circuits. For one, you can't grow good mono-crystalline silicon on top of the mess of metal and dielectric that makes up the wiring of a silicon die. Two, heat: a buried transistor layer would be surrounded by insulators and would quickly hit 100+ °C if it's too dense. Now, you might say GAA processes need floating silicon channels, so they must grow silicon to make those. However, what probably happens is that the good mono-crystalline floating channels are cut out of the bulk silicon layer rather than being grown.

 

Overall, I see us running out of major ways to pack transistors more densely with favorable economics at the package level within the next 2-3 decades at the earliest, and the nature of computing shifting toward more ASIC or DSIC (domain-specific IC) products. Or we throw out the modular systems we have today and everything becomes an SoC or a system on interposer. So upgradability and replaceability will have to be pushed aside if you want more performant/efficient computing at a given cost.

 

A very unlikely option is that fabs start to mass-implement ion-beam technology for commercial manufacturing; it's slower, but if the machines get cheap enough it could work out. Ion-beam tech is used for research purposes to prototype advanced transistor geometries, but it's unsuitable for wafer-level mass manufacturing, at least for now...


I don't think shrinking transistors is the future of computing. As you point out, the future seems to be in designing better ways to use the transistors, effectively making ASICs rather than general-purpose silicon (or carbon, or whatever they try using in the future). The approach Nvidia is using, with CUDA, RT, and Tensor cores for different purposes, seems likely for the future of computing. I imagine eventually Intel's P and E cores will start to feature unique functionalities that give them individual advantages in specific applications, and there will be a greater diversity of core types on CPUs.

 

However, I don't know that this will be the end of modularity. It could be, but it doesn't have to be.

 

If these ASIC-like chips are designed for interoperability, so that you can use parts from different companies, then perhaps we'll actually see more modularity in the future. It all depends on the specifics of how the performance gains are achieved, but if the new chiplet interconnect standard is a guide to the future, we may instead see motherboards starting to look like they did in the 80s, with slots and sockets for all sorts of different modules. Probably these slots would have to be closer together, although perhaps fiber optic connections could be used to minimize latency.


8 hours ago, YoungBlade said:

I don't think shrinking transistors is the future of computing. As you point out, the future seems to be in designing better ways to use the transistors, effectively making ASICs rather than general-purpose silicon (or carbon, or whatever they try using in the future). The approach Nvidia is using, with CUDA, RT, and Tensor cores for different purposes, seems likely for the future of computing. I imagine eventually Intel's P and E cores will start to feature unique functionalities that give them individual advantages in specific applications, and there will be a greater diversity of core types on CPUs.

 

However, I don't know that this will be the end of modularity. It could be, but it doesn't have to be.

 

If these ASIC-like chips are designed for interoperability, so that you can use parts from different companies, then perhaps we'll actually see more modularity in the future. It all depends on the specifics of how the performance gains are achieved, but if the new chiplet interconnect standard is a guide to the future, we may instead see motherboards starting to look like they did in the 80s, with slots and sockets for all sorts of different modules. Probably these slots would have to be closer together, although perhaps fiber optic connections could be used to minimize latency.

 

With the rise of chiplets and specialty cores, couldn't we see physically larger CPUs? That would reduce latency concerns, since everything would be on the package, and the larger area would allow for better heat management, I would think. Modules could still show up, but I think they'd be for very specialized tasks (what exactly, I don't know, but things like game physics come to mind).

"The Codex Electronica does not support this overclock."


quantum everything, even my dog!

Quantum filter, quantum calculation, quantum mother of god!?

Quantum air, Quantum light.

 

I do wonder if they'll continue with the photonic ("light") chips, or if that will become a consumer thing, maybe as an extra chip or board.

