
Semiconductor Crossroads: Where Do We Go After 10nm?

https://www.semiwiki.com/forum/content/5533-asml-imec-euv-progress.html

http://semiengineering.com/euv-cost-killer-or-cost-savior/

http://semiengineering.com/gaps-remain-for-euv-masks/

http://semimd.com/blog/2015/10/02/spie-photomask-panel-money-is-an-issue/

 

The semiconductor industry unfortunately has a very bitter pill sitting in front of it which may have to be swallowed. In spite of the progress made in Extreme Ultraviolet (EUV) lithography, the next-gen semiconductor production technology is nowhere near ready for deployment, and deadlines are approaching at speeds that don't look viable. What's more, this technology was only ever expected to last for 3-4 node shrinks at most, and that was against the original phase-in goal of 10nm.

 

For reference, the reason current lithographic technologies are at their limit is quite simple: light resolution. The current etching technology uses 193nm light. How do we get "14nm" circuitry, you ask? Smoke and mirrors: special lenses, masks, and multipatterning. In simpler terms? Money. Lots of money paying lots of engineers to support legacy hardware to the bitter end and buy time. This is why leading-edge semiconductor manufacturing has been reduced to just 4 major players with fat pocketbooks: GlobalFoundries, Taiwan Semiconductor Manufacturing Company (TSMC), Intel Corporation, and Samsung Semiconductor. These four huge companies are responsible for the vast majority of leading-edge semiconductor production around the world. Recently even the titan IBM pulled out of this business due to rising costs and vanishing profit margins (it actually paid GlobalFoundries to take its fabs).

So why does EUV solve so many problems? That light is at a much more... native resolution, you could say, of 13.5nm. It's much easier to shape light in small degrees and proportions compared to what we force 193nm light to do these days. The problems lie in generating that light at useful power for long, sustained periods, and in having "capture" hardware (the optics) that can withstand what is essentially an extreme-ultraviolet laser for sustained periods of time.
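
To put numbers on that resolution argument: the classic Rayleigh criterion relates the smallest printable half-pitch to wavelength and numerical aperture. Below is a minimal Python sketch, assuming typical textbook values for k1 and NA rather than any fab's actual recipe, just to show why 193nm needs heroics below ~40nm half-pitch while 13.5nm light doesn't.

```python
# Rayleigh criterion for the minimum printable half-pitch:
#   CD = k1 * wavelength / NA
# k1 and NA are illustrative textbook values, not a fab's recipe.

def min_half_pitch(wavelength_nm: float, na: float, k1: float) -> float:
    """Smallest printable half-pitch in nm for a given scanner setup."""
    return k1 * wavelength_nm / na

# 193nm immersion: NA ~1.35 with water immersion, practical k1 floor ~0.28
duv = min_half_pitch(193.0, na=1.35, k1=0.28)
# EUV: 13.5nm light, first-generation NA ~0.33, relaxed k1 ~0.4
euv = min_half_pitch(13.5, na=0.33, k1=0.4)

print(f"193i single exposure: ~{duv:.0f} nm half-pitch")  # ~40 nm
print(f"EUV single exposure:  ~{euv:.0f} nm half-pitch")  # ~16 nm
```

Anything denser than that 193i single-exposure limit has to come from multipatterning, which is exactly the expensive trickery described above.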

 

But all the money in the world cannot defy the laws of physics, and unfortunately that means numerous attempts to find viable solutions to these persistent problems have been expensive at best and financial black holes at worst. In 2012 alone Intel committed 4.1 billion USD to ASML to help accelerate research into 450mm wafer tooling and EUVL. Fab 42 (Intel's only 450mm plant, meant to build 14nm products) remains dormant to this day, and EUV isn't on track to meet anyone's deadlines. http://www.bloomberg.com/news/articles/2012-07-09/intel-agrees-to-buy-10-stake-in-asml-for-about-2-1-billion

Samsung committed another ~1.4 billion to the same programs that year, and the money has only continued to pile up and burn. https://www.asml.com/asml/show.do?ctx=5869&rid=46974

Quote

In theory, EUV will simplify the patterning process. With 193nm immersion/multi-patterning, there are 34 lithography steps and 60 metrology steps at 7nm, according to Peter Wennink, president and chief executive of ASML. This compares to just 6 lithography steps and 7 metrology steps for 28nm.

 

With EUV, there are just 9 lithography steps and 12 metrology steps at 7nm, Wennink said. Even so, chipmakers still will require both EUV and multiple patterning at 7nm and beyond.

For reference, we already have Intel and others using multiple patterning at 14nm, and keeping production costs down and yields up has been a struggle since 22-20nm. 10nm was never going to be easy after we saw Intel stall at 14nm, and now that the dogs are starting to go down, the whole world seems ready to start kicking. In terms of cost reduction, there's just no way to do it even now. Sure, the number of production steps may decrease moderately from what we have currently (remember, 14nm requires about 1/3 the steps that 7nm does with 193nm liquid immersion patterning), but even now the equipment has lifetimes far too short and fidelity just not up to snuff.
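
To make Wennink's step counts concrete, here's a trivial Python sketch comparing the quoted flows. The assumption that every patterning-related step costs about the same tool time is mine and actually flatters EUV (EUV exposures are slower and pricier per pass), so treat it as a rough illustration only.

```python
# Patterning-related step counts quoted by Wennink, turned into a
# rough relative comparison. Assumes equal cost per step, which is
# a simplification that flatters EUV.

flows = {
    "28nm, 193i":           {"litho": 6,  "metrology": 7},
    "7nm, 193i multi-pat":  {"litho": 34, "metrology": 60},
    "7nm, EUV + multi-pat": {"litho": 9,  "metrology": 12},
}

base = sum(flows["28nm, 193i"].values())
for name, steps in flows.items():
    total = sum(steps.values())
    print(f"{name}: {total} steps ({total / base:.1f}x the 28nm flow)")
```

Even in this generous model, 7nm with EUV still takes ~1.6x the patterning work of 28nm; without EUV it's ~7x. That's the cost wall in a nutshell.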

 

Quote

A diagram was shown of the tin droplet generator for the EUV laser. Tin is loaded into the generator and melted; an inert gas is used to pressurize the reservoir, and the tin then flows through a filter into the nozzle. Droplets are generated at a rate of 50 kHz and fed to the CO2 laser. In Q3-2015 run time was ~100 hours, second-generation systems reached ~200 hours, and a generation-3 generator at ASML is currently running ~900 hours (>1 month). The time to swap out a droplet generator has now been reduced from 14 hours to 8 hours.

The collector mirror lifetime has been improving:

  • At 40 watts: reflectivity degraded from 100% to 50% after 60 gigapulses.
  • At 80 watts: reflectivity degraded from 100% to 50% after 80 gigapulses.
  • At 125 watts: reflectivity degraded from 100% to 60% after 100 gigapulses.

For reference, a standard 193nm power source (already at 150-180W) used for something like Samsung's or Intel's 14nm processes lasts about 3,000 hours, and replacement takes about 4 hours since the instruments are far less heavy and less intricately interconnected. The mirrors can tolerate many more imperfections and do not wear out nearly as quickly. Further, the mask infrastructure and supply lines (think of masks as inverted maps which reflect and refract light onto the wafer to make the correct etchings, and bear in mind the material must stand up to what is essentially an ultraviolet laser for extended periods) show no signs of improvement.

Where production uptimes fall, facilities have to be built modularly at every step to keep the lines flowing when machine parts inevitably fail. Integrating what many consider a necessity for progress is proving to be a logistical nightmare. Beyond that, the tools and components will be much more expensive and die faster. The final nail? Neither ASML nor IMEC have projections which Intel believes will have the tech ready before 2019. At that point an additional 1-1.5 year stall to retool facilities will have officially killed Moore's Law, barring an intervening miracle. The choices beyond that point are all unpleasant in their own ways. AMD already employed high-density libraries (HDL) at 28nm for both GPU and CPU silicon. Whether other foundries follow suit will be borne out soon enough.
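
A quick back-of-the-envelope in Python on those quoted collector figures: assuming the laser fires on every droplet (the 50 kHz rate above) and runs continuously, which ignores duty cycle and downtime, the gigapulse lifetimes translate to only hundreds of hours of exposure, versus the ~3,000 hours quoted for a 193nm source.

```python
# Collector lifetime in calendar terms, from the quoted gigapulse
# figures. Assumes one pulse per droplet at 50 kHz, running
# continuously with no downtime (a deliberate simplification).

PULSE_RATE_HZ = 50_000  # tin droplet (and pulse) rate

def gigapulses_to_hours(gigapulses: float) -> float:
    return gigapulses * 1e9 / PULSE_RATE_HZ / 3600

for watts, gp in [(40, 60), (80, 80), (125, 100)]:
    print(f"{watts} W source: ~{gigapulses_to_hours(gp):,.0f} hours "
          f"to the quoted reflectivity floor")
# 100 gigapulses at 50 kHz is ~556 hours (~23 days) of continuous
# firing, versus ~3,000 hours for a 193nm source.
```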

 

Late last year IBM demoed a 7nm wafer built using 4-deep multiple patterning and experimental EUV on silicon-germanium, applying both FinFET and FDSOI design principles (supposedly an experimental Power 9 or maybe even Power 10 chip design), emphasizing that progress may not be possible in the upcoming years unless a massive breakthrough is made. The industry is on edge, and other options for innovation, none of which anyone likes, are now being considered. Intel has already said clock speeds will necessarily have to fall for it to increase power efficiency and stay in line with ARM's improvements, but what will this entail?

 

Most foundries don't disclose much about their silicon processes, so perhaps one or more of these options has already been used.

  • Fully-Depleted Silicon Techniques (Samsung is employing these in their FinFET-based 14nm LPP process which GloFo will use for AMD's production)
  • Silicon-On-Insulator (FDSOI support from many companies)
  • 3D FinFET (Samsung, Intel)
  • FinFET Silicon on Insulator (IBM)
  • Quantum Well Transistors (Experimental MIT, UC Berkeley, Intel)
  • Tunnel FET (Experimental Japan and South Korea)
  • Spintronics (Experimental various)
  • New Materials (Silicon-Germanium at SUNY, Albany with IBM and GloFo, Indium-Gallium-Arsenide at Samsung, III-V Materials at various locations, carbon nanotubes at IBM and MIT, and graphene at MIT and UC Berkeley)
  • Die Stacking (I really don't see this as viable outside of memory technologies for thermal reasons primarily, but maybe I'm wrong)

 

Clearly innovation will have to continue once we reach 10nm, but an industry stall seems inevitable given that lithography technology has not had a significant breakthrough in the last five years. How each foundry addresses this stall will be pivotal to its survival. In this age of chasing lower power consumption, my money is on FDSOI first, given the infrastructure is already in place. Beyond that I just don't know enough to say with any significant degree of confidence.

 

 

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


I'd link the post on this forum, but the OP was quite slack so here, bio computers:

http://www.babwnews.com/2016/02/biological-supercomputer-creates-astonishing-gains-in-efficiency/

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


4 minutes ago, Dabombinable said:

I'd link the post on this forum, but the OP was quite slack so here, bio computers:

http://www.babwnews.com/2016/02/biological-supercomputer-creates-astonishing-gains-in-efficiency/

very limited applicability, hence no mention.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Just now, patrickjp93 said:

very limited applicability, hence no mention. And perhaps more importantly, way to be a patronizing dick as the first response in the thread. So good example. Much proud of you... >:(

The OP in that thread only posted the link.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


Just now, Dabombinable said:

The OP in that thread only posted the link.

Oh, I thought you were referring to me... Well, that's 2AM local time for you... O.o

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


1 minute ago, patrickjp93 said:

Oh, I thought you were referring to me... Well, that's 2AM local time for you... O.o

I'll link the post anyway, but yeah... this is slack:

Your topic on the other hand is far from slack.

 

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


I don't think I have ever seen a post from you, Patrick. Like, a topic. This is incredibly detailed, and beautiful. Well done, man. Well done. So, with all this in mind: when can I get my 6900K? :_:


Just now, SurvivorNVL said:

I don't think I have ever seen a post from you, Patrick. Like, a topic. This is incredibly detailed, and beautiful. Well done, man. Well done. So, with all this in mind: when can I get my 6900K? :_:

Uh, go back through my post history. I used to do 2-3 topics a week until the news from AMD and Intel slowed down back in... May-ish?

 

A couple of months. Someone on here said April 24th will be the launch for professional client studios, with the consumer launch to follow shortly after. Intel's official last word on it was June.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


1 minute ago, patrickjp93 said:

Uh, go back through my post history. I used to do 2-3 topics a week until the news from AMD and Intel slowed down back in... May-ish?

 

A couple of months. Someone on here said April 24th will be the launch for professional client studios, with the consumer launch to follow shortly after. Intel's official last word on it was June.

Wow, so almost a year of your posts being frequent for the most part. Well, at least the Broadwell-E stuff is soon; Computex is likely when all of it is coming. Still debating a de-lidded 6700K with liquid metal or waiting for the 6900K: an OC'd eight-core at 4GHz or more, or just getting a 6700K as high as I can on an H115i.


8 minutes ago, SurvivorNVL said:

Wow, so almost a year of your posts being frequent for the most part. Well, at least the Broadwell-E stuff is soon; Computex is likely when all of it is coming. Still debating a de-lidded 6700K with liquid metal or waiting for the 6900K: an OC'd eight-core at 4GHz or more, or just getting a 6700K as high as I can on an H115i.

That's college (and tech community flame war burnout) for you. If Magetank or Lawlz show up and choose to be spiteful, you'll see why I took a break.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


If you don't mind, could you explain all this in simpler terms?

I thought it was that <7nm transistors are just a few atoms thick and hence difficult to manufacture and maintain.


The low-hanging fruit has been picked. Now things are getting serious, and it's going to slow down even more than it has. It's one reason for Kaby Lake: Intel can't move to their intended 10nm process, so they have an interim chip to save the day. What Kaby Lake will do better than Skylake is an open question. It certainly looks like they're trying to spin reduced clock speeds as more "efficiency".

We're heading into very interesting times. There's not a lot of processor-bound software out there, which means the easy gains of better hardware making things run faster don't apply. Add to that the lack of benefit from a process node reduction, as we saw in the move from 22nm to 14nm from Haswell to Skylake: slightly better efficiency, but not much in the way of speed increases, and a lot more production headaches.

In the foreseeable future I can see architecture changes being the focus, not the node size. If Zen actually holds promise, then it will be because of what it does instead of how it does it.

It's another variation on how Intel abandoned the GHz war when it hit that wall and went with cores instead. But it also means a 10-year-old unit can be as effective as something new, unless something in software pulls out a unique feature, a killer app, that drives the adoption of new hardware. I don't see that happening anytime soon. Welcome to the mature years.

Sir William of Orange: Corsair 230T - Rebel Orange, 4690K, GA-97X SOC, 16gb Dom Plats 1866C9,  2 MX100 256gb, Seagate 2tb Desktop, EVGA Supernova 750-G2, Be Quiet! Dark Rock 3, DK 9008 keyboard, Pioneer BR drive. Yeah, on board graphics - deal with it!


3 hours ago, RedRound2 said:

If you don't mind, could you explain all this in simpler terms?

I thought it was that <7nm transistors are just a few atoms thick and hence difficult to manufacture and maintain.

There's no point in designing architectures with transistors that small if your tools aren't capable of producing transistors that small. Photolithography uses high-intensity light to project circuit maps onto a silicon wafer, defining where material is etched away before copper and other materials are laid down in those etched pathways to create your interconnect and transistors. 193nm light has to be split, bent, and focused many times to get down to a manageable size for patterning, and the result is still very messy compared to using Extreme Ultraviolet techniques. This is why yields have been difficult to keep up at 14nm. Designing the chip is now arguably less difficult than getting antiquated tools to go just one step further.
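
As a rough illustration of why multipatterning enters the picture (the numbers are assumed round figures, not any foundry's actual design rules): a single 193nm immersion exposure bottoms out around an 80nm pitch, so denser layers have to be split across interleaved exposures, each printed at a relaxed pitch.

```python
# Illustrative pitch-splitting arithmetic. The ~80nm single-exposure
# pitch floor for 193nm immersion is an assumed round number.
from math import ceil

SINGLE_EXPOSURE_MIN_PITCH_NM = 80

def exposures_needed(target_pitch_nm: float) -> int:
    """Interleaved exposures needed to print a given pitch (LELE-style)."""
    return ceil(SINGLE_EXPOSURE_MIN_PITCH_NM / target_pitch_nm)

for pitch in (90, 64, 52, 40):  # roughly 20nm-class down to 7nm-class
    n = exposures_needed(pitch)
    print(f"{pitch} nm target pitch -> {n} exposure(s), "
          f"each mask relaxed to {pitch * n} nm")
```

Every extra exposure is another mask, another alignment, and another chance to scrap the wafer, which is where the yield pain comes from.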

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Ah, nice to see these richly detailed posts back. Very interesting for a future computer engineering major.

 

Anyways, if there is a significant delay at 10nm, a few things could happen:

 

1. AMD has time to truly and completely catch up to Intel on IPC (in third- or fourth-gen Zen)

 

2. Software will have to focus more on efficiency, like @Dimwitted said.

 

3. Prices fall across the board

 

As much as I love the newest, greatest tech, this delay could have some seriously positive outcomes.

I am conducting some polls regarding your opinion of large technology companies. I would appreciate your response. 

Microsoft Apple Valve Google Facebook Oculus HTC AMD Intel Nvidia

I'm using this data to judge this site's biases so people can post in a more objective way.


Lithography might be over, but the age of ASIC chips is here.

 

Just like how the GPU was a specialized ASIC back in the day, new ASICs will be made to offload certain tasks that CPUs and GPUs are just not good at.

This can be seen with the bitcoin craze: ASIC machines were designed to mine bitcoins extremely fast. While the usefulness of an ASIC bitcoin-mining chip is extremely limited, the potential for designing chips that are made specifically for one (or a couple of) tasks is huge.

There could be an ASIC chip for AI in video games, physics (although Nvidia already took that one), and much more.

 

Although I am probably wrong, it is an interesting idea to think about.

Chip designers will have to focus on key optimizations to make their chips faster, instead of relying on lithography shrinks. This can be seen with Nvidia and the 28nm process. Unlike AMD, they did not count on a lithography shrink; instead, they focused on what they had and optimized for the 28nm process. This put them significantly ahead of AMD: not only were Nvidia's chips smaller in die size (and therefore cheaper to produce), they also performed better (specifically in performance per die area, not price to performance).
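
The "smaller dies are cheaper" point is easy to sanity-check with the standard dies-per-wafer estimate plus a Poisson yield model. The wafer cost and defect density below are invented round numbers, not real 28nm data, so only the shape of the result matters.

```python
# Dies-per-wafer plus a Poisson yield model: the standard reasoning
# behind "smaller dies are cheaper". Wafer cost and defect density
# are invented round numbers, illustration only.
import math

WAFER_DIAMETER_MM = 300.0
WAFER_COST_USD = 5000.0        # assumed
DEFECT_DENSITY_PER_CM2 = 0.2   # assumed

def dies_per_wafer(die_area_mm2: float) -> int:
    """Gross dies with the classic edge-loss correction term."""
    d = WAFER_DIAMETER_MM
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(die_area_mm2: float) -> float:
    """Poisson yield: bigger dies are likelier to catch a killer defect."""
    yield_rate = math.exp(-DEFECT_DENSITY_PER_CM2 * die_area_mm2 / 100)
    return WAFER_COST_USD / (dies_per_wafer(die_area_mm2) * yield_rate)

for area in (350.0, 600.0):  # mid-size GPU vs big-die GPU, roughly
    print(f"{area:.0f} mm^2: {dies_per_wafer(area)} gross dies, "
          f"~${cost_per_good_die(area):.0f} per good die")
```

With these made-up inputs the big die costs roughly 3x as much per good chip, which is the economics Nvidia exploited on 28nm.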

 

---sorry random tangents---


48 minutes ago, NoobsWeStand said:

Lithography might be over, but the age of ASIC chips is here.

 

Just like how the GPU was a specialized ASIC back in the day, new ASICs will be made to offload certain tasks that CPUs and GPUs are just not good at.

This can be seen with the bitcoin craze: ASIC machines were designed to mine bitcoins extremely fast. While the usefulness of an ASIC bitcoin-mining chip is extremely limited, the potential for designing chips that are made specifically for one (or a couple of) tasks is huge.

There could be an ASIC chip for AI in video games, physics (although Nvidia already took that one), and much more.

 

Although I am probably wrong, it is an interesting idea to think about.

Chip designers will have to focus on key optimizations to make their chips faster, instead of relying on lithography shrinks. This can be seen with Nvidia and the 28nm process. Unlike AMD, they did not count on a lithography shrink; instead, they focused on what they had and optimized for the 28nm process. This put them significantly ahead of AMD: not only were Nvidia's chips smaller in die size (and therefore cheaper to produce), they also performed better (specifically in performance per die area, not price to performance).

 

---sorry random tangents---

While that makes sense for SoCs in phones and tablets, unless Intel and AMD cooperate, including ASICs in the consumer space will make an already fragmented market even more fragmented.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


2 hours ago, patrickjp93 said:

There's no point in designing architectures with transistors that small if your tools aren't capable of producing transistors that small. Photolithography uses high-intensity light to project circuit maps onto a silicon wafer, defining where material is etched away before copper and other materials are laid down in those etched pathways to create your interconnect and transistors. 193nm light has to be split, bent, and focused many times to get down to a manageable size for patterning, and the result is still very messy compared to using Extreme Ultraviolet techniques. This is why yields have been difficult to keep up at 14nm. Designing the chip is now arguably less difficult than getting antiquated tools to go just one step further.

Aw okay, you were talking about the manufacturing part.

Thanks for the info


9 minutes ago, Nup said:

Great info. Even for the old layman. 

I try...

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


@patrickjp93

How about die stacking with microtubes carrying liquid between die layers? Or nano-heatpipes moving heat from the underlying layers to the top layers, closer to the IHS.

 

Sure, it would be insanely complicated and insanely costly, but anything would be at this point, it seems.
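
For what it's worth, a crude 1-D series-resistance model shows why the bottom dies of a logic stack are the problem: every layer's heat has to cross all the layers above it on the way to the IHS. All the numbers below (power, thicknesses, conductivities) are assumed round figures, so this is a sanity check, not a simulation.

```python
# 1-D series-resistance sketch of a logic die stack. Heat sinks at
# the top (IHS side); each level's heat must cross everything above
# it. Material numbers are assumed round figures.

SI_K = 150.0    # W/(m*K), silicon
TIM_K = 5.0     # W/(m*K), bonding/interface material (assumed)
SI_T = 100e-6   # m, thinned die thickness
TIM_T = 20e-6   # m, interface thickness
AREA = 1e-4     # m^2, a 1 cm^2 die

def bottom_die_rise(layers: int, watts_per_layer: float) -> float:
    """Temperature rise of the bottom die over the top surface, in K."""
    r_level = (SI_T / SI_K + TIM_T / TIM_K) / AREA  # K/W per level
    # the resistance k levels down from the top carries k layers' heat,
    # so the bottom die's rise sums to W * r * N * (N + 1) / 2
    return sum(k * watts_per_layer * r_level for k in range(1, layers + 1))

for n in (1, 2, 4):
    print(f"{n} x 50 W layers: bottom die ~{bottom_die_rise(n, 50):.1f} K "
          f"hotter than the top of the stack")
```

The rise grows superlinearly with stack depth, which is why stacking works for low-power memory but gets ugly fast for compute layers.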


A breakthrough is definitely needed to advance. Nothing major has happened in the last few years, and with Moore's Law at stake there will have to be totally different materials and ways of creating future hardware.

It's like batteries: they're terrible even though there have been advances in efficiency, and you may end up charging the same device's battery multiple times per day. That just needs to change; everything requires power and we use it constantly. So hopefully we see something groundbreaking, like recharging in seconds and lasting multiple days.

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lanccool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Ducky One 3 TKL (Cherry MX-Speed-Silver)Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |


A lot of the issue is materials technology. Finding new insulators. New substrates. New trace componentry. We're down to atoms here so exotica is the standard.

As for the ASIC question above, is that not yet another iteration of the old RISC/CISC issue? Different paths to the objective that in the end were absorbed and didn't amount to diddly.

Sir William of Orange: Corsair 230T - Rebel Orange, 4690K, GA-97X SOC, 16gb Dom Plats 1866C9,  2 MX100 256gb, Seagate 2tb Desktop, EVGA Supernova 750-G2, Be Quiet! Dark Rock 3, DK 9008 keyboard, Pioneer BR drive. Yeah, on board graphics - deal with it!


Well, this is a bummer. Although most people would agree that our 14nm processors are quite power efficient, ARM's ones at least, we surely need a leap in performance in the near future, especially since mobile technologies and interconnected devices constantly demand more energy-friendly components and lithographies. These new material-based technologies, though, involve such engineering complexity and monetary value that it blows away even the most well-informed and industry-following people, eh? Making the machines that build the brains of our machines better seems like trying to build an ever larger and more powerful class-O star to power our processor "solar system", lol.

Groomlake Authority

