The Next 5 Years Of Chip Technology

Experts at the Table, part 3: The impact of reduced process variation; automotive chip reliability at advanced nodes; the impact of packaging.


Semiconductor Engineering sat down to discuss the future of scaling, the impact of variation, and the introduction of new materials and technologies, with Rick Gottscho, CTO of Lam Research; Mark Dougherty, vice president of advanced module engineering at GlobalFoundries; David Shortt, technical fellow at KLA-Tencor; Gary Zhang, vice president of computational litho products at ASML; and Shay Wolfling, CTO of Nova Measuring Instruments. The panel was organized by Coventor. What follows are excerpts of that discussion. To view part one, click here. Part two is here.


Seated panelists, L-R: Shay Wolfling, Rick Gottscho, Mark Dougherty, Gary Zhang, David Shortt. Photo credit: Coventor, a Lam Research company.

SE: If variation can be reduced, is there any impact on yield?

Dougherty: It’s really about the time it takes to get to ultimate yield. With an improvement in variation, if you look at any D0 (defect density) or net D0, that’s topped out or bottomed out if you want to talk about a D0 step-down. But where the variation really comes in is with the time it takes to get to ultimate yield. There’s no additional die productivity through reduced variation. It’s going to be necessary to achieve the same levels of yield, the same levels of effective D0, that we have had at prior nodes.
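
For readers unfamiliar with D0, here is a minimal sketch using the textbook Poisson yield model, not any panelist's proprietary model, and with purely hypothetical numbers. It shows why driving defect density down is what the ramp to ultimate yield is really about:

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Classic Poisson yield model: Y = exp(-A * D0)."""
    return math.exp(-die_area_cm2 * defect_density_per_cm2)

# Hypothetical numbers, for illustration only.
die_area = 1.0  # cm^2
for d0 in (0.10, 0.05, 0.02):  # defects/cm^2 at successive points in a yield ramp
    print(f"D0 = {d0:.2f}/cm^2 -> yield ~ {poisson_yield(d0, die_area):.1%}")
```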

Zhang: With reduced variation, parametric distributions will be tighter. You’ll get better parametric yield when manufacturing high-performance designs, which require tight distributions in both leakage and speed and command a price premium.
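
To make that concrete, a rough Monte Carlo sketch (hypothetical parameter, spec window and numbers, not any vendor's tooling) shows how a tighter distribution of a speed-related parameter puts more die inside the same spec window:

```python
import random

def parametric_yield(mean, sigma, lo_spec, hi_spec, n=100_000):
    """Fraction of Monte Carlo samples that land inside the spec window."""
    rng = random.Random(0)
    hits = sum(lo_spec <= rng.gauss(mean, sigma) <= hi_spec for _ in range(n))
    return hits / n

# Hypothetical speed parameter (arbitrary units) with a fixed spec window of 95-105.
for sigma in (3.0, 2.0):  # wider vs. tighter process distribution
    y = parametric_yield(mean=100.0, sigma=sigma, lo_spec=95.0, hi_spec=105.0)
    print(f"sigma = {sigma}: parametric yield ~ {y:.1%}")
```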

Dougherty: It’s part of how we’re going to get to the levels of performance that are required. That’s how you scale. If you can have less variation, that will help you to scale. But if you think about it from a net die out per wafer, it’s getting you to the same point at each node. It’s just that the path you take to achieve it has to come more from variation. Process variation is certainly one of the biggest sources of noise and nuisance that we have today.

Shortt: We couldn’t see the particles that we see today if the surface roughness coming out of wafer manufacturers were the same as it was 20 years ago. So they’ve had to reduce that surface roughness in order to make the process work. We’ve been able to ride the coattails of these tightening requirements on things like line-edge roughness or waviness and other parameters. But my general feeling is that we’re running up tighter against those limits than we were 10 or 20 years ago. Process variation has become more of a problem. The defects we resolve with optical inspection are much smaller than the point spread function of the optics. We’re relying on very slight gray-level changes in these images to detect defects. Any kind of process variation can throw that off. Hence, we have a lot of algorithms to compensate. But just to make these devices, the parameters have to get tighter.
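
As a toy illustration of that point (a sketch only, not KLA-Tencor's actual algorithms), die-to-die comparison boils down to flagging gray-level differences that stand out against the expected process noise, so any growth in that noise directly raises the detection threshold:

```python
import random

def die_to_die_defects(test, reference, noise_sigma, k=5.0):
    """Flag pixels whose test-vs-reference gray-level difference exceeds a k*sigma threshold."""
    return [i for i, (t, r) in enumerate(zip(test, reference))
            if abs(t - r) > k * noise_sigma]

# Hypothetical 1-D gray-level traces from two nominally identical dies.
rng = random.Random(1)
sigma = 2.0                                   # per-pixel process/noise variation
reference = [100 + rng.gauss(0, sigma) for _ in range(50)]
test = [100 + rng.gauss(0, sigma) for _ in range(50)]
test[23] += 25                                # inject a faint defect signal
print("defect candidates at pixels:", die_to_die_defects(test, reference, sigma))
```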

SE: One of the ways around variation has been to build margin into designs. As we reduce variation, can we start reducing the margin in design, which at 7nm and below affects performance, power and yield?

Zhang: You can accommodate more customer designs if you can deliver a process technology that supports a larger design space by controlling process variability. That’s a competitive advantage for a foundry. On the other hand, by restricting designs you can take full advantage of the process sweet spot. There are two options here. It really comes down to the performance requirement. Design-technology co-optimization (DTCO) is adopted by logic foundries to get the best out of the process capabilities and reduce design re-spins. We have a source and mask optimization product that customers have used to co-optimize design and litho, including the scanner and mask OPC.

Gottscho: As we tighten things up, it ought to allow a relaxation of the design rules, which should improve yield. But it also will help propel the technology to the next node. Also, we’re not talking about it from a variation reduction standpoint, per se. Simulation is hugely powerful in running virtual experiments, and as we get more accurate with those models we can see where the hotspots are, where the weaknesses are, and that may be where you have to go back to the guys designing the chips and say, ‘That’s not right.’ Maybe we go back to the process guys and say, ‘You’ve got to open up the window because you’re generating that hotspot.’ Variation reduction coupled with more accurate rapid simulation can help ramp to yield and to get to the next node.

SE: Is the proliferation of process nodes and nodelets affecting variation? Is there enough time between these different process tweaks to properly address it?

Dougherty: You could have a long discussion about node naming and numbering and whether that even means anything anymore. From a foundry manufacturing standpoint, it’s driven by customer need. So when you look at variants or derivatives, of which there are many, that’s all tied to customer demand. There’s a market space and place for something that performs a certain function in a certain way for a certain cost and power. That’s what’s driving our roadmap. As for the challenge of how much time you have to work on the variations, it’s assumed that you will find a way. For everyone it’s all about time to revenue for a given application. If we can speed up that cycle rather than let everything slow down, that’s to everyone’s benefit because there’s a revenue opportunity tied to it.

Gottscho: I don’t know how you get everybody in lock step to slow down. The beauty of Moore’s Law is that everyone is in lock step to move forward, and there are always some people who are going to move faster and some who are going to move slower. But the market pull for more advanced chips is insatiable, and the guy with a solution first is going to reap most of the benefit.

SE: But there is a difference. With a mobile phone, you could deal with issues that cropped up in the next generation. With AI systems in automotive or industrial automation, these chips are supposed to last for 10 to 15 years. Variation plays an important role here. Do we have to think about how we approach different markets with technology, and does that affect technology development across the board?

Dougherty: There’s a body of knowledge that exists out there. For our company, if you look at the different customer segments, there are some that have always required a very high level of reliability. Automotive is a burgeoning driver of that now.

SE: But it was never at the most advanced nodes, right?

Dougherty: Yes, that’s true, and that has changed. So there will be a lot of pressure around how to demonstrate reliability, and variation shows up directly in reliability. Traditionally this has been a follow-on or second-generation approach for a technology node. You would qualify the base technology, get it out in the market for handheld or consumer electronics, and then follow on with another qualification for automotive-level reliability. That definitely is being compressed. How to confront that is an open question. The market requires what the market requires.

Gottscho: Do you think that with the advent of AI, chips can detect defects themselves?

Dougherty: I sure hope so. There is some thought there. The industry usually has to capitalize on what we ultimately produce. So we have to figure out how to take the capabilities we’re creating and funnel that back through cognitive or machine learning to help drive that learning faster. In my view, that’s largely untapped. There’s a lot of buzz about it. Everyone is involved, to one degree or another, and each has a strategy. But that hasn’t come to fruition yet.

Wolfling: Mark (Dougherty) is talking about automotive at the advanced nodes. Do you see this as one solution, or is it part of the diversity at advanced nodes of different flavors for different markets?

Dougherty: Yes, it’s for different markets, but the basic backbone is the same. There are different features and requirements, and certainly different liability requirements. But the base technology is the same. As a result, while we’re designing and developing, that has to be kept in line and part of the requirements. It’s no longer just a two- or three-node challenge.

Shortt: In memory, the requirements are less severe because we can do derivatives and rely on redundancy to overcome problems. Do you see that happening in logic? With a self-driving car, if you have a family on the highway, the most important thing is that it must not fail. And if it does fail, it must fail gracefully. So it’s one thing if it fails and the car pulls over to the side and you have to replace the computer. But if it fails in a way that it causes an accident, that’s not acceptable. Do you think there can be redundancy built into the logic circuits?

Gottscho: I would have thought there’s already redundancy in automotive electronics. Even today, if you have a computer running the engine, there’s some degree of redundancy. It may not be within the chip, but there are duplicate systems. An extreme example is the space shuttle, which had four identical computers running exactly the same software. They’d vote on each other and throw out any one that disagreed with the others, because the system could not be allowed to fail.

Dougherty: Yes, and it’s more likely to be system-level redundancy and less processor redundancy.
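
A minimal sketch of the voting scheme Gottscho describes (illustrative only; real flight or automotive software is far more involved) is an N-modular redundancy majority vote that masks a single faulty unit:

```python
from collections import Counter

def majority_vote(outputs):
    """Return the value most redundant units agree on, plus the indices of dissenting units."""
    winner, _ = Counter(outputs).most_common(1)[0]
    dissenters = [i for i, value in enumerate(outputs) if value != winner]
    return winner, dissenters

# Hypothetical outputs from four redundant controllers; unit 2 has faulted.
result, voted_out = majority_vote([0x2A, 0x2A, 0x3F, 0x2A])
print(f"voted result = {result:#x}, units voted out: {voted_out}")
```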

SE: So if we do manage to tighten variation, does this at least buy us redundancy?

Wolfling: This goes back to the question of what is an extra node. Is it a full node? An extra node of scaling? Or is it an extra node of performance? Is it an extra node of 3D scaling? The industry goes both ways. So you extend finFETs, but you’re still working on nanosheets. History tells us that each technology continues to be extended further than you think at the beginning, and that will likely happen here, as well. So there will be a 7nm and a 7+, and a 5nm and a 5+. It will take a long time, but people will see 3nm coming after 5nm. The price of the changes going from DRAM to MRAM, or finFET to nanosheet, is very high. It’s much more cost-effective to find an evolutionary solution, even if it’s more challenging.

Gottscho: You can’t really do a controlled experiment. It’s hard to imagine that the industry is going to stop development for moving from a finFET to a nanowire technology and instead just focus on variability reduction so they don’t have to make that transition. All of these things are happening in parallel. We’re all trying to move as fast as we can and squeeze down the variability at the same time we’re trying to enhance the electrostatics on the device. There is probably a node’s worth of capability by squeezing down variability, and maybe even two, but I don’t know how you measure that in the end because all of these things will happen in parallel.

Zhang: Out of a given technology node, in terms of ground rules, you can deliver low-power, high-density parts, and you can also make high-performance parts with certain design and process enhancements. The problem is that when you move to high performance, you’re going to have to sacrifice area, which means your cost goes up. The challenge is to optimize performance and cost at the same time. Scaling is still the best way today to get both performance and cost.

SE: One way to add flexibility is with advanced packaging. What impact does that have on all of these decisions?

Dougherty: It comes back to working things in parallel. If you look at through-silicon vias, and 2.5D and 3D, it becomes a very application-specific question. It won’t obviate the need to scale at the die level, but depending on the solutions that the end customer is looking for, it opens up a lot more possibilities. There is certainly the case of marrying logic with DRAM, or one technology generation with another. All of those things are happening. But it will be driven more by the application space. I don’t see it as a way to buy back freedom.

Wolfling: There are two requirements. You have to scale down the transistor, and now you have the requirement for 3D integration. To get to a cost-effective solution, you probably will need both of them.

Gottscho: Advanced packaging is more about performance, power reduction and form factor than about cost. It doesn’t displace scaling and the push for higher density at the chip level. It’s complementary, and both will keep going. It certainly doesn’t replace the shrink approach to scaling.

Zhang: Advanced packaging provides a way to deliver system-level performance at a higher cost. That’s good for certain applications, but not for all. Scaling at the chip level is the universal path to better performance and lower cost.

Related Stories
The Next 5 Years Of Chip Technology Part 1
Experts at the Table, part 1: Scaling logic beyond 5nm; the future of DRAM, 3D NAND and new types of memory; the high cost of too many possible solutions.
The Next 5 Years Of Chip Technology Part 2
Experts at the Table, part 2: What are the sources of variation, how much is acceptable, and can it be reduced to the point where it buys an extra node of shrinking?
New Nodes, Materials, Memories
What chips will look like at 5nm and beyond, and why the semiconductor industry is heading there.


