The Next 5 Years Of Chip Technology

Experts at the Table, part 2: What are the sources of variation, how much is acceptable, and can it be reduced to the point where it buys an extra node of shrinking?


Semiconductor Engineering sat down to discuss the future of scaling, the impact of variation, and the introduction of new materials and technologies, with Rick Gottscho, CTO of Lam Research; Mark Dougherty, vice president of advanced module engineering at GlobalFoundries; David Shortt, technical fellow at KLA-Tencor; Gary Zhang, vice president of computational litho products at ASML; and Shay Wolfling, CTO of Nova Measuring Instruments. The panel was organized by Coventor, a Lam Research Company. What follows are excerpts of that discussion. To view part one, click here.


Seated panelists, L-R: Shay Wolfling, Rick Gottscho, Mark Dougherty, Gary Zhang, David Shortt. Photo credit: Coventor, a Lam Research company.

SE: Variation is becoming a bigger problem, and it’s not just about the process anymore. It’s also tool-to-tool variation. How do we manage it?

Zhang: The majority of customers today are still doing tool control to maintain an equipment baseline in the fab. We do have solutions and products for that. Then customers run process control on top of that. Up to this point, a large fraction of the variations and errors in CD and overlay have come from the process side. The scanner contribution is relatively small in overall litho variability. But as we move to EUV, the design rules and control specs are tighter, and while the process contribution remains the major factor, the tool contribution can also play a sizeable role. So in terms of solutions out there, we need to reduce the process sensitivity to tool parameters and variations. There are solutions we can make available to help customers minimize such sensitivity in the process development phase. For inline process control, we need to drive control all the way down to the wafer and even the chip level. Wafer-to-wafer and within-wafer variations, which were considered uncorrectable in the past, can now be corrected thanks to the scanner's large number of control knobs. It is going to be dynamic and adaptive, and it requires an infrastructure of big-data metrology and analytics, plus an automated dynamic correction and recipe management system. Such an infrastructure also opens up the possibility of tool and process co-optimization, enabling more effective control of variations and maximizing fab utilization by removing constraints of tool dedication and routing restrictions.
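
A minimal sketch of the kind of dynamic, lot-to-lot correction loop Zhang describes might look like the following, in which an exponentially weighted moving average (EWMA) filter estimates a slowly drifting overlay error from metrology feedback and updates a scanner correction after each lot. The filter weight, the overlay numbers, and the single scalar correction "knob" are all simplifications invented for illustration; this is not ASML's control software.

```python
# Illustrative run-to-run (EWMA) correction loop; hypothetical numbers only.

def ewma_update(prev_estimate, measured_error, weight=0.3):
    """Blend the latest metrology result into the running drift estimate."""
    return weight * measured_error + (1 - weight) * prev_estimate

drift_estimate = 0.0  # estimated systematic overlay error (nm)

# Hypothetical lot-by-lot overlay metrology feedback (nm), e.g. a slow chuck drift.
measured_overlay = [1.2, 1.5, 1.4, 1.8, 2.0, 1.9]

for lot, error in enumerate(measured_overlay, start=1):
    drift_estimate = ewma_update(drift_estimate, error)
    correction = -drift_estimate  # feed the estimate back as a scanner offset
    print(f"lot {lot}: measured {error:+.2f} nm, apply {correction:+.2f} nm")
```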

Dougherty: Tool-level control and matching has always been an issue as one of the subcomponents of process control and variability. The approach, for the most part, has been to take any individual component, whether it's a tool or a given process step, and try to make that individual variation as small as possible. The idea was that if you did that well enough across every step, you got the outcome you needed. For the longest time, at a tool level, we would just keep banging and banging on chamber matching. And you would always reach the point where you recognize this is the normal variation, and you have to account for it somehow, by managing offsets or some other approach. But you need to layer that and understand all of the different sources of variation and their permutations. You need to think of it across a set of chambers, and then multiply that by the number of different process steps and figure out how those all criss-cross and how much variation they add. But when you have rich enough data and the right metrology, you can find the right combination and correct for it in a thoughtful manner. Historically, the approach has been to control each step as well as you can and trust that all the spec tolerances will somehow work out.

Wolfling: You need to add to that the impact of metrology. Now that you are able to do all of that, you need to be able to see something. There is no longer an absolute truth in metrology that tells you exactly what's happening. There are various metrologies, various capabilities, and various permutations. You need to pull some kind of understanding out of that metrology. But the metrology also has its own variation, which is coupled with the process variation. The key issue is how you take an abundance of information and make a correlation between the metrology and the process control. It's not just a matter of matching tools. In some cases, you need to tailor the metrology to the exact processes and what you need to measure. You need to measure the right things to get control of process variability.

Dougherty: The rule of thumb is that the metrology tool should contribute less than 10% of the total variation. We broke that rule a long time ago. How do you manage that? It's another variable in the variation that you have to integrate into all of these things.
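
One way to see why that old 10% rule of thumb existed: if the metrology error and the process variation are roughly independent, they add in quadrature, so a metrology contribution of 10% of the process sigma inflates the observed variation by only about half a percent, while larger contributions quickly distort what you think the process is doing. The numbers in this sketch are purely illustrative.

```python
# How metrology noise inflates observed variation (quadrature sum).
# All sigma values are arbitrary illustrative units.
import math

process_sigma = 1.00  # "true" process variation

for metrology_fraction in (0.10, 0.30, 0.50):
    metrology_sigma = metrology_fraction * process_sigma
    observed_sigma = math.sqrt(process_sigma**2 + metrology_sigma**2)
    inflation = (observed_sigma / process_sigma - 1) * 100
    print(f"metrology at {metrology_fraction:.0%} of process sigma "
          f"inflates observed variation by {inflation:.1f}%")
```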

Wolfling: It does become one part of the variability. One of the ways to overcome that is to use as much information as you can. It's a different type of metrology. If you understand information from within the process, you plug that into the measurement, regardless of whether there is variability. So the measurement is no longer completely independent. It is not a black box where you don't need to know anything about what's happening, where you just go ahead and take the measurements and get the right data back. In the same way that you say you cannot optimize each step separately, you cannot separate the metrology from the processing in a black-box approach.

Shortt: Traditionally, we tried to make the measurement/inspection tools insensitive to any variation to get the same answer, regardless of whether there was variation, or somehow to compensate for it internally. So for a defect measuring system you might take a histogram of gray levels on a wafer and re-map that onto some standard. We’re moving into a world now where we can use that information to make some statements about the actual wafer processing and what’s happening, and feed that back and use that information. We’ve had various efforts over the years inside of our company to take some of that information and do something with it, but we haven’t had all the infrastructure in the past to feed that back. We’re getting into a world where that extra control and feedback loop is important.
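
As a rough illustration of the gray-level re-mapping Shortt mentions, the sketch below matches a wafer image's gray-level histogram to that of a reference image, so that overall shifts in brightness or contrast do not register as differences. It is a generic histogram-matching routine with random stand-in images, not KLA-Tencor's algorithm.

```python
# Generic gray-level histogram matching; illustrative only.
import numpy as np

def match_histogram(image, reference):
    """Remap image gray levels so their distribution follows the reference."""
    src_values, src_counts = np.unique(image.ravel(), return_counts=True)
    ref_values, ref_counts = np.unique(reference.ravel(), return_counts=True)

    src_cdf = np.cumsum(src_counts) / image.size       # source cumulative distribution
    ref_cdf = np.cumsum(ref_counts) / reference.size   # reference cumulative distribution

    # For each source gray level, find the reference level at the same CDF position.
    mapped = np.interp(src_cdf, ref_cdf, ref_values)
    return mapped[np.searchsorted(src_values, image)]

rng = np.random.default_rng(0)
wafer_image = rng.normal(120, 15, (64, 64)).astype(np.uint8)   # stand-in wafer image
standard    = rng.normal(128, 10, (64, 64)).astype(np.uint8)   # stand-in reference
normalized  = match_histogram(wafer_image, standard)
```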

SE: Have we got enough tunability in the tools themselves? Tolerances are coming way down as we move to the next node, so what used to be acceptable levels of variation are no longer acceptable.

Gottscho: Chamber matching and tool-to-tool matching have been going on for a long time. We've been matching to better than half a nanometer for close to 20 years. But the pain, effort and time you spend to get to that number today is completely intolerable. It's getting harder and harder. We have lots of knobs to tune. The bigger problem is actually one of metrology. Our approach has always been not just to beat down the variability in the unit process, but to break down the unit process—whether that is deposition, etch, or clean. We break down the system by subsystem and look at the inherent variability in each element of the system. Then you can look at that composite set of subsystems, which make up the system, to give you the inherent variability. But you can beat that, too. If you can measure the variation within a subsystem with your tool, and you have a knob to calibrate out variation from one tool to another, you can get a higher degree of mixing. Ultimately, you come to a situation where that's not good enough, because there are non-linearities and crosstalk when you try to segment the system into subsystems, and in the end you end up measuring product wafers. The question is how precisely you can do that with metrology—and how accurately, because you're matching one chamber to another. And then, again, can you tune one system to another system? We do that ultimately by matching the recipes. But that approach is still valid. You don't want to do this compensation until the very end, because if you start mixing in big corrections too early you're just going to narrow the process window. You're going to be canceling one big effect with another. You have to drive it down to the smallest numbers possible, and then, as a method of last resort, go to compensation. The ability to compensate depends on metrology.
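
A crude way to picture the subsystem breakdown Gottscho describes: if each subsystem's contribution to variability is roughly independent, the composite chamber variability combines in quadrature, which is also why calibrating out the largest contributor buys the most. The subsystem names and sigma values below are invented for illustration, not measured Lam data.

```python
# Composing chamber variability from subsystem contributions (quadrature sum).
# Subsystem names and sigma values are hypothetical.
import math

subsystem_sigma = {
    "rf_power":     0.20,   # nm-equivalent contribution of each subsystem
    "gas_flow":     0.15,
    "chuck_temp":   0.30,
    "chamber_wall": 0.10,
}

composite = math.sqrt(sum(s**2 for s in subsystem_sigma.values()))
print(f"composite chamber sigma: {composite:.2f} nm")

# Calibrating out (here, halving) the largest contributor helps the most.
for name in subsystem_sigma:
    reduced = dict(subsystem_sigma, **{name: subsystem_sigma[name] * 0.5})
    new_composite = math.sqrt(sum(s**2 for s in reduced.values()))
    print(f"halving {name:<12s} -> composite {new_composite:.2f} nm")
```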

Wolfling: From a metrology perspective, one of the important things in addition to accuracy is getting the relevant metrology. What is the actual parameter needed to compensate? It's not just about getting an accurate measurement in angstroms. That's not always relevant to compensating for the process variability. And again, it's one of the cases where the process and the metrology are not completely separate. The metrology guys need to understand what the relevant process parameter is, and the process guys need to understand, and to some extent live with, the limitations of the metrology.

Zhang: There is one challenge we need to consider before we implement cross-compensation between tools and processes. We know the impact is not only on a particular design pattern, or even a set of selected patterns, but on all the patterns at the full-chip level, and different patterns respond quite differently to a given tuning knob or control action. Whatever you do up front in mask design and OPC to make sure all of the patterns accurately hit the target critical dimension, those patterns may end up all over the place after cross-compensation. The patterns are no longer in sync at the full-chip level. This is a risk we need to understand and carefully manage. Customers are very cautious about making changes on a particular tool that may impact pattern accuracy across the full chip. We're developing products for full-chip pattern fidelity metrology and control to help address this challenge and enable advanced control solutions.

Dougherty: I very much agree. Especially for logic foundry operations, where you’re dealing with a wide variety of design styles and die sizes, we’re always finding a unique sensitivity for a given product. How do you manage that? They don’t all behave the same way. I think of the tuning question as a double-edged sword. On one hand, we continue to add more knobs to tools, and that’s warranted. But also, just by adding a knob you’re going to add something else that can vary. So yes, it does give you the capability to tune more, but it’s also something you have to keep in the box.

Gottscho: It takes time, as well, and if you’ve got a high mix of different products, you don’t have a lot of time.

Dougherty: Absolutely.

SE: So are we matching tools, or are we matching final structure, or are we moving to co-optimized processing?

Dougherty: It’s all of the above. There really is no single path. You have to start with the fundamentals of matching tools and driving process capabilities to the greatest degree possible. At the end, you’re going to have to do some offset management. There’s no way around that. But then, understanding all of that during the design and development stage is critically important. You need to think about things like run-to-run control. That’s not really something you can stick on top of the technology after it’s done. You have to keep in mind that these are the things that tend to vary, and you need to build that into your known process window during the development stage.

Wolfling: The design needs to take on part of the burden of scaling. As part of this co-optimization, you cannot say that the design is independent and the manufacturing will eventually deal with whatever the designers do. Part of the scaling burden goes back to the designer so that the device can eventually be manufactured. This is not ideal, but it is a fact of life going forward with advanced nodes.

Dougherty: There is some room for that, but at this point the end customer community has been trained to say, ‘This is the design flexibility that I require in order to deliver this process to you.’ There might be a little bit of leverage there, but not that much.

Gottscho: Actually, it's going to go the other way as we improve our control capabilities at the unit-process level, and then the next step is co-optimization across litho and etch. You can get all of your etchers to match one another, but if the incoming material has variability, that matching is of limited value. And if you try to tune this etcher to that scanner, that kills productivity. In the end, you have to do some co-optimization, or at least co-process control, and as we beat down the variability by taking a systematic approach—subsystems, systems, and then co-optimization across unit processes—we'll open up the design constraints. Today, my impression is that foundries give customers fairly conservative design rules because they have to yield devices for those customers even with all the variability in the fab. So if you collapse all of those things down and tighten things up, or provide customers with less conservative design rules, that might be worth a generation or two of shrink.

Dougherty: You're right. Every fab has a waiver review board. A customer comes in with, 'Yeah, I know that's what I'm supposed to design to, but I really need this one feature.' We have to have a business process for handling requests that violate the rules we've already established. At the end of the day, you're expected to yield sufficiently.

Related Stories
The Next 5 Years Of Chip Technology (Part 1)
Scaling logic beyond 5nm; the future of DRAM, 3D NAND and new types of memory; the high cost of too many possible solutions.
Chipmakers Look To New Materials
Silicon will be supplemented by 2D materials to extend Moore's Law.
What’s Next With Computing?
IBM discusses AI, neural nets and quantum computing.
Starting Point Is Changing For Designs
Market-specific needs and rules, availability of IP, and multiple ways to solve problems are having a huge effect on architectures.
3D Neuromorphic Architectures
Why stacking die is getting so much attention in computer science.
What’s Next For Atomic Layer Etch?
Technology begins shipping, but which approaches work best, and where, is still not clear.


