Experts At The Table: Yield Issues

First of three parts: Litho qualification; differences between memory and logic; intrinsic defects; low k; modeling; process variation; why some companies get it right and others don’t.


By Ed Sperling
Semiconductor Manufacturing & Design sat down to discuss yield with Amiad Conley, technology marketing manager for yield and process control at Applied Materials; Cyrus Tabery, senior member of the GlobalFoundries technical staff for lithography development and DFM; Brady Benware, engineering manager for diagnosis and yield at Mentor Graphics; and Ankush Oberai, general manager of the Fab Analysis Business Unit at Magma Design Automation. What follows are excerpts of that conversation.

SMD: What issues are we facing with yield?
Conley: The biggest issue is litho qualification. When a new product comes into a fab there is a prototyping stage. Then it moves to a pilot line and then to manufacturing. The process variation is not seen at the prototype step, though. At 32nm and below there are intricate process and layout interactions that cause failures of specific geometries that are not foreseen by the simulation tools. Production fabs are dealing with what they call hot spots. It’s not new—we’ve been hearing about hot spots for the past couple of years. But there are more and more of them. We need a way to systematically feed this information back to the design tools. We’re not there today. If we can get constraints into the design tools so that future products avoid these problems, it will be a big win.
Benware: Yield is a broad topic and people have been addressing it for some time, but there are also some things happening now requiring new techniques. We’re seeing the irregularity in logic creating new failure mechanisms. That affects the kinds of tests that must be done and the tools that are needed. In the past it was sufficient to ramp yields through characterization vehicles or memory structures on SoCs. Logic is the big problem now. People don’t have all of the tools, the flows and the methodologies in place today to deal with this.
Oberai: The biggest challenge is the surge in intrinsic defects. We’ve been doing yield management in the fabs for about 16 years. We’ve gone from physical issues to systematic issues. There are still physical defects, but there are others you can’t see because they’re more electrical in nature. The other challenge involves simulation, which has reached its limit. A lot of that stuff doesn’t work anymore. You have to get down to the tool level, where integrating the analysis is essential to get a rapid feedback loop on design variations. You can’t wait until you’ve passed several steps, because a lot of those things are going to get buried and you won’t be able to detect them.
Tabery: The newest challenge is the very low k1 factors in lithography. We’ve pushed the limit on this, and the real key to that is simulation. [Ankush Oberai] said we can’t catch this with simulation. That’s wrong, because atoms are fundamentally amenable to simulation. We have to simulate everything and measure everything. At all levels of abstraction and design we’ve built great tools for simulation. Further synthesizing the design, taking metrology data out of the fab and taking simulation data into the fab are the real enablers for making these very low k1 factors actually work. At 32/28nm and 22nm, the challenge is orders of magnitude higher than at the k1 factor of roughly 0.5 we’ve traditionally had. That leads to new design sensitivities, such as these hot spots that we need to find. We can’t just scan one wafer and find them, and if it’s a part-per-hundred or even a part-per-million flaw then we’ve got a dead chip, because we’ve got a billion transistors or more. If it’s a memory cell and there’s redundancy, then we need a different kind of reaction.
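Two of Tabery’s points can be put in rough numbers. The k1 factor he refers to comes from the Rayleigh resolution criterion, half-pitch = k1 × λ / NA, and his part-per-million argument follows from simple survival probability across a billion transistors. The sketch below uses illustrative values (193nm immersion lithography, NA = 1.35, a 45nm half-pitch), which are assumptions for the example, not figures from the discussion.

```python
import math

# Rayleigh criterion: half_pitch = k1 * wavelength / NA, so the k1 factor
# actually achieved by a process is k1 = half_pitch * NA / wavelength.
def k1_factor(half_pitch_nm, na, wavelength_nm=193.0):
    return half_pitch_nm * na / wavelength_nm

# Illustrative: 193nm immersion litho (NA = 1.35) printing a 45nm half-pitch
# lands well below the "traditional" k1 of roughly 0.5.
k1 = k1_factor(45.0, 1.35)
print(round(k1, 2))  # ~0.31

# Why a part-per-million systematic flaw means a dead chip: with a
# per-transistor failure probability p and N transistors, the chance the
# whole chip survives is (1 - p)**N, which is approximately exp(-p * N).
def chip_survival(p_per_transistor, n_transistors):
    return math.exp(-p_per_transistor * n_transistors)

print(chip_survival(1e-6, 1e9))  # exp(-1000): effectively zero yield
print(chip_survival(1e-9, 1e9))  # even part-per-billion leaves only ~37%
```

The second calculation is why Tabery distinguishes logic from memory: a redundant memory cell can absorb a failure that would kill a logic path outright.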

SMD: Everyone is pushing for a higher level of abstraction, but we also need to go deeper and more granular. What’s the right approach for these very complex chips?
Tabery: We need to abstract at all levels carefully, but that doesn’t mean we can abandon these other levels of abstraction. We measure everything and model everything. We need to model system-level power, gate oxides, etch transfers and line-edge roughness—everything has to be modeled. That will allow the high levels of abstraction to be successful. The abstraction works because the whole system is coherent and modeled throughout.
Oberai: The other thing that is critical is to correlate the different models to each other. We’re bringing in the different aspects of the design with the different aspects of defects, looking at the behavior from one step to the next and seeing how the defect propagates. Keeping track of that, and keeping a correlation of electrical and physical effects, has allowed us to determine what really causes the killer defects. That goes back into the design phase. What companies like Mentor and Synopsys have been able to do is create diagnostic tools that determine the root cause and correlate it with defects at different stages.
Benware: The lithography, the manufacturing process, the OPC, the CMP—all of this has to be considered in the design. What we’re seeing is that’s being done with different levels of aggressiveness. Different companies are approaching this from different perspectives, and that’s introducing new variables into the fabless model. All of these fabless companies are doing DFM to different levels. What we’ve seen, from the bleeding edge to the laggards, is that some companies have done more and gotten lucky. They were aggressive on the things that mattered at a particular node. Others weren’t so lucky, and they were aggressive on aspects that didn’t turn out to be so process-sensitive. You still need a good strategy, and more techniques, for dealing with failures after you have a design in place. What we’re also seeing is that companies following the bleeding-edge companies are no longer able to take advantage of all the learning that’s gone on. Fabless companies that have been pretty hands-off about yield learning are reluctantly taking a more active role driving yields on their designs.

SMD: How much is DFM invading the equipment space?
Conley: Our objective is to increase the role of design by enabling a unique CAD-based system for the fab. This is what we’re trying to do with Magma right now. We have a unique CAD framework that works no matter which supplier is there—whether it’s Mentor, Synopsys or Cadence. With the different inspection tools we will be able to find these hot spots and feed them back to the designers for future products and enhancements.
Tabery: Do you have a vision of integrating the design-based metrology CD-SEM (critical dimension-scanning electron microscopy) world and design-based inspection? I’m not sure how linked the inspection setup is to the design. But then you have the data analysis coming to fruition. I view these as three separate areas. What’s the likelihood of integrating those over the next one or two years? Over the next five years it has to be done. There’s no choice.
Conley: It’s a process. It will not be done at once. But it will be integrated.
Oberai: We have a Knights database that runs across all the EDA tools. We are creating a database of all elements of design as well as the fab data. KDB will be able to exchange the defect information with the CD-SEM information and the review information.
Tabery: The Knights database has been used for failure analysis for a long time. Now they’re trying to integrate with the in-fab tooling.

SMD: How much of an issue is process variation and how do we deal with it?
Conley: Process variation does play a role. Yield is a combination of the process window with the process variation. If you have a wide process window then you are safe. If you have a narrow process window and process variation you are prone to failures. This relationship is becoming more difficult to manage in the fabs. This is why there are many more locations in the die where the process window is narrow and cannot be predicted. They might be narrow only on a certain lithography stepper. They can be manifested because of the tools in the fab. Some kind of interaction between a local layout and a specific trend is something even the fab doesn’t understand. They get results for this stepper and this product, and at the end of the process they can pinpoint it to a certain geometry. But if we can find this geometry early in the process then we can monitor it more closely and feed it back to design.
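Conley’s framing—yield as the combination of process window and process variation—can be sketched as the probability that a normally distributed process parameter (focus, dose, CD) lands inside the window. The numbers below are assumed for illustration only; real windows are multidimensional and layout-dependent.

```python
import math

# Probability that a zero-mean Gaussian process parameter with standard
# deviation sigma lands inside a symmetric window of half-width w:
# P(|x| <= w) = erf(w / (sigma * sqrt(2))).
def window_yield(window_halfwidth, sigma):
    return math.erf(window_halfwidth / (sigma * math.sqrt(2.0)))

# Same process variation (sigma = 1.0, arbitrary units) against a wide
# window vs. a narrow hot-spot window.
wide = window_yield(window_halfwidth=3.0, sigma=1.0)    # ~0.997: safe
narrow = window_yield(window_halfwidth=1.0, sigma=1.0)  # ~0.683: failure-prone
print(round(wide, 4), round(narrow, 4))
```

This is why a geometry whose window is narrow only on a certain stepper is so hard to predict: the same variation that is harmless everywhere else becomes a dominant yield loss at that one location.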
