Second of three parts: Problems without solutions; tracking the causes of hot spots; defining acceptable yield; economic considerations beyond yield.
By Ed Sperling
Semiconductor Manufacturing & Design sat down to discuss yield with Amiad Conley, technology marketing manager for yield and process control at Applied Materials; Cyrus Tabery, senior member of the GlobalFoundries technical staff for lithography development and DFM; Brady Benware, engineering manager for diagnosis and yield at Mentor Graphics; and Ankush Oberai, general manager of the Fab Analysis Business Unit at Magma Design Automation. What follows are excerpts of that conversation.
SMD: Does the information that comes back from the tools in the fab help to pinpoint problems in a design?
Conley: Yes, we can find these locations from the first blocks in the line and fabs can monitor them more closely.
Oberai: Toshiba has shown how process variation happens at different tool levels. They’ve shown how the process window varies across different tool clusters, and they had to go back and tweak the tools. One of the key elements is pinning design defects on variation. We’ve been able to improve tool efficiency by big numbers. This is more about time to money than time to market: how quickly can you get your prototype working?
Benware: The variability and the irregular structures we’re dealing with in the logic and the routing at the back end of the design are producing systematic defects and hot spots that make it very difficult to manufacture. We continue to see technology that helps with inspection even as we get to very small geometries. Design-based inspection is very viable. Inspection has always been there for the fabs. We continue to see that as a tool in the arsenal because of the technology advances. But there are other trends that don’t have a solution today. This variability is creating timing failures, power failures and things that aren’t defects.
Oberai: Yes, these are intrinsic things.
Benware: Intrinsic and parametric. And it’s not something that can easily be identified in the manufacturing line. If you can’t catch it in the fab and do a short-loop correction, then you’re waiting for test, where you have to first understand the problem and then identify what’s wrong. That’s a real problem because there’s nothing to see—it’s all electrical. You really need post-test tools in the back end that allow you to identify what the problems are and what the root causes are. Realistically you’re not going to see a defect.
Oberai: We have the luxury of hierarchical levels in design. In silicon everything is flat. There are techniques to look at layers separately, but the behavior you see is the composition of the different steps together: what really happens. You’re targeting chemicals and vias connecting different layers, and it’s very hard to simulate that in the pre-silicon phase. The design now has to play a role within the silicon itself. You have to be able to correlate the killer defects vs. the nuisance defects.
SMD: Is the notion of what’s acceptable yield changing from one geometry to the next?
Tabery: This all has to be cost-effective. It’s fundamental that we have a model for the process variability. You have to model everything: a complete model for dose variability induced by hot plates in post-exposure bake, a model for reticle variability from placement. Everything needs to be very deeply understood and modeled, and then we come up with a variability budget. It’s the same thing for transistor variations we don’t see as hard failures. A mask error combined with some aberration in stepper focus is something we see and can model with lithography simulation. But the tricky ones are things like global chip-scale leakage. It’s not a killer. The chip is a little warm, and you can still sell it for a quarter of what you could have gotten if it hit the same frequency spec with less power. That’s not a very good business, though. So whether you’re talking about process variability and its effect on yield, or yield in general, it has to be cost-effective or no one is going to buy the chip. If you introduce new yield-loss mechanisms with a new process and a shrunk design that’s more sensitive, that’s a problem. If you make the chip smaller and your yield stays about the same, that’s very good. It has to yield the same.
Oberai: The cost factor is critical. Most of the advancements have taken place in pre-silicon and the early part of the silicon process. The back-end has not made a lot of advancements, especially packaging. If packaging were to make advancements you could have less dense chips—providing you could couple them together. The cost of processing would be cheaper, too.
Tabery: We’ve talked about the cost per function of having a CPU and a GPU on one chip. But the process and design for a GPU are different than a CPU. That’s expensive. So 3D integration will buy us a couple of nodes, which we really need. It’s a more cost-effective, highly integrated system in a package.
SMD: Are you talking about 2.5D or 3D stacking?
Tabery: We’ll do chips with interposers first. That adds a lot of value.
SMD: From the equipment and design perspective, does the definition of acceptable yield change as we move forward to new geometries?
Benware: It comes down to profitability. You may see more creative ways of defining yield and identifying profitability. 3D packaging is one area where people will look at yields differently. That technology introduces its own challenges of yield, such as known good die. It won’t be just the traditional metrics.
Oberai: The way fabless companies are buying models from foundries is changing. One company just changed its model from good die to wafer-level. Customers are willing to take the risk of acceptable yield models. People are getting to the point where time to market and time to money take precedence. The tools don’t exist for some of the nodes today. There are no tools for 18nm.
SMD: The alternative is more constraints and restrictive design rules, right?
Tabery: It’s not a good trend, and there are things we can do to reduce the impact. But there is value in restricting the design space. If we can minimize the number of things we have to think about, qualify, simulate and measure, like OPC, there is big value. We’ve done restrictive design rules in the past; some companies have done more than others. If we’re asking our customers to absorb added cost because working with these design rules is so slow, or designers find that they move an edge, it doesn’t work, and they give up and go to lunch, that’s not a good solution. Standard cells help, but it’s not enough for 20nm.
SMD: DFM has broken down the barriers between design and manufacturing. How does it affect the design itself?
Benware: From an EDA perspective, the number of design rules is increasing, and so is their complexity. The failure mechanisms are no longer 2D rules. They’re very complex because 2D geometries don’t model them well. You can’t write a 2D geometric rule to emulate what’s going to happen in lithography, so we’re trying to do fast litho simulation, for example. You see the litho simulations, CMP simulations, and pattern matching out of DRC analysis: here’s a pattern that is problematic, now see what else is close to it. The new tools are trying to combat that. And then there’s a question of how restrictive you can be. The foundries want to be as restrictive as possible.
Tabery: And still be cost-effective.
Benware: Yes. But if you can get your design out in half the time then it comes down to a question of time to market vs. yield. For a lot of these cell phone companies their ramps peak and then go straight back down. The total amount of time they’re producing a chip is very short. Their time to market is the most important.