Design for manufacturing (DFM) refers to actions taken during the physical design stage of IC development to ensure that the design can be accurately manufactured. At larger nodes, most defects in the IC manufacturing process were due to out-of-tolerance process steps (i.e., macro-level variations), or to random particles interrupting the flow of light through a layout mask during a lithography printing step or becoming embedded in a layer of the wafer itself. Process control feedback loops and cleanrooms are effective in controlling these two mechanisms, respectively, and producing high yields.
However, as we moved from the 90nm node through 65nm, 40nm, 32nm, 28nm, 20nm and 16/14nm, the industry expected to transition to EUV lithography to take advantage of its much shorter wavelength. Due to delays in deploying EUV technology, however, the industry is still using light-based steppers with a wavelength of 193nm.
Diffraction effects become significant when light interacts with objects and slits approaching the dimensions of the light's wavelength, and semiconductor manufacturing has moved far beyond that threshold. From 130nm to 65nm, resolution enhancement technologies (RET), including optical proximity correction (also referred to as computational lithography), were able to deal with the distortions caused by diffraction effects. This involves modeling the expected light distortions and making changes to the masks to correct for them, so that the resulting exposure on the wafer matches the design intent. Fortunately for IC designers, the RET step follows tapeout, so it has no impact on the design.
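The idea behind OPC can be illustrated with a toy one-dimensional model. In the sketch below, the "printed" image is the mask convolved with a blur kernel standing in for diffraction, and the mask is iteratively pre-distorted until its blurred image matches the intended pattern. The kernel, step size, and iteration count are invented for illustration; real computational lithography uses full optical and resist models.

```python
# Toy 1D illustration of optical proximity correction. The blur kernel is a
# stand-in for diffraction; all numbers here are hypothetical.

def blur(mask, kernel=(0.2, 0.6, 0.2)):
    """Simple optical model: each mask point bleeds into its neighbors."""
    n = len(mask)
    out = [0.0] * n
    for i in range(n):
        for k, w in enumerate(kernel):
            j = i + k - 1
            if 0 <= j < n:
                out[i] += w * mask[j]
    return out

def opc(target, iterations=50, step=1.0):
    """Iteratively adjust the mask so blur(mask) approaches the target."""
    mask = list(target)
    for _ in range(iterations):
        printed = blur(mask)
        for i in range(len(mask)):
            mask[i] += step * (target[i] - printed[i])  # correct the error
    return mask

target = [0, 0, 1, 1, 1, 0, 0]      # intended line on the wafer
corrected = opc(target)             # pre-distorted mask
printed = blur(corrected)           # what actually prints
max_err = max(abs(p - t) for p, t in zip(printed, target))
```

After correction, the printed image closely matches the intended pattern even though the corrected mask itself no longer looks like the target.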
Unfortunately for IC designers, RET is not able to correct all the problems once you get into the smallest nodes. As a result, IC foundries had to add design rules, and designers needed to make design changes, to eliminate or modify features in the layout that cannot be accurately manufactured. At each node, the DFM rules become more complex, and the range of effects broadens in scope. For example, at 20nm, the rules governing fill shapes routinely placed in a layout to improve the consistency of metal density across the die become far more complex, and the number of fill polygons increases by one or two orders of magnitude. The following sections provide additional detail on specific DFM methodologies.
Litho Hotspot Analysis
Lithography (litho) analysis involves simulating the effects of light diffraction and the impact of variations, like depth of focus and light intensity, on the rendition of intended shapes on the wafer. A litho analysis tool gathers data about how the design will print over a range of conditions, such as variations in dose, focus, mask position and bias, not just at the optimal settings. These variations are referred to as the “process window.” The tool then predicts specific areas of the layout, i.e., shapes or configurations of shapes, that may result in a defect such as a pinched, broken or shorted interconnect wire. The impact of the process window is shown as bands on a graphical depiction of the manufactured layout. The bands show how the features will be rendered on the wafer at different values of the process window variables. Designers can review the hotspots and process variable bands and make improvements to the layout where needed.
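The process-window idea can be sketched as evaluating a printed-width model at every corner of the dose/focus window and flagging any wire whose simulated width pinches below a minimum. The linear model, its coefficients, the window ranges, and the pinch limit below are all invented for illustration; real tools use calibrated lithography models.

```python
# Hypothetical process-window hotspot check. The printed-width model and all
# numeric limits are invented for illustration.

def printed_width(drawn_nm, dose, defocus_nm):
    """Toy model: under-dose and defocus both narrow the printed line."""
    return drawn_nm - 0.3 * abs(defocus_nm) - 40.0 * (1.0 - dose)

def pw_band(drawn_nm, corners):
    """Min/max printed width over all process-window corners (the 'band')."""
    widths = [printed_width(drawn_nm, d, f) for d, f in corners]
    return min(widths), max(widths)

# Assumed process window: dose +/-5%, defocus +/-50nm.
corners = [(d, f) for d in (0.95, 1.0, 1.05) for f in (-50, 0, 50)]

MIN_WIDTH = 20.0  # nm, hypothetical pinch limit
hotspots = []
for name, drawn in [("wireA", 32.0), ("wireB", 45.0)]:
    lo, hi = pw_band(drawn, corners)
    if lo < MIN_WIDTH:          # pinches somewhere in the window
        hotspots.append(name)
```

Here the narrow wire is flagged as a hotspot because its worst-case corner (low dose, maximum defocus) pinches it below the limit, while the wider wire survives the whole window.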
CMP Hotspot Analysis
CMP hotspot analysis looks for areas of the design that have a higher than average probability of experiencing defects due to chemical-mechanical polishing (CMP). Since different materials will exhibit different erosion rates under the CMP process, it is important to maintain a density balance across the die to prevent bumps and dishing that can cause shorts and opens in the metal interconnects. CMP analysis measures various aspects of the layout to ensure even planarity as the chip is built up over multiple layers. Typical measurements include maximum and minimum metal (copper) density, density gradient over a defined window, density variation across the die, and the total perimeter of polygons within a window. Areas that exhibit characteristics outside the established guidelines are identified and can be modified as needed.
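A windowed density measurement of the kind described above can be sketched as follows. The window size and the density limits are assumed for illustration; real foundry rules specify the window dimensions, min/max density, and gradient limits per layer.

```python
# Sketch of a windowed metal-density check (limits are assumed; real rules
# come from the foundry's CMP model for each metal layer).

def window_densities(rects, die, win):
    """Metal density in each win x win window tiling the die.
    rects: list of (x0, y0, x1, y1) metal rectangles."""
    nx, ny = die[0] // win, die[1] // win
    dens = {}
    for wx in range(nx):
        for wy in range(ny):
            x0, y0 = wx * win, wy * win
            area = 0.0
            for rx0, ry0, rx1, ry1 in rects:
                # overlap of the rectangle with this window
                ox = max(0, min(rx1, x0 + win) - max(rx0, x0))
                oy = max(0, min(ry1, y0 + win) - max(ry0, y0))
                area += ox * oy
            dens[(wx, wy)] = area / (win * win)
    return dens

# Hypothetical limits: 20%-80% metal density per window.
rects = [(0, 0, 80, 100)]      # one wide metal slab on a 200x100 die
dens = window_densities(rects, die=(200, 100), win=100)
violations = {w: d for w, d in dens.items() if not 0.2 <= d <= 0.8}
```

The empty right half of this toy die falls below the minimum density, which is exactly the situation that metal fill is used to repair.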
It is important to note that the “fill” procedures normally performed at the end of the design process to add non-functional shapes in any empty (white space) areas of the design were initially aimed at improving the uniformity of metal density across the die in order to help with CMP results. As described under Advanced Fill, at advanced nodes the role of fill has become much more complex.
Critical Area Analysis
Critical Area Analysis (CAA) looks at an IC physical layout to determine if there are areas that might be susceptible to a higher than average rate of defects due to random particles. Defects typically occur when a particle causes a short or open in a chip interconnect. This is more likely to happen where the interconnect wires are close together or are minimum-width wires. CAA performs an analysis to identify so-called “critical areas” based on the spacing and dimensions of layout shapes, and the concentration and size distribution of particles in the cleanroom environment. Designers can then perform modifications, such as taking advantage of white space by spreading wires further apart or widening them, in order to minimize these critical areas.
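For two parallel wires, the short-circuit critical area for a particle of a given diameter grows with how much the particle exceeds the wire spacing, and CAA averages this over the particle size distribution. The sketch below uses the common 1/d^3 size-distribution assumption; the numeric values are illustrative only.

```python
# Sketch of a short-circuit critical-area estimate for two parallel wires,
# assuming a 1/d^3 particle size distribution. All values are illustrative.

def critical_area_short(spacing, length, d):
    """Area in which a particle of diameter d bridges the two wires."""
    return length * (d - spacing) if d > spacing else 0.0

def avg_critical_area(spacing, length, d_min=0.1, d_max=5.0, steps=1000):
    """Average critical area weighted by the size distribution f(d) ~ 1/d^3."""
    dd = (d_max - d_min) / steps
    norm = total = 0.0
    for i in range(steps):
        d = d_min + (i + 0.5) * dd      # midpoint of each size bin
        f = 1.0 / d ** 3
        norm += f * dd
        total += f * critical_area_short(spacing, length, d) * dd
    return total / norm

# Spreading wires apart shrinks the critical area, improving expected yield:
tight = avg_critical_area(spacing=0.2, length=100.0)
relaxed = avg_critical_area(spacing=0.6, length=100.0)
```

Comparing the two results shows quantitatively why spreading wires into available white space reduces the defect-sensitive area.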
At smaller nodes, a significant number of defects result from poor via formation due to bubbles accumulating at via stress points, or due to a random particle at a via location. These issues can result in open connections or connections with excessive resistance. In addition, via transitions with insufficient overlap with the connecting wire can also contribute significantly to yield loss. A simple solution is to place two vias at each transition, yet doubling every via leads to other yield-related problems and has an impact on design size. In addition, changing vias can result in a need to reroute the design and can create DRC violations.
Special DFM tools can identify via transitions and make recommendations in the context of the specific layout as to where second via insertions are needed, and where they can be added without increasing area by taking advantage of white space. They can also minimize parasitic impact by proper orientation of the via, i.e., in line with the existing interconnect wire. Such tools can also expand the overlap of metal over the via to improve connectivity and reduce the potential for defects.
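A much-simplified version of this redundant-via insertion can be sketched on a grid: for each single via, try to add a second via adjacent to it, oriented along the wire, but only where white space is available. The grid representation and data structures are invented for illustration; real tools work on the routed layout with full DRC awareness.

```python
# Hypothetical sketch of redundant-via insertion on a grid layout.

def double_vias(vias, occupied, wire_dir):
    """vias: list of (x, y) single-via sites; occupied: set of blocked grid
    sites; wire_dir: (dx, dy) unit step along the interconnect direction."""
    added = []
    taken = set(occupied) | set(vias)
    for x, y in vias:
        for sign in (1, -1):             # try both sides along the wire
            cand = (x + sign * wire_dir[0], y + sign * wire_dir[1])
            if cand not in taken:        # white space available here
                added.append(cand)
                taken.add(cand)
                break                    # one redundant via is enough
    return added

vias = [(2, 0), (5, 0)]
occupied = {(3, 0)}                      # this white space is already used
extra = double_vias(vias, occupied, wire_dir=(1, 0))
```

Orienting the second via along the wire, as the loop above does, mirrors the parasitic-minimizing placement described in the text: the redundant via lands in line with the existing interconnect rather than forcing a reroute.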
Critical Feature Analysis
Critical Feature Analysis (CFA) looks for specific shapes or groups of shapes that are known to be difficult to render on the wafer, and therefore have a higher probability of causing defects in the IC. These features are typically identified through simulation (as in the litho hotspot analysis described here), or through physical failure analysis on test chips or defective parts. There are a number of ways to specify and identify these features. Designers can write traditional design rules that check for them during physical verification. However, it is quite difficult, and in some cases impossible, to define the most complex critical features using the table-based approaches in standard design rule checking (DRC) tools. If the tool supports it, another approach is to write an equation that describes the characteristics of the troublesome features. The checking tool then identifies all features where the design rule equation falls outside specified limits. Although this is useful for some categories of critical features, it cannot address features that do not lend themselves to an algorithmic description. For this reason, a new approach has been developed, which is based on pattern recognition (see Pattern Matching).
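An equation-based check of the kind mentioned above might look like the following sketch, where a continuous relationship between wire width and required spacing replaces a lookup table of discrete values. The rule, coefficients, and feature data are hypothetical.

```python
# Illustrative equation-based critical-feature check (hypothetical rule):
# required spacing grows continuously with wire width, a relationship a
# discrete width/spacing lookup table cannot fully capture.

def eq_rule(width, spacing):
    """Hypothetical continuous rule: spacing must scale with wire width."""
    required = 0.05 + 0.5 * width    # linear relation, not a table lookup
    return spacing >= required

features = [
    {"id": "f1", "width": 0.10, "spacing": 0.12},
    {"id": "f2", "width": 0.20, "spacing": 0.12},
]
flagged = [f["id"] for f in features if not eq_rule(f["width"], f["spacing"])]
```

The wider wire is flagged even though both features share the same spacing, which is precisely the kind of width-dependent condition that equation-based checking exists to express.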
Pattern Matching
As described under Critical Feature Analysis, some critical features cannot be described in a simple way—they are just too complex. To help designers find these complex features, physical verification and DFM tools now employ pattern matching techniques. That is, a designer can simply copy a particular feature (portion of a layout) that has been identified as a problem and place it into a pattern library. The DFM tool can then search an entire layout for all occurrences of patterns in the library. Of course, the tool must be intelligent enough to account for different orientations and placements of the pattern, and even small variations in dimensions of the pattern, as specified by the designer. Once the problem features are identified, designers can take appropriate action to modify or eliminate them. This topic is treated in greater detail in a separate Knowledge Center entry under Pattern Matching.
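A minimal version of this search can be sketched on a rasterized (grid) layout: look for a library pattern at every position, in all eight orientations (four rotations plus mirrors). Real tools operate on polygon geometry and allow dimensional tolerance; this grid form is an illustrative simplification.

```python
# Minimal sketch of layout pattern matching on a rasterized layout.

def orientations(pat):
    """All 4 rotations of a pattern plus their mirror images (8 total)."""
    results, p = [], pat
    for _ in range(4):
        p = [list(row) for row in zip(*p[::-1])]   # rotate 90 degrees
        results.append(p)
        results.append([row[::-1] for row in p])   # mirror image
    return results

def find_pattern(layout, pat):
    """Return top-left (x, y) coordinates of matches in any orientation."""
    hits = set()
    for p in orientations(pat):
        ph, pw = len(p), len(p[0])
        for y in range(len(layout) - ph + 1):
            for x in range(len(layout[0]) - pw + 1):
                if all(layout[y + i][x + j] == p[i][j]
                       for i in range(ph) for j in range(pw)):
                    hits.add((x, y))
    return sorted(hits)

layout = [
    [0, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
pattern = [[1, 0],
           [1, 1]]          # an L-shaped problem feature from the library
hits = find_pattern(layout, pattern)
```

Because every orientation is generated from the stored pattern, a single library entry catches the feature however it is placed in the layout.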
It is apparent from the preceding discussion that DFM is not a single technique or tool; it is really about incorporating a variety of manufacturing issues and constraints into the design process itself. DFM practices are ultimately focused on improving yield and preventing defects that could surface after a product has been delivered to a customer. Unfortunately, they also mean more work, which means more engineering cost and delayed time to market, a classic engineering tradeoff. Designers need a way to tell when “enough is enough.” That is the purpose of DFM Scoring, a methodology that uses a set of measurements of the physical layout to determine whether the design is good enough to realize an acceptable yield, or whether it is a good investment to spend additional time and effort addressing the most critical DFM issues remaining in the design in order to improve yield. These methodologies tend to be foundry driven, because they are based on the design rules specified by the foundry, which in turn are based on the foundry's experience and data from test chips and production chips at each node.
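In spirit, a scoring scheme combines per-check measurements into a single figure of merit compared against a sign-off threshold. The sketch below is a hypothetical illustration: the checks, weights, and threshold are invented, whereas real scoring models are derived from foundry yield data.

```python
# Hypothetical DFM scoring sketch. Weights and the sign-off threshold are
# invented; real models come from foundry test-chip and production data.

WEIGHTS = {                       # assumed relative yield impact per check
    "litho_hotspots": 5.0,
    "cmp_hotspots": 3.0,
    "critical_area_um2": 0.1,
    "single_vias": 0.5,
}

def dfm_score(measurements, sign_off=100.0):
    """Lower is better; a score under the threshold means 'good enough'."""
    score = sum(WEIGHTS[k] * v for k, v in measurements.items())
    return score, score <= sign_off

before = {"litho_hotspots": 12, "cmp_hotspots": 4,
          "critical_area_um2": 300, "single_vias": 40}
after = {"litho_hotspots": 2, "cmp_hotspots": 1,
         "critical_area_um2": 250, "single_vias": 10}

s1, ok1 = dfm_score(before)   # over threshold: keep fixing DFM issues
s2, ok2 = dfm_score(after)    # under threshold: sign off
```

The two evaluations capture the tradeoff described above: DFM cleanup continues only while the score says the remaining issues are worth the engineering cost.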
Original page contents provided by Mentor Graphics