I Just Want Closure!

Tapeout at 20nm and below is becoming interesting, and the checklist is getting longer.


By Jean-Marie Brunet
We all know it by now, but let’s say it one more time for the cameras—the level of complexity of closure at 20 nm and below is considerably higher than for any previous node. While the migration of manufacturing requirements into design started with a few suggested activities at 65 nm, such as recommended rules compliance, lithography checks, and critical area analysis (CAA), jump ahead to 20 nm, and design teams must now incorporate a lengthy list of requirements (Figure 1).

Processes such as double patterning (DP) design, lithography simulation, and fill optimization have added significant complexity to the design team’s workload and put unprecedented pressure on tapeout schedules. Let’s take a closer look at some of the more significant contributors to the signoff challenge at 20 nm and below.

Figure 1. Changes in signoff requirements across nodes.

Design Rule Checking
There are not just a few more, not just a few hundred more, but thousands more design rules when moving from 28 nm to 20 nm (Figure 2). There is an equally substantial increase in the number of rules when moving from 20 nm to 16 nm. And rule complexity is skyrocketing as we add new DRC constraints such as DP compliance and voltage-aware DRC. Designers must understand and incorporate these new requirements and this increased verification complexity while still managing their tapeout timeline. At 20 nm and below, DRC is exponentially more difficult to define and execute, which has real-world implications for IC design teams with limited resources and tight delivery schedules, and for IC companies that need to get these cutting-edge products to market quickly, with high yield, to stay competitive.
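To give a feel for what one of those thousands of rules checks, here is a toy sketch of a single minimum-spacing rule over axis-aligned rectangles. The rectangle format and rule value are invented for illustration; production decks encode thousands of far more complex, context-dependent rules.

```python
def spacing(a, b):
    """Edge-to-edge distance between two axis-aligned rectangles
    given as (x1, y1, x2, y2); zero if they touch or overlap."""
    dx = max(a[0] - b[2], b[0] - a[2], 0)
    dy = max(a[1] - b[3], b[1] - a[3], 0)
    return (dx * dx + dy * dy) ** 0.5

def check_min_spacing(shapes, min_space):
    """Return index pairs of shapes that violate the spacing rule."""
    violations = []
    for i in range(len(shapes)):
        for j in range(i + 1, len(shapes)):
            if spacing(shapes[i], shapes[j]) < min_space:
                violations.append((i, j))
    return violations
```

A real DRC engine replaces the quadratic scan with spatial indexing, and a single 20 nm deck layers thousands of such checks, many conditioned on width, voltage, or neighboring-layer context.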

Figure 2. Design rule growth over nodes. (Source: Mentor Graphics)

Double Patterning
Double patterning is worth its own discussion, because the scope of knowledge required to effectively implement and verify DP-compliant designs is extensive. DP is now required due to the lag in bringing next-generation lithography tools into production. Splitting a layout into two (or more) separate masks relaxes the effective pitch on each mask back to a k1 value that can be imaged with existing lithography tools and optical proximity correction (OPC) solutions (Figure 3).

Figure 3. Solutions currently applied to the k1 challenge.

At 20 nm and below, metal layers must now be designed to be DP-compliant. In fact, DP constraints affect many different stages in the design flow—they influence the design of cells, the metal 1 layer, the back-end layers, placement, and routing (especially power and ground). These constraints must be integrated into the place and route process, as well as the physical and electrical verification processes, to avoid costly redesigns or manufacturing failures. Furthermore, DP errors can be difficult to debug, and (unlike traditional DRC errors) often have multiple solutions, costing design teams precious time and resources (Figure 4).

Figure 4. DP errors can be challenging to debug, and often have multiple solutions that may not be readily apparent. (Source: Mentor Graphics)

Automated implementation during place and route, along with comprehensive DP debugging guidance during verification, are essential to producing DP-compliant designs in an effective and timely manner (Figure 5).
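One way to see why DP errors often admit multiple fixes: decomposition is commonly modeled as two-coloring a conflict graph whose edges connect shapes spaced too closely to share a mask. Any odd cycle in that graph makes a legal assignment impossible, and breaking any one edge of the cycle is a candidate fix. A minimal sketch of that model (the graph formulation is standard; the code is illustrative, not any vendor's implementation):

```python
from collections import deque

def two_color(n, conflicts):
    """Assign each of n polygons to one of two masks by BFS
    two-coloring. conflicts holds (i, j) pairs spaced too closely
    to share a mask. Returns a mask list, or None when an odd
    cycle makes the layout non-DP-compliant."""
    adj = [[] for _ in range(n)]
    for i, j in conflicts:
        adj[i].append(j)
        adj[j].append(i)
    mask = [None] * n
    for start in range(n):
        if mask[start] is not None:
            continue
        mask[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if mask[v] is None:
                    mask[v] = 1 - mask[u]
                    queue.append(v)
                elif mask[v] == mask[u]:
                    return None  # odd cycle: no legal 2-mask split
    return mask
```

A chain of three tightly spaced shapes colors cleanly, while a triangle of conflicts fails—and moving any one of the three shapes resolves it, which is exactly why debugging guidance matters.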

Figure 5. DP debugging assistance is crucial to timely and accurate resolution of DP errors. (Source: Mentor Graphics)

For an in-depth look at the challenges DP is presenting to designers, I suggest David Abercrombie’s excellent series about the impact of DP on advanced node design and verification.

Lithography Simulation
Lithography simulation continues to be a mandatory step for tapeout, albeit an increasingly challenging one. The industry continues to use 193 nm immersion lithography while design teams are drawing 20 nm and 16 nm designs (Figure 6).


Figure 6. Standard cells are shrinking with every node, while the lithographic optical diameter remains the same. (Source: Mentor Graphics)

Many complex reticle enhancement technology (RET) and optical proximity correction (OPC) techniques are now required, but even so, containing process variability is more difficult than ever. Lithography simulation during the design process is now essential to identifying and removing or modifying “litho hotspots” that are likely to cause manufacturing issues. Fortunately, at 20 nm to 16 nm, the lithography simulation run in the design stage as part of the mandatory signoff requirements is becoming manageable in terms of runtime and CPU count.
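A crude intuition for what litho simulation flags can be sketched in one dimension: blur the drawn pattern with a Gaussian standing in for the optical point-spread function, and flag drawn features whose simulated intensity falls below a print threshold. The model, sigma, and threshold below are purely illustrative—nothing like a real resist model:

```python
import math

def aerial_image(mask, sigma=2.0):
    """Toy 1D aerial image: convolve a binary mask pattern with a
    Gaussian kernel approximating the optical point-spread function."""
    radius = int(3 * sigma)
    kernel = [math.exp(-(k * k) / (2 * sigma * sigma))
              for k in range(-radius, radius + 1)]
    norm = sum(kernel)
    kernel = [w / norm for w in kernel]
    image = []
    for i in range(len(mask)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = i + k - radius
            if 0 <= j < len(mask):
                acc += w * mask[j]
        image.append(acc)
    return image

def find_hotspots(mask, threshold=0.3):
    """Flag drawn positions (mask == 1) whose blurred intensity is
    too low to print reliably: crude stand-ins for litho hotspots."""
    image = aerial_image(mask)
    return [i for i, (m, v) in enumerate(zip(mask, image))
            if m == 1 and v < threshold]
```

In this toy model an isolated minimum-size feature loses most of its intensity to the blur and gets flagged, while a wide line prints fine—the same qualitative behavior that drives hotspot fixing in real flows.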

Fill Optimization
The industry reached an inflection point at 20 nm that triggered a shift in fill strategy, from minimizing the amount of fill used to maximizing it. Automated tool capabilities quickly evolved to implement “smart” fill strategies that optimize both the shapes and the placement of this increased amount of fill (Figure 7), but the biggest issues that remain to be solved are runtime and file size.
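The maximization idea can be illustrated with a toy density pass: compute the metal density of each check window, then top every window up toward a target density rather than stopping at a bare legal minimum. The window size, target density, and unit fill area below are invented for illustration, not foundry numbers:

```python
def window_densities(shapes, chip_w, chip_h, win):
    """Per-window metal density for non-overlapping axis-aligned
    rectangles (x1, y1, x2, y2) on one layer."""
    nx, ny = chip_w // win, chip_h // win
    area = [[0.0] * nx for _ in range(ny)]
    for (x1, y1, x2, y2) in shapes:
        for gy in range(ny):
            for gx in range(nx):
                wx1, wy1 = gx * win, gy * win
                ox = max(0, min(x2, wx1 + win) - max(x1, wx1))
                oy = max(0, min(y2, wy1 + win) - max(y1, wy1))
                area[gy][gx] += ox * oy
    return [[a / (win * win) for a in row] for row in area]

def fill_plan(densities, target=0.6, unit=0.01):
    """Unit fill shapes per window needed to reach the target
    density (a maximization strategy tops up every window)."""
    return [[max(0, round((target - d) / unit)) for d in row]
            for row in densities]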

Figure 7. “Smart” fill techniques optimize the fill produced by a maximization strategy. (Source: Mentor Graphics)

With fill maximization, post-fill design files can grow by a factor of 5x. This increased file size is particularly problematic in place and route (P&R). One solution has been to keep fill data in a separate file and point the P&R tool to that file, rather than bringing the fill into the P&R database.

However, at 20 nm and below, changes in fill techniques introduce new design ramifications as well. For example, fill is one component of DP compliance that must be considered when balancing masks, to ensure proper light transmission and etch behavior. Furthermore, with the introduction of FinFET structures, front-end layers are now also affected by fill requirements. This new type of transistor is, by design, non-planar, and its fill requirements involve multi-layer structures, where almost all the fill is placed on the fin grid.
Additionally, because fill is such a significant part of the layout now, it is almost impossible to close a design without extracting production fill first. The insertion rate of fill is such that, for both back-end and front-end layers, the production signoff fill must be visible while closing any part of the design from a timing perspective. Design teams must plan for this extraction step in their process flow schedules.

All in all, design teams must be familiar with new fill requirements and techniques at 20 nm and below, and make the necessary modifications to their process flows to ensure that the appropriate fill methodology and data is incorporated into their design and verification procedures.

Given all of these new and challenging conditions, how will design houses prepare their designs for tapeout closure in a timely and efficient manner? It is becoming obvious that it is nearly impossible to verify designs without referring to signoff rule decks at every stage of the design flow. Validating designs with limited rule decks, or rule decks that have not been qualified, will almost certainly lead to significant rework and verification iterations, given the complexity of signoff requirements at 20 nm and below. Solutions are emerging that will enable designers to lay out, verify, and debug designs using analysis capabilities based on qualified signoff decks. With the confidence of knowing that their results are based on signoff requirements, designers will be able to prepare designs for closure quickly and effectively, even with the myriad of new tapeout requirements at 20 nm and below.
