Chasing Rabbits

Keeping up with double-patterning rule changes isn’t easy, but it is possible.

“Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!”
—Lewis Carroll, Through the Looking Glass

By David Abercrombie
As I discussed in my previous article, the use of stitching can greatly reduce the number of double patterning (DP) decomposition violations that a designer has to resolve. However, stitching also adds significant complexity: the decomposition tool must process many additional design rules to generate legal stitches and know how to use them properly during coloring. Figure 1 illustrates a few of these rules.

Figure 1: Rules constraining stitch insertion during decomposition.

The initial challenges in automating stitch generation and layer decomposition are: 1) capturing these constraints within the syntax of the tool language, and 2) adhering to the complex combination of rules. Figure 2 shows how the various rules determine where a valid stitch candidate can be generated. The separators themselves rule out locations where stitches cannot be placed due to spacing constraints (because a stitch contains both colors, it always “violates” a spacing constraint between the two colors). Minimum area constraints mean you can’t place stitches too near the end of a line. And certain types of corner stitches, such as the elbow in this example, are not allowed. Once you’ve applied all of those constraints, you are left with the potential stitch locations. Even then, one of those locations must be discarded, because a stitch must provide a required minimum overlap length between the two masks. In the end, only two valid stitch candidates remain.

Figure 2: How design rules affect the determination of valid stitch candidates.
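
To make the interplay of these rules concrete, here is a minimal one-dimensional sketch of that filtering in Python. The wire is modeled as an interval along its length, each rule contributes exclusion zones, and every name and rule value is purely illustrative; nothing here is real deck syntax.

    def stitch_candidates(wire_len, exclusions, min_overlap):
        """Return intervals along a wire where a legal stitch fits."""
        allowed = [(0.0, wire_len)]  # start with the whole wire allowed
        for ex_start, ex_end in exclusions:
            next_allowed = []
            for start, end in allowed:
                # Keep the pieces of each allowed interval that fall
                # outside the exclusion zone.
                if ex_start > start:
                    next_allowed.append((start, min(ex_start, end)))
                if ex_end < end:
                    next_allowed.append((max(ex_end, start), end))
            allowed = next_allowed
        # A surviving interval is a candidate only if the required
        # minimum stitch overlap still fits inside it.
        return [(s, e) for s, e in allowed if e - s >= min_overlap]

    # Example: a 500 nm wire with exclusion zones near a separator and
    # near both line ends; only two candidate intervals survive.
    print(stitch_candidates(500.0, [(0.0, 60.0), (200.0, 280.0), (440.0, 500.0)], 50.0))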

Even more complexity arises when the creation of a stitch introduces new separator requirements. Think of it as a “chicken and egg” problem: you need to know what separators exist to know where a stitch can go, but adding a stitch can introduce new separators. As shown in Figure 3, one stitch can invalidate another stitch, or introduce color separation requirements between shapes that previously had none.

Figure 3: How the insertion of stitches can introduce new separator constraints.
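
One common way to handle this circularity is to re-evaluate the constraints after each stitch is committed. The toy Python sketch below does exactly that with a greedy pass; the conflicts_with() predicate is a hypothetical stand-in for a real separator check, not the tool's actual algorithm.

    def select_stitches(candidates, conflicts_with):
        """Greedily accept stitch candidates, re-checking after each one."""
        accepted = []
        for cand in candidates:
            # A candidate that was legal in isolation may no longer be
            # legal once earlier stitches have added separators near it.
            if not any(conflicts_with(cand, prev) for prev in accepted):
                accepted.append(cand)
        return accepted

    # Example with 1-D positions and a hypothetical 100 nm minimum
    # stitch-to-stitch spacing standing in for the induced separator.
    too_close = lambda a, b: abs(a - b) < 100.0
    print(select_stitches([60.0, 120.0, 300.0], too_close))  # -> [60.0, 300.0]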

Despite these immense complexities, it has so far been possible to produce automated tool functionality that captures and applies all of these rules to produce successful layer decompositions. However, capturing these rules in a tool requires creating specific syntax in the programming language for each rule type. Although it is easy for a user to modify specific aspects of a given rule type, and to add multiple versions of each rule type without affecting tool capability, new rule types require tool enhancement. The biggest problem with DP in the industry right now is the rapid rate of rule changes while new processes are still in the early stages of development and maturation. This rate of change delays the implementation of new rules into a PDK, because engineers must wait for the vendor to enhance the tool and release a new production version.
For this reason, a new approach to implementing these capabilities in the tool is required, one that allows more flexibility. In a nutshell, the requirements for a good decomposition algorithm are:

  1. Capture all DRC rules (constraints) associated with the generation of a legal stitch;
  2. Decompose the layout and determine the odd cycles;
  3. Generate a stitch wherever needed to break odd cycles;
  4. Minimize the number of stitches needed to eliminate conflicts; and
  5. Highlight remaining odd-cycle violations where a legal stitch was not possible (a minimal sketch of steps 2 and 5 follows this list).

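Requirements 2 and 5 map naturally onto graph two-coloring. As a hedged illustration only (not the actual tool algorithm), the Python sketch below models the layout as a conflict graph whose nodes are polygons and whose edges are separator constraints, using the networkx library: a legal two-mask assignment exists exactly when the graph is bipartite, and the odd cycles in a cycle basis are the violations to highlight or to break with stitches.

    import networkx as nx
    from networkx.algorithms import bipartite

    def decompose(conflict_graph):
        """Two-color the conflict graph; report odd cycles on failure."""
        try:
            # bipartite.color assigns 0/1 (mask A/mask B) to each polygon.
            return bipartite.color(conflict_graph), []
        except nx.NetworkXError:
            # A graph is bipartite iff every cycle in a cycle basis is
            # even, so the odd members of the basis are the violations
            # to highlight (or to break with stitches, where possible).
            odd = [c for c in nx.cycle_basis(conflict_graph) if len(c) % 2 == 1]
            return None, odd

    # Example: three mutually spaced polygons form the classic odd cycle.
    g = nx.Graph([("p1", "p2"), ("p2", "p3"), ("p3", "p1")])
    print(decompose(g))  # -> (None, [[...the three-polygon odd cycle...]])
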
The traditional EDA approach is to combine all of the above requirements into a single operation that generates the stitches and colors the layout. To make this method work, all DRC rule types must be coded into the core of the tool. If a new rule type is added, a change in the core code is mandatory, creating a delay in the tool development and release cycle.

Particularly in the early years of process development, it is not reasonable to assume that tool development can always outpace the adoption of new design rule types. For this reason, we can separate the process into two steps, as shown in Figure 4. In the first step, all the regions where a stitch can be placed while honoring all the design rules are marked. In the second step, the layout is decomposed using only those stitches that will resolve DP conflicts.

Figure 4: The new two-step approach to decomposition with stitches.
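
As a structural sketch only (every function here is a hypothetical stub, not the tool's real API), the two steps can be pictured as two separate operations with ordinary layer data flowing between them:

    def mark_legal_stitch_regions(locations, is_legal):
        # Step 1: apply every stitch-related constraint and keep only
        # the locations where a stitch can legally be placed.
        return [loc for loc in locations if is_legal(loc)]

    def decompose_with_stitches(layout, stitch_regions):
        # Step 2: color the layout, consuming a stitch from the marked
        # regions only where one breaks an odd cycle (stubbed here).
        return {"layout": layout, "stitches_available": stitch_regions}

    # Because step 1's output is ordinary layer data, deck code can
    # inspect, filter, or extend it before step 2 ever runs.
    regions = mark_legal_stitch_regions([60.0, 120.0, 300.0], lambda x: x >= 100.0)
    print(decompose_with_stitches("toy_layout", regions))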

The benefit of breaking this process into two steps is the opportunity to insert custom layer derivation operations within the coding of the decomposition function. These custom code insertion points enable the deck writer to generate additional stitch candidates (or filter existing ones) to meet new rules that must be considered but are not yet natively captured in the tool syntax. Figure 5 illustrates this new flexibility.

Figure 5: Customization opportunities available in a two-step flow.

In this customized flow, we can now define custom constraint layers that steer the stitch candidate generation before the candidates are created. After stitches are generated, we can filter out candidates or derive additional new ones to accommodate new rules. Before the decomposition step, we can also define customized separators (if necessary) and pass them to the splitting step. Finally, after the decomposition is run, we can modify the generated colors (if necessary) to meet new rule requirements, as sketched below.
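
Put together, the customized flow behaves like a pipeline with four optional hooks. In this Python sketch, every native stage is reduced to a trivial stub and all names are hypothetical placeholders; only the hook points and their ordering reflect the flow described above.

    def decompose_with_hooks(layout,
                             pre_constraints=None,    # hook 1
                             adjust_candidates=None,  # hook 2
                             extra_separators=None,   # hook 3
                             post_colors=None):       # hook 4
        # Native stages are stubbed as simple list/dict operations.
        constraints = list(layout["constraints"])
        if pre_constraints:
            # Hook 1: custom constraint layers steering candidate generation.
            constraints = pre_constraints(constraints)

        candidates = [s for s in layout["stitches"] if s not in constraints]
        if adjust_candidates:
            # Hook 2: filter candidates, or derive new ones, for rules
            # not yet natively captured in the tool syntax.
            candidates = adjust_candidates(candidates)

        separators = list(layout["separators"])
        if extra_separators:
            # Hook 3: customized separators passed to the splitting step.
            separators += extra_separators(layout)

        colors = {"stitches": candidates, "separators": separators}
        if post_colors:
            # Hook 4: modify the generated colors to meet new requirements.
            colors = post_colors(colors)
        return colors

    # Toy usage: drop one stitch candidate via hook 2.
    toy = {"constraints": ["c1"], "stitches": ["s1", "s2"], "separators": ["p1"]}
    print(decompose_with_hooks(toy, adjust_candidates=lambda cs: cs[:1]))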

The benefit of this flow is the ability to accommodate the high rate of rule changes early in process development much more effectively, while allowing sufficient time to build and test new native support functionality in the tool. As the native functionality becomes available, the customization code can be removed. This also enables the foundry to qualify a decomposition solution without having to wait for final tool development. Overall, a mutually beneficial situation is created: rapid response time for the customer and appropriate tool development time for the EDA vendor.
This flow has already been used on 20nm technology node layers with excellent results. For end users applying the PDK deck to cell and chip design, the experience has been the following:

  1. There is no noticeable difference in execution or output visible to the designer, regardless of how much customization (versus native functionality) the deck uses.
  2. The decomposition still takes place automatically.
  3. Any odd-cycle or anchor-path violations that are not fixed by stitches are displayed normally.
  4. Mask coloring output layers are generated normally.

No matter how fast we have to run to keep up with the development of DP design rules and checks, the two-step decomposition flow can keep us one step ahead.
In my next blog, we’ll start looking ahead to the multi-patterning challenges and solutions coming with 10nm process technologies.


