Can Mask Data Prep Tools Manage Data Glut?

With design post-OPC data file sizes reaching hundreds of gigabytes at 28nm, mask data prep tools handle the complexity for yet another node.


By Ann Steffora Mutschler
The trend toward smaller critical dimensions has in turn increased design file sizes, especially with the addition of optical proximity correction (OPC) steps. This extra data translates into a bigger processing burden downstream in the flow, on the way to the mask writer.

At 28nm, post-OPC design data files reach hundreds of gigabytes. With 20nm and below adding still more correction complexity, engineering teams are facing terabyte-size data files.

There’s a bright side to this. As Steffen Schulze, marketing director for Calibre at Mentor Graphics, pointed out, the distinctive thing about 20nm is that it is one more node that will be accomplished with 193nm lithography. That means existing tools will be stretched for another node, as well.

However, from a data flow perspective, the node in general continues the growth in content: a large system-on-chip exploits the entire field and uses the shrink to incorporate yet more content, he said. This traditionally has driven up the data volume of the files that need to be moved, and it impacts data processing as well as mask making.

Also, in order to stay with 193nm lithography, double patterning solutions are now deployed in manufacturing. “That means that a single layer in the silicon stack is accomplished by splitting up the process into two masks, two processing sequences, and each of those masks is not necessarily getting simpler because the wavelength’s delta to the feature size requires the application of graphic OPCs to the area,” said Schulze. “And these masks get filled with assist features and the graphic OPCs, so each mask is effectively as complicated as the single masks were in prior nodes. And since double patterning comes into play at 20nm, the data processing needs to execute this double patterning step. In other words, the data has to be decomposed. In some cases, it’s been given to designers to accomplish that; in other cases there are tools downstream that execute this decomposition into the two masks. But it’s an extra data processing step and requires some form of verification. Subsequently, once the data is effectively represented in two masks, then those masks have to be passed through the regular data preparation process, meaning applying OPC, verifying OPC and inserting scatter bars.”
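At its core, the decomposition step Schulze describes amounts to splitting one drawn layer across two masks so that features too close to print together never land on the same mask. The sketch below is only an illustration of that idea, assuming point-like features, an invented same-mask spacing rule and a simple 2-coloring of the resulting conflict graph; it is not how any commercial tool implements decomposition.

```python
# Illustrative sketch only: decomposing one drawn layer onto two masks by
# 2-coloring a conflict graph. Feature names, the spacing threshold and the
# BFS approach are assumptions for illustration, not any vendor's algorithm.
from collections import deque

# Features as (x, y) centroids; two features closer than MIN_SAME_MASK_SPACE
# conflict and must land on different masks.
MIN_SAME_MASK_SPACE = 64.0  # nm, hypothetical

features = {"f0": (0, 0), "f1": (40, 0), "f2": (80, 0), "f3": (200, 0)}

def conflicts(a, b):
    (x1, y1), (x2, y2) = features[a], features[b]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 < MIN_SAME_MASK_SPACE

# Build the conflict graph: each feature lists the features it cannot share a mask with.
graph = {f: [g for g in features if g != f and conflicts(f, g)] for f in features}

def decompose(graph):
    """Assign each feature to mask 0 or mask 1; flag odd cycles as failures."""
    mask = {}
    for start in graph:
        if start in mask:
            continue
        mask[start] = 0
        queue = deque([start])
        while queue:
            f = queue.popleft()
            for g in graph[f]:
                if g not in mask:
                    mask[g] = 1 - mask[f]
                    queue.append(g)
                elif mask[g] == mask[f]:
                    raise ValueError(f"odd-cycle conflict at {f}-{g}: needs a design fix")
    return mask

print(decompose(graph))  # e.g. {'f0': 0, 'f1': 1, 'f2': 0, 'f3': 0}
```

An odd-cycle conflict is the classic case where no legal two-mask split exists and the layout itself has to change, which is one reason the decomposition step “requires some form of verification.”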

Mentor Graphics and others have technology to keep the information correlated and to conduct the OPC for the separate layers with awareness of each other. This helps prevent overlay issues or correction failures that, by edging onto opposite sides of a tolerance band, create critical interactions between the two masks when they get exposed. “There is new technology that is brought in to facilitate that, but under any circumstances the overall computation effort to conduct those steps increases,” he continued.

Paul Ackmann, GlobalFoundries fellow for reticle technology, agreed. “The complexity of mask making increases with the need to use multiple masks to image a single construction layer (Metal 1).”

Anjaneya Thakar, product marketing senior staff for the Silicon Engineering Group at Synopsys, was quick to point out that while there is no clear inflection point, there is a steady pressure to do things better. “The OPC data that comes into mask making gets bigger because you’re doing more OPC, so one of the things that mask data prep solutions have to address is the fact that they should be able to handle huge amounts of data. A half a terabyte we’ve already breached. There are designs that are about half a terabyte. This is the data coming into the mask data prep solution. What it writes out that eventually the mask writer takes and writes is even bigger. So you have to have a mechanism to really generate compact data out that the mask writers use.”
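One way to picture the “compact data out” problem Thakar raises: a writer-ready file that flattens every instance of a repeated cell balloons, while one that defines the cell once and references it as an array stays small. The structures and byte counts below are invented purely for illustration and do not reflect any real mask writer format.

```python
# Illustrative back-of-the-envelope comparison only: referencing a repeated
# cell as an array versus flattening every instance. All numbers are assumed.

BYTES_PER_RECT = 16            # assumed: 4 coordinates x 4 bytes
RECTS_PER_CELL = 10_000        # shapes inside one repeated cell
ROWS, COLS = 2_000, 2_000      # the cell is stepped into a 2,000 x 2,000 array

# Flattened: every rectangle of every instance written explicitly.
flat_bytes = BYTES_PER_RECT * RECTS_PER_CELL * ROWS * COLS

# Referenced: define the cell once, then one array record (origin, pitch, counts).
array_record_bytes = 64        # assumed fixed-size placement record
ref_bytes = BYTES_PER_RECT * RECTS_PER_CELL + array_record_bytes

print(f"flattened : {flat_bytes / 1e9:.0f} GB")   # ~640 GB
print(f"referenced: {ref_bytes / 1e6:.2f} MB")    # ~0.16 MB
```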

Managing the data
With these data sizes, customers are understandably complaining.

There are two ways to manage this. “First, we continue to improve the algorithms in the tools themselves to be more efficient, and there is a long-standing track record where we track the performance of our core tools and it continuously improves,” said Schulze. “The goal is a 2X improvement per year. But that’s not enough. The other metric is to maintain the ability to control the computational effort with additional hardware. It’s being able to deploy a larger number of CPUs per job such that they would still be used efficiently, and then, on the general scale of a tapeout organization, to also enable them to operate compute systems of bigger size when you assume the customer has to do multiple things simultaneously. So we’re talking past 10,000 cores for large clusters and past 1,000 cores per job.”

Specifically, Thakar has observed users employing distributed processing. “MDP solutions routinely use hundreds of CPU cores to get the job done, and I foresee that in the next 12 months we will be breaching the 1,000-core limit as well. The reason is twofold: You need that power, and compute hardware has become affordable enough to go to a 100-core infrastructure. The way they handle the turnaround time is by using bigger and bigger clusters.”
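The general pattern behind those numbers is straightforward: cut the layout into tiles and farm the tiles out to a pool of workers. The sketch below shows that pattern only in miniature; the die size, tile size, worker count and the per-tile function are placeholders, not anything taken from an actual MDP tool.

```python
# Minimal sketch only: farming layout tiles out to a pool of worker processes,
# the general pattern behind running one MDP job across many cores.
# Tile size, die size, core count and the per-tile "work" are placeholders.
from multiprocessing import Pool

TILE_UM = 500                         # assumed tile edge, in microns
DIE_W_UM, DIE_H_UM = 26_000, 33_000   # assumed full-field die size

def tiles(width, height, step):
    """Yield (x, y) origins covering the die on a regular grid."""
    for x in range(0, width, step):
        for y in range(0, height, step):
            yield (x, y)

def process_tile(origin):
    """Placeholder for the per-tile OPC / fracture work on this tile's geometry."""
    x, y = origin
    return (origin, f"tile({x},{y}) done")

if __name__ == "__main__":
    work = list(tiles(DIE_W_UM, DIE_H_UM, TILE_UM))
    with Pool(processes=8) as pool:   # production jobs spread over hundreds of cores
        results = pool.map(process_tile, work)
    print(f"processed {len(results)} tiles")
```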

GlobalFoundries’ Ackmann recognizes that data optimization and bandwidth from tape-out through mask making are critical operations, and the company continues to drive investments to ensure that the wafer technology meets customer design requirements in a timely manner.

Another obvious angle to these challenges is that as dimensions get smaller and engineering teams try to use the same light to print, what they thought was a square is now a circle, so very fine accuracy has to be maintained moving from 28nm to 20nm to 14nm, Thakar said. “What that means is the MDP software has to fracture the data in a way that the quality of printing is preserved. If there was a couple of nanometers of CD variation in the previous node, for example, it might not be that big a deal because we are printing slightly bigger devices, but that same CD variation can now cause a device to fail. This is just one example. The CD uniformity in the mask has to be much finer at 20nm and below than it was previously, but it’s a continuum. There is no sudden inflection point, which is kind of common sense. As you’re printing smaller stuff, you have to have finer quality.”

He concluded that the mask writing tools will continue to be the same as they are today. “The challenge is then as the data size gets bigger and as the accuracy needs to get finer and finer, the processing time for mask data prep becomes larger and larger.”
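A rough way to see the accuracy pressure Thakar describes is to look at what the mask writer’s address grid does to a feature’s critical dimension. The snippet below snaps both edges of a hypothetical line to a few different grid steps and reports the worst-case CD error; every number in it is an assumption chosen to make the tradeoff visible, not data from any tool or process.

```python
# Illustrative only: how the writer's address grid eats into CD tolerance.
# The feature width, grid steps and tolerance below are assumed numbers.

def snapped_cd(left_nm, width_nm, grid_nm):
    """Snap both feature edges to the writer grid and return the resulting CD."""
    snap = lambda v: round(v / grid_nm) * grid_nm
    return snap(left_nm + width_nm) - snap(left_nm)

target_cd = 72.3   # nm, drawn line width (hypothetical)
tolerance = 1.0    # nm, allowed CD error on the mask (hypothetical)

for grid in (5.0, 1.0, 0.25):
    # Sweep the feature's placement in 0.1nm steps and track the worst CD error.
    worst = max(abs(snapped_cd(x / 10.0, target_cd, grid) - target_cd)
                for x in range(1000))
    verdict = "within" if worst <= tolerance else "exceeds"
    print(f"grid {grid:>4}nm -> worst CD error {worst:.2f}nm ({verdict} tolerance)")
```

Under these made-up numbers, only the finer grids keep the worst-case CD error inside the tolerance, which is the same continuum Thakar points to: smaller printed features demand a finer writing grid and, with it, more data and more processing.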


