Mask Data Prep Issues Compounding At 20nm

Preparing a design to be printed at the mask shop has become increasingly complex. Dealing with those issues requires embracing new technologies to prep a design.


By Ann Steffora Mutschler
When it comes to mask data prep—the step in the design and manufacturing flow that occurs just after optical proximity correction (OPC)—challenges have continued to rise with each successive move to smaller geometries.

This is driven by the scaling demand of delivering roughly a 50% area shrink from node to node on a two-year cycle (a roughly 0.7X shrink in each linear dimension gives 0.7 × 0.7 ≈ 0.49, or about half the area), which in turn dictates the lithography roadmap. Innovations have happened on multiple levels, including on the scanners, which helped sustain the pace of scaling.

However, as Gandharv Bhatara, Calibre product marketing manager at Mentor Graphics, pointed out, ever since the transition from 32nm to 22nm there has been very little help from the scanner side. Immersion lithography was introduced at 32nm and 28nm, and source optimization added a bit of help on resolution, but the primary onus of delivering the resolution to enable the shrink now falls within the domain of computational lithography. This is driving advances in OPC and resolution enhancement technology. “Because it drives these intense computational needs, one then needs to figure out downstream how you resolve any potential detrimental impacts of that, whether it is in mask making or the actual turnaround time that a mask writer has to deal with.”

The challenges fall into a few major areas of focus, he said. “The first one is process window enhancement. Here you see a lot of advances happening on the OPC side. Your features get smaller and smaller, so one starts to introduce more and more complex correction strategies and techniques. There is the concept of sub-resolution assist features, which are features that will not print but need to be created on the mask so that you can get a better process window, and there is mask proximity correction, where you try to handle some of these mask effects in a slightly more novel manner.”
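
As a rough illustration of the rule-based flavor of assist-feature insertion Bhatara describes (production tools use far more involved, model-based placement), the sketch below drops narrow, non-printing assist bars beside isolated main features. The widths, offsets, and isolation threshold are hypothetical values chosen for illustration, not Calibre parameters.

```python
# Hypothetical rule-based SRAF insertion sketch: for each isolated main
# feature (a 1-D left-edge/width pair), place a narrow assist bar on each
# side at a fixed offset. Real flows use model-based placement and verify
# that the SRAFs do not print; all values here are illustrative only.

SRAF_WIDTH = 20       # nm, below the printing threshold (assumed)
SRAF_OFFSET = 90      # nm, edge-to-edge gap from the main feature (assumed)
ISOLATION_GAP = 250   # nm, neighbors closer than this suppress SRAFs (assumed)

def insert_srafs(lines):
    """lines: list of (left_edge_nm, width_nm) main features, sorted by left edge."""
    srafs = []
    for i, (left, width) in enumerate(lines):
        right = left + width
        # Only edges with enough clear space to a neighbor get an assist bar.
        prev_right = lines[i - 1][0] + lines[i - 1][1] if i > 0 else float("-inf")
        next_left = lines[i + 1][0] if i + 1 < len(lines) else float("inf")
        if left - prev_right > ISOLATION_GAP:
            srafs.append((left - SRAF_OFFSET - SRAF_WIDTH, SRAF_WIDTH))
        if next_left - right > ISOLATION_GAP:
            srafs.append((right + SRAF_OFFSET, SRAF_WIDTH))
    return srafs

print(insert_srafs([(0, 50), (1000, 50)]))
```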

Another area is design-manufacturing co-optimization, where the goal is to balance out process window constraints versus design constraints.

These issues arise on the upstream side of things. “The fallout turns out to be in managing mask costs and cycle time. The implication is that foundries still have to respect their time-to-mask commitments, but this could increase the GDSII-to-mask time, it could increase the mask write time, and also the computational capacity needs,” Bhatara said.

Tom Ferry, senior director of product marketing at Synopsys, agreed there are a number of challenges to address from the perspective of turnaround or cycle time, accuracy, and mask write times.

In terms of turnaround time for the fracture process, more layers have more OPC, which translates to more geometries and a bigger database. As a result, the turnaround time for fracture is going up and it must be managed, Ferry said. Synopsys approaches this from a pipeline perspective in which the OPC and mask data prep tools work together. “We take a full design and break it up into tiles, and we process each tile independently. As soon as a tile is done with OPC we can send it immediately to fracture, as opposed to waiting for the whole design to finish with OPC. As soon as one tile is done—which could be 1/100th or 1/1,000th of the full chip—you can start the fracture part. The net result is that instead of running full-chip OPC to completion and then running full-chip fracture, when you do it in the pipelined way the total time for OPC and MDP is the OPC time plus very little extra.”
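
A minimal sketch of that pipelining idea is below. The opc() and fracture() functions are placeholders standing in for the real tools; the point is simply that each tile is fractured as soon as its own OPC completes, rather than waiting for full-chip OPC to finish.

```python
# Pipelined tile processing sketch (illustrative only): opc() and fracture()
# are placeholders for the real tools. Fracture of a tile starts as soon as
# that tile's OPC finishes, overlapping the two stages instead of running
# full-chip OPC to completion first.
from concurrent.futures import ThreadPoolExecutor, as_completed

def opc(tile):
    # Stand-in for optical proximity correction on one tile.
    return f"opc({tile})"

def fracture(corrected_tile):
    # Stand-in for fracturing one corrected tile into mask-writer shapes.
    return f"fracture({corrected_tile})"

def pipelined_mdp(tiles, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        opc_futures = [pool.submit(opc, t) for t in tiles]
        fracture_futures = []
        for done in as_completed(opc_futures):
            # Hand each tile to fracture the moment its OPC completes;
            # OPC on the remaining tiles keeps running in parallel.
            fracture_futures.append(pool.submit(fracture, done.result()))
        # Results come back in completion order, which is fine for a sketch.
        return [f.result() for f in fracture_futures]

print(pipelined_mdp([f"tile_{i:03d}" for i in range(4)]))
```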

The tiling approach also allows for CPU scalability, he mentioned.

When it comes to accuracy, with the move downwards in process nodes from 28nm to 20nm and beyond, accuracy requirements are getting more difficult to meet, Ferry said. “The primary measure of accuracy in the context of fracture is the critical dimension uniformity, meaning there is a maximum amount of CD variance that the mask makers can tolerate. We have to work within that boundary.”

Anjaneya Thakar, product marketing senior staff for the Silicon Engineering Group at Synopsys, explained: “What you want to make sure is that if there is a line of a specific width in two different parts of the die, that line will be very close to that width in both areas of the die. The algorithms we have for fracture ensure that the way you fracture gives you the best accuracy on both those lines, no matter where they lie in the die. If that line is thinner by 20nm in one place compared to the other, the speed characteristic of that circuit changes.”

Referred to as symmetry and uniformity, this means that if there is a 500nm-long wire that’s 50nm wide, and it appears in 20 places on the die, it needs to be fractured in the same way in each place so that the dimensions are consistent across the die.
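
A toy illustration of that uniformity requirement follows, using a hypothetical maximum shot width: because the splitting rule is deterministic and anchored to the shape’s own origin, every instance of the same 500nm x 50nm wire fractures into the same set of shots, wherever it sits on the die.

```python
# Deterministic fracture sketch (illustrative): split a rectangular wire into
# fixed-width shots measured from the shape's own left edge, so identical
# shapes produce identical shot patterns anywhere on the die. The 200nm
# maximum shot width is an assumed value, not a mask-writer spec.

MAX_SHOT_WIDTH = 200  # nm (assumed)

def fracture_rect(x, y, length, width):
    """Split a wire of given length/width (nm) placed at (x, y) into shots."""
    shots = []
    offset = 0
    while offset < length:
        step = min(MAX_SHOT_WIDTH, length - offset)
        shots.append((x + offset, y, step, width))
        offset += step
    return shots

# Two instances of the same 500nm x 50nm wire at different die locations
# yield the same shot decomposition relative to their own origins.
a = fracture_rect(0,     0,    500, 50)
b = fracture_rect(12345, 6789, 500, 50)
print([(w, h) for (_, _, w, h) in a] == [(w, h) for (_, _, w, h) in b])  # True
```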

Connected to turnaround time and accuracy, mask write times are a very big concern, Ferry asserted. “It sort of works backwards from the mask writers. There is a competitive aspect to mask write time for the foundries. There are also practical problems in that if you run a machine for more than 10, 20 or 30 hours, the machine drifts too much for it to work. If it takes more than about 20 hours to write a mask, it’s very unlikely that the job will complete successfully. And because of all this complexity and all these geometries that are creating more and more shots, obviously that’s pushing the upper limits. We’re working on things from both a hardware point of view and a software point of view to reduce write time.”

On the software side, “we have some algorithms to reduce shots on existing machines. Basically we’re working on new algorithms, both model- and rule-based, as well as simulation-based approaches, to reduce the shot count on existing machines. We’re working with the mask writer companies to do that,” he added.
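
A simplified, rule-based version of the shot-reduction idea is sketched below: abutting shots of equal height on the same row are merged into a single wider shot. Production approaches are model-based and machine-specific; this only illustrates the rule-based end of the spectrum.

```python
# Rule-based shot-merging sketch (illustrative only): adjacent rectangular
# shots on the same row with equal height are combined into one wider shot,
# reducing the total shot count the mask writer has to expose.

def merge_shots(shots):
    """shots: list of (x, y, width, height) rectangles."""
    merged = []
    for shot in sorted(shots, key=lambda s: (s[1], s[0])):
        if merged:
            x, y, w, h = merged[-1]
            sx, sy, sw, sh = shot
            # Same row, same height, and touching -> merge into one shot.
            if sy == y and sh == h and sx == x + w:
                merged[-1] = (x, y, w + sw, h)
                continue
        merged.append(shot)
    return merged

shots = [(0, 0, 100, 50), (100, 0, 100, 50), (250, 0, 100, 50)]
print(merge_shots(shots))  # [(0, 0, 200, 50), (250, 0, 100, 50)]
```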

Industry work is ongoing with companies such as IMS, which is developing a multi-beam mask writer, because going from a single-beam approach such as the VSB (variable-shaped beam) format to a multi-beam system will greatly reduce mask write time.

In the e-beam space, startup D2S (Design 2 Silicon), a spinout of Cadence Design Systems, began as a project within the EDA giant to look at e-beam as the next discontinuity, explained Aki Fujimura, chairman and CEO of D2S. “What we were thinking at the time was that OPC was really hot, but what’s next? We thought that as Moore’s Law shrinks the sizes of features down more and more, even e-beam, which at the time was considered absolutely accurate—nobody can touch e-beam with accuracy—we thought that even that was going to become a problem. We didn’t know exactly what node, so we didn’t predict exactly the speed at which it happened, which got accelerated because of the need for complex features. EUV and these other alternative technologies were predicted to happen by now, but that hasn’t happened. And because of that need for e-beam accuracy and e-beam’s set of tricks, it turned out to be earlier than we had predicted. We thought, looking forward, that e-beam was going to be the next frontier.”

In 2007, while they were still an internal unit at Cadence, the team came up with a technology idea that blended the design aspect and the manufacturing aspect together in what they called design for e-beam (DFEB) for a direct write application.

In order to do that, a suite of core technologies having to do with simulating e-beam had to be developed, he said. “When you are trying to do direct write you have to do it at a one-to-one dimension, whereas when you are writing masks you do it at a four-to-one dimension. So if you are trying to write 30nm on the wafer, on the mask it’s 120nm, but with the direct-write way you have to do it at 30nm and that’s tough. So for wafer direct write, the era of needing e-beam simulation comes much, much earlier because of the one-to-one dimensions. Because we had to deal with that, we developed a bunch of technologies in-house to do simulation.”
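
A very rough sketch of why the one-to-one dimensions force simulation so much earlier: if the deposited dose of a line is approximated by convolving its profile with a Gaussian beam blur, the same absolute blur eats a far larger fraction of a 30nm direct-write line than of the 120nm (4X) line on a mask. The blur value below is an assumed number for illustration, not a D2S model parameter.

```python
# E-beam dose sketch (illustrative only): approximate the exposed dose of a
# 1-D line by convolving a box profile with a Gaussian beam blur. The same
# blur costs far more of a 30nm direct-write line than of the 120nm (4X)
# mask line, which is why simulation matters much earlier for direct write.
import math

BLUR_SIGMA_NM = 15.0  # assumed combined beam/process blur

def center_dose_fraction(line_width_nm):
    """Dose at the line center relative to nominal, after Gaussian blur."""
    half = line_width_nm / 2.0
    # Convolution of a unit box of width line_width_nm with a Gaussian,
    # evaluated at the center of the line.
    return math.erf(half / (math.sqrt(2) * BLUR_SIGMA_NM))

print("center dose, 120nm line (4X mask):  %.3f" % center_dose_fraction(120))
print("center dose, 30nm line (1X wafer):  %.3f" % center_dose_fraction(30))
```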

By the time D2S spun out in 2007, the team had started to develop a portfolio of simulation techniques that did just that.

Another interesting aspect of D2S technology is that it runs on GPU-based machines because GPUs are especially good at these kinds of accelerations, Fujimura explained—particularly for lithography because of the image processing aspect of it, but even more particularly for e-beam because of the increase in convolution.
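
The convolution point can be seen in miniature: the e-beam dose computation is essentially a large 2-D convolution of the pattern with the beam’s point-spread function, exactly the kind of data-parallel work GPUs handle well. Below is a minimal NumPy/FFT sketch of that inner loop; a CUDA-backed array library could be substituted for the NumPy calls, but that choice, and the Gaussian kernel itself, are assumptions for illustration, not D2S’s implementation.

```python
# 2-D dose convolution sketch (illustrative only): blur a rasterized mask
# pattern with a Gaussian kernel via FFTs. This is the data-parallel core
# that maps well onto GPUs; a GPU array library could replace the NumPy
# calls, but that substitution is an assumption here.
import numpy as np

def gaussian_kernel(size, sigma):
    # Normalized 2-D Gaussian on a size x size grid, centered in the grid.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def blur_pattern(pattern, sigma_px):
    # FFT-based convolution of a square pattern with the blur kernel.
    kernel = gaussian_kernel(pattern.shape[0], sigma_px)
    return np.real(np.fft.ifft2(np.fft.fft2(pattern) *
                                np.fft.fft2(np.fft.ifftshift(kernel))))

pattern = np.zeros((256, 256))
pattern[100:156, 120:136] = 1.0   # a single rectangular feature
dose = blur_pattern(pattern, sigma_px=3.0)
print(round(float(dose.max()), 3))
```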

Elusive productivity
Looking ahead, Mentor Graphics’ Bhatara pointed out that another area emerging as a very strong focus throughout the industry, albeit one that is difficult to quantify and not entirely tangible, is solving the issue of productivity. “This includes development cycle times and associated costs. Because of the number of tools, the amount of complexity that’s going into these flows is shooting up, and your engineering resources don’t necessarily grow exponentially. The industry is unwilling to tolerate increasing development cycle times, so a lot of the onus is really borne by us from a software standpoint, working closely with our customers and partners, to tackle this issue of managing productivity needs.”


