Will GPU-Acceleration Mean The End Of Empirical Mask Models?

Physical mask models are more important than ever, and now are practical to employ.

Shrinking mask feature sizes and increasing proximity effects are driving the adoption of simulation-based mask processing. Empirical models have been the most widely used to date because they are faster to simulate. Today, GPU-acceleration is enabling fast simulation with physical models. Now that GPU-acceleration makes physical models practical, does that mean the end of empirical mask models?

Physical effects drive the need for physical models
As Moore’s Law marches on, the need for increased process margin, particularly in depth of focus, is driving optical proximity correction (OPC) and inverse lithography technology (ILT) to create more complex mask features. Today’s leading-edge masks – especially the contact layers – include curvilinear mask shapes, usually drawn as complex orthogonal shapes with small jogs. The desired shapes on the masks have sub-60nm, 2D contours. Linearity, corner rounding, and line-end shortening are among the issues that need correction to ensure that the actual mask matches the expectations of OPC/ILT. Simulation-based mask correction is required. But simulations are only as accurate as the models they employ.

There are two kinds of models: empirical and physical. Both types are extracted from data measured on test chips. Test-chip data are very time-consuming and expensive to gather, so only an extremely limited sample of test structures can be measured. A model is calibrated against a calibration data set, and then tested against a separate validation data set comprising different structures.
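As a minimal sketch of that workflow, the snippet below splits a set of hypothetical, synthetic test-chip measurements into calibration and validation sets and defines an RMS error metric for scoring a model against either set. The array names and the 80/20 split are illustrative assumptions, not details of any particular flow.

```python
# Minimal sketch of a calibration/validation split, using JAX.
# The "measurements" are synthetic stand-ins for measured test-chip data.
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
n_structures = 200                      # hypothetical number of test structures

# One measured value (e.g., a CD error) per test structure.
measured = jax.random.normal(key, (n_structures,))

# Hold out the last 20% of structures for validation; calibrate on the rest.
n_cal = int(0.8 * n_structures)
calibration = measured[:n_cal]
validation = measured[n_cal:]

def rms_error(predicted, observed):
    """Root-mean-square error of model predictions against a data set."""
    return jnp.sqrt(jnp.mean((predicted - observed) ** 2))
```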

The approach of empirical modeling is to analyze the measured data numerically, find correlated patterns, and then fit a mathematical form to the difference between simulated and measured data. The approach of physical modeling is to build the model from components that represent individual physical effects, driven by principles of physics and chemistry.
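To make the distinction concrete, here is a small sketch of the two model forms. The polynomial correction term and the Gaussian point-spread function are illustrative stand-ins chosen for this article, not the forms used by any particular mask model.

```python
import jax.numpy as jnp

# Empirical form: fit a low-order polynomial to the residual between
# simulated and measured CD as a function of feature size. The
# coefficients have no physical meaning; they simply absorb the error.
def fit_empirical_correction(feature_size, simulated_cd, measured_cd, degree=3):
    residual = measured_cd - simulated_cd
    design = jnp.vander(feature_size, degree + 1)
    coeffs, *_ = jnp.linalg.lstsq(design, residual)
    return coeffs

# Physical form: a model component derived from a physical effect, here a
# Gaussian point-spread function standing in for beam blur and proximity
# effects. Its width sigma is a calibrated *physical* parameter.
def gaussian_psf(coords_nm, sigma_nm):
    x, y = jnp.meshgrid(coords_nm, coords_nm)
    psf = jnp.exp(-(x**2 + y**2) / (2.0 * sigma_nm**2))
    return psf / jnp.sum(psf)
```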

Empirical models can be made to simulate faster by fitting them to simpler model forms. However, even if an empirical model does well against the calibration data set (and it may do deceptively well there, due to what statisticians call “overfitting”), a physical model is far more likely to be predictive when tested against the validation data set.
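The overfitting risk is easy to demonstrate with a toy fit: a high-order polynomial can pass through every calibration point and still predict a held-out validation set worse than a simpler form. The data below are synthetic and purely illustrative.

```python
import jax
import jax.numpy as jnp
jax.config.update("jax_enable_x64", True)      # double precision for the fit

def fit_poly(x, y, degree):
    coeffs, *_ = jnp.linalg.lstsq(jnp.vander(x, degree + 1), y)
    return coeffs

def rms(coeffs, x, y):
    return jnp.sqrt(jnp.mean((jnp.polyval(coeffs, x) - y) ** 2))

key = jax.random.PRNGKey(1)
x_cal = jnp.linspace(0.0, 1.0, 10)             # sparse calibration points
x_val = jnp.linspace(0.05, 0.95, 50)           # denser validation points
truth = lambda x: jnp.sin(2.0 * jnp.pi * x)    # stand-in "true process"
y_cal = truth(x_cal) + 0.05 * jax.random.normal(key, x_cal.shape)
y_val = truth(x_val)

for degree in (3, 9):
    c = fit_poly(x_cal, y_cal, degree)
    print(degree, rms(c, x_cal, y_cal), rms(c, x_val, y_val))
# The degree-9 fit drives the calibration error toward zero, but its
# validation error is typically no better (and often worse) than the
# simpler degree-3 fit -- the signature of overfitting.
```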

A physical model also is more likely to have one model form that works across many different mask processes. Whether the process uses positive or negative resists, employs transmitting or reflective masks (for EUV), or is written by variable-shaped beam (VSB) mask writers or by multi-beam writers, the same model form can be used with different parameters.

Masks that include complex shapes require 2D validation. In principle, a sufficiently accurate 1D model should also predict 2D features accurately. However, today’s mask writing instruments for precision layers are VSB tools, which write Manhattan (1D) shapes, so any inaccuracies in 1D models are exacerbated when tested against a 2D validation. Physical models are far more likely to be accurate for 2D shapes, and are better suited to ILT.

Historically, physical models resulted in unacceptably long simulation runtimes. The advent of GPU-accelerated mask simulation has changed this picture. GPUs excel at “single instruction, multiple data” (SIMD) computing, which makes them a very good fit for simulating physical phenomena such as mask behavior. With well-engineered GPU-acceleration, physical model creation is eased, and full-chip mask simulation can be executed within reasonable runtimes.
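As a rough illustration of why this workload maps so well onto data-parallel hardware, the sketch below blurs a rasterized mask tile with a point-spread function using an FFT-based convolution under jax.jit; on a machine with a GPU, the same compiled program runs on the accelerator with no code changes. The tile size, kernel, and shapes are arbitrary placeholders, not a real mask process model.

```python
import jax
import jax.numpy as jnp

@jax.jit  # compiled once; runs on a GPU automatically when one is present
def blur_mask(mask, psf):
    """Convolve a rasterized mask tile with a point-spread function via FFTs.
    Every pixel goes through the same arithmetic, which is exactly the
    data-parallel pattern that GPUs accelerate well."""
    spectrum = jnp.fft.rfft2(mask) * jnp.fft.rfft2(jnp.fft.ifftshift(psf), mask.shape)
    return jnp.fft.irfft2(spectrum, mask.shape)

# Placeholder inputs: a 4096 x 4096 mask tile and a Gaussian blur kernel.
n, sigma = 4096, 8.0
coords = jnp.arange(n) - n // 2
x, y = jnp.meshgrid(coords, coords)
psf = jnp.exp(-(x**2 + y**2) / (2.0 * sigma**2))
psf = psf / jnp.sum(psf)

mask = jnp.zeros((n, n)).at[1024:3072, 1800:2300].set(1.0)  # a single rectangle
blurred = blur_mask(mask, psf)
```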

The end of empirical models?
So, if more accurate physical models are more important than ever, and are now practical to employ, does this mean we’ll soon see the end of empirical mask models? I don’t think so. Empirical modeling will always be a part of mask modeling.

Like any engineering technique, empirical models will continue to be used where their level of accuracy is sufficient. There will be mask layers, or mask features, that can be produced with acceptable accuracy using empirical models for a while yet. And, as in any evolving engineering process, there will always be residual effects that remain after all of the currently understood physical models are deployed. Until those residual effects are understood, the most accurate mask models will include empirical fitting elements.

Physical models, now made practical through the use of GPU-acceleration, will be required more and more frequently as Moore’s Law continues. When choosing a simulation-based solution, it’s important to understand what kinds of models are used, and what effects, materials, and processes are modeled. Also important is the reference used to calibrate the models. A good model will have a model form that can fit all mask processes, including EUV masks and both positive and negative resists, and will perform equally well for masks written by VSB and by multi-beam writers.


