A new lithography library brings mask optimization operations to GPUs.
There are so many challenges in producing modern semiconductor devices that it is remarkable the industry pulls it off at all. From the underlying physics to fabrication processes to the development flow, there is no shortage of tough issues to address. Some of the biggest arise in lithography for deep submicron chips. A recent post outlined the major trends in lithography and summarized several challenges and emerging solutions. This post focuses on another key challenge, the dramatic rise in computing requirements for lithography, and discusses how graphics processing units (GPUs) can help to satisfy that demand.
The increased computing demand stems from compensating for image errors that diffraction and process effects introduce during lithography, a task that grows more expensive as chip designs become denser. If left uncorrected, the patterns etched on silicon will not precisely reproduce the shapes drawn by the designers: corners may be rounded, and line widths may differ from what was intended. The traditional way to handle this is optical proximity correction (OPC), which pre-distorts edges and polygons on the mask so that the etched features match the design intent as closely as possible.
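The idea behind edge-based correction can be illustrated with a deliberately tiny one-dimensional model. Everything below is a simplifying assumption, not a production OPC algorithm: a Gaussian blur stands in for the optical system, a fixed threshold stands in for the resist, and the threshold is set low to mimic a process that prints lines wider than drawn.

```python
import numpy as np

def print_image(mask, sigma=2.0, threshold=0.35):
    """Crude 1-D print model: Gaussian blur stands in for diffraction,
    a constant resist threshold stands in for the develop step.
    The low threshold mimics a process that prints lines too wide."""
    x = np.arange(-10, 11)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    aerial = np.convolve(mask, kernel, mode="same")
    return (aerial >= threshold).astype(float)

def opc_edge_bias(target, max_bias=3):
    """Toy OPC: search for the symmetric edge bias (in pixels) that
    makes the printed linewidth best match the drawn linewidth."""
    idx = np.flatnonzero(target)
    lo, hi = idx[0], idx[-1]
    best_mask = target
    best_err = abs(print_image(target).sum() - target.sum())
    for bias in range(-max_bias, max_bias + 1):
        mask = np.zeros_like(target)
        mask[max(lo - bias, 0):hi + bias + 1] = 1.0  # move both edges
        err = abs(print_image(mask).sum() - target.sum())
        if err < best_err:
            best_mask, best_err = mask, err
    return best_mask

target = np.zeros(200)
target[97:103] = 1.0              # a 6-pixel line as drawn
corrected = opc_edge_bias(target)
```

In this toy model the 6-pixel line prints 8 pixels wide uncorrected; the search pulls each drawn edge in by one pixel so the printed result lands on the intended width. Real OPC applies moves like this to many independent segments of each polygon edge in 2-D, using calibrated optical and resist models, which is what makes it so naturally parallel.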
OPC requires nontrivial amounts of computation, but this is generally not a major concern, since segment-based optimization lends itself to parallel processing. The bigger issue is that OPC offers limited degrees of freedom, both in the complexity of the corrected shapes it can produce and in the techniques used to correct them. In recent years, inverse lithography technology (ILT) has emerged as a more flexible approach: patterns are converted into pixels so that pixel-based optimization techniques can be applied. ILT can handle a much wider range of shapes and patterns, but it requires far more computational power than OPC. Even with extensive parallel processing, users report that a single ILT mask can consume more than 10K CPU cores for multiple days.
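The pixel-based formulation can be sketched as a gradient-descent problem: every mask pixel becomes a free variable, a differentiable imaging model predicts the printed result, and the chain rule supplies per-pixel updates. The Gaussian "optical kernel," the sigmoid resist model, and all parameters below are illustrative assumptions for a minimal sketch, not any vendor's actual models:

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.5):
    x = np.arange(size) - size // 2
    k = np.exp(-(x[:, None]**2 + x[None, :]**2) / (2 * sigma**2))
    return k / k.sum()

def convolve_same(img, kernel):
    """Naive 'same'-size 2-D convolution; the kernel is symmetric,
    so convolution and correlation coincide."""
    k = kernel.shape[0]
    p = np.pad(img, k // 2)
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + k, j:j + k] * kernel)
    return out

def ilt_optimize(target, kernel, steps=40, lr=1.0, beta=8.0):
    """Toy ILT: gradient descent on unconstrained per-pixel variables
    theta; one sigmoid keeps mask values in [0, 1], another models
    the resist threshold."""
    theta = 4.0 * (2.0 * target - 1.0)   # start from the drawn pattern
    losses = []
    for _ in range(steps):
        mask = 1.0 / (1.0 + np.exp(-theta))
        aerial = convolve_same(mask, kernel)
        printed = 1.0 / (1.0 + np.exp(-beta * (aerial - 0.5)))
        err = printed - target
        losses.append(0.5 * np.sum(err**2))
        # chain rule: d(loss)/d(theta), reusing the symmetric kernel
        d_aerial = convolve_same(err * beta * printed * (1.0 - printed), kernel)
        theta -= lr * d_aerial * mask * (1.0 - mask)
    return 1.0 / (1.0 + np.exp(-theta)), losses

target = np.zeros((20, 20))
target[6:14, 6:14] = 1.0                 # an 8x8 contact-like feature
mask, losses = ilt_optimize(target, gaussian_kernel())
```

Because the optimizer is free to set any pixel, the resulting mask can take on curvilinear, non-rectangular shapes that edge-based OPC cannot express; the price is that every iteration touches every pixel, which is exactly why ILT is so much more compute-hungry and why its dense, regular arithmetic suits GPUs.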
The demands of computational lithography continue to grow. Every new node means more polygons per mask, advanced processes require more masks, and the shapes used get ever more complex. Given that GPUs provide massive parallelism and have successfully accelerated several other steps in the chip development process, it is natural to wonder whether they can speed up computations for ILT. Users are clear about their desire: computation time of less than one day using reasonable resources. Recent collaborative work between NVIDIA, TSMC, and Synopsys has provided significant evidence that GPUs can help achieve this goal. This work has involved three main transformations of lithography code from CPUs to GPUs.
In the past, improvements in CPU-based algorithms and compute server hardware provided 2-4X speedups for computational lithography. Initial experiments with GPUs in 2020 demonstrated a 10X speedup on ILT simulation functions. As shown in the figure above, subsequent work has found many additional computations, such as polygon and non-image-based operations, to be suitable for GPUs. Some of these operations are used for OPC as well as ILT, demonstrating that GPUs can speed up both types of mask optimization.
NVIDIA, TSMC, and Synopsys have also co-developed a new GPU lithography library for use in both OPC and ILT. This library features polygon and edge-based geometry algorithms, polygon rasterization, FFT, convolution, and more. Speedups as high as 40X over CPUs have been observed for some types of functions. The overall speedup from CPUs to GPUs in total runtimes for one ILT "recipe," accumulated over several templates, was more than 15X. This takes a multi-day CPU run to under one day while using fewer parallel machines.
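Operations like these map naturally onto GPU primitives. The convolution at the heart of aerial-image simulation, for example, reduces via the convolution theorem to forward FFTs, an elementwise multiply, and an inverse FFT, all of which parallelize across thousands of threads. A CPU-side NumPy sketch of that identity, using an arbitrary toy mask and Gaussian kernel (not the library's actual API):

```python
import numpy as np

n, sigma = 32, 2.0
rng = np.random.default_rng(0)
mask = (rng.random((n, n)) > 0.7).astype(float)   # toy rasterized mask

# Gaussian kernel laid out in "FFT order": peak at index [0, 0],
# wrapping around the edges (circular distance from the origin)
d = np.minimum(np.arange(n), n - np.arange(n))
kernel = np.exp(-(d[:, None]**2 + d[None, :]**2) / (2 * sigma**2))
kernel /= kernel.sum()

# Convolution theorem: circular conv(a, b) == IFFT(FFT(a) * FFT(b))
aerial = np.fft.irfft2(np.fft.rfft2(mask) * np.fft.rfft2(kernel), s=(n, n))

# Same result computed directly in the spatial domain
direct = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        direct += kernel[i, j] * np.roll(np.roll(mask, i, axis=0), j, axis=1)

assert np.allclose(aerial, direct)
```

The direct spatial-domain loop costs O(n^4) for a dense kernel, while the FFT route costs O(n^2 log n), and each FFT butterfly and each elementwise product is independent work, which is one reason imaging kernels accelerate so well on GPUs.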
These results are a snapshot in time; computational lithography remains a very active domain for research, development, and deployment. Additional recipes, flows, and functions continue to be enabled for GPUs, AI/machine learning (ML) is being increasingly applied, and more effective CPU+GPU co-optimization is advancing. All the latest results will be presented in a session on "Leveraging NIM and GPU Acceleration to Transform Chip Design with AI" at the NVIDIA GTC conference on Friday, March 21, in San Jose, California. Synopsys will also be exhibiting in Booth #222 from March 18 to March 21. Lithography experts will be attending to answer any questions.