Evolving lithography demands are challenging mask writing technology, and the shift to curvilinear masks is underway.
Experts at the Table: Semiconductor Engineering sat down to discuss the current state and future direction of mask-making, with Harry Levinson, principal lithographer at HJL Lithography; Aki Fujimura, CEO of D2S; Ezequiel Russell, senior director of mask technology at Micron; and Christopher Progler, executive vice president and CTO at Photronics. What follows are excerpts of that conversation. To view part one of this discussion, click here.
L-R: HJL’s Levinson; D2S’ Fujimura; Micron’s Russell; Photronics’ Progler.
SE: Non-EUV nodes, such as 193i immersion, are still evolving. What key innovations are keeping this technology viable and extending its lifetime?
Progler: The big innovation is the use of curvilinear masks — more complex mask shapes that take advantage of what today’s writers can do. With multi-beam mask writers, you can now fabricate very intricate shapes on a mask that weren’t practical before. That’s been a real enabler. The second thing is the stronger use of computational tools in the mask design flow. Tools like mask process correction (MPC) and advanced simulation now let you predict outcomes much more effectively. That reduces the need for costly experimentation and allows you to push the limits of the technology.
Levinson: Curvilinear is a real benefit, and we’re seeing it in two areas. First, at the leading edge — especially with inverse lithography — you’re getting curvilinear features that push immersion lithography to incredible extremes. Second, there’s a lot of interest in applying curvilinear features to less advanced nodes. You can use older patterning technology to build chips with good performance and lower cost. If you embed curvilinear features into the design, you can significantly improve device performance. In fact, even 200mm fabs, where there’s no plan to ever install an immersion scanner, can effectively advance a node just by using curvy masks. That opens the door to improvements without needing new exposure equipment.
Fujimura: Curvilinear is the key. The evolution from variable-shaped beam (VSB) writers to multi-beam has made curvilinear mask shapes practical. You now can write curvy patterns without increasing write time or cost. It’s important to remember that curvilinear shapes are actually simpler from a manufacturing perspective. Manhattan shapes with 90-degree corners are impossible to replicate exactly in the real world. You’re always approximating. But with curvilinear shapes, you can design patterns that are actually manufacturable. This means ILT now can output desired mask shapes and expect those shapes to be exactly what you’re going to get on the mask. Since ILT computes the mask shapes to create the best process windows on the wafer, if you don’t get the desired mask shapes on the physical masks, the wafer simulations performed by ILT were inaccurate. With ILT outputting manufacturable curvilinear shapes as target mask shapes, you now can expect the physical mask shapes to match the targets, with less mask process variation in the manufactured masks. That was never possible with Manhattan geometries.
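For readers unfamiliar with ILT, here is a minimal sketch of the idea Fujimura describes: treat the mask as a free-form pixel image, simulate the wafer with a crude blur-plus-threshold model, and iterate the mask until the simulated wafer matches the target. The Gaussian-blur optics, sigmoid resist, and all constants are illustrative assumptions, not any vendor's production ILT engine.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sigmoid(x, steep=20.0, thresh=0.5):
    return 1.0 / (1.0 + np.exp(-steep * (x - thresh)))

def ilt_step(mask, target, sigma=3.0, lr=2.0, steep=20.0, thresh=0.5):
    aerial = gaussian_filter(mask, sigma)        # crude optics: low-pass blur
    printed = sigmoid(aerial, steep, thresh)     # crude resist: soft threshold
    err = printed - target                       # wafer-level error
    # The adjoint of a symmetric Gaussian blur is the same blur, so the
    # gradient of the squared wafer error w.r.t. the mask pixels is:
    grad = gaussian_filter(err * steep * printed * (1.0 - printed), sigma)
    return np.clip(mask - lr * grad, 0.0, 1.0), float((err ** 2).sum())

# Target: a square contact. Start the mask as a copy of the target.
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0
mask = target.copy()
for _ in range(200):
    mask, loss = ilt_step(mask, target)
# The optimized mask rounds and biases its edges even though the target is
# Manhattan -- smooth, curvilinear shapes are what the optics ask for.
```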
Russell: There has been continuous improvement across the entire mask flow — data prep, OPC, source-mask optimization, and simulation. Modern algorithms let you explore different scenarios and push the illumination setup very aggressively. You can fine-tune illumination for very specific patterns, which improves printability and margin. Our modeling has also gotten better. We’re now even using machine learning to capture the parts of the process that we can’t model with purely physics-based models, like certain etch effects. In some cases, you also can apply hybrid OPC strategies, using curvilinear shapes only in localized areas where they matter most, and simpler patterns elsewhere. That gives you the benefit without the full computational load. Altogether, these advances — both in computation and manufacturing — have really extended the life of immersion lithography.
Progler: The front-end computational tools, including machine learning, are incredibly powerful now. We can predict lithography outcomes far better than we could just a few years ago. That’s been a huge enabler for 193i. These tools let us make better masks and improve wafer patterning without needing to touch the hardware.
SE: We know curvilinear masks are the future, but what are the remaining barriers to adoption in the mask shop? And will high-NA make curvilinear demand even greater?
Levinson: There’s still a huge amount of infrastructure that needs to be developed. When everything is described in terms of rectangles, the complexity is relatively low. A rectangle has a length and width. It’s easy to define and easy to adjust. If you want to change a rectangle’s critical dimension by 5%, you just adjust the width. But with curvilinear shapes, there’s no obvious ‘critical dimension’ in the same intuitive sense. You’re dealing with splines or Bézier curves, and changing a shape means adjusting multiple parameters. This complexity flows into everything — mask layout, process correction, rule checks — all of which were built on the assumption of Manhattan geometries. It all has to be rethought. I don’t see any showstoppers, though. There are a lot of smart people in our industry, and they’ll figure it out. But it’s going to take time. Also, we don’t yet have a clear roadmap for doing multi-patterning with curvilinear features in logic. High-NA could help with that by enabling single-patterning curvilinear layouts, where otherwise it would be extremely difficult.
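A small sketch of the asymmetry Levinson describes: biasing a rectangle's CD is a one-number edit, while biasing a curvilinear edge means moving every sample point along its local normal, after which the result is no longer a single Bézier curve. The Bézier math is standard; the bias routine and all coordinates are purely illustrative.

```python
import numpy as np

def bias_rectangle(width, height, delta):
    return width + delta, height                 # a one-parameter edit

def cubic_bezier(p0, p1, p2, p3, t):
    t = t[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def bias_bezier_edge(p0, p1, p2, p3, delta, n=64):
    t = np.linspace(0.0, 1.0, n)
    pts = cubic_bezier(p0, p1, p2, p3, t)
    tang = np.gradient(pts, axis=0)              # local tangent direction
    normals = np.stack([-tang[:, 1], tang[:, 0]], axis=1)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    return pts + delta * normals                 # biased edge, point by point

print(bias_rectangle(100.0, 400.0, 5.0))         # CD bias on a rectangle
edge = bias_bezier_edge(np.array([0, 0]), np.array([30, 40]),
                        np.array([70, 40]), np.array([100, 0]), delta=2.5)
```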
Fujimura: These challenges are all solvable, but right now curvilinear is still treated as an exception, not the norm. That changes the economics and infrastructure. For example, GPU-based computing is what you really need for curvilinear. But most mask shops still rely on CPU-based workflows. Until GPU resources are regularly available, curvilinear will remain a special-case scenario. Metrology is another issue. With Manhattan features, you’re often measuring 1D widths and spaces. With curvilinear, you have to analyze entire shapes and compare edges, so your metrology tools and algorithms need to evolve. This is all going to happen, but it’s a transition. And like Harry said, it takes time to rebuild an industry’s infrastructure from the ground up.
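A minimal sketch of the contour-versus-width shift Fujimura points to: instead of one 1D width measurement, score an entire measured contour against a reference. Here the per-point "edge placement error" is just a nearest-neighbor distance, which is far cruder than production metrology; the data is synthetic and SciPy is assumed.

```python
import numpy as np
from scipy.spatial import cKDTree

def contour_epe(reference_pts, measured_pts):
    tree = cKDTree(reference_pts)
    dists, _ = tree.query(measured_pts)   # distance to nearest reference point
    return dists.mean(), dists.max()

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
ref = np.stack([50 * np.cos(theta), 50 * np.sin(theta)], axis=1)
meas = ref * 1.01 + np.random.normal(0, 0.2, ref.shape)  # 1% bias plus noise
mean_epe, max_epe = contour_epe(ref, meas)
```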
Russell: I want to stress that curvilinear adoption doesn’t have to be all-or-nothing. At Micron, we’ve used curves on masks for years, even before multi-beam writers. It’s not a binary switch. You can selectively apply curvilinear features in places where they bring the most benefit. For example, you could use them just for assist features and keep main features rectangular, so the rest of your flow — inspection, metrology, repair — stays simpler. You also can localize curvilinear shapes to areas where you’re hitting OPC convergence issues or violating mask rule constraints. In those cases, small, piecewise linear corrections can give you ‘curvy-like’ results without a full-field curvilinear mask. But the end-to-end infrastructure still has gaps. We now have a standard file format for describing curvilinear shapes, but not all EDA tools support it natively. Many convert the file into an internal format, process it, and then output it again — introducing potential errors at each step.
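The "curvy-like" piecewise-linear idea Russell mentions can be sketched as chord subdivision: keep splitting a curved edge into straight segments until the worst-case deviation from the true curve is inside a chosen tolerance. The quarter-circle edge and the tolerance value below are illustrative assumptions.

```python
import numpy as np

def flatten(curve, t0, t1, tol, out):
    p0, p1 = curve(t0), curve(t1)
    tm = 0.5 * (t0 + t1)
    mid = curve(tm)
    # Deviation of the true curve midpoint from the chord midpoint.
    if np.linalg.norm(mid - 0.5 * (p0 + p1)) <= tol:
        out.append(p1)
    else:
        flatten(curve, t0, tm, tol, out)
        flatten(curve, tm, t1, tol, out)

arc = lambda t: np.array([100 * np.cos(t), 100 * np.sin(t)])  # quarter circle
pts = [arc(0.0)]
flatten(arc, 0.0, np.pi / 2, tol=0.5, out=pts)
print(len(pts), "vertices approximate the arc to within 0.5 units")
```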
Progler: Metrology is probably one of the top impediments right now. You’re no longer measuring just widths and spaces. You’re measuring full 2D contours. That requires much higher resolution, a lot more data points, and faster measurement tools. The accurate simulation models that feed the optimization algorithms all rely on good metrology. But today, it’s hard to generate the volume and density of shape data we need — hundreds of thousands of points just to build a decent model. The reality is that our ability to write curvilinear patterns on a mask now far exceeds our ability to measure and verify those patterns. That’s a reversal of where we were a few years ago, and it puts the pressure on the metrology side. Also, curvilinear workflows need to be faster. If we want this technology to be usable by a broad set of customers — not just high-volume, leading-edge ones — we need to shorten turnaround time and improve time to yield. That means faster computation, faster measurement, and tighter integration across the whole design-to-mask flow.
SE: Let’s dig a little deeper into pellicles. What’s the current state of EUV pellicle performance, and what can be done to extend their durability and usefulness?
Fujimura: For 193i masks, pellicles are completely accepted. They last a long time, they’re robust, and they’re part of the standard flow. But for EUV, it’s a different story. The masks are reflective, so the light passes through the pellicle twice — once going in and once coming back out — and the transmission losses compound. That’s a big hit to wafers per hour. The other issue is durability. EUV pellicles don’t last nearly as long. You have to replace them regularly — sometimes weekly. And every time you do, it adds cost and complexity, because the mask needs to be inspected again to make sure there’s no damage. So it’s a very expensive, time-consuming process. But even with those issues, the industry still finds it worthwhile because of the benefits EUV brings.
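The double-pass loss Fujimura describes is easy to put numbers on. Because the EUV mask is reflective, light crosses the pellicle twice, so usable dose scales with transmission squared. The 90% per-pass figure below is an assumed example, not a measured spec.

```python
single_pass = 0.90               # assumed per-pass pellicle transmission
effective = single_pass ** 2     # in through the pellicle, reflect, back out
print(f"effective transmission: {effective:.0%}")  # 81%, so ~19% of dose lost
# At a fixed dose, wafers per hour drop roughly in proportion.
```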
Russell: For memory applications, where you often have some redundancy built into the design, the cost of using a pellicle outweighs the benefits. If a defect occurs and it can be repaired, then it’s better to skip the pellicle and avoid the throughput penalty. That’s why we don’t use pellicles at Micron. Eventually, pellicle performance might improve to the point where it makes sense for certain layers. For example, some logic layers — like GPUs or high-end processors — have large die sizes and are more sensitive to killer defects. In those cases, a single defect could destroy the entire die, and that’s very costly. So the economics are different. Scanner improvements also change the equation. As the scanner’s defect adder rate continues to improve, the benefit of using a pellicle becomes even less clear. If the reticle stays clean, what’s the point of adding a pellicle that hurts throughput?
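The die-size economics Russell outlines can be sketched with the standard Poisson yield model, Y = exp(−D·A). Note this simple model ignores the repair and redundancy he credits for memory; the defect density and die areas below are illustrative assumptions.

```python
import math

def yield_poisson(defects_per_cm2, die_area_cm2):
    return math.exp(-defects_per_cm2 * die_area_cm2)

D = 0.05  # assumed reticle-driven defect density, defects per cm^2
for name, area_mm2 in [("small memory die", 60), ("large GPU die", 800)]:
    y = yield_poisson(D, area_mm2 / 100.0)
    print(f"{name:16s} {area_mm2:4d} mm^2 -> yield {y:.1%}")
# ~97% vs. ~67%: the same defect density costs the big die far more,
# which is why a pellicle's throughput penalty can still pay off there.
```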
SE: Does a pellicle impact the lifetime of the mask itself?
Russell: Yes, definitely. When you go without a pellicle, you have to clean the mask more often. And every cleaning degrades the absorber layer on the mask a little bit. There’s been a lot of engineering effort over the years to tune the clean chemistries and processes so they’re effective but still gentle on the mask. But over time, those cleanings do reduce the mask’s usable life. It’s a tradeoff between minimizing downtime and extending mask longevity.
Progler: We know today that EUV pellicle performance is not on par with optical, so there is work to do here. Advanced coatings can be integrated with EUV pellicle materials to increase their lifetime, and driving up transmission generally will reduce thermal damage. Handling and maintenance of EUV pellicles can surely improve. But there is another way to look at this. Due to the nature of EUV wavelengths, the materials and longevity of EUV pellicles will always be concerns. So we better get good at replacing them cost-effectively and rapidly, and include in the mask cost model the need to have multiple masks in circulation for high-running devices so the re-pellicle process does not disrupt production.
Levinson: It’s really a balancing act. On one side you have pellicle suppliers working to improve transmission and durability. On the other, ASML is reducing contamination risks in the scanner, which helps mitigate the need for pellicles. Whether to use a pellicle depends entirely on your use case. Mike Lercel from ASML wrote a great paper a few years ago outlining this. If you’re producing large, high-value logic chips — like an 800mm² GPU — you probably want to use a pellicle. But for smaller memory devices, especially those with built-in redundancy, it makes sense to go without. If pellicles ever reach very high transmission rates and solve the deep UV reflectivity problem, we’ll likely see broader adoption.
Russell: Right now, the poly-type pellicles reflect deep UV light back to the scanner, so you need a filter called a DGL membrane to block it. That adds another 20% throughput loss on top of the normal losses. That’s a pretty serious hit. But researchers are working on alternatives, like carbon nanotube pellicles. These don’t have the same DUV reflectivity issue and have higher baseline transmission. They could eliminate the need for a DGL membrane entirely. But today, they still have issues. For example, they typically last fewer than 10,000 wafer exposures. And if they fail, they don’t just degrade. They shatter into tiny pieces inside the scanner. That’s a major downtime event. I don’t know of anyone using them in high-volume production yet, but there’s a lot of active research in that space.
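Stacking the losses Russell lists makes the comparison concrete. The per-pass transmissions below are assumed illustrative numbers, not published specs; only the roughly 20% DGL membrane hit comes from his remarks.

```python
poly_stack = 0.90 ** 2 * (1 - 0.20)  # double pass through pellicle, then DGL
cnt_stack = 0.97 ** 2                # assumed CNT per-pass transmission, no DGL
print(f"poly pellicle + DGL: {poly_stack:.0%} of light delivered")   # ~65%
print(f"CNT pellicle, no DGL: {cnt_stack:.0%} of light delivered")   # ~94%
```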