Unsolved Issues In Next-Gen Photomasks

New technologies and data formats will be required below 3nm.


Experts at the Table: Semiconductor Engineering sat down to discuss optical and EUV photomask issues, as well as the challenges facing the mask business, with Naoya Hayashi, research fellow at DNP; Peter Buck, director of MPC & mask defect management at Siemens Digital Industries Software; Bryan Kasprowicz, senior director of technical strategy at Hoya; and Aki Fujimura, CEO of D2S. What follows are excerpts of that conversation. To view part one of this discussion, click here. Part two is here.

(L-R): Hayashi, Kasprowicz, Buck, Fujimura.

SE: For EUV masks, does the industry need to develop high-k, phase-shift, and binary masks? And does the industry really need to support all of those EUV mask types?

Hayashi: A high-k mask is much like a binary mask. If we can use a high-k material, we can make a thinner absorber, which reduces the mask 3D effects in the binary mask area. Discussion of high-k materials for that purpose goes back years. We are also looking at phase-shift materials for the same reason.

Fujimura: The first time you deploy high-NA EUV, resolution enhancement techniques (RETs) won't be needed as much as they are at 0.33 NA EUV or 193i. But fabs are going to use the techniques that are already proven to have worked in previous lithography technologies. New lithography solutions are extremely expensive and very difficult to research and develop, so there are probably two, three, or even four nodes where you have to count on RETs to get you to the next node. And as high-NA use matures, more and more of these RET technologies are going to be deployed.

SE: We’ve been hearing about inverse lithography technology (ILT) and curvilinear masks for years. What is that, and what will the impact be?

Fujimura: ILT is still OPC, but it's the advanced version. ILT computes in the pixel space, whereas OPC computes in the edge space. OPC manipulates the mask shapes to create a particular wafer target. It does so by moving edges, initially in a rectilinear fashion, and by now sophisticated enough to include 45-degree angles, or maybe even more than that. Luminescent is the company that pioneered ILT technology, starting about 20 years ago. They noticed that by working in a pixel space, they could compute which shapes on the mask would be most effective at producing the best wafer, and which have the most resilience to manufacturing variation. It turns out that the ideal shapes are curvilinear. But at the time Luminescent invented ILT, it was not practical to manufacture curvilinear masks. It took a week to write one with variable-shaped beam machines. It wasn't until the advent of multi-beam writers that it became practical. Multi-beam takes exactly the same amount of time to write any curvilinear shape as any Manhattan shape, so now curvilinear masks are practical. Curvilinear masks always were known to be better for wafer quality, and the solution on the mask needs to be curvilinear to provide the most effective correction. So curvilinear ILT is needed. It differs from conventional OPC in that the mask shapes are predominantly curvilinear everywhere, as opposed to combinations of Manhattan shapes.

Fig. 1: Curvilinear shapes on mask. Source: D2S

SE: Today, ILT is used for hotspots, right? And just to put this in perspective, the ultimate goal is full-chip ILT, right?

Buck: Conventional OPC had hotspots it couldn't solve. ILT let us apply this more advanced, more rigorous solution in the areas where it really mattered, and in a way that was affordable. But now the industry is considering full-chip ILT to take advantage of the process window afforded by curvilinear mask shapes across a full chip. That's becoming more feasible as computational engines improve, and as it becomes customary to use many more CPU and GPU cores in manufacturing.

Hayashi: A multi-beam mask writer is a good tool for writing a curvilinear mask, but the curvilinear feature database still has a huge volume. That's still a concern for mask making. People are now talking about a new data format and pattern conversion algorithms, and there is still room to improve there. From a mask maker's standpoint, the other challenges are inspection and repair. Maybe we need a new kind of image inspection technology to assure defect quality and printability.

Buck: The industry is looking into alternate methods of representing curvilinear data, instead of the traditional piecewise-linear polygon approach, and the focus has pretty much converged on spline-based approaches. There's been a lot of work, and a SEMI task force is actively working on this today. One of the challenges is conversion between the piecewise-linear and spline-based representations. Do we need to find a way to avoid those conversions? One potential approach is to keep the data in a spline-based format throughout the entire processing flow. It's still early days, but this may be necessary to minimize or eliminate conversion errors while maintaining the file-size advantages of the spline-based representation.
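The file-size tradeoff and conversion error Buck mentions can be sketched numerically. Assume a single curved mask edge stored as a quadratic Bezier (three control points, with hypothetical coordinates): a piecewise-linear "conversion" needs many chord segments to stay within a tight tolerance, and its worst-case deviation from the true curve is exactly the conversion error in question:

```python
import numpy as np

def bezier2(p0, p1, p2, t):
    """Evaluate a quadratic Bezier at parameters t (de Casteljau)."""
    t = np.asarray(t, float)[:, None]
    a = (1 - t) * p0 + t * p1
    b = (1 - t) * p1 + t * p2
    return (1 - t) * a + t * b

def point_seg_dist(q, a, b):
    """Distance from point q to the segment a-b."""
    ab = b - a
    s = np.clip(np.dot(q - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(q - a - s * ab))

# One curved mask edge: three control points (hypothetical, in nm)
p0 = np.array([0.0, 0.0])
p1 = np.array([50.0, 80.0])
p2 = np.array([100.0, 0.0])

def flatten_error(n_seg):
    """Worst-case deviation of an n_seg-chord approximation."""
    verts = bezier2(p0, p1, p2, np.linspace(0, 1, n_seg + 1))
    dense = bezier2(p0, p1, p2, np.linspace(0, 1, 1000))
    return max(min(point_seg_dist(q, verts[i], verts[i + 1])
                   for i in range(n_seg)) for q in dense)

# The spline stores 3 points; the chord error falls roughly as
# 1/n^2, so each doubling of the vertex count cuts the error ~4x.
errors = {n: flatten_error(n) for n in (4, 8, 16, 32)}
```

The 1/n² scaling is the point: meeting a tolerance that is 4x tighter costs roughly 2x the vertices in polygon form, while the spline stays at three control points.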

SE: In 2019, the industry formed a new data format working group under SEMI to address the need for curvilinear data representation. Today, the industry is looking at four approaches or possible standards here — quadratic Bezier, B-spline, polygon simplification, and curvature-based fragmentation (CBF). Any thoughts on ILT and what impact the new file format will have?

Kasprowicz: The new file format will help dramatically reduce the volume. The question is what happens to data volumes as 0.55 NA EUV is implemented. Given the half-field nature of high-NA exposures, will the data volume become a bit more manageable? At 0.55 NA the resolution is finer, and there are more features.

Fujimura: Each more advanced node has a higher requirement for precision. That, in turn, requires a change in the pixel size you have to compute in. It definitely will get harder as you go to more advanced nodes.

Kasprowicz: Going from 2nm to 1.4nm, you expect the volumes to go up anyway, because you're adding more features from ILT or OPC. One open question is the integration strategy of the end user. It's unlikely anyone plans to put the same field twice on a full mask. You probably wouldn't use two copies of the same field unless you think you might have a yield issue in mask making.

Fujimura: From a format perspective, there are principally three ways to represent something. One is the raster domain, which is pixels. Then there's polygon-based, which is a set of vertices. That's the traditional CAD mechanism for manipulating any shape, with rectangles or triangles as the usual special cases. The third is a curvilinear expression, such as Bezier or B-spline, which is the format Peter's committee is working on. Typically, there are two ways to evaluate all of this. One is, how do you store it? And two is, how do you compute with it? What's convenient for each is very different. A pixel-based representation of the data has exactly the same features that multi-beam writing has. In fact, multi-beam writers store pixels, and their datapath is based on pixels. Eventually, you're going to send instructions for each of the pixels, and how the machine computes internally has to be pixels. But how the data going into the machine is represented is another story. If it were pixels, it would be way too big: hundreds of gigabytes, or even terabytes. You could compress it, but it would need to be lossless, which limits the amount of compression. Trying to represent raster data in a file format to store or to send from one program to another is just not going to happen.
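A back-of-envelope calculation makes the raster-size point concrete. The write-area and pixel-grid numbers below are illustrative assumptions, not any particular tool's specification:

```python
# Illustrative raster size for one mask (assumed numbers, not a
# specific tool's spec): a 132mm x 104mm write area on a 10nm pixel
# grid, with one byte of dose information per pixel.
NM_PER_MM = 1_000_000
width_px = 132 * NM_PER_MM // 10     # pixels across
height_px = 104 * NM_PER_MM // 10    # pixels down
raw_bytes = width_px * height_px     # 1 byte per pixel
terabytes = raw_bytes / 1e12
# Roughly 137 TB uncompressed. Even 10-20x lossless compression
# leaves terabytes to move between tools, which is why a pixel file
# format is impractical for program-to-program data exchange.
```

Shrink the pixel or add gray levels and the number only grows, which is why the stored format and the machine-internal pixel computation are treated as separate problems.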

SE: What are the other issues?

Fujimura: Then there's the polygon-based data, which has the advantage that the existing infrastructure all works that way. If you think about inspection or metrology or any of these things, the data they're looking at is all pixel data, but what they're comparing it to is CAD data. Sometimes that's a polygon representation, but right now it is never in a curvilinear format. The issue, though, is that it's not easy for OPC to natively compute in a curvilinear geometry like Bezier or B-spline. So it will typically compute in pixel space and then output these formats. That tradeoff is what will determine the practical solution.
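The "compute in pixel space, then output splines" step amounts to curve fitting. A minimal sketch, assuming ordered contour samples extracted from pixel data: pin a quadratic Bezier's endpoints to the first and last sample, and solve for the middle control point by least squares:

```python
import numpy as np

def fit_quad_bezier(points):
    """Fit a quadratic Bezier to ordered contour samples.
    Endpoints are pinned to the first/last sample; the middle
    control point is solved by least squares, using chord-length
    parameterization for the sample points."""
    pts = np.asarray(points, float)
    p0, p2 = pts[0], pts[-1]
    # chord-length parameters t_i in [0, 1]
    d = np.cumsum(np.r_[0.0, np.linalg.norm(np.diff(pts, axis=0), axis=1)])
    t = d / d[-1]
    w = 2 * t * (1 - t)                       # basis weight of P1
    r = pts - np.outer((1 - t) ** 2, p0) - np.outer(t ** 2, p2)
    p1 = (w[:, None] * r).sum(axis=0) / (w ** 2).sum()
    return p0, p1, p2

# Samples along an (assumed) pixel-space contour of a curved edge,
# generated here from a known Bezier so the fit can be checked
ts = np.linspace(0.0, 1.0, 21)
samples = (np.outer((1 - ts) ** 2, [0.0, 0.0])
           + np.outer(2 * ts * (1 - ts), [40.0, 60.0])
           + np.outer(ts ** 2, [80.0, 0.0]))
p0, p1, p2 = fit_quad_bezier(samples)
```

The recovered middle control point lands close to the one that generated the samples; the small residual comes from the chord-length parameterization, which is one source of the conversion error discussed earlier.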

SE: Has the SEMI group reached a decision on a new data format for ILT?

Buck: We are converging on several approaches. It's not necessary to have only one, and new formats don't come up that often. It's important to strike a balance. We want enough flexibility that the format is forward-looking and can be used for several decades before it needs to be modified, because a lot of effort goes into supporting a new format. At the same time, we don't want to put in extraneous complexity that becomes a burden for everyone who has to support it. So the task force is now considering the different approaches, trying to narrow them down to the ones most likely to be used. I believe we'll have a timeline by the end of this year, to the extent that's possible. It's a cumbersome task to develop consensus in the industry, translate that into a format people will agree to, and then allow companies time to support it. After a slow start, where a lot of digestion needed to be done, the task force seems to be moving forward decisively. In the coming months, we'll have some ideas that we can start to focus on and test as a potential draft of the format.

