Photoresist Problems Ahead

Without fundamental photoresist research, Moore’s Law scaling faces another threat below 10nm.


As the semiconductor industry begins its ramp to manufacturing at 10nm and below, activity is heating up involving lithography modeling. The goal is to be ready when all the pieces of the puzzle are in place. That includes EUV, when it finally becomes commercially viable, as well as extending ArF.

When it comes to lithography modeling below 10nm, John Sturtevant, director of product development for modeling and verification at Mentor Graphics, said: “We’re going to do it, and we’ll do it successfully. But the specific path will be largely dictated by what process technology and lithography methodologies are used in manufacturing below 10nm. That continues to be hotly debated.”

He noted that at the recent EUV symposium in Washington, D.C., the sentiment was more upbeat and optimistic than it was at the SPIE Conference in February. “There’s little doubt when you talk about 10nm that EUV is not going to be there, and there still seems to be a considerable doubt whether at 7nm EUV will be there. From our perspective, we’ve been assuming that someday EUV is going to be there and we’re fast approaching that point where the amount of money that’s been spent on this is going to dictate that we all are, if by nothing more than conscience, going to be required to do EUV just because of what we’ve spent on it. So there’s no doubt that that will be part of it, and that turns out to be a fairly straightforward extension of what we’ve already done. There are a couple of new things that we have to account for, but we as an industry are ready to do that.”

In the interim, though, the workhorse will be multi-patterning and conventional optical lithography. “There are some things that will be necessary even if, let’s say, 7nm is going to be 100% optical immersion. There are new facets to patterning modeling that we need to take into consideration. Another option is directed self-assembly (DSA), and we’ve been working on that for a couple of years. It seems pretty clear that’s not going to happen at 7nm.”

There is also an increasing need to model failure modes. He pointed out that for a long time, computational lithography was all about predicting the critical dimension at one vertical plane of, say, the photoresist. But with increasing failure modes and the growing use of new-polarity, negative-tone-develop photoresist systems (heavily in use at 14nm and 10nm), there are all sorts of new failure modes that affect yield, and those have to be modeled. That’s a big challenge that will only continue.

“The whole reason we end up with models that only predict at one plane is a computational necessity, and at the end of the day, you’re trying to make a mask that is a pseudo 2D object and you can only have that degree of freedom in the X,Y plane for the most part,” Sturtevant said. “You get what you get on the wafer, but these 3D effects on the wafer are becoming more and more complex. This drives more computational complexity so the models we’re developing will handle that and can predict full 3D profiles anywhere, full chip. The neat thing is, this is driving new optimization algorithms and new science in the area of model optimization where, in the past, we would say if this is going to be used in a mask synthesis flow, you have to be able to have 24-hour turnaround time. That’s always been the gold standard. What’s happening there? We’re going from tens to hundreds to thousands, and soon tens of thousands, of CPUs that are used to do that job, and now with cloud-based computing that’s very cost effective. We’re looking at using those same computational methods to come up with more complex models that 10 years ago we said there’s no way you can account for that because it’s too computationally expensive. It works synergistically that the very things that we need to account for now are enabled by virtue of the advanced technology.”

Manoj Chacko, product marketing director at Cadence, agreed. “Lithography modeling is one piece of the puzzle at 10nm because when looking at OPC in its full context, there’s more than that at 10nm and all these things have to work together to make a successful chip. At 10nm, we are 20 times smaller than the wavelength of the exposure light. With all the RET and the high NA and k1, it’s all good, but the bottom line is the gap is seriously big and there are different things needed here. Of course, lithography modeling has to take care to do good predictability for 10nm, and that means factors that are introduced at 10nm. One of them is mask 3D effects. These impact the accuracy of the model predictions. To account for this, rigorous simulation is one approach being used.”
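The gap Chacko describes can be put in rough numbers with the Rayleigh criterion, which relates the smallest printable half-pitch to wavelength, numerical aperture and the process factor k1. The sketch below uses typical ArF immersion values (193nm wavelength, NA of 1.35); the k1 figures are illustrative, not tied to any one fab’s process.

```python
# Back-of-the-envelope Rayleigh resolution estimate for ArF immersion.
# half_pitch = k1 * wavelength / NA, where smaller k1 means more
# aggressive resolution enhancement techniques (RET).

def half_pitch_nm(k1, wavelength_nm=193.0, na=1.35):
    """Smallest printable half-pitch for given process factor k1."""
    return k1 * wavelength_nm / na

# k1 = 0.25 is the theoretical single-exposure limit for dense
# lines and spaces; real processes sit somewhat above it.
limit = half_pitch_nm(0.25)   # roughly 36nm

# A 10nm-node metal layer needs half-pitches in the low 20s of nm,
# well below the single-exposure limit -- hence multi-patterning.
print(f"Single-exposure ArF immersion limit: {limit:.1f}nm half-pitch")
```

The point of the arithmetic is simply that no amount of RET closes the gap in a single exposure, which is why the article’s interim answer is multi-patterning.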

The “very rigorous lithography modeling area,” as it is called, can take a single transistor and do a rigorous simulation, but it takes many hours to understand the properties and go through the various steps of the flow. Cadence does not participate here.

Chacko explained the other area of lithography modeling is on the manufacturing side — OPC — the whole flow where a fab or foundry gets a GDS and goes all the way to make a mask, and then takes it to wafer, and so on. Here, Cadence has been developing an OPC repair capability using topological pattern analysis.

As far as what has to happen with OPC modeling going forward, Tom Ferry, senior director of marketing for the silicon engineering group at Synopsys, observed that there are a couple of factors in play. “First thing is the way we are going to get to 14, 10 and 7nm, at least based on what we know today, is going to be using the same lithography equipment that we’ve been using for a while—ArF immersion—and obviously that’s been extended many generations beyond where it was targeted. So that’s going to be one challenge. Another challenge is just the normal scaling. Every time you move down the scale things that used to be negligible are no longer negligible and so on. In addition, to get everything to work from a process point of view, people are looking at a lot of new things — new materials, new devices. All that sums up to mean that models have been running out of steam.”

One of the big challenges is to make models more predictive, which in theory is one of the benefits of raising the abstraction level. “Traditionally, you knew the box you were playing in and you could characterize your model within that box and it would be accurate within that box. But now with these factors, it’s not clear that you’ll always have a case on lithography where you’re going to be working within that box,” Ferry said.

He pointed to Synopsys’ compact model technology, which creates physically relevant parameters in the model. The goal is to reduce reliance on test-wafer measurements by capturing the most important physical effects in the model equations.

More research needed
Lithography expert Chris Mack observed that at least in the rigorous modeling that’s been happening, companies are heading in the right direction. “They’ve already been working on stochastic models so they can predict line-edge roughness for some time, and those models are getting better. There’s still a lot of fundamental work that needs to be done for the models to be good enough, but they are moving in the right direction.”

To make the models ‘good enough,’ more fundamental science research needs to be done at the molecular level, he said. “We have very expensive tools, very expensive scanners, and everyone understands that we need to understand the physics of imaging and how this works very, very well, and we invest a lot there. On the resist side, resist development remains more empirical and more proprietary, and resist companies develop their new materials without discussing the details of what’s inside those materials. If the resist company has a strong understanding of how they work, they are not explaining it. They’re not letting anyone else know. That poses a problem for people who want to develop models that are accurate at the molecular level, or, to be more precise, at the mesoscopic level, the sub-10nm regime. You need to understand what’s happening at the 1nm kind of level. It’s very, very difficult because there’s so much about the photoresist that remains a black box. If there is fundamental research going on within the resist companies, they’re not talking about it.”

This lack of fundamental research is not only going to be a barrier below 10nm, it’s a barrier today, Mack asserted. “We’ve got photoresists that we hope will someday be able to work at 20 millijoule per square centimeter exposure doses while providing high resolution and low line-edge roughness. And yet they don’t, and nobody knows for sure what the limits are. How low can we go? How low can we make the sensitivity of the resist? How low can we make the line-edge roughness for a sensitive resist? We’re left to debate in non-scientific ways where the future of EUV lithography is going to go.”
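One way to see why the 20 mJ/cm² figure Mack cites is in tension with low line-edge roughness is photon shot noise. At EUV’s 13.5nm wavelength each photon carries about 92 eV, so a given dose delivers relatively few photons per unit area, and the count in any small pixel fluctuates statistically. The estimate below is a simple Poisson-noise sketch, not a resist model; all numbers are illustrative.

```python
import math

# Photon shot-noise estimate for an EUV exposure dose of 20 mJ/cm^2.
EV = 1.602e-19                     # joules per electron-volt
photon_energy = 91.8 * EV          # one 13.5nm EUV photon, ~91.8 eV
dose = 20e-3                       # J/cm^2 (the dose Mack mentions)
NM2_PER_CM2 = 1e14                 # nm^2 in one cm^2

# Average photons arriving per square nanometer -- only about 14.
photons_per_nm2 = dose / NM2_PER_CM2 / photon_energy

# Photon counts in a (10nm)^2 pixel are Poisson-distributed, so the
# relative dose fluctuation is 1/sqrt(N).
n = photons_per_nm2 * 10 * 10
sigma = 1 / math.sqrt(n)           # relative dose noise, roughly 2.7%

print(f"{photons_per_nm2:.1f} photons/nm^2, "
      f"{100 * sigma:.1f}% dose noise in a 10nm pixel")
```

A few percent of random dose variation at the feature edge translates directly into edge placement uncertainty, which is why raising resist sensitivity (lowering dose) and lowering line-edge roughness pull in opposite directions.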

At every stage of research into a new lithography technology, he noted that there’s a place for commercial companies to be involved, and there’s a place for dedicated research organizations—either universities or research consortia—to be working on this problem. His view is that there has been far too little university involvement. The bulk of the research work on photoresists boils down to a very select group of professors from the University of Texas at Austin, Georgia Tech, SUNY Albany and Osaka University.

However, Mack explained, universities that research photoresist chemistries are hindered by the fact that they lack the expertise to build a commercially viable photoresist, so the model systems they study are far simpler than the photoresist systems actually in production. Nor can they simply buy a bottle of photoresist and work on it, because to do science you have to know what’s in the bottle. “The result is I don’t think we have some of the best university minds working on the problems of understanding at the 1nm scale how our photoresists work stochastically.”

Further, he noted that he is no stranger to the photoresist arena — he’s been focusing on stochastic modeling for resists for eight years — and has been very disappointed at the rate of progress in the basic understanding of how this works. “It goes beyond photoresist chemistry as well. Our metrology is inadequate…It boils down to a lack of forward-looking funding for this research.” He warned that industry players will pay more later for the data that’s needed, or they will live with less.

Looking ahead with OPC
In the realm of the more traditional computational lithography/optical proximity correction (OPC), figuring out how to bring stochastic lithographic behavior into an OPC model is going to be challenging. “I don’t think those groups face the same fundamental challenges that the rigorous modeling world faces,” Mack said. “If the rigorous modeling world can figure out how to describe at a fundamental level what’s going on inside of a resist, I’ll bet that the lumped parameter folks — the computational lithography/OPC kind of folks — will figure out how to use that to solve problems. I’m less worried about their ability to use line-edge roughness as an output of their simulators in the proper way. The OPC folks are very good at coming up with solutions. But if there’s no fundamental understanding, then their ability to drive practical solutions for their customers is very limited.”

Mack said a likely scenario is that EUV will show up on the market in the next couple of years with a marginally adequate light source that should be able to print 20 wafers an hour using a perfect photoresist, but the photoresist won’t live up to expectations. Long term for the industry, Mack believes 28nm is the last good node. “There might be someplace for 20 and 16 for some higher-end products, where the cost per transistor is higher but the transistors are better with lower power, and for those applications it’s worthwhile. But I think there is a very reasonable chance we’ll see lowest cost per transistor remain at the 28nm node. It’s going to be interesting to see what happens at 14/16 because we have finFETs available to the mass market at probably a reasonable price, but it will be higher cost per transistor.”

The real question is 10nm. “10nm is make or break. If we can’t get back on track with the lower cost per transistor at 10nm, then we will find Moore’s Law is over, at least in that sense. It’s not that innovation will end, but the old way of making progress will come to a halt. This is what I think is a very realistic scenario — that 10nm will not result in a lower cost per transistor compared to previous nodes, and the result will be a rethinking of what progress in the semiconductor industry actually means,” Mack concluded.


guest says:

The best time to introduce EUV was 28 nm, and that time is long gone. Fortunately, the main investors (Intel, TSMC, Samsung) could afford billions of dollars to throw away.

Grump says:

The world spent a lot of money on moon-shots, supersonic airliners, dirigibles, atom bombs, sailing ships and X-ray proximity printing. It does not mean they have to be used in the future. I add EUV to that list of ideas whose time has come and gone. The question is whether the companies dependent on EUV will go as well.
