Tennant’s Law

Will multibeam technology save direct-write ebeam lithography? Not likely.


It’s hard to make things small.  It’s even harder to make things small cheaply.

I was recently re-reading Tim Brunner’s wonderful paper from 2003, “Why optical lithography will live forever” [1] when I was reminded of Tennant’s Law [2,3].  Don Tennant spent 27 years working in lithography-related fields at Bell Labs, and has been running the Cornell NanoScale Science and Technology Facility (CNF) for the last five.  In 1999 he plotted up an interesting trend for direct-write-like lithography technologies:  there is a power-law relationship between areal throughput (the area of a wafer that can be printed per unit time) and the resolution that can be obtained.  Putting resolution (R) in nm and areal throughput (At) in nm²/s, his empirically observed relationship looks like this:

At = 4.3 R⁵

Even though the proportionality constant (4.3) represents a snapshot of technology capability circa 1995, this is not a good trend.  When cutting the resolution in half (at a given level of technology capability), the throughput decreases by a factor of 2⁵ = 32.  Yikes.  That is not good for manufacturing.
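To make the scaling concrete, here is a minimal Python sketch of Tennant’s Law; the resolutions are arbitrary examples, and the 4.3 constant is just the circa-1995 value quoted above.

```python
# A minimal sketch of Tennant's Law, At = 4.3 * R^5, using the
# circa-1995 proportionality constant quoted above.
def areal_throughput(resolution_nm: float) -> float:
    """Areal throughput in nm^2/s for a given resolution in nm."""
    return 4.3 * resolution_nm ** 5

for r in (100.0, 50.0, 25.0):
    print(f"R = {r:5.1f} nm -> At = {areal_throughput(r):.2e} nm^2/s")

# Halving the resolution costs a factor of 2^5 = 32 in throughput:
print(areal_throughput(100.0) / areal_throughput(50.0))  # 32.0
```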

What’s behind Tennant’s Law, and is there any way around it?  The first and most obvious problem with direct-write lithography is the pixel problem.  Defining one pixel element as the resolution squared, a constant rate of writing pixels will lead to a throughput that goes as R².  In this scenario, we always get an areal throughput hit when improving resolution just because we are increasing the number of pixels we have to write.  Dramatic increases in pixel writing speed must accompany resolution improvement just to keep the throughput constant.

But Tennant’s Law shows us that we don’t keep the pixel writing rate constant.  In fact, the pixel throughput (At/R²) goes as R³.  In other words, writing a small pixel takes much longer than writing a big pixel.  Why?  While the answer depends on the specific direct-write technology, there are two general reasons.  First, the sensitivity of the photoresist goes down as the resolution improves.  For electron-beam lithography, higher resolution comes from using a higher energy (at least to a point), since higher-energy electrons exhibit less forward scattering, and thus less blurring within the resist.  But higher-energy electrons also transfer less energy to the resist, thus lowering resist sensitivity.  The relationship is fundamental:  scattering, the mechanism that allows an electron to impart energy to the photoresist, also causes a blurring of the image and a loss of resolution.  Thus, reducing the blurring to improve resolution necessarily results in lower sensitivity and thus lower throughput.
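Putting the two pieces together: if one pixel has area R² and At = 4.3 R⁵, then the pixel writing rate is At/R² = 4.3 R³.  A quick sketch of that decomposition:

```python
# Decomposing Tennant's Law: pixel area = R^2, so the pixel writing
# rate is At / R^2 = 4.3 * R^3 pixels/s, and the time per pixel
# scales as 1/R^3.
def pixel_rate(resolution_nm: float) -> float:
    """Pixels written per second at a given resolution."""
    return 4.3 * resolution_nm ** 3

for r in (100.0, 50.0):
    print(f"R = {r:5.1f} nm: {pixel_rate(r):.2e} pixels/s, "
          f"{1.0 / pixel_rate(r):.2e} s per pixel")
# Halving R cuts the pixel rate by 8x and quadruples the pixel count
# for a fixed area -- the combined 32x hit of Tennant's Law.
```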

(As an aside, higher electron energy results in greater backscattering, so there is a limit to how far resolution can be improved by going to higher energy.)

Chemically amplified (CA) resists have their own throughput versus resolution trade-off.  CA resists can be made more sensitive by increasing the amount of baking done after exposure.  But this necessarily results in a longer diffusion length of the reactive species (the acid generated by exposure).  The greater sensitivity comes from one acid (the result of exposure) diffusing around and finding multiple polymer sites to react with, thus “amplifying” the effects of exposure and improving sensitivity.  But increased diffusion worsens resolution – the diffusion length must be kept smaller than the feature size in order to form a feature.
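As a rough illustration of this trade-off, a simple Fickian estimate puts the acid diffusion length at about √(2Dt); the diffusivity and bake times in the sketch below are made-up illustrative numbers, not measured resist parameters.

```python
import math

# Fickian estimate of acid diffusion length: L = sqrt(2 * D * t),
# where D is the acid diffusivity and t the post-exposure bake time.
# D = 5 nm^2/s and the bake times are illustrative assumptions.
def diffusion_length_nm(D_nm2_per_s: float, bake_time_s: float) -> float:
    return math.sqrt(2.0 * D_nm2_per_s * bake_time_s)

for t in (30.0, 60.0, 120.0):
    print(f"bake {t:5.0f} s -> diffusion length ~ "
          f"{diffusion_length_nm(5.0, t):.0f} nm")
# Longer bake -> more amplification (better sensitivity), but once the
# diffusion length approaches the feature size, the feature is lost.
```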

Charged particle beam systems have another throughput/resolution problem:  like charges repel.  Cranking up the current to get more electrons to the resist faster (that is, increasing the electron flux) crowds the electrons together, increasing the amount of electron-electron repulsion and blurring the resulting image.  These space-charge effects ultimately doomed the otherwise intriguing SCALPEL projection e-beam lithography approach [4].

The second reason that smaller pixels require more write time has to do with the greater precision required when writing a small pixel.  Since lithography control requirements scale as the feature size (a typical specification for linewidth control is ±10%), one can’t simply write a smaller pixel with the same level of care as a larger one.  And it’s hard to be careful and fast at the same time.

One reason why smaller pixels are harder to control is the stochastic effects of exposure:  as you decrease the number of electrons (or photons) per pixel, the statistical uncertainty in the number of electrons or photons actually used goes up.  The uncertainty produces linewidth errors, most readily observed as linewidth roughness (LWR).  To combat the growing uncertainty in smaller pixels, a higher dose is required.
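Assuming simple Poisson (shot-noise) statistics, the relative dose uncertainty for a pixel receiving N particles is 1/√N.  The sketch below (the dose value is an arbitrary assumption) shows how the noise grows as pixels shrink:

```python
import math

# Poisson shot noise: with N particles per pixel, the relative dose
# uncertainty is 1/sqrt(N). The dose value is an arbitrary assumption.
def relative_noise(dose_per_nm2: float, pixel_size_nm: float) -> float:
    n = dose_per_nm2 * pixel_size_nm ** 2  # particles per pixel
    return 1.0 / math.sqrt(n)

dose = 10.0  # particles per nm^2 (illustrative only)
for r in (50.0, 25.0, 12.5):
    print(f"R = {r:4.1f} nm -> relative dose noise {relative_noise(dose, r):.2%}")
# Holding the noise constant while halving R requires 4x the dose per
# unit area -- yet another throughput penalty for smaller pixels.
```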

Other throughput limiters can also come into play for direct-write lithography, such as the data rate (one must be able to supply the information as to which pixels are on or off at a rate at least as fast as the pixel writing rate), or stage motion speed.  But assuming that these limiters can be swept away with good engineering, Tennant’s Law still leaves us with two important dilemmas:  as we improve resolution we are forced to write more pixels, and the time to write each pixel increases.
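For a sense of scale on the data-rate limiter, here is a back-of-envelope sketch; the wafer size and write time are assumptions chosen only for illustration.

```python
import math

# Back-of-envelope data-rate estimate: the pattern generator must
# stream on/off decisions at least as fast as pixels are written.
# Wafer size (300 mm) and write time (one hour) are assumptions
# chosen only to show the scale of the problem.
wafer_area_nm2 = math.pi * (150e6) ** 2  # 300 mm wafer, radius in nm
write_time_s = 3600.0
for r_nm in (50.0, 25.0):
    pixels = wafer_area_nm2 / r_nm ** 2
    print(f"R = {r_nm} nm: {pixels:.2e} pixels, "
          f"{pixels / write_time_s:.2e} pixel decisions/s")
```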

For proponents of direct-write lithography, the solution to its throughput problems lies with multiple beams.  Setting aside the immense engineering challenges involved with controlling hundreds or thousands of beams to a manufacturing level of precision and reliability, does a multiple-beam approach really get us around Tennant’s Law?  Not easily.  We still have the same two problems.  Every IC technology node increases the number of pixels that need to be written by a factor of 2 over the previous node, necessitating a machine with at least twice the number of beams.  But since each smaller pixel takes longer to write, the real increase in the number of beams is likely to be much larger (more likely a factor of 4 rather than 2).  Even if the economics of multi-beam lithography can be made to work for one technology node, it will look very bad for the next technology node.  In other words, writing one pixel at a time does not scale well, even when using multiple beams.
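The node-over-node arithmetic looks like this (the starting beam count is purely hypothetical):

```python
# Node-over-node beam-count scaling: each node doubles the pixel
# count, and the slower writing of smaller pixels adds (per the
# argument above) roughly another 2x. The starting beam count of
# 1,000 is purely hypothetical.
beams = 1_000
for node in range(1, 5):
    beams *= 4  # ~2x more pixels times ~2x longer per pixel
    print(f"{node} node(s) later: ~{beams:,} beams for constant throughput")
```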

In a future post, I’ll talk about why Tennant’s Law has not been a factor in optical lithography – until now.

[1]  T. A. Brunner, “Why optical lithography will live forever”, JVST B 21(6), p. 2632 (2003).

[2]  Donald M. Tennant, Chapter 4, “Limits of Conventional Lithography”, in Nanotechnology, Gregory Timp Ed., Springer (1999) p. 164.

[3]  Not to be confused with Roy Tennant’s Law of Library Science:  “Only librarians like to search, everyone else likes to find.”

[4] J.A. Liddle, et al., “Space-charge effects in projection electron-beam lithography: Results from the SCALPEL proof-of-lithography system”, JVST B 19(2), p. 476 (2001).



1 comment

memister says:

It may be related to increasing energy density for lower resolution. The energy of a photon is inversely proportional to the wavelength. So the amount of energy going into a square nanometer is inversely proportional to the wavelength.
