Speeding Up Metrology At Advanced Nodes

Demand for higher reliability requires more advanced and historically slow equipment and methodologies, but improvements are on the way.


Experts at the Table: Semiconductor Engineering sat down to talk metrology at the most advanced nodes and the impact of using different substrates, with Frank Chen, director of applications and product management at Bruker Nano Surfaces & Metrology; John Hoffman, computer vision engineering manager at Nordson Test & Measurement; and Jiangtao Hu, senior technology director at Onto Innovation. What follows are excerpts of that conversation. To view part 1 of the discussion, click here.

L-R: Nordson’s Hoffman; Bruker’s Chen; Onto Innovation’s Hu.

SE: How much progress has been made toward speeding up metrology during manufacturing?

Hu: There are a few aspects to consider when it comes to integrating more metrology inline. Traditionally, many technologies like FTIR (Fourier-transform infrared spectroscopy) or acoustic were primarily geared toward offline monitoring. However, there’s a growing trend to incorporate these into more inline measurements. I believe there are several drivers for this shift. One is the necessity for 100% inspection, especially in advanced packaging. Another aspect is the increasing use of AI in edge computing, like analysis performed in cars. There’s a strong push from the automobile industry for improved safety and traceability in AI chips, which calls for more inline measurements. We’re definitely seeing an increase in inline metrology.

Now, there are some interesting challenges. How do you achieve full wafer coverage with metrology, which is inherently slow? We’re working very hard to improve the speed for as much wafer coverage as possible. But there are other directions to explore. For instance, borrowing the hotspot concept from front-end OPC. Instead of looking at the entire wafer, you focus only on hotspots. We also leverage analysis software, like yield analysis. Traditionally, you might look at inspection data to see if a defect will have an impact. But as processes become more complicated, you need to link across multiple metrology and inspection steps. Sometimes, even without full wafer measurement, we can work out within-wafer trend variations, and then use AI and analytical technologies to pinpoint corner cases and identify regions likely to have issues. That is another way to reduce the demand for sampling.
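The hotspot idea Hu describes, measuring only the sites most likely to fail rather than the full wafer, can be sketched as a simple sampling policy. This is illustrative only; the site names, the toy risk model, and the budget of ten sites are all invented for the example, not taken from any vendor's tool.

```python
# Illustrative sketch of hotspot-based sampling: instead of measuring every
# site on the wafer, rank candidate sites by a predicted risk score and
# measure only the riskiest few. All numbers here are made up.

def select_hotspots(sites, risk_scores, budget):
    """Return the `budget` sites with the highest predicted risk."""
    ranked = sorted(zip(sites, risk_scores), key=lambda p: p[1], reverse=True)
    return [site for site, _ in ranked[:budget]]

# 100 candidate die sites; in this toy model, risk rises toward the wafer edge.
sites = [f"die_{i}" for i in range(100)]
risk = [abs(i - 50) / 50 for i in range(100)]   # stand-in for a learned model

hotspots = select_hotspots(sites, risk, budget=10)
print(f"Measuring {len(hotspots)} of {len(sites)} sites")
```

In practice the risk score would come from cross-step analytics of the kind Hu mentions, but the payoff is the same: a 10x or greater reduction in sampled sites.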

Hoffman: When you scale down these features by a factor of two, you’re essentially quadrupling the number of pixels required in an imaging system. This, in turn, quadruples the amount of data, leading to a host of physics problems and data analysis challenges to sort through. That’s our challenge, right? As these processes shrink, how do we continue to develop products that allow us to perform the measurements our customers need? Certain technologies can only be miniaturized so much. Beyond that limit, we need to come up with something completely different. That’s why we’re using electron microscopes in certain applications and visible light microscopes in others, just to give a simple example. The challenge is to adapt and extend our technology stack as far as possible as the feature sizes continue to decrease.
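Hoffman's scaling argument is straightforward arithmetic: halving the feature size halves the required pixel pitch in both x and y, which quadruples the pixel count and the raw data volume. A quick sketch with assumed round numbers:

```python
# Halving pixel pitch in both x and y quadruples the pixel count, and with
# it the raw data volume. Field size and pitches are round numbers chosen
# purely for illustration.

def pixel_count(field_um, pitch_um):
    """Pixels needed to image a square field at a given pixel pitch."""
    per_side = field_um / pitch_um
    return per_side * per_side

field = 1000.0                       # 1 mm x 1 mm field of view
coarse = pixel_count(field, 2.0)     # 2 um pixels
fine = pixel_count(field, 1.0)       # 1 um pixels: features shrunk by 2x

print(f"{fine / coarse:.0f}x more pixels")   # 4x
```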

Chen: It’s important to add a bit of nuance here regarding what ‘inline’ actually means. It’s useful to consider it from the perspective of the cost of inspection or metrology, and this depends greatly on the application. What’s the end use-case, the criticality of the failures, and the costs associated with those failures? These factors lead to different evaluations. For instance, if introducing a technology costs a significant percentage of the final selling price, it might not be justifiable as an inline solution unless it drastically improves yield or there’s another compelling reason to invest in it. In such cases, you would optimize your sampling rates.

I’d like to emphasize that this is highly dependent on the application and the industry. What’s the ideal sampling rate? In R&D and ramp-up phases, especially when dealing with low volumes, it might be prudent to oversample to ensure coverage. This allows you to make an educated decision later about the ideal technology and sampling rate, and how much you can afford in terms of inspection costs, so you can reduce them at that point. In some industries, you may not want to reduce these costs at all. The cost of failure and the headache of dealing with escalations and resolving issues for months might not be worth the savings from reducing inspection cost.

SE: How does metrology technology adapt to new and different materials, like glass substrates?

Chen: In some cases, the substrate choice may enable or disqualify other technologies. One example is verifying alignment during TCB die attach. If the substrate is quartz or another transparent material, you can check the alignment optically through the backside. With silicon, that would not be possible. So it's interesting that certain substrate choices enable or disqualify certain technologies.

Hoffman: I would second that. All of these technologies assume certain physics. When you start changing out substrates, will the physics still work? And can you recover? A lot of our algorithms make certain assumptions about the physics that are going on. Will those algorithms still work, or do we have to come up with brand new algorithms because the physics has changed? That’s the challenge that everyone has to work through.

Hu: The introduction of new materials in 3D packaging, both on the wafer side and on the substrate side, is significant. However, the variation in these materials is actually smaller than in some other industries. In optical devices and power devices, for instance, a great variety of substrates is used, and many of them are transparent. Glass substrates are common in optical devices, and power-device materials like silicon carbide are often transparent, as well.

We’ve seen most of these variations, and they’re not particularly surprising. The bigger challenge lies in the form factor of the material. It may not always be in the shape or form of a traditional wafer, and this can pose significant challenges in terms of handling capability. From our perspective, the physics part of the measurement is not a very big challenge. But handling these materials might be more challenging, especially considering different customers may have different processes and requirements. How we tailor our approach to meet these specific customer requirements is crucial. In this age of rapid ramp-up, that’s one of the key issues we need to solve. But after this wave, we’re probably going to see a good deal of standardization.

SE: A key assumption for advanced packaging is that there will be known good die (KGD). But right now, there’s not a standard for what that means. Is that something that you see developing? How are companies going to deal with different vendors that have different testing parameters for their products, and which may not work the same way at the final packaging stage as it worked in the original test?

Chen: That’s a good point. What does ‘known good die’ really mean? What technology are you using to verify it’s a known good die? A lot of it is based on electrical performance, but even with electrical testing, how thorough are you? Are you conducting a lot of reliability and tolerance tests or running various functional programs? What exactly does ‘known good die’ mean?

Now, there’s going to be a class of dies that pass the electrical tests but are reliability risks, which is not an uncommon scenario. In one case, even after dies passed automotive-grade electrical tests, an end-customer decided to cross-section them to check the quality. What they found were voids and cracks on every single die. Yes, those dies passed electrical tests, but that raises a reliability concern, and traditional screening techniques didn’t catch it. Some of the quality and reliability strategies are still evolving as technology scales up from thousands to orders of magnitude more interconnects. Especially in automotive, which is more on the lagging side but has higher reliability requirements, we need to push for higher standards and qualifications. People are realizing that this is a big problem that needs to be addressed.

Hu: I want to add a bit about the data we generate and what drives this. You brought up the definition of a ‘known good die.’ How do you correlate some of the measurements to the test performance? Another thing people don’t really understand well is the reliability of the die. How do you correlate measurements such as TSV dimensions, coating quality, uniformity, and the corner profile of the structure to reliability? Those might not impact the yield today, but imagine all the dies that pass everything now. Five years down the road, if your TSV has a sharp corner or non-uniform coating, the device may break down. In the past, this might not have been a big issue, but when you’re putting those chips into automobiles or planes, it’s a different story. So beyond density, there’s another aspect the industry is starting to pick up on — the demand for standardization, more traceability, and more measurements, especially for critical steps. This drives more inline metrology for keeping a record for traceability.

Hoffman: One thing to consider is the changing ratios of features. Some of these features are becoming very tall compared to what they were in the past. Additionally, the packing of these features is becoming much denser. This presents a huge challenge for practical inspection, particularly for 3D height measurements. If you can’t see down to the substrate, how do you get down there and get a good measurement, especially as these aspect ratios get so big?

As features shrink, data rates increase and it becomes harder to maintain inspection speed. The trend is toward more and denser interconnects, and the way to achieve that is by shrinking features. So that’s how it all ties in. The shrinking of features is directly linked to the challenges in inspection and measurement as interconnect counts climb.

SE: We’ve touched a little on the amount of data being generated and the data management needs for analyses. How do you manage that much data and how long do you keep it around?

Chen: Having traceability is important, and there’s also the need to archive enough data for future access. This is a big concern for our customers. Strategies might be time-dependent. For example, if data is older than a certain age, you purge it. And then, as John mentioned, moving into summary data as quickly as possible is key. Being able to summarize the data and its key aspects makes it feasible for long-term storage. Then there are other considerations, like whether you want to keep compressed images or patch images for future reference. It’s useful for the review process to have some visuals to verify that the tool is running properly and that the measurements look good.
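The strategy Chen outlines, purging raw data past a certain age while keeping compact summaries for long-term storage, can be sketched as a simple retention policy. The field names, the 90-day threshold, and the record values below are all hypothetical, chosen only to make the idea concrete.

```python
# Sketch of an age-based retention policy: raw measurement records older
# than a cutoff are reduced to summary statistics before being purged.
# Field names and the 90-day threshold are invented for illustration.

from statistics import mean

RAW_RETENTION_DAYS = 90

def apply_retention(records, today):
    """Keep recent raw records; collapse older ones into one summary."""
    recent = [r for r in records if (today - r["day"]) <= RAW_RETENTION_DAYS]
    old = [r for r in records if (today - r["day"]) > RAW_RETENTION_DAYS]
    summary = None
    if old:
        values = [r["value"] for r in old]
        summary = {
            "count": len(values),
            "mean": mean(values),
            "min": min(values),
            "max": max(values),
        }
    return recent, summary

# Toy history: one measurement every 10 days for 200 days.
records = [{"day": d, "value": 10.0 + d * 0.01} for d in range(0, 200, 10)]
recent, summary = apply_retention(records, today=200)
print(len(recent), summary["count"])
```

A real system would summarize per lot or per tool rather than globally, and might also keep compressed patch images as Chen suggests, but the purge-then-summarize shape is the same.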

And then there’s the question of how to future-proof it. For example, we have some very good quality datasets and reference data that we may want to utilize in the future. Let’s say we develop some new algorithms or software that can better analyze the data, and then we want to re-use those datasets for training and optimization. How do we store and continue to build our reference database? All those are challenges to address and may require different strategies. Companies that have their own yield management software have an advantage. They already have a lot of the infrastructure in place to collect, store, and visualize the data.

Hoffman: The approach to data storage really is customer-specific. For example, if they’re selling a product with a 20-year guarantee, they better be storing that data for 20 years. But if you’re selling a product with a one-year guarantee, well, you probably don’t need to store it for more than a year. It really depends on the business context in which the customer is operating. And the type of data you need to store may vary over time, as well. You want to ensure that if you’re making guarantees, or if our customers are making guarantees to their customers, that if there’s a problem, they’ll be able to track it down. They have to know they have the right data to track those problems down. So it’s pretty contextual. I don’t think there’s a one-size-fits-all answer.

Hu: Down the road, if you have a chip failure, you should be able to trace back and look into which process and under what conditions it occurred. You don’t want to recall all the chips if you can narrow it down to a few of them. That’s definitely a driver for keeping more data. You can use brute force to save all the data, and some customers actually plan to do that. It might be a good thing for our industry, demanding more storage. We do think some techniques used for cross-step yield analysis and trend analysis are essential to reduce the dimensions, to compress the data for storage, but still maintain enough fidelity — not only to discover the issues that show up today, but maybe even five years down the road. When you try to go back, you still need to have enough information for further discovery of the hidden connections of these dimensions to long term reliability issues.

SE: What are some important areas in test and metrology your customers should be aware of, or need to focus on now?

Chen: I would start with a discussion of sampling rates, to verify how those rates were determined in the first place. It’s important to question deeply whether it was done from a risk management perspective. Did someone actually conduct a risk assessment, or was it just what the technology was capable of at the time, leaving us with a legacy recommended sampling rate? I hope there has been a more thorough investigation. Perhaps we need to revisit which technologies are now available to survey those risks. Are we guarding against the right risks?

If it’s an IDM (integrated device manufacturer), then of course you manage all the yields, you reap all the benefits, and you take on the cost. That’s a much easier equation. But for those with a complex supply chain, I hope they’re revisiting how to ensure quality across the chain. I’m sure they’re also tired of the finger-pointing regarding incoming or outgoing quality issues. So how do you verify? Do you have some checkpoints? Do you have some consolidated inspection centers where they’re conducting the tests and inspections?

Hoffman: A problem I am constantly faced with is that we have a system with certain performance characteristics, and there’s a need to establish some sort of ground truth measurement, then rate our system against this ground truth measurement and rate the competitor’s system against it as well. There’s an assumption that many people make, and I’m not pointing any fingers, but it’s a common belief that their ground truth sensor has no bias and is 100% accurate. This is particularly challenging for us because ground truth is often established with sensors that have different physics than our sensor. As a result, there are different biases to consider.

For example, how do you discuss the height of something with a surface texture? If you have a surface texture of a few microns and they want to talk about the height accuracy to a tenth of a micron, you really have to be very careful about how you define height in that context. That’s a subtlety many people aren’t aware of when they first start digging into this. Getting that right is an important part of the evaluation process. So, making people aware that this is even an issue is something we often have to do as we get into the really detailed technical evaluations of products.

Hu: I totally agree with John on that aspect. Mature customers have more or less moved past chasing absolute accuracy. They understand that process deviation is more important. But when we look at advanced packaging or some specialty markets, there are lots of newcomers who don’t really understand the difference between process variation and absolute accuracy. On one hand, there’s a need for standard definitions tailored to the application. For example, what’s your definition of wafer bow or warp? Ask ten people and you’ll get ten different definitions. So setting up a uniform standard, or working with a standard test sample, would be very helpful.

On the other side, I want to loop back to the earlier point about coming up with industry standards for AI or the automobile industry. That might be a tough task to unify opinions, but at least having some kind of industry-level trend or identifying important issues and requirements is crucial. The challenge we face is that if you’re talking to a hundred customers, they all have different requirements and perspectives. Some customers adhere to very high safety standards and want things done a certain way. Others are just happy to be able to produce a chip based on their financial situation, but they might not even be aware of, for example, safety requirements or standard requirements. That’s a difficulty, right? We can’t tailor a product to every customer. So, some kind of trend requirement, or at least general education in that area, would be helpful.

SE: What do you see happening in test and metrology over the next year or so?

Hu: I’ll start with the perspective on industry standards. We’ve talked a lot about how customer dependency is creating challenges, right? Everyone is jumping in with their own approach. I would say that some industry standards are likely to emerge. For instance, the automobile industry has its safety regulations, and organizations are starting to look at the bigger picture, coming up with standard guidelines for that area. For example, defining what constitutes a ‘good die’ is a pertinent question. Establishing such standards would be a significant step towards solving some of these major challenges. So a year from now, I hope we can talk more about industry standards, and answer questions like how to store data, how long to store it, whether to store only high-risk data or all data, and what kind of recommendations to follow. Discussing these topics would represent a pretty big development in the field of test and metrology.

Chen: Specifically, for the next year, AI is going to be the key topic. From our perspective, we see a lot of emphasis on comprehensive metrology solutions that cover both the advanced logic chips and HBM.
