Fab processes that enable stacked transistors, hybrid bonding, and advanced packaging are driving the need for more and better measurements.
Metrology and inspection are dealing with a slew of issues tied to 3D measurements, buried defects, and higher sensitivity as device features continue to shrink to 2nm and below.
This is made even more challenging by increasing pressure to ramp new processes more quickly. Metrology tool suppliers must stay ahead of current needs by a process node or two to ensure solutions are ready to meet tighter market windows for next-generation logic, memory, and specialty markets such as power modules and sensors.
Leading-edge fabs also are integrating advanced data analytics platforms for critical measurement and inspection to enhance precision and make the most of data from various sources. “Data is said to be the gold of the 21st century, but really understanding the data is the gold,” said Dieter Rathei, CEO of DR Yield. “What’s happening all around the semiconductor industry is we have far too much data. The value is hidden in the data, and to mine the gold from it, you need tools.”
Rathei said the right predictive analytics platform can help fab engineers improve efficiency, quality, output, and yield through earlier identification of production issues.
Big changes at tiny dimensions
Significant changes will be required in inspection and metrology, as well, as leading-edge designs shift to novel 3D architectures, complementary FETs (CFETs), 3D-ICs based on hybrid bonding, and various types of advanced packaging.
“Metrology and inspection have entered a new era,” said Anne-Laure Charley, R&D manager at imec, in a recent ITF presentation. “We are indeed transitioning from a world where metrology was the first step to be reduced or even removed, to a world where it has become a real technology enabler. And we have new challenges in front of us that drive new, innovative approaches.”
Pointing to the massive development effort involved in enabling EUV lithography and metrology to meet the first silicon needs of CFETs, Charley emphasized the need to detect and measure hidden features and defects in 3D structures that are becoming smaller and thinner (see figure 1).
“3D system architectures imply detection of voids buried in metal,” she said. “You have 10nm defects or nanometer-thin layers that need to be characterized. Edge placement error (EPE), which was introduced over 15 years ago, includes contributions from CD, overlay, OPC, and local and global variability. ASML predicts that 10 years from now, every parameter will have to be controlled below a nanometer.”
Fig. 1: Nanosheet transistors, hybrid bonds and TSVs, and scaled features challenge metrology and inspection tooling. Source: imec
Both optical metrology and SEM-based tools are mainstream and in production today, while X-ray diffraction imaging is meeting specific production needs, including in advanced packaging.
“We have experience with customers using X-ray diffraction imaging with CoWoS (TSMC’s chip on wafer on substrate), where they’re effectively stacking chips on top of each other and then grinding silicon from the substrate, because it is effectively dead mass in the structure,” said John Wall, UK site manager at Bruker. “What they found is that the XRDI technique can detect cracks, edge defects, and multiple problems that can cause the device to fail catastrophically during the back-end process and before packaging.”
More stringent requirements for chiplets and chips on substrate call for more advanced quality control during the fan-out process. White light interferometry provides simultaneous metrology of vertical and lateral CDs, such as via depth, copper or photoresist thickness, and overlay between fan-out layers. “Metrology is critical for advanced 2.5D packaging. Our capabilities become integrated with the manufacturing process flow at our key customers,” said Samuel Lesko, general manager for Stylus and Optical Metrology at Bruker.
In some cases, buried defects can be detected electrically using e-beam voltage contrast (VC) measurements. “If you have an inspection step post-CMP, and you have a buried void under the contact that causes that contact to basically become open, you may not see the void using optical inspection,” explained Indranil De, vice president of engineering at PDF Solutions. “Let’s say it’s a tungsten contact. That tungsten contact is electrically disconnected from the underlying metal because of that buried void, or that contact could be touching another metal line underneath. So it’s causing an electrical short or open that can be detected in manufacturing using voltage contrast inspection. In leading-edge die, such as at the 3nm node with 12nm to 14nm feature sizes, there are three contacts per transistor. So the contact layer is the most dense, because the number of contacts is three times 50 million, or however many transistors are on the die.”
As a result, preparation involves mining the layout for sensitive contacts, vias, or metal lines, and then performing VC inspection only along those critical pathways.
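As a rough illustration of that layout-mining step, the sketch below filters a set of contacts down to voltage-contrast care areas. The `Contact` fields, the layer name, and the selection thresholds are hypothetical stand-ins, not PDF Solutions’ actual flow, which works from full netlist and layout extraction.

```python
# A minimal sketch of pre-inspection layout mining for voltage-contrast (VC)
# care areas. The data structures and selection rules are hypothetical and
# illustrative only -- real flows use full netlist/layout extraction.
from dataclasses import dataclass

@dataclass
class Contact:
    x_um: float             # layout coordinates
    y_um: float
    layer: str              # e.g. "CT" for tungsten contacts
    grounded_path: bool     # True if the contact should reach a grounded net
    neighbor_gap_nm: float  # spacing to the nearest unrelated conductor

def vc_sensitive(c: Contact, min_gap_nm: float = 20.0) -> bool:
    """Flag contacts where a buried void or bridge would flip the expected
    voltage-contrast signal (bright/dark) at e-beam inspection."""
    open_risk = c.layer == "CT" and c.grounded_path   # void -> floating contact
    short_risk = c.neighbor_gap_nm < min_gap_nm       # bridge to adjacent line
    return open_risk or short_risk

def build_care_areas(contacts: list[Contact]) -> list[tuple[float, float]]:
    """Return coordinates to visit with the e-beam tool, skipping the rest."""
    return [(c.x_um, c.y_um) for c in contacts if vc_sensitive(c)]

if __name__ == "__main__":
    sample = [
        Contact(1.0, 2.0, "CT", True, 35.0),
        Contact(1.4, 2.0, "CT", False, 14.0),
    ]
    print(build_care_areas(sample))   # only the risky sites are inspected
```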
Beginning around 2010 with the first 3D devices, scatterometry solidified its place in the process control loop because it can measure structure dimensions that are invisible to top-down approaches, such as re-entrant features and gratings with sidewall angles greater than 90 degrees. Scatterometry, which combines spectroscopic ellipsometry and reflectometry, is so called because feature dimensions and shapes are calculated from the patterns of light scattered by a periodic array. More recently, mid-infrared scatterometry has enabled greater contrast between materials with similar optical properties, such as silicon dioxide and silicon nitride dielectrics. In nanosheet transistors, for instance, IR scatterometry measures the critical silicon nitride recess, and it also is used to characterize deep structures such as the 3D NAND channel.
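The underlying regression idea can be illustrated with a deliberately simplified sketch: fit a parameterized optical model to a measured spectrum and report the geometry that best explains it. Production scatterometry solves a rigorous model of the periodic grating (e.g., RCWA); the single-film reflectance model and refractive indices below are toy assumptions used only to show the principle.

```python
# A toy illustration of model-based scatterometry: regress a parameterized
# optical model onto a measured spectrum to recover a geometric parameter.
# Real tools solve rigorous grating models for re-entrant 3D profiles; here a
# single SiO2 film on Si stands in for the forward model.
import numpy as np
from scipy.optimize import curve_fit

N_AIR, N_OXIDE, N_SI = 1.0, 1.46, 3.9   # simplified, non-dispersive indices

def reflectance(wl_nm, thickness_nm):
    """Normal-incidence reflectance of a single film on silicon."""
    r1 = (N_AIR - N_OXIDE) / (N_AIR + N_OXIDE)
    r2 = (N_OXIDE - N_SI) / (N_OXIDE + N_SI)
    beta = 2 * np.pi * N_OXIDE * thickness_nm / wl_nm
    r = (r1 + r2 * np.exp(-2j * beta)) / (1 + r1 * r2 * np.exp(-2j * beta))
    return np.abs(r) ** 2

# Synthetic "measurement": a 120nm film plus noise.
wl = np.linspace(250, 800, 200)
measured = reflectance(wl, 120.0) + np.random.normal(0, 0.002, wl.size)

# Fit the model to the spectrum to recover the thickness.
(best_thickness,), _ = curve_fit(reflectance, wl, measured, p0=[100.0])
print(f"fitted thickness: {best_thickness:.1f} nm")
```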
The technology will become even more important with the introduction of CFET devices, which scale by stacking the pMOS and nMOS transistors, somewhere around the 7Å technology node (with an 18nm metal pitch). The sensitivity of scatterometry tools depends on the optical contrast between adjacent materials and on the volume of material that the beam interacts with.
“IR scatterometry extends from nanosheet to CFET architectures,” said Nick Keller, director of applications development at Onto Innovation. “And CFETs are an interesting case, because you’re moving up vertically. From an optical standpoint, you’re actually getting more signal because you have more material volume per unit area, so more interaction with the light. But the rub of that is that customers want to extract more parameters. So the challenges may balance out. You’re getting more sensitivity, so more information, but since more parameters are important, there’s potentially more correlation between parameters.”
Others agree. “Scatterometry is a powerful metrology technique, which can extract many parameters of interest,” said imec’s Charley. In addition, correlation of scatterometry results with reference data from AFM, for instance, can be improved with appropriate machine learning algorithms. “When we introduce machine learning on top of our standard methods, we significantly improve machine-to-reference correlation.”
She noted that machine learning also helps to improve the signal-to-noise ratio of CD-SEM measurements.
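A minimal sketch of that hybrid approach, using synthetic data, is shown below. A plain linear calibration and an ML regressor are both scored against AFM-style reference values, with cross-validated R² standing in for machine-to-reference correlation. The feature set and models are illustrative assumptions, not imec’s or any vendor’s implementation.

```python
# A minimal sketch of machine learning applied on top of a standard optical
# model: spectral features are regressed against AFM reference CDs to tighten
# machine-to-reference correlation. Data here are synthetic; real training
# sets pair production scatterometry measurements with AFM/TEM references.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_sites = 400
spectra = rng.normal(size=(n_sites, 25))            # stand-in spectral features
# AFM reference CD with a mildly nonlinear dependence on the spectra.
afm_cd = 20 + 3 * spectra[:, 0] + 1.5 * spectra[:, 1] ** 2 \
         + rng.normal(scale=0.3, size=n_sites)

linear = LinearRegression()                         # "standard" calibration
ml = GradientBoostingRegressor(random_state=0)      # ML on top of it

r2_linear = cross_val_score(linear, spectra, afm_cd, cv=5, scoring="r2").mean()
r2_ml = cross_val_score(ml, spectra, afm_cd, cv=5, scoring="r2").mean()
print(f"linear R2 to AFM: {r2_linear:.2f},  ML R2 to AFM: {r2_ml:.2f}")
```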
Despite such advances, optical inspection may be running out of steam. “The optical inspection process, often considered the workhorse in defect detection, faces limitations in terms of wavelength and resolution. As critical dimensions continue to shrink in advanced nodes, optical inspection is being pushed to its limits. And despite throughput improvements, full-die and full-wafer e-beam inspection still have a long way to go before they are ready for high-volume manufacturing,” said Le Hong, director of fab solutions, Calibre Semi Solutions at Siemens EDA. “Furthermore, optimizing the sensitivity of optical inspection to capture genuine defects while minimizing false/nuisance ones has become increasingly challenging.”
To address these challenges, Hong points to growing demand for software capable of intelligently downsampling from optical to scanning electron microscope (SEM) review, particularly in high nuisance regimes. “This software must also possess the performance required for inline use in HVM. Siemens EDA’s Calibre SONR product offers a cutting-edge solution that leverages AI-driven algorithms for optical to SEM review downsampling. This methodology is not only design and process aware but also boasts performance that is fully inline ready for HVM applications,” said Hong. “The feature-driven downsampling algorithm is well-suited to effectively handle the common occurrence of high nuisance counts during hot scans. Additionally, it demonstrates a remarkable tolerance towards the limited spatial correlation between optical inspection and design. With SONR downsampling, the potential exists for a significant improvement in defect hit rates, averaging 5 times better than the current standard.”
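Generically, feature-driven downsampling can be pictured as scoring each optical hot-scan candidate with a classifier trained on previously reviewed defects, then sending only the highest-scoring sites to SEM. The sketch below is a toy version of that idea with made-up features and counts; it is not the Calibre SONR algorithm.

```python
# A generic sketch of feature-driven downsampling from optical hot-scan
# candidates to SEM review (not Siemens' SONR algorithm). Each candidate
# carries design- and signal-derived features; a classifier score is used to
# suppress nuisance and pick a review-sized sample.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 5000                                  # previously reviewed candidates
features = rng.normal(size=(n, 6))        # e.g. signal strength, pattern density
labels = (features[:, 0] + 0.5 * features[:, 1] > 1.2).astype(int)  # SEM truth

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features, labels)                 # train on past SEM review outcomes

new_candidates = rng.normal(size=(20000, 6))      # today's hot-scan output
score = clf.predict_proba(new_candidates)[:, 1]   # probability of a real defect

budget = 300                                      # SEM review slots available
review_idx = np.argsort(score)[::-1][:budget]     # highest-scoring sites first
print(f"sending {review_idx.size} of {len(new_candidates)} candidates to SEM")
```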
Preparing for hybrid bonding
A number of fabs are exploring which metrology and inspection methods are best suited for hybrid bonding, both before and after the bonding process. Hybrid bonding brings together small copper pads (<10 µm) that are slightly recessed in dielectric fields (typically SiCN). White light interferometry (WLI), an optical profiling technique, can be used to characterize the CMP edge roll-off at the wafer edge, but it also may be used to measure the copper recess depth prior to bonding.
Phase-shift interferometry (PSI) mode in WLI is used to monitor topography at the wafer level, including copper recess depth. There are strict specifications on recess depth across the wafer. Too little copper can cause opens, while too much can cause copper extension beyond the barrier oxide and potential shorts.
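A simple way to picture that monitoring step is a two-sided check of the per-pad recess map against a spec window, as in the sketch below. The limits and the simulated data are placeholders; real recess specs are process- and pad-pitch-specific.

```python
# A minimal sketch of post-CMP recess-depth monitoring ahead of hybrid
# bonding: per-pad recess values (e.g. from WLI/PSI) are checked against a
# two-sided window, since too deep risks opens and too shallow risks copper
# protrusion and shorts. The limits below are placeholders, not a real spec.
import numpy as np

LOWER_NM, UPPER_NM = 2.0, 6.0            # hypothetical recess window

def recess_report(recess_nm: np.ndarray) -> dict:
    """Summarize a wafer's pad-recess map against the spec window."""
    too_shallow = np.count_nonzero(recess_nm < LOWER_NM)   # protrusion/short risk
    too_deep = np.count_nonzero(recess_nm > UPPER_NM)      # open risk
    return {
        "mean_nm": float(recess_nm.mean()),
        "range_nm": float(recess_nm.max() - recess_nm.min()),
        "pct_out_of_spec": 100.0 * (too_shallow + too_deep) / recess_nm.size,
    }

# Example: a million simulated pad measurements.
rng = np.random.default_rng(2)
pads = rng.normal(loc=4.0, scale=0.7, size=1_000_000)
print(recess_report(pads))
```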
When it comes to measuring copper recess, there is overlap between metrology techniques, particularly between the WLI profiler and the other leading method, atomic force microscopy (AFM). While the WLI profiler combines 4X higher throughput with the ability to map millions of copper pads in the same die, AFM provides the exact offset between oxide and copper, complementing the WLI measurements. AFM also spans a wide range of scan speeds and scan lengths, covering full-die flatness post-CMP as well as pad recess.
Combining metrology and analytics
One of the greatest concerns for process and yield engineers today is controlling process variability, which shows up within a wafer, as well as in wafer-to-wafer and lot-to-lot results. In fact, across-wafer signatures from many wafer processes are common (see figure 2).
“The location of the die on the wafer is paramount to understanding any type of variation that you are seeing, because on a typical wafer, the optimal performing die (considering both performance and power) form a donut shape,” said Nir Sever, senior director of business development at proteanTecs. “Dies at the center and the edge of the wafer behave worse than the rest.”
Fig. 2: Optimal-performing dies reside in a donut-shaped pattern on a 300mm wafer. Source: proteanTecs
Such timing and power variations can be matched with die-level identifiers. “The fundamental way to tie any telemetry information to where the die is located on the wafer comes from something we call ULT, or unit-level identifier,” said Sever. “Usually at the end of wafer sort you are programming an ID for each die into a non-volatile memory, and from then on you can track the authentication ID to its exact location on the wafer, the wafer number, the lot number, and its manufacturing history.”
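Conceptually, that lookup is a mapping from each die’s ID to its lot, wafer, and (x, y) position, which then lets telemetry be binned by radius to expose center, edge, or donut signatures. The sketch below uses hypothetical ID formats, die pitch, and telemetry values purely for illustration.

```python
# A minimal sketch of using a unit-level ID to tie test or field telemetry
# back to wafer position, then binning by radius to expose center/edge/donut
# signatures. The ID format and telemetry fields are hypothetical.
import math
from collections import defaultdict

# ULT database: die ID -> (lot, wafer, die x index, die y index)
ult_db = {
    "LOTA-W03-0412": ("LOTA", 3, 12, -4),
    "LOTA-W03-0413": ("LOTA", 3, 0, 1),
    # ... populated at wafer sort when the ID is fused into NVM
}

DIE_PITCH_MM = (8.0, 9.5)   # die step in x and y, placeholder values

def radius_mm(die_x: int, die_y: int) -> float:
    """Distance of the die center from the wafer center."""
    return math.hypot(die_x * DIE_PITCH_MM[0], die_y * DIE_PITCH_MM[1])

def bin_by_radius(telemetry: dict[str, float], band_mm: float = 30.0):
    """Average a telemetry metric (e.g. measured Fmax) per radial band."""
    sums, counts = defaultdict(float), defaultdict(int)
    for die_id, value in telemetry.items():
        _, _, dx, dy = ult_db[die_id]
        band = int(radius_mm(dx, dy) // band_mm)
        sums[band] += value
        counts[band] += 1
    return {band: sums[band] / counts[band] for band in sums}

print(bin_by_radius({"LOTA-W03-0412": 3.21, "LOTA-W03-0413": 3.35}))
```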
Variation in electrical performance of finished die becomes especially important in advanced packaging applications that involve chiplets, such as HBM4 DRAM dies, heterogeneous stacks of SRAM and processors, or any number of chiplet combinations.
Such unique identifiers are common in digital circuits, but analog parts and small discrete devices typically do not have them. Individual die IDs are essential for silicon lifecycle management, which tracks device performance from design through manufacturing and field use to end of life. The identifiers also help engineers identify latent defects that can precipitate into hard failures during field use, and ensure that the parts being assembled are traceable.
“Certain problems are introduced with each new technology,” said Jayant D’Souza, principal technical product manager at Siemens Digital Industries Software. “For example, with gate-all-around transistors, the transistor failures we see are more subtle than they were previously. Additionally, the cost of failure analysis and the wafer costs themselves have been increasing, making every learning cycle cost much more.”
This is particularly evident in the rollout of leading-edge processes. “There are three major new developments happening during yield ramps,” said Matt Knowles, senior director of product management at Synopsys. “First, as the process nodes and the transistor designs get more sophisticated, the process windows are becoming much more sensitive. There are a lot more design-related defect modes, soft failures that only happen at a certain voltage or certain timing conditions, unlike hard failures. As a result, customers need to pull this design-related information into the analytics platform itself and be able to do these product-level correlations in an automated fashion.”
Knowles said the other two developments are the continuance of scan chain failures well into production ramps, and an escalation in test count, especially with AI chips.
“We see that the scan chain failures are continuing into more mature nodes and into more mature processes,” he said. “It used to be that the scan chain failure rate was very high during initial ramp, but then after you solved these issues, the numbers went down. In early ramp, you’d have maybe a 60% versus 40% split of scan chain versus logic failures, and then you’d beat it down to where scan chain was more like 20% to 30%. But what we’ve heard is that the scan chain failures are continuing. Some of the failures are design-centric and some of them are defect-centric. So customers have to then collect more scan chain failures, and potentially do more chain diagnosis, which requires analysis tools that can collect all that data, analyze all that data, and help them find root cause.”
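Tracking that shift is largely a bookkeeping exercise over volume diagnosis results: count how many failing die are attributed to scan chains versus logic and watch the split over the ramp. A minimal sketch with hypothetical diagnosis records follows.

```python
# A minimal sketch of tracking the scan-chain versus logic failure split
# across a ramp, the kind of trend described above. The diagnosis records are
# hypothetical; real flows pull these from volume scan-diagnosis results.
from collections import Counter

def failure_split(diagnosis_records: list[dict]) -> dict[str, float]:
    """Return the percentage of failing die attributed to each category."""
    counts = Counter(rec["category"] for rec in diagnosis_records)
    total = sum(counts.values())
    return {cat: 100.0 * n / total for cat, n in counts.items()}

ramp_week = [
    {"die": "D001", "category": "scan_chain"},
    {"die": "D002", "category": "logic"},
    {"die": "D003", "category": "scan_chain"},
    {"die": "D004", "category": "logic"},
    {"die": "D005", "category": "logic"},
]
print(failure_split(ramp_week))   # e.g. {'scan_chain': 40.0, 'logic': 60.0}
```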
Knowles also pointed to a rapid rise in test counts. “Especially with some of these hyperscaler chips, they have thrown so many different types of tests at them to catch things like silent data corruption that the test count has gone from tens of thousands to maybe hundreds of thousands, and we are preparing for 1 million tests. The sheer volume of data puts enormous pressure on your analytics platform.”
Working the data
The analytics platform associated with the fab’s yield management system (YMS) can provide early warnings for process anomalies, identification of quality-compromised parts, and better insight into production data. “With monitoring rules, our algorithms can predict failures based on early knowledge about deviations or anomalies in the data, essentially allowing manufacturers to act on early warning signs instead of reacting to major manufacturing issues when it’s already too late – thereby avoiding costly production incidents,” said DR Yield’s Rathei. “In addition, our user-friendly data analysis capabilities provide further deep insights for production optimization.”
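As a simple illustration of such monitoring rules (not DR Yield’s algorithms), the sketch below flags a parameter when any lot exceeds 3-sigma control limits, or when several consecutive lots drift to one side of target, which surfaces a trend well before a spec limit is violated.

```python
# A minimal sketch of statistical monitoring rules for early warnings: flag a
# parameter that goes beyond control limits or trends one-sided for several
# consecutive lots. Thresholds and run lengths are illustrative assumptions.
import numpy as np

def drift_alerts(values: np.ndarray, mean: float, sigma: float,
                 run_length: int = 7) -> list[str]:
    alerts = []
    z = (values - mean) / sigma
    for i, zi in enumerate(z):
        if abs(zi) > 3:                                   # classic 3-sigma rule
            alerts.append(f"lot {i}: beyond 3 sigma ({zi:+.1f})")
    for i in range(run_length - 1, len(z)):
        window = z[i - run_length + 1 : i + 1]
        if np.all(window > 1) or np.all(window < -1):     # sustained one-sided drift
            alerts.append(f"lot {i}: {run_length}-lot drift on one side of target")
    return alerts

history = np.array([0.1, -0.2, 0.3, 0.9, 1.2, 1.4, 1.3, 1.6, 1.8, 2.1, 1.9])
print(drift_alerts(history, mean=0.0, sigma=1.0))
```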
The demand for software that facilitates design-to-manufacturing yield optimization has witnessed a significant surge in recent years according to Siemens EDA’s Hong. “Foundries, in particular, are focusing on AI-driven process optimization, wafer process golden path discovery, and root cause analysis for design-to-yield limitations. Our Calibre Fab Insight software suite assists foundries in process optimization while providing valuable design insights. Additionally, the Calibre SONR software utilizes machine learning algorithms to decipher the contribution of design parameters to systematic yield-limiting defects. It also automates the generation of defect-avoiding DFM check libraries.”
On the other hand, Hong said fabless companies are more inclined to move beyond the traditional approach of geometric pattern match-based design fixes. “They require software that can efficiently extract process-related features per gate at a full-chip level. Furthermore, a high-performance ML-based algorithm is needed to enable tunable degrees of fuzzy matching. By combining these capabilities, fabless companies can initiate yield learning as early as the T0 test chip tapeout and seamlessly extend this learning to the first product chip tapeout.”
Several companies are partnering across the industry to bring vast volumes of data into one platform, or even to have two platforms exchange information, as is the case with PDF Solutions’ FIRE platform and Siemens’ Tessent. This helps address layout sensitivities that cause systematic defects in early ramp phases.
Fig. 3: A flow combining PDF Solutions’ FIRE analytics with the Siemens Tessent platform identifies and separates random and systematic defects more quickly using root cause deconvolution (RCD). Source: PDF Solutions
“Defects during this phase can be driven by either process-related root causes or design-related root causes, or both. Volume scan diagnosis, combined with root cause deconvolution (RCD), creates defect Paretos across the population of failing die,” said Tomasz Brozek, technical fellow at PDF Solutions.
“The root causes modeled by RCD have been successful at finding subtle random and process-related defects. With newer technology nodes, such as 5nm or 3nm, design-related systematic defects continue to contribute to the loss mechanisms well into volume production,” added Brozek.
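The general deconvolution idea can be sketched with a toy example: each failing die reports a set of root-cause candidates its diagnosis cannot distinguish, and iterating over the whole failing population resolves the ambiguity into a Pareto. The code below is an EM-style illustration with invented categories; it is not PDF Solutions’ RCD model.

```python
# A toy illustration of deconvolving ambiguous per-die diagnosis callouts into
# a population-level root-cause Pareto. This is NOT PDF Solutions' RCD model,
# just a simple EM-style reweighting that shows how statistics shared across
# many failing die can resolve per-die ambiguity.
from collections import defaultdict

# Each failing die reports the root-cause candidates its diagnosis could not
# distinguish (hypothetical categories).
die_callouts = [
    {"via_open", "cell_M1_short"},
    {"via_open"},
    {"via_open", "random_particle"},
    {"cell_M1_short", "random_particle"},
    {"via_open"},
]

causes = sorted({c for s in die_callouts for c in s})
weight = {c: 1.0 / len(causes) for c in causes}          # uniform starting estimate

for _ in range(50):                                       # EM-style iterations
    new = defaultdict(float)
    for callout in die_callouts:
        norm = sum(weight[c] for c in callout)
        for c in callout:                                 # split each die's weight
            new[c] += weight[c] / norm                    # by the current estimates
    total = sum(new.values())
    weight = {c: v / total for c, v in new.items()}

for cause, share in sorted(weight.items(), key=lambda kv: -kv[1]):
    print(f"{cause:>16}: {100 * share:5.1f}%")
```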
The analytics platforms are designed to be metrology-tool agnostic, according to Brad Perkins of Nordson Test & Measurement. “Whether it’s optical, X-ray, or ultrasonic inspection, you’re looking at tighter control limits that sit inside the spec limits, and with advanced process control you’re able to start identifying process drift. That’s really where the real value of the tools is coming in today. It’s about not letting escapes get out to the field, which is of course monumentally important when you look at device failures in airbags or autonomous driving.”
“Once the image interpretation is done, data exporting is almost machine-agnostic. The data exporting that we’re going to do is obviously unit-level traceability,” said Perkins. “It could be an individual part on a JEDEC tray. It could be a specific spot on a wafer corresponding to a chip. Different customers are going to look at different things. Usually, it can summarize locations of critical voids and total critical defects, and if a process is starting to drift we can put out alerts directly from the machine, or we can work with a station controller, MES, SECS-GEM, etc.”
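The kind of unit-level record such a tool might hand off to a station controller or MES can be pictured as a small, serializable structure per part, as in the sketch below. The field names, thresholds, and drift check are assumptions for illustration, not Nordson’s actual export format.

```python
# A minimal sketch of a unit-level traceability record an inspection tool
# might export to a station controller or MES host. Field names and the
# drift check are placeholders, not any vendor's actual interface.
import json
from dataclasses import dataclass, asdict

@dataclass
class UnitRecord:
    unit_id: str            # wafer die coordinate or JEDEC tray position
    source: str             # "wafer" or "tray"
    max_void_pct: float     # largest void as % of joint area
    critical_voids: int     # count of voids above the critical threshold

def drift_alert(history_pct: list[float], limit_pct: float = 3.0) -> bool:
    """Crude trend check: recent mean void area creeping toward the limit."""
    recent = history_pct[-10:]
    return sum(recent) / len(recent) > 0.8 * limit_pct

record = UnitRecord("W07-R12C08", "wafer", max_void_pct=1.4, critical_voids=0)
payload = asdict(record) | {"alert": drift_alert([1.1, 1.3, 1.2, 1.4, 1.5])}
print(json.dumps(payload))   # handed off to the MES/SECS-GEM integration layer
```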
Conclusion
Some of the greatest challenges facing metrology and inspection involve the detection of hidden defects or features in increasingly 3-dimensional structures, in both the front end and back end of the line. Faster yield ramps depend on the early identification of systematic defects, which can be design- or process-related.
This only becomes more complex at new nodes and in advanced packages. But engineering teams can improve their ramp speed with a new wave of machine learning-enabled data analytics, which helps identify problems faster and provides greater insight into what has gone wrong, what could go wrong, and why.
Related Reading
Metrology And Inspection For The Chiplet Era
Recent developments address imminent needs of advanced nodes and packages, but not all the pieces are in place yet.
Backside Power Delivery Gears Up For 2nm Devices
But this novel approach to optimizing logic performance depends on advancing lithography, etching, polishing, and bonding processes.