DTCO/STCO Create Path For Faster Yield Ramps

A holistic approach can improve reliability and reduce defects, but it has to start early in the design cycle.

Higher density in planar SoCs and advanced packages, coupled with more complex interactions and dependencies between components, is allowing systematic defects to escape traditional detection methods. Increasingly, these issues are not caught until the chips reach high-volume manufacturing, slowing the yield ramp and driving up costs.

To combat these problems, IDMs and systems companies increasingly are looking to design technology co-optimization (DTCO) and its system-level cousin, system-technology co-optimization (STCO). Both now permeate design, wafer and package fabrication and assembly, and even long-term field use, where they can be used to shorten the time it takes to bring reliable chips to market.

DTCO and STCO are all about rearranging gates, source/drains, and contacts to further shrink cells in finFETs, gate-all-around FETs, and at some point in the future, complementary FETs (CFETs), which are stacked pMOS and nMOS transistors. After that, they will be used to enable 2D materials, whose properties and requirements likely will force big changes in chip architectures.

While the underlying design goals of better performance, power, and area/cost remain the same as always, the methodologies to achieve those goals have changed significantly as the technology has evolved. But to minimize issues, co-optimization strategies must begin on day one of next-generation technology development. “We’ve been working with DTCO for something like…well, forever,” says Kenneth Larsen, senior director of 3D IC product development at Synopsys. “What’s really changing is this idea of taking this to the next level, including the package, routings, and using all available options for resolving bottlenecks in the system. We can also uncover new opportunities for improving performance, power delivery, yield, etc., so STCO expands the scope of where we came from with DTCO.”

While DTCO tackles all the intricacies of design through chip fabrication, in STCO the chips or chiplets already are fabricated. That allows architects and engineers to zero in on the best ways of arranging chips, selecting the right die-to-die interfaces and protocols, selecting the right interposer/substrates, and learning how to test these multi-chiplet systems in the context of low pin availability and other constraints. “Right from the start it gets into, for instance, warpage as one issue among thousands of issues, bringing together the whole gamut of challenges that emerge when you are bringing up much larger systems,” says Larsen.


Fig. 1: System co-optimization hierarchy. Source: Photo from IEDM 2022/Intel Corp.

Fruits of co-optimization
DTCO/STCO takes a holistic view of how devices interact and how they meet multiple requirements simultaneously, and it drives chipmakers to look for new ways of building devices. One of the most dramatic examples in which this approach played a leading role is backside power delivery (BPD), in which signal paths are decoupled from the power delivery network and optimized on the front side of the wafer. Power and ground move to the wafer’s back side to deliver power to the transistors more efficiently via a power rail and nano-TSVs.

BPD came about due to increasing signal congestion and the high cost of double or triple EUV patterning for the first-level interconnects. When power delivered through long paths spanning 15 or more metal layers suffered unacceptable voltage (IR) drop, multiple options were explored as possible solutions. Leading chipmakers and foundries eventually settled on backside power delivery as the best path forward.
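
The arithmetic behind that decision is simple Ohm's law. The sketch below is purely illustrative, with hypothetical resistance and current values, but it shows why a power path threading down through many metal layers loses far more voltage than a short backside path:

```python
# Illustrative IR-drop comparison: frontside vs. backside power delivery.
# All resistance and current values are hypothetical, chosen only to show
# why long frontside power paths become untenable at advanced nodes.

def ir_drop_mv(current_a: float, resistances_ohm: list[float]) -> float:
    """Voltage drop (mV) across a series power-delivery path (V = I * R)."""
    return current_a * sum(resistances_ohm) * 1e3

# Frontside: power threads down through ~15 metal layers and via hops.
frontside_path = [0.05] * 15   # assumed ~50 mOhm per layer/via hop
# Backside: a short nano-TSV plus buried power rail.
backside_path = [0.10, 0.05]   # assumed nano-TSV + rail resistance

i_load = 0.5                   # assumed 0.5 A local current draw

print(f"Frontside drop: {ir_drop_mv(i_load, frontside_path):.0f} mV")  # 375 mV
print(f"Backside drop:  {ir_drop_mv(i_load, backside_path):.0f} mV")  # 75 mV
```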

Likewise, DTCO was a key factor in transitioning from planar to finFET transistors. And the engineering lessons learned from finFETs became enablers for gate-all-around nanosheet transistors, as well as for future forksheet transistors and CFETs.

Less dramatic, but no less essential, transitions enabled by DTCO include the co-design of circuit layouts with lithography and etching capabilities using methods like optical proximity correction (OPC) and self-aligned patterning. All of these are geared toward packing more functionality into a smaller footprint, but in a manner that improves yield by reducing edge placement errors from one patterning level to the next.

Humble beginnings
DTCO and STCO really began gaining traction when traditional scaling approaches started to run out of steam. By co-optimizing design and technology, or system and technology, architects can eke more benefit out of a node than traditional scaling alone provides. Siloed design and process steps needed to evolve into cross-functional teams, with the widespread partnering that is now considered essential to moving semiconductors forward.

In the early years of semiconductor manufacturing, chipmaking primarily followed a one-way path from circuit design to chip fabrication, with what today look like fairly simple handoffs from physical design to mask synthesis, mask writing, lithography optimization, fab process optimization, inspection, metrology, and testing of single-chip packages assembled on printed circuit boards, and then into systems. Into the 1990s and 2000s, there was growing reliance on the success of individual modules, such as lithography’s ability to print the physical design rules within guard bands.

There are now many more processes, and more synergies between them, which explains the succession of methodologies and tools to enable design for manufacturing (DFM), design for test (DFT), and design for reliability (DFR), or aging. As chips become more complex, and as they increasingly are used in safety- and mission-critical applications, what’s defined as a defect becomes more workload- and context-driven. It depends on device type and use conditions, and it requires much better characterization and testing to drive toward lower defective parts per million (DPPM), or even parts per billion (DPPB).
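
A rough calculation shows why DPPB-level quality cannot be demonstrated by end-of-line sampling alone. The sketch below uses the standard zero-failure confidence bound; the rates and confidence level are illustrative:

```python
# Back-of-envelope sample sizes for demonstrating a defect rate with zero
# observed failures, using the zero-failure ("rule of three"-style) bound:
# n >= ln(1 - confidence) / ln(1 - p). Purely illustrative numbers.
import math

def units_needed(defect_rate: float, confidence: float = 0.95) -> int:
    """Units to test, failure-free, to claim the rate at this confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - defect_rate))

for label, rate in [("100 DPPM", 100e-6), ("10 DPPM", 10e-6), ("10 DPPB", 10e-9)]:
    print(f"{label:>8}: {units_needed(rate):,} failure-free units")

# ~30,000 units for 100 DPPM, ~300,000 for 10 DPPM, and ~300 million for
# 10 DPPB -- which is why sampling alone cannot get to DPPB, and
# workload-aware characterization and screening have to carry the load.
```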

This is where DTCO fits into the picture. “Traditionally, DTCO was about co-optimizing the cell layout, but it didn’t stop there,” said Norman Chang, an Ansys fellow. “For advanced technologies, an excellent example is TSMC’s FinFlex, which uses two fins on the top and one fin on the bottom.”

As critical as design-technology co-optimization is to bringing any new architectural innovation to yield entitlement, chipmakers also are turning to its more comprehensive cousin, system-technology co-optimization, to ensure the manufacturability of chips, along with reliable packaging, testing, and field usage, especially considering aging factors. “For STCO we are looking at the design from the system perspective in terms of the power, performance and area, but also looking at multi-physics effects at the physical architecture level,” says Chang. “This is especially the case for chiplet-based systems that can contain some 30 chiplets, as in Intel’s Ponte Vecchio and AMD’s Instinct MI300 platform. The architect needs to look at the constraints from different angles of the design right from the beginning, and that includes power integrity, signal integrity, thermal and stress integrity. This is a whole new ballgame.”

With DTCO and STCO, semiconductor fabs can reduce cost and time-to-market in advanced process development. The two approaches also are considered essential to preventing catastrophic hits to yield.

“For foundries like TSMC, Intel, and Samsung, yield is a very big concern,” said Tianhao Zhang, global head, RD Foundry Support at Ansys. “They try to do early analysis as much as possible, because if they find any kind of a yield issue or integration issues at the later stages, it’s often too late and costly to address.”

Zhang added that multi-physics analysis, which like DTCO is not new to the industry, must be performed early to account for interdependent factors in chips and systems. And even though the industry has always dealt with mechanical, thermal and electrical stresses, as systems become more compact, problems tend to multiply. “Thermal issues couple into the power, and power couples into stress, and stress couples into signal integrity,” said Zhang.
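
A toy model makes that coupling concrete. The sketch below, with invented coefficients, iterates the classic thermal-power feedback loop, in which leakage power rises with temperature and temperature rises with power, until it converges:

```python
# A toy fixed-point iteration illustrating thermal-power coupling of the
# kind Zhang describes: leakage power rises with temperature, and
# temperature rises with power. All coefficients are invented.

def solve_thermal_power(p_dyn_w=5.0, t_amb_c=45.0, r_th_c_per_w=4.0,
                        leak0_w=1.0, leak_tc=0.03, iters=50):
    """Iterate P = P_dyn + leakage(T) and T = T_amb + R_th * P."""
    t = t_amb_c
    for _ in range(iters):
        p = p_dyn_w + leak0_w * (1 + leak_tc * (t - t_amb_c))
        t = t_amb_c + r_th_c_per_w * p
    return p, t

p, t = solve_thermal_power()
print(f"Converged operating point: {p:.2f} W at {t:.1f} degC")
# The converged power (~6.8 W) is well above the 6 W a decoupled analysis
# would predict -- ignore the coupling and the design margin evaporates.
```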

Testing and yield: New nodes, new fabs, new yield ramps
Inspection, metrology, and test, all of which are used to bolster yields, are the areas where cost concerns are felt most acutely. Cross-functional and cross-vendor collaborations are becoming essential, and will prove even more important moving forward.

“Collaborative activity needs to happen when moving to the next node, or even the next generation device,” said Nitza Basoco, technology and business strategist at Teradyne. “Companies aim to meet the highest quality level in the shortest amount of time with the lowest cost of test. Even as devices get more complex, cost targets keep shrinking and are difficult to achieve without collaboration cross-functionally between test development, product engineering and design. Analyzing what can be done with existing test strategies and techniques, and new and existing instrumentation, ultimately leads to determining if new test IP will be required to be inserted into a device to ensure a successful bring-up, characterization, and release to production. Visionary planning, like adding circuitry for the next generation into today’s devices for validation, is key to success.”

That collaboration also extends to the faster feedback cycles and learning happening in ATE, where edge computing and machine learning analytics are being applied. “Today, if people are running analytics, they’re running it on what we refer to as the host controller,” said Ken Butler, senior director of business development at Advantest. “That’s the computer that sits next to the tester. While the tester is running the test operation and sequencing through the tests, a second edge computer is capable of much larger workloads to provide near-real-time feedback for adaptive testing.”
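
What such an edge node does with that headroom varies by test house. One plausible pattern, sketched below with hypothetical features and thresholds, is to score each die's parametric results against lot history and route outliers to extended testing:

```python
# Minimal sketch of an adaptive-test decision of the kind described above:
# the tester streams parametric results to an edge node, which scores each
# die and decides whether extended tests can be skipped. The scoring
# method and threshold are hypothetical, not any vendor's actual flow.
from statistics import mean, stdev

def anomaly_score(die_params: list[float],
                  lot_history: list[list[float]]) -> float:
    """Max z-score of this die's parametrics vs. lot history (>= 2 dies)."""
    scores = []
    for i, value in enumerate(die_params):
        hist = [d[i] for d in lot_history]
        mu, sigma = mean(hist), stdev(hist) or 1e-9
        scores.append(abs(value - mu) / sigma)
    return max(scores)

def test_plan(die_params, lot_history, threshold: float = 3.0) -> str:
    """Return 'extended' for outlier dies, 'baseline' for typical ones."""
    if anomaly_score(die_params, lot_history) > threshold:
        return "extended"   # run the full (slower) test suite
    return "baseline"       # skip low-risk test insertions
```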

Efficient testing and retesting also affect yield. “We already have algorithms that, for instance, can estimate accurately what we call the retest recovery rate,” said Dieter Rathei, CEO of DR Yield. “So you test the wafer and then you say, ‘Okay, this wafer has a certain yield, but it has seven failures.’ And you know from history if you retest those seven failures, the devices might recover. So now the question is whether it’s worthwhile to retest that particular wafer, because it costs about X dollars to retest the wafer and to gain about Y dollars for potentially additional devices that may recover from the retest. So if Y is significantly higher than X, then it makes sense to retest, but often the engineers don’t have that information to make the financial decision on the fly.”
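
That decision reduces to an expected-value comparison. A minimal sketch, with placeholder costs and a recovery rate that in practice would come from historical retest data:

```python
# The retest economics Rathei describes, as a one-line expected-value
# check. The costs, die value, and recovery rate are placeholder inputs.

def should_retest(n_failures: int, recovery_rate: float,
                  die_value: float, retest_cost: float) -> bool:
    """Retest if expected recovered revenue (Y) exceeds retest cost (X)."""
    expected_gain = n_failures * recovery_rate * die_value  # Y
    return expected_gain > retest_cost                      # Y > X

# Example: 7 failing dies, 40% historically recover, $25/die, $50 retest.
print(should_retest(7, 0.40, 25.0, 50.0))  # True: expected $70 > $50
```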

This is where DTCO comes in. “The faster you can learn, the more data you have available, and that’s just a cycle of positive reinforcement,” said Rathei. “It is helpful also to use smaller lot sizes in the beginning, approximately 5 to 10 wafers instead of 25, which also makes it twice to four times as fast as processing the whole lot.”

Conclusion
DTCO leads to better mileage for lithography, wafer processing, testing, yield and reliability. STCO is doing the same for making the best choices in chiplet arrangement, substrate selection, material selection and system-level testing. The industry could not produce reliable chips and systems with acceptable yield without these co-optimizations. Going forward, DTCO and STCO are likely to enable even more innovative structures and combinations, joining backside power delivery, transistor stacking through hybrid bonding, in-memory computing, and other possibilities the industry has yet to imagine.


