Why the middle of line is now a major problem.
Klaus Schuegraf, vice president of new products and solutions at PDF Solutions, explains why variability is a growing challenge at advanced nodes, why the middle of line is now one of the biggest problem areas, and what happens when a via is misaligned due to a small process variation.
I dare say that design rule “tolerances” might have been scaled prematurely, in the hope that everything could be reduced in a new process shrink. That is a gross oversimplification, BUT committing to design rules without probing and characterizing them first, and then crying wolf that “it’s not working how we thought,” might have been optimistic.
Curiously, some of the actual design rules might have a pattern-density (local stress?) dependence, which is not as simple as in larger processes.
Claiming not to know the sources of the observed variability calls for hard data from design-rule “probing” test structures and experiments, to pull out the sources and their relative error/tolerance magnitudes. We are talking small: each nm is about 5 atoms. Materials “run-out” across a 12-inch wafer alone is somewhat compensated by image-field stepper patterning, but hardly completely at the nm level.
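To make that scale concrete, here is a back-of-the-envelope sketch in Python. The constants are my own illustrative assumptions (textbook silicon values and a 300 mm wafer), not data from the article:

```python
# Back-of-the-envelope scale check; constants are illustrative assumptions
# (textbook silicon values), not measurements from the article.
SI_BOND_NM = 0.235  # Si-Si nearest-neighbor distance in crystalline silicon, nm

atoms_per_nm = 1.0 / SI_BOND_NM
print(f"~{atoms_per_nm:.1f} atomic spacings per nm")  # ~4.3, i.e. 4-5 atoms

# Size of a 300 mm (12 inch) wafer relative to a 1 nm tolerance:
wafer_diameter_nm = 300 * 1e6  # 300 mm expressed in nm
tolerance_nm = 1.0
print(f"wafer/tolerance ratio: {wafer_diameter_nm / tolerance_nm:.0e}")  # 3e+08
```

Eight orders of magnitude between the wafer diameter and the tolerance is why across-wafer “run-out” can only ever be partially stepped out.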
If the via “post” style contacts miss the bottom (target region) of the physical contact, why was this not noticed in process development? Was test structure coverage in process characterization too weak or too simplistic?
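As a thought experiment on how overlay tails turn into missed landings, here is a hypothetical Monte Carlo sketch. The sigma, margin, and via count are made-up illustrative numbers, not figures from the article or any real process:

```python
import random

# Hypothetical via-landing Monte Carlo: the via center is displaced by
# Gaussian overlay error in x and y, and it "misses" when either
# displacement exceeds the enclosure margin. All numbers are assumptions.
SIGMA_NM = 2.0    # assumed 1-sigma overlay error per axis, nm
MARGIN_NM = 5.0   # assumed margin between via edge and landing-pad edge, nm
TRIALS = 500_000

random.seed(0)
misses = 0
for _ in range(TRIALS):
    dx = random.gauss(0.0, SIGMA_NM)
    dy = random.gauss(0.0, SIGMA_NM)
    if abs(dx) > MARGIN_NM or abs(dy) > MARGIN_NM:
        misses += 1

miss_rate = misses / TRIALS
print(f"per-via miss rate ~ {miss_rate:.1e}")
# At ~1e9 vias per die, even a small per-via miss rate scales to
# miss_rate * 1e9 failing vias on every die.
```

The tails, not the mean, decide whether a rule is safe, and tails are exactly what sparse or simplistic test structure coverage fails to sample.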
Sometimes one can only push so far on increasing process precision without rather large yield and cost implications. The controversy might be exaggerated by viewing everything as solvable with some process fix. It might require design fixes to be pragmatic and expedient, although it sounds like a lot of device designs might have been committed prematurely.
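On the yield cost point, the textbook Poisson yield approximation (my illustration, not something the article invokes) shows how quickly yield erodes if a “precision” fix raises the effective defect density:

```python
import math

# Textbook Poisson yield approximation: Y = exp(-D * A), where D is the
# defect density (defects/cm^2) and A is the die area (cm^2).
def poisson_yield(defect_density_cm2: float, die_area_cm2: float) -> float:
    return math.exp(-defect_density_cm2 * die_area_cm2)

# Illustrative numbers only: a 1 cm^2 die before and after a process change
# that doubles the effective defect density.
print(f"{poisson_yield(0.1, 1.0):.3f}")  # ~0.905
print(f"{poisson_yield(0.2, 1.0):.3f}")  # ~0.819
```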
This reminds me of a grad student project on an Ivy League MEMS research team that tried to make lateral, reversible MEMS relay contacts by sputtering down a high-aspect-ratio hole with a simple deposition tool, when the easy fix might have been a different architecture (vertical contacts between actuators on a bonded wafer pair).
I am oversimplifying a bit, but sometimes the reality is viewed as more complex than it is.
Characterizing design rules early and robustly (with broad case coverage) matters, even if it is boring to some in R&D/TD.