Why the middle of line is now a major problem.
Klaus Schuegraf, vice president of new products and solutions at PDF Solutions, explains why variability is a growing challenge at advanced nodes, why middle of line is now one of the big problem areas, and what happens when a via is misaligned due to a small process variation.
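As a rough way to picture the misalignment scenario described above, here is a minimal Monte Carlo sketch (in Python) of how the fraction of vias that land partially off their target grows with overlay error. Every number in it (via size, landing-pad enclosure, overlay and CD sigmas) is an illustrative assumption, not data from PDF Solutions or any real process.

# Toy Monte Carlo: fraction of vias landing partially off target as
# overlay variation grows. All dimensions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

VIA_CD = 18.0       # nominal via diameter, nm (assumed)
ENCLOSURE = 3.0     # landing-pad enclosure per side, nm (assumed)
N_VIAS = 1_000_000  # vias simulated per condition

def fraction_misaligned(overlay_sigma_nm, cd_sigma_nm=1.0):
    """Fraction of vias whose edge extends beyond the landing pad,
    assuming Gaussian overlay error in x/y and Gaussian via-CD variation."""
    dx = rng.normal(0.0, overlay_sigma_nm, N_VIAS)
    dy = rng.normal(0.0, overlay_sigma_nm, N_VIAS)
    cd = rng.normal(VIA_CD, cd_sigma_nm, N_VIAS)
    # The via stays fully landed while its center shift is within the
    # enclosure margin, reduced by any via-CD growth beyond nominal.
    margin = ENCLOSURE - np.maximum(cd - VIA_CD, 0.0) / 2.0
    return float(np.mean(np.hypot(dx, dy) > margin))

for sigma in (1.0, 1.5, 2.0, 2.5):
    print(f"overlay sigma {sigma:.1f} nm -> "
          f"{fraction_misaligned(sigma):.2%} of vias partially off target")

In a toy model like this, a sub-nanometer increase in overlay sigma shifts the tail of partially landed vias by orders of magnitude, which is one way small process variations turn into middle-of-line resistance and yield problems.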
I dare say that design rule "tolerances" might have been scaled prematurely, hoping everything could be reduced in a new process shrink. That's a gross oversimplification, but crying wolf with "it's not working how we thought" suggests that committing to design rules without probing and characterizing them first might have been optimistic.
Curiously, some of the actual design rules might have a pattern-density (local stress?) dependence, not as simple as in larger processes.
Claiming not to know the sources of the observed variability calls for hard data from design-rule "probing" test structures and experiments to pull out the sources and their relative error and tolerance magnitudes. We are talking small: each nm is only about five atoms. Materials "run-out" across a 12-inch wafer alone is somewhat compensated by per-field stepper patterning, but hardly completely at the nm level.
If the via "post"-style contacts miss the bottom (target region) of the physical contact, why was this not noticed in process development? Was the test structure coverage in process characterization too weak, or too simplistic?
Sometimes one can only hammer so far on increasing process precision without rather large yield-cost implications. The controversy might be exaggerated if it is viewed as though everything is solvable by some process fix. Being pragmatic and expedient might require design fixes, although it sounds like a lot of device designs may have been prematurely committed.
Reminds me of a grad student project on an Ivy League MEMS research team: trying to make lateral, reversible MEMS relay contacts by sputtering down a high-aspect-ratio hole with a simple deposition tool, when the easy fix might have been a different architecture (vertical contacts between actuators on a bonded wafer pair).
Oversimplifying a bit, but sometimes the reality is viewed as more complex than it is.
Characterizing design rules early and robustly (with broad case coverage) matters, even if it seems boring to some in R&D/TD.
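To make "broad case coverage" concrete, here is a small, purely hypothetical sketch of a full-factorial design-rule test-structure matrix; the factor names and values are placeholders, not an actual rule deck or PDK.

# Hypothetical sketch: enumerate a full-factorial matrix of layout
# conditions so that characterization data can separate variability
# sources (pitch, local density, orientation, via enclosure).
from itertools import product

pitches_nm = (36, 40, 48)             # line pitches to probe (assumed)
local_densities = (0.2, 0.5, 0.8)     # surrounding pattern density
orientations = ("horizontal", "vertical")
enclosures_nm = (2, 3, 4)             # via-to-line enclosure margins

test_structures = [
    {"pitch_nm": p, "density": d, "orientation": o, "enclosure_nm": e}
    for p, d, o, e in product(pitches_nm, local_densities,
                              orientations, enclosures_nm)
]

print(f"{len(test_structures)} test-structure variants to characterize")
# With a measured response per variant (via resistance, leakage, yield),
# a regression or ANOVA can attribute variability to each factor instead
# of leaving the sources to guesswork.

Even a matrix this small lets pattern-density and enclosure effects be separated in the measured data rather than inferred after the fact.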