Manufacturing limitations and challenges increasingly will dictate what happens in IC design; more collaboration will be required across the supply chain.
By Ed Sperling
Fabs and foundries frequently have been the savior of flawed designs, fixing power and performance problems, identifying design issues, and often developing solutions to those problems.
Over the next couple of process nodes, and in stacked die that will span multiple processes, there will be far fewer saves coming from the back end. Double and triple patterning, stress effects, new materials and the laws of physics are forcing a change in direction. In fact, for the first time design teams will have to make up for a slew of changes and challenges on the manufacturing and packaging side, employing new methodologies, new tools and deeper levels of expertise.
In a keynote speech at the SEMI Industry Strategy Symposium last week, Applied Materials chairman and CEO Mike Splinter sounded the alarm over the changes ahead. “Change is accelerating,” said Splinter. “Compared with the last 15 years, the next five years will have more changes and more inflection points. And it’s not just about complexity. It’s happening at the foundational level of how an IC is made.”
He’s not alone in that assessment. Bernie Meyerson, an IBM fellow, said CMOS is now in “the end game.” While CMOS certainly isn’t going away, there are physical limits for what can be done to extend it. That has spawned extensive research into alternative materials such as silicon on insulator and graphene, new elements for insulation, as well as new structures such as FinFETs and carbon nanotube FETs.
So what does this mean for design at advanced nodes? Lots more work on design for manufacturability, more complexity in achieving the same kinds of boosts in performance and energy efficiency that were taken for granted at older nodes, and much more up-front checking of just about everything.
“From 40nm to 28nm to 20nm, the number of checks for physical verification will grow by leaps and bounds,” said Michael White, director of product marketing for Calibre. “There are almost 1,000 more DRC checks from 40nm to 28nm between early production and volume production. We are also capturing additional context-dependent yield detractors. For example, historically we have had spacing checks. Now we have spacing checks and we need to check all of the other geometries in the neighborhood, including lithography and fill issues. Those are extra constraints.”
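To make that difference concrete, the sketch below contrasts a plain spacing rule with a context-dependent one that tightens when other geometry sits nearby. It is an illustration only; the rectangle representation, the rule values and the neighborhood radius are assumptions made for this example, not Calibre rule-deck syntax.

```python
# Illustrative sketch only -- not Calibre rule-deck syntax.
# Shapes are axis-aligned rectangles: (x1, y1, x2, y2), in microns.

MIN_SPACE = 0.050          # assumed baseline minimum spacing
MIN_SPACE_DENSE = 0.064    # assumed tighter rule when other geometry crowds the pair

def spacing(a, b):
    """Edge-to-edge distance between two rectangles (0 if they overlap)."""
    dx = max(b[0] - a[2], a[0] - b[2], 0.0)
    dy = max(b[1] - a[3], a[1] - b[3], 0.0)
    return (dx * dx + dy * dy) ** 0.5

def simple_check(shapes):
    """Classic rule: every pair of shapes must meet MIN_SPACE."""
    return [(a, b) for i, a in enumerate(shapes)
                   for b in shapes[i + 1:]
                   if spacing(a, b) < MIN_SPACE]

def context_check(shapes, radius=0.2):
    """Context-dependent rule: if any other shape sits within `radius`
    of either member of a pair, the tighter MIN_SPACE_DENSE applies."""
    violations = []
    for i, a in enumerate(shapes):
        for b in shapes[i + 1:]:
            crowded = any(spacing(a, c) < radius or spacing(b, c) < radius
                          for c in shapes if c is not a and c is not b)
            limit = MIN_SPACE_DENSE if crowded else MIN_SPACE
            if spacing(a, b) < limit:
                violations.append((a, b, limit))
    return violations
```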
Lithography used to be something design teams never had to consider. But the delay in EUV will require double patterning at 22/20nm, and potentially even triple patterning for at least some portions of the chip at 14nm. This becomes particularly challenging for design teams because one of the approaches under serious consideration is spacer-assisted double patterning (SADP). In simple terms, with SADP the polygons a designer draws may look nothing like what ends up on the mask. It is akin to driving a car in reverse using only the rearview mirror, where nothing that appears in the mirror resembles the road.
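The decomposition step behind double patterning can be pictured as a coloring problem: any two polygons closer than the single-mask spacing limit must land on different masks, and a layout whose conflicts form an odd cycle cannot be split across two masks at all. The sketch below illustrates that idea under the assumption that a conflict list has already been extracted; it is a conceptual example, not a foundry decomposition tool.

```python
from collections import deque

def decompose_two_masks(polygons, conflicts):
    """Assign each polygon to mask 0 or 1 so no two conflicting polygons
    share a mask. `conflicts` is a list of (i, j) index pairs that violate
    single-mask spacing. Returns None if the conflict graph contains an odd
    cycle, meaning triple patterning or a layout change would be needed."""
    adj = {i: set() for i in range(len(polygons))}
    for i, j in conflicts:
        adj[i].add(j)
        adj[j].add(i)

    mask = {}
    for start in adj:
        if start in mask:
            continue
        mask[start] = 0
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nbr in adj[node]:
                if nbr not in mask:
                    mask[nbr] = 1 - mask[node]   # alternate masks
                    queue.append(nbr)
                elif mask[nbr] == mask[node]:
                    return None                  # odd cycle: not splittable in two
    return mask

# Three shapes that all conflict with one another cannot be split two ways:
print(decompose_two_masks(["A", "B", "C"], [(0, 1), (1, 2), (0, 2)]))  # None
```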
Stacking effects
One solution to these issues is stacking of die, whether in 2.5D or 3D configurations. The so-called “More Than Moore” approach bundles technologies together at nodes that make sense for a particular function, rather than trying to fit everything into the most advanced process. So while the logic or memory may be created at 22nm or 14nm, for example, analog may be developed at 130nm.
This all makes sense in theory, but it also adds a new dimension of complexity that ripples back and forth between the design and the manufacturing worlds. It also pulls the entire supply chain into the design process, because problems detected anywhere along the chain can affect multiple other areas—and it’s possible that no single segment can solve them alone.
“Over the next three to five years chips will go vertical,” said Naveed Sherwani, CEO of Open-Silicon. “The question is how we are going to put together 3D ICs and what will go into them. There is a lot that needs to be done in this area.”
Sherwani contends that tools and methodologies should make it easier and quicker to do derivative designs. That’s the goal, and at least part of the solution involves companies learning to use the tools they have more effectively, and to apply some discipline to their methodologies. It’s easy to get blinded by the number of permutations and choices from the growing complexity.
“As process geometries continue to get smaller and the amount of IP used increases, the complexity of the design process becomes a major issue, which puts pressure on the entire development team from a coordination and communication standpoint,” said Simon Butler, CEO of Methodics. “Also, with software elements and power constraints, which are really just other types of IP, added to the already very complex mix of things, design teams need better ways to manage the entire SoC development process and synchronize all the moving parts. Internal design organizations already struggle with managing remote design teams. Now, with a disaggregated design chain consisting of separate companies, the need for real-time collaboration and managed data exchange is critical.”
That sentiment is echoed across the industry. Frank Schirrmeister, senior director for the Cadence System Development Suite, said that in principle tools allow engineers to model almost everything they need. “This isn’t a tool problem. It’s a discipline problem. But the other side of this is that in 1993 logic synthesis was pretty simple. Twelve years later, the whole process is no longer understandable by any engineer.”
Margin call
One of the most effective ways to deal with unknowns in the past has been guard-banding—the process of building extra safeguards into ICs. That worked until about 65nm, but at advanced nodes it can degrade performance, drain batteries more quickly, or both.
“The guard band for synthesis is a smaller percentage at 28nm and it’s even smaller at 20nm,” said Jack Browne, senior vice president of sales and marketing at Sonics. “So you’ve got to be able to interoperate with the right guys. We’re all trying to manage a horrible amount of complexity and simplify it. The problem is there is too much that’s new and not enough experience points so that people can make the safe choices. There are significant unknowns on everyone’s road map.”
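A rough way to see why shrinking guard bands matter is to treat them as extra margin stacked on top of a nominal path delay: the more margin that has to be carried, the lower the usable clock rate. The node names and percentages below are hypothetical, chosen only to show the arithmetic.

```python
# Hypothetical numbers, for illustration only.
nominal_path_delay_ns = 1.00   # assumed worst-case path delay from synthesis

def max_frequency_mhz(guard_band_pct):
    """The clock period must cover the path delay plus the guard band."""
    period_ns = nominal_path_delay_ns * (1.0 + guard_band_pct / 100.0)
    return 1000.0 / period_ns

for node, guard_band in [("65nm", 15.0), ("28nm", 8.0), ("20nm", 5.0)]:
    print(f"{node}: {guard_band:.0f}% guard band -> "
          f"{max_frequency_mhz(guard_band):.0f} MHz max clock")
```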
One potential solution—and one that’s being considered by a number of large chip and IP companies—is to harden everything into pre-qualified, pre-verified subsystems. While this limits the number of permutations, it does take some of the risk out of using those blocks. But too many hardened subsystems also can limit the ability of companies to differentiate their designs. And while that works well at a company like Apple, it does not work so well at a chip company trying to sell technology to Apple’s competitors.
“With subsystems you’ve closed the black box and given up the chance to turn some of the dials,” Browne said. “We’re seeing this with the TI OMAP team, which has accumulated a significant number of libraries, and with Broadcom. And Toshiba has created video and RF subsystems.”
Caution ahead
All of these issues have raised questions about what needs to be fixed in the design flow, what needs to be extended, and how this will unfold over time. The reality is that changes may be slow because there is serious uncertainty about exactly what problems will erupt, where and when.
“There’s always a risk of getting too far ahead with the tools,” said Steve Smith, senior director of platform marketing at Synopsys. “We will add capabilities to current tools to make them 3D aware, but the goal is to enable engineers to do what they do best. We’re already dealing with multicorner, multimode design, and 3D will be another dimension. We might have coupling effects and we certainly will have a challenge with temperature. But most of the processes are familiar, and changing things in a working flow is always risky.”