More Pain In More Places

First of two parts: As complexity continues to mount, so does the focus on cutting costs and getting designs to market faster. What's the solution?


Pain is nothing new to the semiconductor industry. In fact, the pain of getting complex designs completed on budget, and finding the bugs in those designs, has been responsible for decades of continuous growth in EDA, IP, test, packaging, and foundries.

But going forward there is change afoot in every segment of the flow, from architecture to design to layout to verification to manufacturing—and even post-silicon debug. The progression to 16/14nm, with 10nm in the works, is creating concern everywhere. Record numbers of engineers filled DVCon this month to learn more about new verification options, and candidly critical comments surfaced in some sessions. EUV, which was supposed to be ready at 45nm, has slipped again, most likely until the 10nm node, leaving design teams to wrestle with multi-patterning. Interconnects have become a challenge, along with thermal density, power budgets, and contention for memory due to cache coherency, all underscored by the need for lower power and better security.

Viewed individually, each of these problems is business as usual in a difficult engineering field. Taken together, they are almost mind-boggling in compounded complexity, representing both cause for celebration about new opportunities and cause for alarm in some segments—with plenty of uncertainty about which belongs where. Chip architects are on the winning side. 2.5D and 3D packaging has moved past the PowerPoint stage in direct relation to this pain, with test chips now rolling out for characterization. There is far more engineering going into power regulation and more optimal use of memory, as well. And materials scientists are in demand again. Research into high-mobility materials and new substrates has taken on new urgency.

In his keynote address at CDNLive this week—Cadence's user conference—company president and CEO Lip-Bu Tan said that system companies are moving into SoC, board and system design in response to a growing opportunity in mobile, the cloud and the Internet of Things. But these companies—Amazon, Apple, Google, Lenovo, Microsoft and Samsung—also are shifting their focus from chip to system and back again. While this appears to be good for tool and IP vendors, the pain being felt primarily by fabless semiconductor makers also is causing changes across the supply chain, making it harder to compete on cost and time-to-market for generic sockets.

“Power, area and cost are the real issues,” said Tan. “There is also software verification and time-to-market pressure.”

Trying new things
As with any difficult transition, companies are much more willing to try new approaches when they see a wall approaching quickly. Complexity in design, and the sheer number of components that need to be considered in advanced designs, have raised interest in moving up the abstraction ladder. Software-driven verification is a case in point. While certainly not new—hardware has been used for at least some verification for years—it has suddenly kicked into gear in the past 12 to 18 months as a must-have approach.

“More of the verification and validation flow is software driven, rather than software simulation,” said Bernard Murphy, chief technology officer at Atrenta. “When you look at verification and validation on large SoCs, it’s increasingly difficult to defend why you spend a lot of time building test benches when the real issue is, ‘Does the software run?’ When you run test benches on RTL, you’re not getting close to reasonable coverage. And you’re not just verifying the design. You’re also characterizing the design for power. If you were to dump everything on an emulator it would slow to a crawl. But you can do power profiling based on the software running.”

He said there are some new ideas about how to approach this, including raising the abstraction level to where data is not completely accurate but still statistically relevant. “Whatever solutions come out, people are looking at novel ways of dealing with this.”
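As a rough illustration of the abstraction trade Murphy describes, the sketch below estimates a power profile from per-block switching activity sampled while real software runs, using the familiar P = aCV²f approximation instead of cycle-accurate simulation. The block names, capacitances, and activity trace are entirely hypothetical, chosen only to show the shape of the calculation.

```python
# Minimal sketch of activity-based power profiling, assuming per-block
# switching-activity counts are sampled while software runs (e.g., on an
# emulator). All block names and coefficients are illustrative assumptions.

C_EFF = {"cpu": 2.0e-9, "gpu": 3.5e-9, "ddr_ctrl": 1.2e-9}  # switched C, farads
LEAKAGE_W = {"cpu": 0.10, "gpu": 0.15, "ddr_ctrl": 0.05}    # static power, watts

VDD = 0.8     # supply voltage, volts
FREQ = 1.0e9  # clock frequency, hertz

def interval_power(activity):
    """Estimate total power for one sample interval.

    `activity` maps block name -> average switching-activity factor (0..1)
    observed over the interval. Dynamic power per block follows the usual
    P = a * C * V^2 * f approximation; leakage is added as a constant.
    """
    total = 0.0
    for block, alpha in activity.items():
        dynamic = alpha * C_EFF[block] * VDD**2 * FREQ
        total += dynamic + LEAKAGE_W[block]
    return total

# One sample per interval of software execution (made-up trace).
trace = [
    {"cpu": 0.30, "gpu": 0.05, "ddr_ctrl": 0.10},  # boot phase: CPU-bound
    {"cpu": 0.15, "gpu": 0.60, "ddr_ctrl": 0.40},  # rendering: GPU-bound
]
profile = [interval_power(sample) for sample in trace]
print("peak %.2f W, average %.2f W" % (max(profile), sum(profile) / len(profile)))
```

The numbers are not accurate in the cycle-by-cycle sense, but over enough samples the profile is statistically useful, which is the point of the trade.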

Cleaning up old things
Perhaps the most obvious solution is to clean up the design process itself—an approach that often gets lost in the quest for new solutions. Squeezing more efficiency out of the existing flow can pay dividends throughout the design process.

“There are two components to this,” said Gal Hasson, Synopsys’ senior director of marketing for RTL synthesis and test. “One is the efficiency of the design itself. The second is the design process. For the last couple of years we’ve been paying a lot of attention to designs that push the limits at new process technologies. But we’re also seeing design teams at established nodes—130nm, 160nm—asking for similar things. Instead of four metal layers they now want two, or they want to put in more logic and functionality without adding to the die size.”

Rather than putting the brakes on progress, chipmakers are putting the brakes on moving to the next process node. Even at the leading edge, companies are hanging back at 28nm, where there are options such as super-low-power processes and fully depleted silicon-on-insulator substrates, all of which can be done with 193nm immersion lithography and single-patterned photomasks.

“Some companies are still rushing forward, but others are staying where they are and getting more from the process they’re on,” said Hasson. “What many companies find interesting and appealing is that they can save 10% on area even with their old netlist. So they have less area, less leakage, and no negative impact on timing and frequency—and they get their design out faster.”

Even at advanced nodes, there is room for improvement.

“First-pass silicon and smart engineering are the answers,” said Cadence’s Tan. “I’ve seen charts showing new chips will cost $250 million, depending on the audience. I have startup companies doing 14/16nm that are spending less than $15 million.”

Spreading the responsibility
The emphasis on smart engineering is particularly interesting, because it throws the onus back on chipmakers to rethink how they structure their development process. While chipmakers have always been quick to blame tool makers, the reality is that change is needed on all fronts. Tools need to address some of the more advanced problems encountered by chipmakers, and work is under way on that front.

“If you look at finFETs, they can run at 700 millivolts versus 1 volt for planar transistors, but there is less headroom for noise,” said Aveek Sarkar, vice president of product engineering and support at ANSYS/Apache. “The problem is that with sign-off coverage, you have to deal with power as a statistical problem with a global aspect, so one vector will not solve everything. The questions that need to be asked are how many different scenarios are needed and which vectors are meaningful. Sign-off coverage is a big issue.”

That leads to another question, which is whether doing things quickly will lead to the right answers. As Atrenta’s Murphy noted, statistical significance is important in some cases, but for the overall chip, what counts as good-enough coverage? With finFETs, wires are narrower and new approaches are required, because physical issues ranging from electrostatic discharge to electromigration move from second-order effects to first-order effects.
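One plausible way to frame the “which vectors are meaningful” question is as a set-cover problem: each vector exercises some set of sign-off scenarios, and the goal is to cover all required scenarios with as few vectors as possible. The hedged sketch below picks vectors greedily; the vector and scenario names are invented for illustration and are not drawn from any particular sign-off tool.

```python
# Sketch of greedy vector selection for sign-off coverage. Names are
# hypothetical; the greedy set-cover heuristic is the technique shown.

def select_vectors(vector_scenarios, required):
    """Greedily pick vectors until all required scenarios are covered.

    vector_scenarios: dict mapping vector name -> set of scenarios it hits.
    required: set of scenarios that sign-off must cover.
    Returns (chosen vector names in order, scenarios left uncovered).
    """
    uncovered = set(required)
    chosen = []
    while uncovered:
        # Pick the vector covering the most still-uncovered scenarios.
        best = max(vector_scenarios,
                   key=lambda v: len(vector_scenarios[v] & uncovered))
        gained = vector_scenarios[best] & uncovered
        if not gained:  # no remaining vector helps; a coverage gap remains
            break
        chosen.append(best)
        uncovered -= gained
    return chosen, uncovered

vectors = {
    "boot":     {"cpu_peak", "pll_lock"},
    "memtest":  {"ddr_burst", "cpu_peak"},
    "gfx_demo": {"gpu_peak", "ddr_burst", "thermal_max"},
}
needed = {"cpu_peak", "ddr_burst", "gpu_peak", "thermal_max", "pll_lock"}
picked, gaps = select_vectors(vectors, needed)
print(picked, "uncovered:", gaps)  # ['gfx_demo', 'boot'] uncovered: set()
```

Any scenarios left in the uncovered set at the end are exactly the sign-off gaps Sarkar warns about: things no available vector exercises.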

“This is a learning process and a chip integration problem,” said Sarkar. “We need to deal with reduced noise margins, improve sign-off coverage, and resolve all of this faster and more reliably.”

And all of this has to be done for less money and in less time—particularly in established markets.

“There’s a cost attached to complexity,” said a Cadence fellow. “The cost comes to the front of the line if the rate of evolution of a platform comes down, which is exactly what we’re seeing with cell phones. The rate of evolution has decelerated, which has put more attention on cost. What if you can get 99% of the functionality for half the price? That’s what China Inc. has been very good at. There is relatively little science in cost reduction.”

But part of this also falls back on the chipmakers, which need to build more flexibility into their flows and break down silos. The only way to do that is with CEO/CFO/CTO buy-in, because as cost becomes a primary focus it has to be understood as part of the overall design process, not just within an engineering group. This is particularly true when it comes to 2.5D stacked die, where the manufacturing cost may be higher due to the interposer and packaging, but the overall design cost may be lower and yield may be significantly better.
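To make that trade-off concrete, here is a back-of-the-envelope sketch comparing the cost per good die of one large monolithic die against two smaller dies on an interposer, using a simple Poisson yield model. The defect density, wafer cost, and interposer/assembly cost are assumed values for illustration, not industry data.

```python
import math

WAFER_AREA_CM2 = math.pi * 15.0**2  # 300 mm wafer, edge loss ignored
WAFER_COST = 6000.0                 # assumed cost per processed wafer, $
D0 = 0.25                           # assumed defect density, defects/cm^2

def cost_per_good_die(die_area_cm2):
    """Cost per yielding die under a Poisson yield model: Y = exp(-A * D0)."""
    dies_per_wafer = WAFER_AREA_CM2 / die_area_cm2
    yield_ = math.exp(-die_area_cm2 * D0)
    return WAFER_COST / (dies_per_wafer * yield_)

# Monolithic 4 cm^2 SoC versus two 2 cm^2 dies plus interposer/assembly.
mono = cost_per_good_die(4.0)
split = 2 * cost_per_good_die(2.0) + 15.0  # $15 assumed interposer + assembly
print("monolithic: $%.2f   2.5D: $%.2f" % (mono, split))  # ~$92 vs ~$71
```

Under these assumptions the split wins because yield falls off exponentially with die area, so two small dies yield far better than one big one; with a cheaper process or a pricier interposer, the overhead can just as easily dominate.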



1 comment

Richard Trauben says:

Electrostatic discharge is a serious issue that's getting attention. Reverse-biased ESD diode connections to the supply require a lower-resistance path into the supply grid. The current through the diode is non-trivial, and the diode via farm resistance is marginal. This is reportedly leading to unexpected opens from I/O cells to the supply, and to substantial chip yield loss, at leading switch vendors at narrow design rules.
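A rough back-of-the-envelope for the mechanism described above: the via farm's parallel resistance sets the IR drop during an ESD event, so losing even a few vias to opens can push the drop past margin. The per-via resistance, ESD current, and allowable drop below are assumed values, not measured data.

```python
import math

# Assumed values for illustration only.
R_VIA = 2.0     # ohms per via at a narrow node (assumed)
I_ESD = 1.3     # amperes, roughly HBM-class peak current (assumed)
V_MARGIN = 0.5  # volts of allowable drop across the via farm (assumed)

# N parallel vias give R_farm = R_VIA / N; require I_ESD * R_farm <= V_MARGIN.
n_vias = math.ceil(I_ESD * R_VIA / V_MARGIN)
print("need >= %d parallel vias (farm R = %.3f ohm)" % (n_vias, R_VIA / n_vias))

# If process variation or electromigration opens some vias, the surviving
# farm resistance rises and the ESD drop can exceed margin -- the yield-loss
# mechanism the comment describes.
```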

