Can EDA Keep Growing?

Analysis: If there are fewer companies at the leading edge, what does that mean for the tools vendors? There are no simple answers, but it’s not all bad.


Slower progress at the leading edge of process technology, coupled with rising costs and fewer design starts, is changing the economics of the EDA world. Not surprisingly, there is almost a direct correlation between the shrinking number of startups in the field and the number of customers working on the most advanced nodes.

So what exactly does this mean for the EDA world? Big changes, for sure, but those changes are as complex and intertwined—and therefore fuzzy—as the issues in building chips. For tools companies, the impact will vary depending upon a number of factors:

  • Where they are in the food chain;
  • Whether new markets such as the Internet of Things live up to their hype;
  • Whether there is enough synergy with adjacent markets to bolster growth;
  • Whether the shrinking number of chipmakers at the leading edge will pay more for tools;
  • Whether there is enough business at older nodes to compensate for fewer customers at the most advanced nodes.

There are a lot of factors in this equation. They are the subject of much discussion, and they are driving acquisitions that reach far beyond the bounds of EDA, as well as a rethinking of relationships across the supply chain.

Location, location, location
While there will always be a need for the physical part of design—place and route and layout, for example—the real action these days is in accelerating verification and software, and increasingly in being able to make choices faster. This is why high-level synthesis has become a booming market after more than a decade, and why ESL tools and formal verification have gone mainstream after years of promise.

Those are relatively well-established markets, even if many of them have taken years to realize their potential. A growing opportunity lies in cutting through the complexity in the first place, and there are multiple methods for attacking that problem and multiple places to do it.

One of the big changes here is that even older process nodes now require EDA tools. While much of this work was done by hand at 90nm and above, the mainstream is now between 55nm and 45nm. At 45nm, there are thermal, electromigration and ESD issues to contend with, and far too many interactions for the human mind to fully grasp without automation. And at 28nm, there are complex routing, signal integrity and bandwidth considerations to throw into the mix, as well as more complex IP integration and more choices about memories and I/O.
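To make the automation point concrete, below is a minimal, hypothetical sketch of the kind of rule check a tool runs automatically across millions of wire segments. The segment data, units and current-density limit are illustrative assumptions, not values from any real process design kit or commercial tool.

```python
# Illustrative only: a toy electromigration-style check of the kind that EDA
# tools automate across millions of wire segments. All numbers are made up.

# Each segment: (name, average current in mA, width in um, thickness in um)
segments = [
    ("net_a.seg1", 1.2, 0.10, 0.08),
    ("net_a.seg2", 0.4, 0.05, 0.08),
    ("net_b.seg1", 2.5, 0.20, 0.08),
]

EM_LIMIT_MA_PER_UM2 = 150.0  # assumed current-density limit, not a real PDK value

def check_em(segs, limit):
    """Return segments whose current density exceeds the assumed EM limit."""
    violations = []
    for name, current_ma, width_um, thickness_um in segs:
        density = current_ma / (width_um * thickness_um)  # mA per square micron
        if density > limit:
            violations.append((name, density))
    return violations

for name, density in check_em(segments, EM_LIMIT_MA_PER_UM2):
    print(f"{name}: {density:.1f} mA/um^2 exceeds the {EM_LIMIT_MA_PER_UM2} limit")
```

Multiply that kind of check by every net, every corner and every rule deck, and the scale of the problem at 45nm and below becomes clear.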

“There’s a lot of speculation and discussion, and it’s always driven by ROI,” said Robert Hoogenstryd, senior director of marketing for design, analysis and signoff at Synopsys. “We see a bit of consolidation and a concentrated set of customers racing toward 16/14/10nm, but we’re seeing others at 28/40/65nm. The opportunity is that as customers try to stay at current nodes they want to squeeze out as much performance and power savings, so they are adopting the latest and greatest add-on capabilities for advanced nodes at existing nodes. We’re seeing that in signoff for leakage power. We’re even seeing it in physical layout and routing, where they’re looking for free spaces to do timing ECO. We expected everyone to adopt this at 16/14, but they’re also using it at 28/40/65 to reduce the die area.”

If that continues, the number of chipmakers using EDA tools will explode. At least in theory, that means the number of tools sold will increase. Whether it actually works out that way is another matter, and it has raised blood pressure for executives at tools companies everywhere.

“This is scary,” said Drew Wingard, CTO at Sonics. “We’re used to selling advanced tools at a premium to cover development costs. By the time the market matures, that means we can afford to sell them for less. But if there are fewer buyers at the leading edge, that changes things. We can move to 2.5D, but the partitioning problem is still hard. Or we can do things less optimally but cheaper, which will increase the design starts. And assuming the Internet of Things takes off, we may be doing more design starts of all types.”

Value chain shift
EDA for decades has received a flat 2% of all semiconductor revenues, according to statistics from the EDA Consortium. It’s unlikely the overall percentage will decline—in fact, it could go up significantly—but where that revenue comes from may change significantly.

“The value is definitely there, but it will shift,” said Mike Gianfagna, vice president of marketing at eSilicon. “It used to be the fastest transistor won. Now it’s how much power you can shave off. And in the future, you may be able to stay at the same node and add in more parallelism. The value is moving up the stack. It’s not just at the back end with place and route. It’s architectural. As we move to 2.5D, it will still require place and route and floor planning, but you will also need to model thermal and stress issues, which can render a chip useless. There are all kinds of opportunities to optimize at RTL.”

There seems to be consensus about this shift among the “value chain providers” and integrators, such as eSilicon, Open-Silicon and Synapse Design, all fast-growing EDA customers. Although privately held, all three are reporting solid growth as complexity increases and customers need help sorting through all of the choices. Large IP companies—Synopsys, Cadence, ARM, Imagination Technologies and CEVA, among others—are seeing the same trend, as customers require more services to help them integrate IP and spot problems.

eSilicon, for one, has created an online database that guides customers on choices of a variety of components. Synapse Design and Open-Silicon work closely enough with their customers to eliminate respins.

“The cost is all about the relationship between the different toolsets,” said Satish Bagalkotkar, president and CEO of Synapse. “If the accuracy is off, you pay the price. It becomes more challenging as the number of corners increases, too, so you need more transistors just to deal with that. If you can make the devices simpler, though, you need fewer transistors. You move the value up front and leverage a services model to get the job done. We’re seeing a lot of startups create an architecture and then be done with it. They don’t waste a lot of time on the other part.”

The impact of this model on EDA tools is significant, because tool development costs go down dramatically. “The market could be 100 times larger with far more design starts, so even if the cost of the tools goes down they will become available to a lot more people,” Bagalkotkar said.

These are new economies of scale based upon expertise with commercially available components. For those companies that design and build enough chips, or which provide IP for enough chipmakers to see what works where, there is real value. But how that gets leveraged or translated into dollars is still not clear because it requires a top-down revamping of chipmaker businesses and a rethinking of how dollars are apportioned across organizations. Nevertheless, EDA vendors are working hard at tapping into these changes.

“EDA tools are being upgraded at nodes that are already available,” said Taher Madraswala, president of Open-Silicon. “This can help expand the capabilities of new designs without needing to jump to new nodes. For example, at Open-Silicon, we use tools from all the major EDA suppliers, and have seen firsthand how new features have been added to support existing process nodes to reduce power and improve density. ‘Mid-life kickers’ are being introduced at existing nodes, such as Samsung applying FD-SOI at 28nm, which can bring increased benefits without the costs of things like 3D masks.”

Madraswala noted that 2.5D and 3D packaging techniques will allow “hybrid” chips, where different parts can be created using different process technologies. It doesn’t make sense, for example, to shrink analog to 14nm when the optimum process node may be 65nm or even 130nm. “This can allow more optimal ASIC solutions with lower design costs and faster derivative times.”

Open-Silicon and GlobalFoundries rolled out a demo 2.5D chip at ARM TechCon last year based on two ARM Cortex-A9 processors manufactured using GlobalFoundries’ 28nm SLP process technology. The processors are attached to a silicon interposer, built on a 65nm manufacturing flow with TSVs to enable high-bandwidth communication between the chips.

More experts needed everywhere
What is clear, in the midst of these changes, is that expertise is absolutely critical and in short supply. And the more choices that become available to chipmakers, the more expertise will be needed.

“The way to measure this is by looking at when chipmakers lay off people,” said Kurt Shuler, vice president of marketing at Arteris. “They get snapped up immediately, sometimes by companies not considered chipmakers.”

He said people are spending a lot more time reworking existing designs these days than moving forward, and they need tools to do this. “There are so many dumb things that were done in the past on the assumption that people would be shrinking their designs. Now they’re not able to shrink everything, so they’re going back and paying more attention to existing designs. Before this, moving to the next node was almost reflexive.”

And more tools…
But even at the bleeding edge of design, tools, both new and existing, are being beefed up. They have to be, in part to provide the capacity for the amount of data that needs to be crunched.

“EDA tools continue to take advantage of multithreading, multicore and distributed computing,” said Pravin Madhani, general manager of the Place & Route group at Mentor Graphics. “Depending on the task, we use one of the above three to continue to improve the capacity and performance of our Digital Implementation tools. As expected, the performance improvement through multiprocessing is not linear and hence, depending on the tasks, after a certain point adding additional processors results in diminishing returns.”
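Madhani’s point about non-linear scaling is the familiar Amdahl’s-law effect. The short sketch below illustrates it with an assumed 90% parallelizable fraction; that figure is purely illustrative, not a measurement from any Mentor tool.

```python
# Amdahl's-law illustration of the diminishing returns described above.
# The 90% parallel fraction is an assumed, illustrative figure.

def speedup(parallel_fraction, cores):
    """Ideal speedup when only part of the workload can be parallelized."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

P = 0.90  # assume 90% of the run parallelizes cleanly
for cores in (1, 2, 4, 8, 16, 32, 64):
    print(f"{cores:3d} cores -> {speedup(P, cores):.2f}x")

# Even with 64 cores the speedup tops out near 8.8x, because the serial 10%
# of the job dominates once the parallel portion is spread thin.
```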

Steve Carlson, group marketing director in Cadence’s Office of Chief Strategy, said capacity is indeed one of the big problems at the leading edge. “Design teams always design the next generation of chips based on the last generation of hardware. Relatively recently we’ve added multicore and multithreading to deal with that and we’ve done a good job with timing architectures and fastSPICE. You’re also seeing innovation in a bunch of different directions, including cross-fabric analysis.”

Carlson noted that it always will be a challenge to keep up with the capacity and complexity of advanced nodes. Beyond 14nm, that appears to be something of an understatement.



