Who’s Profiting From Complexity

System-level tools are taking off, and getting much better as revenues increase.


Tool vendors’ profits increasingly are coming from segments that performed relatively poorly in the past, reflecting both a rise in complexity in designing chips and big improvements in the tools themselves.

The impacts of power, memory congestion, advanced-node effects such as process variation, electromigration and RC delay in interconnects, and an explosion in software—all within the same or tighter market windows—are being felt at every level and in almost every organization. They are causing angst among engineering teams working on mobile devices and enterprise servers, and they are transforming areas such as emulation, simulation, and modeling into major profit centers that now can justify significant investment in R&D.

While tool vendors’ profits always have been largest at the most advanced nodes, it has taken nearly two decades for some of these tools and methodologies to really get rolling. There were several main reasons for the delays:

• Until recently, there was not enough horsepower in the tools to make a business case for the investment by the vendors. Simulation was good enough for most designs, and where it fell short there was always the ability to throw more simulators and manpower at it on a last-minute contract basis. That approach began running out of steam at 45/40nm, and especially with finFET-based SoCs.
• While there always was interest in and attention paid to high-level modeling, there also has been a certain level of distrust about models' accuracy and what has been abstracted away. That distrust increases with the complexity of what's being modeled. Models have shown the best adoption in relatively narrow segments, such as star IP, where they can be fully characterized against a wide range of conditions and use models. But even that is beginning to change.
• A culture of experts and expertise is at the core of design automation’s evolution. Chipmakers tend to trust people more than tools, which has always made the adoption of automation relatively slow, and prior to 45/40nm design houses were able to get away with that in more places. But complexity has swelled far beyond human comprehension. Add to that the need to begin software development earlier in the design cycle and it has become impossible to remain competitive without emulation, advanced simulation, FPGA prototyping and at least some high-level modeling.

Much of this is changing the profitability equation on the tools side, and the stakes in this game are rising proportionately. Mentor Graphics and Cadence both rolled out virtualized emulation recently, which allows more processing power to be added as necessary. At the same time, Synopsys has been actively pushing into software tooling and prototyping.

The message behind these moves is that complexity is lucrative for everyone if it can be adequately managed, and bad for everyone if it is not.

“We’re seeing two types of SoCs being developed that are using emulation,” said Jean-Marie Brunet, marketing director for Mentor Graphics’ Emulation Division. “In one bucket are SoCs that rely on standard interfaces like USB or PCI Express. They’re a complex set of interfaces, but they are standard, and you can move with a high level of confidence with models for complex SoC tasks. The second involves complex SoCs where the interfaces have slight modifications. These companies don’t share their core IP, so it’s difficult to model and it takes a lot of time. In the first bucket, the race is on for how quickly you can do an SoC and verify it. In the second, it’s a more difficult environment to capture the physical target and bridge to the virtual world. It requires a huge amount of modelization.”

This causes its own set of problems. Some engineering organizations can adapt to the changes; others cannot.

“It’s not a shift that happens naturally,” Brunet said. “Bucket No. 1 drives tremendous standardization. We’re seeing new protocols by the quarter, and all of the IP providers are being forced to stop and develop models. It’s a self-fulfilling ecosystem. Bucket No. 2 is forcing EDA vendors to provide more solutions.”

IP vendors are indeed developing models. ARM purchased Carbon Design Systems in October because of its modeling technology, which can be used to more quickly verify ARM cores in designs.

“Historically, modeling has proven extremely important for companies working the mobile sector because of the design methodologies being used,” said Bill Neifert, director of models technology in ARM’s Development Solutions Group (and Carbon’s former CTO). “The increasing complexity of SoC design for other consumer-facing markets is now driving further adoption. Accurate and early modeling will deliver design efficiencies and a faster way of getting to market, and we are already seeing this in the server and enterprise sector.”

The acquisition works well for both Carbon and ARM. It gives Carbon access to ARM’s designs much earlier than in the past—sometimes up to a year earlier—and it allows ARM to develop sophisticated and well-tested models for its new cores and software before they reach customers. That helps cut through some of the complexity, particularly for SoC integration, because detailed IP characterization already has been baked into those models, and it speeds time to market.

Second-best choice?
What’s less obvious here is that tools that were pushed aside for years have largely come to the rescue, because no elegant solution has ever been developed for hardware-software co-design or one-button verification. These are generally brute-force solutions, and the sheer compute muscle they provide is making them very popular.

“Emulation is really another way of doing hardware-software co-design, and so far no one has come up with a sophisticated way of doing that,” said Mike Gianfagna, vice president of marketing at eSilicon. “You’d like to have both an instruction- and cycle-accurate model of everything and be able to do it in software. Right now you have to design and debug the chip, and then design and debug the model of the chip. There is no other way. And there is no model capable of doing software.”

He’s not alone in that view. Not everything has worked out as promised a decade or more ago, which puts the emphasis back on better processing power and throughput. The big changes there are scalability of that compute power and virtualization, meaning the compute resources can be shared by groups that may be scattered in different locations and time zones.

“We need a breakthrough in making verification smarter,” said Frank Schirrmeister, group director for product marketing of the System Development Suite at Cadence. “While doing that, we are thankful for emulation and system-level modeling, both of which can help address this. Complexity has grown so much that mistakes are unavoidable. There are so many things you can overlook, which is why you need the ability to look at the system more holistically.”

Models historically have been a mixed bag for design teams. For star IP, such as processors or standard interfaces, those models are well defined. That’s due, at least partly, to the fact that they are used in more designs and are used for multiple generations of hardware.

“The problem for models is new content and different assemblies,” Schirrmeister said. “If you put it together in a different way than in the past, it may break.”

Boom time
Nevertheless, the rising level of complexity, the emphasis on delivering software faster, and tighter design schedules leave engineering teams little choice but to buy faster tools and to focus on higher levels of abstraction where it has been proven to work. And as adoption increases, so do familiarity with new tools and methodologies and experimentation with those tools. That, in turn, drives more sales. And increased sales drive tool vendors to invest more and compete harder, which is reflected in new capabilities, such as improved ease of use, and in new standards efforts to enable them.

“New use modes for emulators enable a much wider user base for emulation than in the past,” said Zibi Zalewski, general manager for the hardware division of Aldec. “We all remember how traditional bit-level acceleration worked. It was actually quite hard to achieve good speedups, especially with test-benches that consumed most of the simulation time. It was also hard to re-use classic test-benches with an emulator, which usually communicates via transaction messages. Things are changing now with the adoption of transaction-level methodologies in test-bench development. Increased usage of UVM is bridging the gap for easier integration between the simulator and an emulator. Once we have UVM, or transaction-level test-benches in general, it is more natural to connect with an emulator and benefit from the speed it provides. The test-bench remains practically the same, with an additional function layer for connectivity. Such an emulation mode is ideal for hardware developers. Their test-bench is reused, and simulation is accelerated with the very advanced debugging utilities available in emulators.”
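The speed difference Zalewski describes can be sketched with a toy model. This is not a real UVM or emulator API—all the names here are hypothetical—but it shows why transaction-level test-benches accelerate so much better: a bit-level test-bench crosses the expensive host/emulator boundary on every cycle, while a transaction-level one sends a single message per burst and lets a bus-functional layer inside the emulator expand it into pin activity at full speed.

```python
# Toy illustration (hypothetical names, not a vendor API): counting
# host<->emulator crossings for bit-level vs. transaction-level stimulus.

class EmulatorLink:
    """Models the host/emulator channel; each crossing is expensive."""
    def __init__(self):
        self.crossings = 0

    def send(self, payload):
        self.crossings += 1  # one round-trip per message


def bit_level_write(link, addr, data, cycles_per_word=4):
    # Classic bit-level acceleration: the simulator-side test-bench
    # drives every signal on every bus cycle, so an N-word burst costs
    # N * cycles_per_word crossings.
    for word in data:
        for cycle in range(cycles_per_word):
            link.send({"addr": addr, "bits": word, "cycle": cycle})


def transaction_level_write(link, addr, data):
    # Transaction-level: one message describes the whole burst; a
    # bus-functional model inside the emulator expands it into cycles.
    link.send({"kind": "WRITE_BURST", "addr": addr, "data": list(data)})


bit_link, txn_link = EmulatorLink(), EmulatorLink()
bit_level_write(bit_link, 0x1000, range(16))
transaction_level_write(txn_link, 0x1000, range(16))
print(bit_link.crossings, txn_link.crossings)  # prints "64 1"
```

The 64-to-1 ratio here is illustrative, but it captures why simulation-bound test-benches made classic acceleration disappointing, and why a transaction-level layer lets the same test-bench be reused with an emulator.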

The result has been a huge increase in the popularity of emulators, and a growing reliance on those sales by tool vendors. In a statement to analysts this quarter, Wally Rhines, Mentor’s chairman and CEO, attributed part of a quarterly sales shortfall to a stall in emulation sales while users evaluated new equipment.

The impact of software
Software teams have always been big believers in emulation, but until recently they never had the internal resources to buy it. Even five years ago it was not uncommon to hear stories about software engineers using emulators in the middle of the night while the hardware engineering teams were asleep. But as software increasingly becomes integrated into hardware design, those kinds of stories are becoming far less common. The result has been speedier designs, and much better integration of the two worlds.

“Software teams had been testing the software using virtual models and platforms, usually starting the project before RTL code for the whole SoC or a subsystem was available,” said Zalewski. “To migrate software tests to hardware, everybody had to wait for designers to provide hardware-implementable code and then run it in emulation or prototyping. That meant software and hardware teams were working separately, and SoC-level verification could happen mainly at the prototyping stage. To resolve the problem of SoC-level testing at earlier stages of the project and shorten overall verification times, integration between virtual environments/models and emulators has been introduced. Since part of the SoC is available as a virtual model while the rest is RTL, the idea is to connect those two worlds and provide SoC-level verification much earlier than was possible in the past. This again enables more users, and actually gives access to the whole SoC or its main subsystems.”
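The hybrid setup Zalewski outlines can be sketched in miniature. In this deliberately simplified example (every class and address is hypothetical, not any vendor's interface), a fast virtual CPU model runs boot software against a block that stands in for cycle-accurate RTL in the emulator, with a transactor bridging the untimed and timed worlds—so software can exercise the "RTL" long before the full SoC exists:

```python
# Hypothetical sketch of hybrid emulation: a virtual model and an
# RTL stand-in stitched together by a transactor. Illustrative only.

class RTLPeripheral:
    """Stands in for a cycle-accurate RTL block running in the emulator."""
    def __init__(self):
        self.regs = {0x0: 0, 0x4: 0}  # control and status registers

    def write(self, offset, value):
        self.regs[offset] = value
        if offset == 0x0 and value & 1:
            self.regs[0x4] = 0xAA     # report 'ready' once enabled

    def read(self, offset):
        return self.regs[offset]


class Transactor:
    """Bridges untimed virtual-model bus calls to the RTL side."""
    def __init__(self, rtl, base):
        self.rtl, self.base = rtl, base

    def write(self, addr, value):
        self.rtl.write(addr - self.base, value)

    def read(self, addr):
        return self.rtl.read(addr - self.base)


class VirtualCPU:
    """Fast functional model; lets software run before all RTL exists."""
    def __init__(self, bus):
        self.bus = bus

    def run_boot_code(self):
        self.bus.write(0x4000_0000, 0x1)   # enable the peripheral
        return self.bus.read(0x4000_0004)  # poll its status register


bridge = Transactor(RTLPeripheral(), base=0x4000_0000)
cpu = VirtualCPU(bridge)
print(hex(cpu.run_boot_code()))  # prints "0xaa"
```

The point of the structure is the one made in the quote: because the software stack talks only to the bus interface, the RTL stand-in can later be swapped for the real emulated block without touching the software tests, which is what moves SoC-level verification earlier in the project.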

That hasn’t completely dampened complaints about resources, though. Cadence’s Schirrmeister said software teams will always complain about the speed of emulation, which is never fast enough for them, or FPGA prototyping, which runs faster but takes longer to bring up.

“Software is insatiable with respect to speed,” Schirrmeister said. “But what’s key here is that the software needs to be able to talk to the hardware, and at each new node emulation needs to run more and more cycles. Even edge nodes, while they may be small, do interact with the system around them.”

What’s becoming apparent with sales of emulation, FPGA prototyping, and even advanced and specialized simulation is that no one has figured out how to leverage abstractions to the point where the same hardware will suffice. There are simply too many variables, too much change, and far too much emphasis on doing designs that are different from competitors’ for one size to fit all.

That puts the onus back on more horsepower, abstraction in more limited doses, and methodologies for fusing all of this together. And while it isn’t perfect, it does get the job done more quickly—and increasingly for more elements within a design, including software. A decade ago, these approaches may have seemed like second-best choices. But design has always been an evolutionary process, and put in perspective these increasingly look like the best options for managing design complexity, ensuring hardware and software both work properly, and establishing a long-overdue direction for tool vendors so they can continue to refine and improve these options.
