Second of two parts: Managing pain is a growing problem, extending from the block to the SoC to the software developer community and all the way into the cloud.
In part one of this series, the focus was on overlapping and new pain points in the semiconductor flow, from initial conception of what needs to be in a chip all the way through to manufacturing. Part two looks at how companies are attempting to manage that pain.
It’s no secret that SoCs are getting more complicated to design, debug and build, but the complexity is spreading well beyond the boundaries of the chip. Physical effects such as heat and electrostatic discharge have become device-wide issues. Security, meanwhile, can involve many devices. And software development may span far beyond those devices when it involves turning on and off modules, devices, and in some cases controlling power dynamically across a massive data center. Even entire businesses may have to be reconfigured to improve their efficiency and effectiveness because the old silos of separating out software from hardware or manufacturing from IP testing and integration no longer make sense.
“There are two trends,” observed Aart de Geus, chairman and co-CEO of Synopsys, at the company’s recent Synopsys User Group. “One is scale complexity, and you can predict that out 10 years. The second is systems complexity.” He added that “the need for handling more complexity is upon us.”
Both types of complexity are forcing these changes, and the tools world has been benefiting handsomely from it. In fact, EDA numbers have grown steadily for 16 consecutive quarters, according to the most recent statistics provided by the EDA Consortium. But for chipmakers, the price includes more than just new tools. Expensive flows and processes need to be reconfigured to include more software development and analysis and more integration of third-party IP, and business relationships have to be rethought on every level. At the leading edge of design, that also means placing big bets on new processes, integrating more third-party IP into non-standard configurations, and making assumptions that big gaps will be filled.
These aren’t just digital issues anymore, either. At every level, analog is creeping into digital design as more sensors and power management are added everywhere.
“In the entire design flow, mixed signal is still an art — analog synthesis and analog verification are still an art,” said Oz Levia, vice president of marketing and business development at Jasper. “Interestingly enough, what we see and what we hear from customers is that even the digital verification problem is getting worse for them, not better. It’s not just our regular dig at simulation. People are now turning more and more to formal [verification]. It’s because simulation is not doing the job, so that’s an acute problem for people.”
The same stuff, only much better
That has led to a demand by customers for better tools—but preferably with minimal additional learning required. And it has pushed EDA vendors to ratchet up the performance of their tools, as well as their ability to work with other vendors’ tools, as in the case of co-design of hardware and software; chip, package and board; and in pre-integrated subsystems. In fact, the amount of work going on across the industry to boost the performance of tools is astounding. There has never been an equivalent effort in the history of the semiconductor industry.
“There is generally a sense that designers are going to rely more and more on software and less and less on hardware,” said Levia. “What we see is more and more reliance on software, and therefore the design and the verification of software/hardware interaction is going to become more and more acute and is becoming a challenge. And it’s software at all levels: bare metal, embedded, deeply embedded, application, OSes — all levels. It’s becoming more and more acute. And not too many people have automatic solutions for that — really nobody. Today, to do software/hardware verification you need to address 500 million gates and 2 million lines of code. With heterogeneous processors, with parallelizing systems and compilers, it’s a bear. It’s even a bear to think about. It’s a big problem. In the past, we were catching the bear cubs and learning how to catch bears. This is a whole new level.”
Like all engineering problems, it’s a matter of understanding the big issues, and breaking it down into manageable pieces. That’s why every toolmaker has been improving tools in every area—even those that have been neglected for years. Design cycles are being tightened, tools are being shifted from single processor to multi-processor capability, and everything is being integrated much more closely.
This doesn’t solve all problems, but it certainly makes some less painful. And this needs to be repeated on every level.
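To make the single-processor-to-multi-processor shift concrete, the rough sketch below fans a set of independent block-level checks out across CPU cores using Python’s standard multiprocessing module. The block names and the check_block routine are invented placeholders rather than any vendor’s implementation; the point is simply that independent analyses parallelize cleanly.

```python
# Rough sketch: fan independent block-level checks out across CPU cores.
# check_block() and BLOCKS are invented placeholders for real tool internals.
from multiprocessing import Pool, cpu_count

BLOCKS = ["cpu_cluster", "gpu", "memory_ctrl", "serdes_phy", "always_on"]

def check_block(name):
    # Stand-in for an expensive, independent per-block analysis
    # (a lint, timing, power or verification run on one block).
    acc = 0
    for i in range(2_000_000):
        acc = (acc + i * len(name)) % 1_000_003
    return name, acc

if __name__ == "__main__":
    # Single-processor flow: blocks are analyzed one after another.
    serial = [check_block(b) for b in BLOCKS]

    # Multi-processor flow: the same independent checks run in parallel,
    # which is where much of the recent tool speed-up comes from.
    with Pool(processes=cpu_count()) as pool:
        parallel = pool.map(check_block, BLOCKS)

    assert sorted(serial) == sorted(parallel)  # same answers, less wall-clock time
```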
“PCBs don’t have the same luster of the silicon world, where at every node you have to retool,” said David Wiens, product marketing manager at Mentor Graphics. “But we’re seeing the same kinds of complexity increasing. Five years ago the lines and spaces were 250 microns. Today they’re 50 microns. The micro-via technology is a derivative of semiconductors, and you have to think about more stuff simultaneously—signal performance, power distribution, thermal effects and manufacturability. All of that has an impact on the layout phase. On top of that, we’re seeing new demographics for the people laying out designs.”
In the past month, both Mentor Graphics and Cadence — the market leaders in PCB tooling — have added significant upgrades to their PCB tools. This is an area where tools tend to last for decades, which is exactly what happened with the previous versions. The new releases use a multiprocessing base to improve performance by at least 10 times.
Synopsys likewise upgraded its place-and-route tool, despite the fact that the market for these tools has seemed relatively flat according to market data. At Synopsys’ recent user group meeting, De Geus called it “the largest single technical endeavor Synopsys has ever done.”
And while the Big Three EDA vendors garner the lion’s share of attention, that kind of improvement is happening everywhere in the EDA industry. Tanner EDA, which in the past has provided tools primarily for analog developers, has shifted from analog to mixed signal. “We’re constantly seeing requests to upgrade and add new features for layout, schematics and cross-probing,” said Greg Lebsack, president of Tanner EDA. “But the R&D roadmap has changed. It was always customer-driven to an extent, but now it’s even more customer-driven.”
Many gaps remain
What’s tugging at a lot of chipmakers, though, is a feeling that designs aren’t bulletproof anymore. The reality is that they never were, but when chips were smaller and simpler—fewer functions, gates, power domains and voltages, less third-party IP, less embedded software—at least there was a feeling that a good engineer could understand most of what was happening on a chip. The push to incredibly complex SoCs has changed that, forcing more reliance on tools and black-box IP, and forcing engineering teams to deal with many more physical and proximity effects that are hard to validate and verify effectively.
“As we add more complexity, we’re not just able to keep up using the same approaches,” said Aveek Sarkar, vice president of product engineering and support at ANSYS/Apache. “We have to do more. But if we do it quickly, are we doing it wrong? Customers’ designs are getting bigger. It’s no longer just the chip. It’s the package and the chip, and you have to turn it around in a reasonable amount of time.”
It’s also the number of pre-fab pieces that are being included inside of SoCs to reduce time to market.
“There is a big pain point around the integration of all the pieces,” said a Cadence fellow. “For the past decade we’ve had to decide what to do to solve problem A, problem B and problem C, and when we figured that out we went back to our business and were told to be happy. But now there’s overlap because you want one subsystem to do A, B and C, so you have to come up with an effective solution for a diverse mix. Plus, the overall infrastructure for hardware and software and what you need for memory, bandwidth and the interconnect aren’t clear. So there are a number of plumbing issues and software issues, and then you have to figure out what your application environment is going to be.”
And it’s not just the application environment. It’s how those chips will be used within that environment.
“One of the most pressing issues we see from our customers is how to design and develop an SoC that offers the most efficient solution in terms of performance, power and also flexibility, which tend to vary from device to device,” said Eran Briman, vice president of marketing at CEVA. “As the use cases and feature sets of mobile devices are constantly changing, new issues and challenges arise that need to be addressed. The latest of these involves how to efficiently handle complex multimedia processing required in today’s mobile computing devices, mixing ‘screen-off’ applications, such as always-listening voice triggering or face unlocking, with ‘screen-on’ applications such as object-based audio playback or 3D depth mapping. Scaling the power consumption of a design to be able to juggle between high-performance and low-power use-cases is a true art, but with the help of certain IP blocks available today we can reduce designers’ pain in this space.”
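What that juggling looks like in practice can be sketched very simply: pick the lowest-power operating point that still meets the current use case’s performance demand. The code below is a toy illustration only; the mode names and power figures are invented, not CEVA’s numbers.

```python
# Toy sketch of use-case-driven power scaling. All names and numbers are invented.
from dataclasses import dataclass

@dataclass
class OperatingMode:
    name: str
    max_freq_mhz: int     # performance this mode can deliver
    est_power_mw: float   # characterized power in this mode

# Hypothetical modes for a multimedia subsystem.
MODES = [
    OperatingMode("screen_off_voice_trigger", max_freq_mhz=50,   est_power_mw=4.0),
    OperatingMode("screen_on_audio_playback", max_freq_mhz=400,  est_power_mw=120.0),
    OperatingMode("screen_on_3d_depth_map",   max_freq_mhz=1200, est_power_mw=750.0),
]

def pick_mode(required_mhz: int) -> OperatingMode:
    """Choose the lowest-power mode that still meets the performance demand."""
    feasible = [m for m in MODES if m.max_freq_mhz >= required_mhz]
    if not feasible:
        raise ValueError(f"No mode delivers {required_mhz} MHz")
    return min(feasible, key=lambda m: m.est_power_mw)

if __name__ == "__main__":
    for demand_mhz in (30, 300, 1000):
        mode = pick_mode(demand_mhz)
        print(f"{demand_mhz:>4} MHz needed -> {mode.name} (~{mode.est_power_mw} mW)")
```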
Still, even the most routinely used blocks take on new challenges in complex designs. Consider memories and I/O controllers, for example. This used to be pretty routine stuff for design teams. But it’s not so simple when companies start changing the speeds of their SerDes IP and waffling back and forth on process technology.
“There are a lot of hard problems around high-speed interfaces,” said Frank Ferro, senior director of product management at Rambus. “There are different kinds of SerDes interfaces, different speeds, and different process nodes. We’re seeing more requests for quotes at 40nm these days, but some customers are now asking, ‘Do we go with 28nm or 40nm?’ The main tradeoff as we get into higher performance is power efficiency, but there is also worse leakage. So how much leakage can they tolerate? Because of that, we’re getting requests to port high-speed SerDes interfaces into LP processes.”
These kinds of decisions ripple out across a design into other areas, too, setting off chain reactions of decisions that affect everything from the IP to the speed of buses to the size of memories and where those memories are located—and creating confusion just about everywhere. “We’re seeing RFQs (requests for quotes) all over the map,” said Ferro. “It’s not always obvious where they’re going.”
This creates chaos further down the flow. Understanding how blocks interact is a big issue for chipmakers, but being able to do something about it from a tools perspective frequently happens too late in the flow.
“The idea that you’re estimating power with accuracy is great, but it’s happening way too late in the cycle to be valuable,” said Mark Baker, director of product marketing at Atrenta. “There are a lot of statistical approaches to RTL estimation. You can bring physical awareness into the problem to remove uncertainty and identify hot spots in the design and reduce them or optimize them out. But what we still need, as an industry, is to bring context into the use of IP, and also to understand how to use that context to determine power, performance, area and timing. That requires the ability to build better abstraction models for IP, and there is work to be done there.”
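The statistical side of RTL power estimation can be boiled down to a textbook calculation: sum the dynamic term alpha * C * V^2 * f per block from assumed toggle rates and switched capacitances, add a leakage guess, and flag anything above a threshold as a hot spot. The sketch below does exactly that; every block name and number is illustrative, and it is not Atrenta’s method.

```python
# Simplified RTL-level power estimate: P_dyn = alpha * C * V^2 * f per block,
# plus a flat leakage guess. All block names and numbers are illustrative.

VDD = 0.8          # supply voltage, volts (assumed)
FREQ_HZ = 1.0e9    # clock frequency (assumed)

# block -> (toggle rate alpha, switched capacitance in farads, leakage in watts)
BLOCKS = {
    "cpu_cluster": (0.15, 3.0e-9, 0.050),
    "gpu":         (0.25, 5.0e-9, 0.080),
    "memory_ctrl": (0.10, 1.0e-9, 0.015),
    "serdes_phy":  (0.30, 0.5e-9, 0.020),
}

def block_power_w(alpha, cap_f, leak_w):
    dynamic = alpha * cap_f * VDD ** 2 * FREQ_HZ
    return dynamic + leak_w

def estimate(blocks, hotspot_threshold_w=0.5):
    report = {}
    for name, (alpha, cap_f, leak_w) in blocks.items():
        p = block_power_w(alpha, cap_f, leak_w)
        report[name] = (p, p > hotspot_threshold_w)
    return report

if __name__ == "__main__":
    total = 0.0
    for name, (p, hot) in estimate(BLOCKS).items():
        total += p
        flag = "  <-- hot spot" if hot else ""
        print(f"{name:12s} {p:6.3f} W{flag}")
    print(f"{'total':12s} {total:6.3f} W")
```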
The bottom line is that problems aren’t getting easier to solve. In fact, many of them are getting more difficult, even with new tools. On top of that, new problems are being created. But what’s also happening is that tools once designated as ‘something to use down the road’ are starting to see real use. That includes high-level synthesis and formal verification, among others. Formal, in particular, languished for years in niche markets because there were too few engineers who could craft assertions. Suddenly, formal approaches are becoming ubiquitous.
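For readers who have never crafted an assertion, the sketch below shows the basic idea on a toy arbiter: state a property (“the two grants never fire in the same cycle”), then check it against every input sequence up to a bounded depth rather than a handful of random ones. The arbiter, its planted bug, and the property are all invented for illustration; real formal tools use far smarter algorithms than brute force.

```python
# Minimal sketch of what "crafting an assertion" and checking it exhaustively
# means. The toy arbiter, its planted bug, and the property are all invented.
from itertools import product

def arbiter(state, req0, req1):
    """Toy round-robin arbiter. state = (last_grant, idle_cycles).
    Planted bug: after 3+ idle cycles, a simultaneous request grants both."""
    last, idle = state
    if not req0 and not req1:
        return (last, idle + 1), False, False
    if req0 and req1:
        if idle >= 3:                       # the buried corner case
            grant0, grant1 = True, True
        else:
            grant0, grant1 = (last == 1), (last == 0)
    else:
        grant0, grant1 = bool(req0), bool(req1)
    return (0 if grant0 else 1, 0), grant0, grant1

def mutual_exclusion(grant0, grant1):
    """The assertion: the two grants never fire in the same cycle."""
    return not (grant0 and grant1)

def check_all_sequences(depth=6):
    """Brute-force bounded check of every input sequence up to `depth` cycles --
    the exhaustive spirit of formal, minus the clever algorithms. Random
    simulation would rarely keep both clients idle for three straight cycles."""
    inputs = [(r0, r1) for r0 in (0, 1) for r1 in (0, 1)]
    for seq in product(inputs, repeat=depth):
        state = (1, 0)
        for req0, req1 in seq:
            state, g0, g1 = arbiter(state, req0, req1)
            if not mutual_exclusion(g0, g1):
                return f"assertion violated by input sequence {seq}"
    return f"no violation found up to depth {depth}"

if __name__ == "__main__":
    print(check_all_sequences())
```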
“We’re used to thinking of technology adoption on the Geoffrey Moore timetable,” said Randy Smith, vice president of marketing at Sonics. “The tools were there and they worked on block-level problems, but anyone trying to do full-chip verification was kidding themselves. What’s changed is that different problems are easier to solve with these tools, and the tools are easier to use—and the criticality of the problem requires that you use them.”