Re-using or buying IP saves time, but the results aren’t always predictable. Here are several ways to improve your chances for success.
Every design these days, regardless of whether it’s a processor, an SoC, an ASIC, an FPGA or a stacked die, relies on a combination of re-used and third-party intellectual property. No company—not even Intel, Apple or Samsung—can build everything itself within a highly compressed market window.
There is a spectrum of IP use and re-use, of course. In some cases, it may involve a handful of blocks. In others, particularly in the burgeoning Chinese electronics market, startups are competing based upon third-party IP and reference designs, and that IP makes up the bulk of the design. But no matter how good the characterization of any IP—energy requirements, noise limits, frequency, signal path and access—or how limited its use, inevitably something will go wrong. The chip may run too hot, draw too much current, suffer from intermittent signal degradation caused by noise from another operation, or simply not work at all. And when that happens, the finger pointing begins. It’s not always clear who’s really at fault, and the blame tends to start wide and work its way in before the real culprit is found.
This scenario is all too common. Moreover, it’s counterproductive. It costs money and takes time to resolve. It also frazzles the nerves of dedicated engineering teams, often pitting one against another within the same company. And even worse, it frequently leads to a less-than-perfect design. So what can be done about it? The answer is still evolving, but there are some clear directions emerging, along with some common observations about where the problems begin.
What goes wrong
“The challenge is that IP is used in many ways, and when you design within a power budget many of the embedded systems may not work properly if everything is on,” said Jon McDonald, technical marketing engineer for the design and creation business at Mentor Graphics. “So you’re modeling use cases, but how much of the system is going to be on all the time? You need to make sure you have the appropriate controls because those are really critical to the power system.”
McDonald said IP can be designed in isolation, but how it’s going to be used at the system level is where the problems begin. “It’s not hard to get to the point where you have hundreds or even thousands of power states in a complex design. Understanding which power state it should be in and what to do about it is even more complex.”
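The use-case modeling McDonald describes can be sketched as a toy power-budget check: for each use case, sum the power of the blocks that are on and compare it with the system budget. All block names and numbers below are invented for illustration, not drawn from any real design.

```python
# Hypothetical sketch: checking a use case's power draw against a budget.
# Block names and power figures are illustrative assumptions.

BUDGET_MW = 250.0

# Per-block active power in mW (assumed values)
BLOCK_POWER_MW = {
    "cpu": 120.0,
    "gpu": 180.0,
    "modem": 60.0,
    "video_decode": 45.0,
    "audio": 10.0,
}

def use_case_power(active_blocks):
    """Sum the power of the blocks that are on in this use case."""
    return sum(BLOCK_POWER_MW[b] for b in active_blocks)

def within_budget(active_blocks, budget_mw=BUDGET_MW):
    return use_case_power(active_blocks) <= budget_mw

# A "video playback" use case: CPU, video decode and audio on; GPU, modem off.
playback = ["cpu", "video_decode", "audio"]
print(use_case_power(playback))   # 175.0 mW
print(within_budget(playback))    # True

# Everything on at once blows the budget, which is exactly why
# power-state controls are critical.
print(within_budget(list(BLOCK_POWER_MW)))  # False
```

In a real flow, each block would have multiple power states rather than a single on/off figure, which is how designs end up with the hundreds or thousands of states McDonald mentions.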
He noted that for most designs, the accuracy goal for power estimates is 80%, but that in most cases design teams are happy if it’s more than 50% accurate—as long as they can understand the tradeoffs between one approach versus another, or one piece of IP versus another, with enough clarity to make an educated choice. But a lack of accuracy also can create other problems in highly complex designs.
“There are three problems you need to deal with in integrating even two IP blocks together—functionality, timing and power,” said Ajay Jain, director of product marketing at Rambus. “With functionality, there are corner cases you probably never considered when you looked at IP by itself. The interface could have ambiguities. With timing, the interface has a set of assumptions, and you have to match the timing requirements on both sides. With power states, those need to be defined as part of the spec, particularly with regard to switching power. And that’s just on the digital side. On the analog side, you have to look at the fact that it will suck power whether you like it or not. All the pieces have to be budgeted, but even if they all look good, what happens when the blocks are supposed to do something? How well you do your power estimation is the key.”
Planning is critical, and not all companies are good at it. Or, even worse, they start out with what Chris Rowen, a Cadence fellow, describes as wishful thinking. “The IP numbers may be perfectly accurate under one set of circumstances, but the actual context is different when you hook it up to lots of memories and buses, which are never in an ideal location. Or after many iterations you start thinking about how you’re going to minimize something to fit on a die, and because it worked at the last generation you can just divide by two for the estimated floor plan. But the reality is that many pieces of IP are fiendishly complex, and even though it’s only one square millimeter it has a few million gates, and it’s being used with a different metal stack, different library versions and memories alongside them instead of above them or with a bus that runs through the middle.”
Add to that process variation and physical effects below 28nm, and the troubles get even worse. They also show up in incompatible methodologies and approaches to using IP.
“The areas where we have seen integration and implementation issues are in design margin, where IP suppliers tend to margin their IP in a balanced approach targeting the center of the process with 3-sigma margins, leveraging the fact that foundries tend to have margins in their SPICE models,” said Ron Moore, vice president of marketing for the physical design group at ARM. “However, some applications, such as server and networking, want to ensure functional operation at 6-sigma.”
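The gap between those margin targets is easy to quantify. Assuming Gaussian process variation, the fraction of parts falling outside a symmetric margin drops from roughly 0.27% at 3-sigma to about two per billion at 6-sigma, which is why server and networking customers push for the wider margin. A quick sketch:

```python
import math

def tail_fraction(k_sigma):
    """Two-sided fraction of a normal population outside +/- k sigma."""
    return math.erfc(k_sigma / math.sqrt(2))

for k in (3, 6):
    frac = tail_fraction(k)
    print(f"{k}-sigma: {frac:.3g} ({frac * 1e9:.3g} per billion)")
# 3-sigma: ~0.0027 (about 2.7 million per billion)
# 6-sigma: ~2e-9 (about 2 per billion)
```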
Moore said that aging and reliability analysis cause problems as well, because IP suppliers don’t have the time, sufficient silicon or enough customer data to understand the effects of aging on their IP. Signoff methods also differ: foundries usually require worst-case SS corners, while IP vendors prefer less-pessimistic SSG corners. All of these areas “combine to add fuel (and religion) to the debate on methods to handle OCV” (on-chip variation). That includes a constant derate across the chip, stage-based derating, path-based derating and other statistical variants.
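As a rough illustration of how those OCV approaches differ, the sketch below (all stage delays and derate values are invented) compares a flat, chip-constant derate with a stage-based derate that relaxes for deeper paths, where random stage-to-stage variation partially averages out:

```python
# Illustrative sketch of flat vs. stage-based OCV derating on a timing path.
# Stage delays and derate values are assumptions, not real library data.

stage_delays_ps = [50, 40, 60, 30, 45]  # nominal gate delays along one path

def flat_derate(delays, derate=1.10):
    """One derate factor applied to the whole path (constant across chip)."""
    return sum(delays) * derate

def stage_based_derate(delays, base=1.15, relief_per_stage=0.01, floor=1.03):
    """Derate relaxes with path depth: random per-stage variation partially
    cancels on deep paths, so each added stage earns a little relief."""
    n = len(delays)
    derate = max(base - relief_per_stage * (n - 1), floor)
    return sum(delays) * derate

nominal = sum(stage_delays_ps)                    # 225 ps
print(round(flat_derate(stage_delays_ps), 2))     # ~247.5 ps
print(round(stage_based_derate(stage_delays_ps), 2))  # ~249.75 ps
```

With only five stages the two methods land close together; the divergence, and the debate Moore alludes to, shows up on very short or very deep paths, where a single flat number is either too optimistic or far too pessimistic.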
“It is important that designers integrating IP from multiple vendors know the methodology for margins and characterization used by each supplier, and how that needs to be accounted for within their own chip-level methodologies,” he added. “The differences can lead to missing PPA (power, performance, area) expectations. A simple example: the design of the power grid for the SoC could favor one style of signoff over another, putting two suppliers on opposite ends of the benchmark or support issue.”
Fix No. 1: Use bigger pieces
There has been talk for years about putting together more pieces of IP into subsystems. While subsystems are available for such functions as video and audio, the majority of integration efforts underway today don’t fit into any neat category. This is particularly true as chipmakers gear up for the Internet of Things, where the key concerns are power and connectivity but design budgets are much smaller than for smart phones.
“There’s been a lot of talk about always-on applications and the Internet of Things,” said Eran Briman, vice president of marketing at CEVA. “When you look at the devices that are connected, the first layer is probably Bluetooth, WiFi, Zigbee, and the next layer is sensing for things like motion, sound or humidity. The next step is ‘always on.’ You see that with Google Nest, which is always listening, or a smart watch, which is always connected. The fourth layer is audio and voice, which is what you get with Google Glass and also a smart watch. But what you find is that, at least for now, the user is at the center of IoT devices. It isn’t a vending machine. It’s communication with a person, and the main issue in this market, which is largely wearables, is power consumption. That requires a different way of thinking.”
Briman said power is what is holding up mass adoption for these devices because you don’t want to plug in a watch to recharge it every five hours or even every few days. “The reality is you need about seven days of battery life, and the only way to do that is an application-specific processor with multiple functions built in.”
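That seven-day target translates into a hard arithmetic constraint. A back-of-the-envelope sketch, assuming a 300mAh, 3.7V smartwatch-class battery (illustrative figures, not from CEVA):

```python
# Back-of-the-envelope average power budget for a week of battery life.
# Battery capacity and voltage are assumed, wearable-class figures.

battery_mah = 300          # assumed smartwatch-class battery
battery_v = 3.7
target_days = 7

energy_mwh = battery_mah * battery_v              # 1110 mWh of stored energy
avg_power_mw = energy_mwh / (target_days * 24)    # spread over 168 hours
print(f"Average power budget: {avg_power_mw:.1f} mW")  # ~6.6 mW
```

A single-digit-milliwatt average budget rules out keeping a general-purpose processor awake, which is the case for the application-specific, multi-function processor Briman describes.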
CEVA isn’t the only one to have spotted this trend toward integrated IP. ARM’s CTO Mike Muller described it several years ago as “bigger LEGOs.”
“One chipmaker’s automotive infotainment chip has six different chips integrated into one,” said Aveek Sarkar, vice president of product engineering and support at Ansys-Apache. “With software-controlled radio, you have to worry about noise of one radio to another because it goes through the silicon substrate, which means you need to do substrate analysis. That’s why the software guys are now working with the package and chip guys. It’s definitely important to worry about this because it affects whether the IP will work and whether it will work in context.”
And one way to solve that is to integrate more IP into a single offering. Whether this is a subsystem or a bunch of integrated functions is a semantics issue. But the problem being addressed is the same—complexity requires some level of pre-integration.
Fix No. 2: Run more use cases, include more characterization
There’s an inherent irony in the increased amount of IP in SoCs. More complexity requires more third-party IP and more IP re-use, but it also raises the number of possible use cases and increases the complexity in a different way.
“We’ve seen almost every use case you can imagine because our installed base is so big,” said Navraj Nandra, senior director of marketing for DesignWare Analog and MSIP Solutions Group at Synopsys. “But the way to solve some of this stuff is basic rules of floor planning and layout. Sometimes the problem is the impedance of the substrate, and with finFETs it all comes down to the substrate.”
He said that at 14/16nm, the questions being asked by companies using IP are more complicated. “We’re seeing feature merges or the customers are expecting more from the IP. So the dialog we’re having may be around a beachfront they’ve established on the SoC, where there are five or six interfaces with some internal and some external IP and they all have to talk at the signal integrity level. Or it involves the timing budget. But as you put more pieces together, you have to deal with horizontal issues—which are across a set of titles—and vertical issues, which deal with the PHY to the link to the firmware to the software.”
That level of complexity also makes it difficult to swap out one vendor’s IP, or even IP from the same vendor, for different IP that may offer better performance or lower power in a particular context.
“With soft IP, it’s wide open how it’s used in the context of an SoC,” said Mark Baker, director of product marketing at Atrenta. “The question is whether you’re going to trigger all of the capabilities of that IP, and any time we sit down with customers what comes up is their ability to get IP to perform as it’s supposed to on the spec sheet. A lot of times they can’t do it.”
Baker noted that the problem gets worse with finFETs. While leakage and performance improve, dynamic power makes up a larger share of the total, so the design has to be optimized to manage it. That requires changes in what you want to verify and in the power intent.
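The dynamic power Baker refers to scales as P = αCV²f, so lowering the supply voltage pays off quadratically. That is why power intent (voltage domains, DVFS) matters so much once dynamic power dominates. A small illustration with invented values:

```python
# Dynamic switching power: P = alpha * C * V^2 * f.
# All operating-point values below are illustrative assumptions,
# not figures from any finFET library.

def dynamic_power_w(alpha, c_farads, v_volts, f_hz):
    """alpha: switching activity factor; C: total switched capacitance."""
    return alpha * c_farads * v_volts**2 * f_hz

# Same block, two operating points: dropping V from 0.9V to 0.7V and
# f from 1.0GHz to 0.6GHz cuts power by almost 2/3, mostly from V^2.
hi = dynamic_power_w(alpha=0.2, c_farads=1e-9, v_volts=0.9, f_hz=1.0e9)
lo = dynamic_power_w(alpha=0.2, c_farads=1e-9, v_volts=0.7, f_hz=0.6e9)
print(f"{hi:.4f} W vs {lo:.4f} W")  # 0.1620 W vs 0.0588 W
```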
Fix No. 3: Learn the tools
Part of the issue involves the people working on the chip, too. While some companies have been working with complex power issues for the better part of a decade, others are just beginning to grapple with them at 40nm and 28nm. Integrating IP with power issues at those nodes is a lot different than at older nodes.
“There is a class of customers that are experts in power,” said Mark Milligan, vice president of marketing at Calypto. “But there’s a big change happening in the middle of the market. They have no deep expertise and they’re starting to do analysis. Their best efforts are based on power analysis, and it’s not working well. They’re not getting results. Those folks need tools that help them implement low-power optimization techniques quickly.”
As with any profession, experience is important, and with power it’s especially critical.
“You can’t take numbers for granted,” said Lawrence Loh, vice president of engineering at Jasper Design Automation. “The IP in theory may work in several modes, but how many possible different ways are there to use it?”
There are no simple answers, and as designs move to smaller geometries, expertise is critical. The problem is that it isn’t always readily available within companies. “There will always be surprises with IP and power,” said Loh. “The key is to be able to specify the scope that it’s going to be used for.”
While that sounds simple enough in theory, it’s a lot harder to do with a complex SoC. Problems will arise, IP companies will be accused of providing misleading specs, and the flow of blame will continue from one group to another and back again. And at least for the foreseeable future, it appears that chain of events will continue.
“In the IDM world methodologies were consistent top-to-bottom,” said ARM’s Moore. “In our disaggregated industry these become potential problems at the late stages of SoC signoff that can result in a ‘customer in distress.’”