Will The Chip Work?

First of two parts: Strategies are changing to deal with more IP blocks in more complex SoCs.

IP is getting better, but the challenges of integrating it are getting worse.

As the number of IP blocks in SoCs increases at each new process node, so does the difficulty of making them all work together. In some cases, this can mean extra code and a modest penalty in power and performance. In other cases, it may require more drastic measures, ranging from a re-spin to a new architecture with different IP.

To deal with this, integrators and chipmakers are rolling out a number of different strategies and methodologies that are as unique and varied as the chips they’re building. Sometimes those approaches work quite well, sometimes they don’t. But what’s becoming obvious, based upon in-depth interviews with dozens of engineers conducted over the past 12 months, is that a significant shift is underway in how chipmakers and service providers are approaching IP integration—and how the IP industry is serving its customer base.

Among the common themes that have emerged:

Risk Assessment. At the very top of every chipmaker’s list is how much risk they incur by adding IP into an SoC. This kind of number crunching is what initially made the commercial IP business so lucrative. If a block doesn’t differentiate a design, there’s no reason to build a processor core or memory controller or standard interface, and there’s a strong likelihood that someone who specializes in IP can do it better, faster and cheaper.

The problem is that the number of blocks in an SoC is increasing faster than the ability of tools and standards to automate this process. In a speech to the IEEE last week, Cadence Fellow Chris Rowen said there can be more than 120 separate IP blocks at 16/14nm, and there will be even more at 10nm. There is no possible way to fully characterize those blocks against other blocks, particularly internally developed IP that may have been written for a different process.

“Some of the regular things we deal with don’t work correctly,” said Darren Jones, senior director for processor systems design and verification at Xilinx. “Does the IP vendor verify things that we, the SoC integrators, need? That’s still a challenge. There is a wide variety of quality. There are a wide variety of methodologies. We have more IPs to integrate, more clock domains. Our job as integrators is a bigger challenge.”

Jones noted that at the IP selection stage, much of this is very high-level, creating a disconnect with the integration stage. “At the implementation level, a lot of IP vendors get it close enough that we can integrate it pretty well. It may not be perfect. But it is a challenge. We need the IP vendors to step up more and help us with our integration challenge.”

There are two big complications here that can undermine any risk assessment, though. First, it’s common for chipmakers to try to stretch internally developed IP from one design to the next, which may include pushing it to the next process node or from one foundry’s process to another (which doesn’t work well after 40nm). How internally developed IP plays with commercially available IP is always a big unknown. And second, commercial IP vendors have no idea, when they develop their products, how an IP block will be used or what it will sit next to, so they characterize for a wide range of expected scenarios. That still requires integrators and chipmakers to make some guesses about what can go wrong in their particular design, and the best way to do that is good old-fashioned due diligence.

“One of the challenges of implementing IP is ensuring that it works as intended when integrated into the full solution,” said Steven Woo, vice president of solutions marketing at Rambus. “As process geometries continue to shrink, process constraints and effects from neighboring blocks can affect how an IP block functions. We’re seeing increasing challenges not only with designing for functional correctness, but also in maintaining good signal integrity and power integrity within the chip and the system as a whole. It’s important to understand the target environment very well when designing modern IP blocks.”

That’s the hard part, and it takes time, resources, experience, and a fair number of assumptions that may or may not be correct.

“We talk to the IP vendor to understand how they verify and test chips, how their IP is characterized, and how they exercise voltage, process, and temperature corners,” said Prasad Subramanian, vice president of design technology at eSilicon. “That way you know the shortcomings and can assess the risk. If it’s too risky, we may not use it, or we may create our own test chip with that IP. And then, if we’re not satisfied, we may be able to change the spec and have different IP. But the big issue with IP is always corner cases, where it works 99% of the time. You need to be prepared for surprises.”
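As a rough illustration of the corner bookkeeping Subramanian describes, the sketch below compares the process/voltage/temperature corners a design needs against those an IP vendor reports having characterized, and flags any gaps as risk items. The corner values, names and report format here are hypothetical, not taken from any vendor’s actual data.

    # Minimal sketch: compare required PVT corners against an IP vendor's
    # characterization report and flag uncovered corners as integration risks.
    # All names and values are hypothetical illustrations.

    from itertools import product

    # Corners the SoC design must meet (process, supply voltage in V, temperature in C).
    required_corners = set(product(
        ("ss", "tt", "ff"),        # process: slow-slow, typical, fast-fast
        (0.72, 0.80, 0.88),        # supply voltage corners
        (-40, 25, 125),            # junction temperature corners
    ))

    # Corners the vendor claims to have characterized (from its report).
    vendor_characterized = set(product(
        ("ss", "tt", "ff"),
        (0.80, 0.88),              # note: low-voltage corner missing
        (-40, 25, 125),
    ))

    uncovered = sorted(required_corners - vendor_characterized)

    if uncovered:
        print(f"{len(uncovered)} required corners not characterized by the vendor:")
        for process, voltage, temp in uncovered:
            print(f"  risk: {process} / {voltage:.2f} V / {temp} C")
    else:
        print("All required corners covered.")

In practice this kind of gap analysis feeds directly into the decision Subramanian describes: accept the risk, run a test chip, or change the spec and pick different IP.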

This is still a vast improvement compared with third-party IP from a decade ago, when it was uncertain whether it would work at all. Stories of companies buying IP and spending more money to make it functional than the price of the IP were not uncommon. The challenge now is in the integration details, not trying to understand the inner workings of the IP and make it functional.

“IP used to be like a flea market, where there was lots of uncertainty in what you bought,” said Cadence’s Rowen. “Then it moved to an IP warehouse, which was not flexible. The new model is driven by the end application.”

Power issues. While commercial IP vendors have been preaching the need for low power for years, it wasn’t until the proliferation of the smartphone in the early part of this decade that low power really hit home in the consumer market. And in the data center world, it wasn’t until corporate reorganizations around the turn of the millennium put the IT electric bill under the purview of the CIO rather than the facilities manager that more efficient computing became a requirement. Only in the past several years has the focus shifted from server utilization to more efficient server architectures.

At the chip level, power concerns come in many flavors—leakage current at 40 and 28nm, dynamic power at 16/14nm, thermal impacts, power-related noise and signal integrity, ESD, electromigration, power spikes from turning blocks on and off, power drain from always-on IP. The list goes on. But for IP, any of these problems can be compounded as one IP block interacts with another, even if it’s only in the general proximity of another block.

“In the past, when we stitched things together, power wasn’t a concern,” said Sean Smith, CAD architect at Soft Machines. “But now with SoCs, the concept of building IPs and just putting them together isn’t the same. There’s a lot more work that has to be done. Complexity is really going up, and so far the industry hasn’t found an effective way to deal with that.”

At least part of the problem is that power is a global concern, while the focus on IP has traditionally been much more contained. Developers of applications processors for mobile devices have been bridging that gap for the past half-decade, but the rest of the market is just now starting to grapple with the problem as mainstream designs move to 55, 40 and 28nm.

“Traditionally, IP vendors worried about whether their IP is as low power as possible,” said Lawrence Loh, group director at Cadence. “But with clock gating and power domains, the challenge is to make sure that the whole thing works. We have to really think through power up front. We’re seeing it happen, but it’s a little bit slower than I would have thought.”

One of the big changes here is margining. The smaller the feature size, the more that margin costs in both performance and power, which means IP choices and integration become more difficult at each new node. The challenge is particularly acute for companies developing hard IP at 10, 7 and 5nm.

“If you push the area too much and a critical rule changes, you might have to redesign the IP,” said Rob Aitken, a fellow at ARM. “If you don’t push it enough, your IP may be uncompetitive. We’ve taken a predictive approach about what’s the best guess for the effects of quadruple patterning and EUV. So for standard cells, if you use this process, what’s the relative CPU performance and what impacts that? It’s worked well for us.”

Aitken noted that the margining process, applied selectively and carefully, is extremely helpful in avoiding finger pointing if something goes wrong in a design, even if margin is more tightly defined than at older nodes. “So for SRAM, you want to make sure your characterization with certain properties is within 10% of Vdd. And if your I/O changes, at least the IP will continue to work.”
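To put rough numbers on the tradeoff described above, the sketch below uses the standard dynamic-power approximation (P ≈ C·V²·f) to show how a voltage guard-band and a timing margin translate into power and frequency penalties. The values are illustrative only, not taken from any foundry or IP vendor, and the model ignores second-order effects such as leakage and voltage-frequency scaling.

    # Back-of-the-envelope sketch of what margin costs, using the standard
    # dynamic-power approximation P ~ C * V^2 * f. Numbers are illustrative only.

    def dynamic_power(cap_farads, volts, freq_hz):
        """Approximate switching power of a block."""
        return cap_farads * volts ** 2 * freq_hz

    nominal_v = 0.80          # nominal supply (V), hypothetical
    nominal_f = 2.0e9         # nominal clock (Hz), hypothetical
    switch_cap = 1.0e-9       # effective switched capacitance (F), hypothetical

    voltage_guardband = 0.05  # +5% supply margin for IR drop and aging
    timing_margin = 0.08      # 8% of the cycle reserved for uncertainty

    margined_v = nominal_v * (1 + voltage_guardband)
    usable_f = nominal_f * (1 - timing_margin)

    p_nominal = dynamic_power(switch_cap, nominal_v, nominal_f)
    p_guardband = dynamic_power(switch_cap, margined_v, nominal_f)

    print(f"Voltage guard-band power cost: {p_guardband / p_nominal - 1:+.1%}")
    print(f"Timing margin frequency cost:  {usable_f / nominal_f - 1:+.1%}")

The point of the arithmetic is simply that every percentage point of guard-band shows up directly in the power and performance budget, which is why margin gets squeezed harder at each new node.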

Part two of this series will address how EDA and IP vendors are approaching these problems, and what effect consolidation is having on IP integration.


