Second of two parts: Standards, tools and security are all being addressed, but there are no simple solutions.
As the number of possible issues for integrating IP into complex chips mounts, so does the focus on solving them.
What becomes quickly apparent to anyone integrating multiple IP blocks is that one size doesn’t fit all, either from an IP or a tools standpoint. There is no single solution because there is no single way of putting IP together. Each architecture is unique, and each brings its own problems in terms of power, performance, noise, security and expected use models.
Still, where there are common challenges, there are people working on them.
Tools and standards
EDA and standards bodies are fully aware of these issues. There are currently 13 IEEE EDA standards, and more on the drawing board outside of IEEE. Three of those are directly related to IP (IEEE 1685, 1734 and 1735), and almost all are at least tangentially related to it. But how those standards are applied isn’t always consistent.
“The challenge is not knowing what the other side of the interface is looking for, and that’s what causes problems,” said Yatin Trivedi, director of standards and interoperability programs at Synopsys. “If you’re using a subset of a standard and I’m using a subset, how do you know those subsets are compatible? Part of the problem we’re encountering is the use, not the standards themselves. On top of that, IPs have a lot of content that is sensitive. So the same block may be configured differently and behave differently.”
Trivedi said that IP vendors make assumptions based upon the problem they are trying to solve, but those assumptions can change from one design to the next. “The software engineer needs to know how configurable parts of the hardware are, too. If they don’t get all the details, they may think they only have three domains to work with, but in reality they may have seven.”
One way to solve this is through verification IP (VIP), which has become a thriving business for the big EDA/IP companies. Five years ago, VIP was predominantly developed in-house by chipmakers and integrators to create testbenches. Since then, many of them have handed that work off to VIP vendors, who now deal with the proliferation of different processes from one node to the next, and from one foundry to the next.
“One of the factors driving this is a growing trend in re-use for bigger and more complex devices, and for some devices that are shrinking such as sensors, ASICs and custom analog,” said Jason Polychronopoulos, product manager for verification IP at Mentor Graphics. “At the same time there is an increase in standardization. But you also need to add performance, low power and innovation in standards, which makes it a challenge to keep up with them. We’ve seen a number of standards fanning out, then coming back together. That’s true with the MIPI interfaces, where there are a number of new interfaces that are consolidating over time. It’s also true for PCI Express. Sometimes there are even competing ideas in a standard. If you look at PCI Express, the base specification is 1,000 pages.”
And even that isn’t always enough information.
“Even with the most tested IP you can still have problems because the way you use it is not exactly the same,” said Prasad Subramanian, vice president of design technology at eSilicon. “And you can only find out with prototype testing. Most times, the changes you need can be outside the IP. But sometimes you need to go back to the IP itself and address the problem.”
Subramanian noted that problems are even worse with new versions of standards. “If they aren’t exercised enough, there are more bugs. It takes time for a new generation of a standard to evolve. It might not even be that complex, but it can still cause problems.”
Standard IP vendors don’t like to talk about it, but one way systems vendors gain an edge is by customizing standard IP in their designs. Behind the scenes, this is creating a two-tier market: companies that can afford to optimize all commercial IP for their designs, and those that buy it off the shelf.
This isn’t always as straightforward as it might appear, though.
“We pay for that sometimes,” said Darren Jones, senior engineering director at Xilinx. “But that doesn’t always work so well, because now I have a very difficult verification job. What we really want is the interconnect and a way of managing power. And give us the software that works with it, too. As a customer, we just want the pieces. How we put them together is our secret sauce.”
The challenge for IP providers is scalability of their work. They need to be able to write it once and have it work in multiple designs. But given that large systems vendors are at the forefront of design, much of the exploratory work and optimization done for those companies can be productized afterward for a larger customer base.
ARM began dealing with this several years ago with the introduction of its Processor Optimization Pack (POP) technology, which adds a benchmarking report documenting the exact conditions under which its cores have been used, along with a list of implementation details. “What this does is get you to a known starting point,” said ARM fellow Rob Aitken.
Navraj Nandra, senior director of marketing for DesignWare analog and MSIP at Synopsys, raised this issue in a roundtable discussion last spring (http://semiengineering.com/ip-challenges/): “Customers are starting to look for differentiation in IP. That breaks the whole idea behind IP because you want to build an IP company that is scalable with IP that works everywhere. Leading customers are not just starting to outsource IP—they’re outsourcing the entire team. We made a recent acquisition involving AMD’s IP team. We’re starting to deliver a whole bunch of IP to customers. But how do you maintain an investment and margin that can support a standard IP model and still give the customer some kind of differentiation out in the market?”
The flip side is how an IP vendor can maintain a reputation for well-tested IP if a more customized version doesn’t work as planned. “In theory, we have more understanding because we develop IP,” said Lawrence Loh, group director at Cadence. “But the cost is higher and there is higher visibility.”
And that visibility isn’t always good in situations where complexity makes integration much more difficult. “You’re seeing a lot more IP that is being characterized for any process node, any variation, any custom circuit design,” said Sean Smith, CAD architect at Soft Machines. “That creates a bottleneck. That’s why there is much more collaboration between the IP provider and their customers.”
Adds Xilinx’s Jones: “The business model is I design my controller. I need to sell it 10 times to make money. It’s got to be cookie-cutter, and that’s a real challenge for the IP industry. It has to be as close to cookie-cutter as possible, but sometimes that’s unrealistic.”
Consolidation, security and an uncertain future
This helps explain why there is so much consolidation in the IP industry these days. Scale allows for more testing, more characterization, and more customization where it is required. A decade ago the IP business was scattered among dozens of smaller companies, such as Artisan, MIPS, Tensilica and Virage Logic. They have been swallowed up respectively by ARM, Imagination Technologies, Cadence and Synopsys, along with many others of varying sizes, with no end in sight.
The new wrinkle on this consolidation is that IP vendors increasingly are buying more EDA companies because having tools to integrate and model that IP is becoming a requirement. ARM bought Carbon Design Systems last week for its modeling technology, and Duolog earlier this year for IP integration. Likewise, EDA vendors are buying more IP companies, creating similar business models in reverse.
Still, it’s uncertain whether consolidation ultimately will create new headaches. One of the advantages of standardized IP is that it can be used with confidence in many different designs. That plays into the hands of a few large vendors with the channels to sell that IP and the wherewithal to characterize it across many designs. The downside is that it also makes it harder for chipmakers to differentiate themselves, so they are forced to customize that IP, which in turn makes integration more difficult.
On the plus side, consolidation puts IP development into the hands of companies that can afford to take security very seriously, which will become critical in an increasingly connected world. That includes more robust authentication, more complex designs to hide critical data, and more investment in tools to detect counterfeiting. IPextreme, for example, just rolled out a fingerprinting technology to determine not only what IP is in the chip, but what version of that IP is being used. “This grew out of engagements we had with companies where we saw that IP management was being done on spreadsheets,” said Warren Savage, the company’s president and CEO. “What this also does is detect if IP has been modified. You can scan the IP and create a fingerprint file, then figure out if you have an exact file match. You also can figure out if you have modifications and comments on those files, and whether it has soft or hard IP tags.”

Those kinds of tools are welcome news across the chip industry. “Any automation that can help us is a good thing,” said Soft Machines’ Smith. “It’s becoming a bigger and bigger issue. Our customers are asking us the same things about our processors. It’s a valid concern. We’ve seen viruses and worms where people are attacking anything and everything. It’s the same vector for IoT.”
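The core idea Savage describes — scanning an IP deliverable into a fingerprint file and then checking for exact matches or modifications — can be sketched in a few lines. This is not IPextreme’s actual technology, just a minimal illustration of file-level fingerprinting using content hashes; the function names and directory layout are hypothetical.

```python
import hashlib
from pathlib import Path

def fingerprint_ip(ip_dir):
    """Hash every file in an IP deliverable into a fingerprint manifest."""
    manifest = {}
    for path in sorted(Path(ip_dir).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(ip_dir))] = digest
    return manifest

def diff_fingerprints(golden, observed):
    """Compare a golden fingerprint against a scanned one.

    Returns files that were modified, removed, or added -- an exact
    match is the case where all three lists come back empty.
    """
    modified = [f for f in golden if f in observed and golden[f] != observed[f]]
    removed = [f for f in golden if f not in observed]
    added = [f for f in observed if f not in golden]
    return modified, removed, added
```

A real fingerprinting tool would go further — normalizing whitespace and comments, distinguishing soft from hard IP, and tracking version tags — but the match/modified/missing distinction above is the basic check.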
There also are numerous other technologies being developed by large IP vendors to limit access to, or completely isolate, critical data within both soft and hard IP. These include ARM’s TrustZone, Synopsys’ secure extensions for its ARC processors, Imagination’s Ensigma, the embedded security inside Cadence’s Tensilica cores, Andes Technology’s secure cores, and secure add-ons for network-on-chip technology from both Arteris and Sonics. And there are many others in the works as chipmakers begin grappling with the IoT’s most glaring risk factors.
“We haven’t solved the problem yet,” said Xilinx’s Jones. “A lot of these safety-critical systems need to be secure. But at the same time, if you hand this off to a customer and they bring it up in their lab, they’re going to want it to be fully debugged. You have to think about giving them the ability to debug, and then turn that off in production. Did you really turn it off, though? A lot of security holes are software or firmware. But you need to have rock-solid hooks in hardware to enable software to do the right thing. If we put too much in hardware, they won’t be able to enable something and turn it off later. I don’t feel as if we have the right tools yet.”
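The pattern Jones describes — debug hooks that are open during lab bring-up but irrevocably closed in production — is often modeled as a debug port gated by a one-time-programmable fuse. The sketch below is a toy software model of that pattern, not any vendor’s implementation; the class and attribute names are hypothetical.

```python
class DebugPort:
    """Toy model of a hardware debug hook gated by a production fuse.

    During bring-up the port is open. Blowing the one-time fuse
    (irreversible in real silicon, modeled here as a latch) locks it:
    software can request debug, but hardware has the final say -- the
    "did you really turn it off?" check Jones raises.
    """

    def __init__(self):
        self.production_fuse_blown = False  # one-time programmable
        self.debug_enabled = True           # open for lab bring-up

    def blow_production_fuse(self):
        # End-of-manufacturing step; disables debug permanently.
        self.production_fuse_blown = True
        self.debug_enabled = False

    def request_debug(self):
        # Software asks for debug; the fuse state decides.
        if self.production_fuse_blown:
            return False
        self.debug_enabled = True
        return True
```

The design choice here is the one Jones alludes to: the enable/disable decision lives in hardware (the fuse), while software merely requests access, so a firmware bug cannot silently reopen the port after production.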