First of three parts: Defining IP quality; what customers want; IP quality standards.
By Ann Steffora Mutschler
Semiconductor Engineering sat down to discuss the best ways to improve the quality of design IP with Piyush Sancheti, vice president of product marketing at Atrenta; Chris Rowen, Cadence Fellow and former CTO at Tensilica; Gene Matter, senior applications manager at Docea Power; Warren Savage, president and CEO of IPextreme; and Dan Kochpatcharin, deputy director of IP portfolio marketing at TSMC. What follows are excerpts of that conversation.
SE: What does IP quality look like today? How do we define it? How do we measure it?
Kochpatcharin: From the foundry point of view, there are a few ways of looking at it. It ranges because IP is a niche business. In terms of quality—looking at it from a functionality standpoint, from the actual deliverables and the usability—we see a range. As far as customer perception right now, they see that they need to use IP, but they still have to do a lot of incoming testing. In talking with certain customers, they use external IP because they can’t build everything, but they still have to build huge teams just to do incoming testing, verify functionality and so on.
Savage: Customers have this somewhat reasonable expectation that IP should just work, but in my experience I take a somewhat more optimistic view of the world. Most of the companies I deal with are semiconductor companies that are developing IP for internal use, along with a few third-party vendors staffed by people who have worked in big semiconductor companies, and the world has changed quite a lot. In the past, it really was the Wild West. The industry has matured enough that people understand you need to pay attention to your methodology and your checking—and that a design is a process, something that lives a long time. It’s not the throwaway concept it used to be in the earlier days. So I’m a little more optimistic about what I see out there, and the attitude toward quality is probably at a different level than it was even five years ago.
Rowen: The question of how you define quality, or what customers expect for quality, actually depends a lot on what type of IP it is. There are really vast differences between a relatively simple, standards-based interface IP and something that is not just an IP but in fact a whole ecosystem. Processors are a great example of that. And because the more complex the IP, the more types of deliverables there are, the more variety there is in the criteria. But there’s one kind of quality expectation people have about the correctness of the RTL itself—an extremely high, absolutely single-minded commitment to flawlessness. If you take something like the quality of the open source GCC documentation, there’s a different standard of quality. Of course people want it to be correct, but the ramifications of a typo in the GCC open source documentation are different from the ramifications of a typo in the RTL, in the core finite state machine. So when you develop a strategy around quality, you have to look at not only what people want, which is perfection for everything—and that’s an appropriate hope—but really, what’s the cost of quality from the customer’s point of view? Clearly, anything that is going to affect the functionality of their device, its conformance with its specification, or significantly affect the time to tapeout or time to production is going to be one set of things. There are other things that clearly are a little less sensitive because they are going to be minor headaches along the way, which isn’t to say you should be blasé about those things. But as you focus resources and think about the dialog with customers, it’s clear that the core what-goes-in-the-chip product quality is going to rise to the top.
Sancheti: I echo what Chris is saying. In some ways a measure of quality for IP is what the customer/consumer expects to get out of it, so having that common understanding of what is being delivered is ultimately what matters, because the space of variations these days—the number of IPs, their size and scale, the amount of functionality that goes into them—is enormous. From that standpoint, establishing one objective standard of quality is mission impossible in a lot of cases. We look at it as establishing that common understanding between both sides so you can communicate what you’re delivering and, on the other side, the customer can validate what you said. It’s the trust-but-verify concept. Ultimately that is what drives a common understanding of IP quality.
Matter: IP quality in the selection process really comes down to a matter of completeness – does it represent all the modes, all the states and all the functionality that’s required in the design? The other is accuracy. Can I rely on the data you delivered to me to actually predict what I can do when I implement my design? Beyond completeness and accuracy, the IP quality our customers really look for recognizes that there are different strata, or layers, of IP that I need. For hard IP, which is fully implemented on a given process technology and circuit, I require a tremendous amount of quality and completeness because I basically can’t modify the hard macro. If I deliver a hard IP such as a PHY design, and instead of a synthesizable core I provide you a hard macro of a processor core, I’m going to hold it to the exact standards of a full custom design itself. If I take another set of IP that is probably new and fairly pervasive, like a USB 3.0 host controller or a new MIPI interface, I’ll get these in a batch of both soft and hard IP. But what I really want there are the sockets—the interfaces I need. Is it an AMBA interface, is it PCI-like, can I enumerate it? What it comes down to is completeness—completeness with respect to my design. The other thing that’s highly critical is, don’t ignore the software. I’ve gotten a lot of IP that looks great on paper from IP repositories all over the place. I can synthesize it and lay it down, but where are my drivers? If I really want IP quality, it’s not just the quality of the physical implementation of the design. It’s the quality of the solution, and the IP has to be solutions-based. You have to deliver IP that I can actually develop drivers for on any number of runtimes or OSes. The last part of IP quality is really matching this to our flows.
Power is probably the area where a lot of the IP we look at today is lacking – there aren’t power models. The power state definitions match a specification or standard, but they don’t necessarily match the power flow within the design.
SE: Even though it is extremely difficult to have a single standard for IP quality, there have been IP quality checklists in the industry. Are they being followed?
Savage: There have been various attempts at doing these types of things, and they’ve had varying degrees of success, but I think the main success there has been just raising the awareness level, because VSIA, which has been defunct for five or six years, was donated to the IEEE. So there’s kind of a checklist, which is sort of a guideline—it’s a good thing. But the problem with a lot of this stuff is that it’s very subjective and not objective. It still serves as a good list of things, [along with] the GuideWare that Atrenta works on, and TSMC’s 9000 program. We worked with NXP on the core reuse stuff, which at the time was really the most advanced I’ve seen in the semiconductor companies as a corporate-wide check. They’ve used various tools and built wrappers around things, so that is effective for them. Back to the VSIA thing, a friend of mine from ARM, John Biggs, always said that a rule without a tool is not a rule, because if you can’t objectively check it, it becomes just a subjective exercise. And it becomes something that’s not repeatable, so it loses a lot of meaning.
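The "rule without a tool" point can be made concrete with a small sketch: a checklist item such as "all required deliverables are present" becomes a rule only when a script can check it objectively and repeatably. This is a hypothetical illustration—the deliverable categories and manifest format below are assumptions for the example, not any real standard or program.

```python
# Hypothetical sketch: turning a subjective checklist item into an
# objective, repeatable check. Category names are illustrative only.

REQUIRED = {"rtl", "testbench", "integration_docs", "constraints", "power_model"}

def missing_deliverables(manifest):
    """Return the required deliverable categories absent from an IP manifest."""
    present = {entry["category"] for entry in manifest}
    return REQUIRED - present

# Example manifest for a fictional IP delivery
manifest = [
    {"category": "rtl", "path": "usb30_host.v"},
    {"category": "testbench", "path": "tb_usb30_host.sv"},
    {"category": "integration_docs", "path": "integration_guide.pdf"},
]

# Running the check flags exactly what the delivery lacks,
# the same way every time, for every IP block.
print(sorted(missing_deliverables(manifest)))
```

Because the check is mechanical, it is repeatable across vendors and releases—the property Biggs’ maxim is after.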
Matter: That’s a great point: the verification methodology really has to have a tool flow that supports the verification, rather than just compliance checklists. For IP repositories, you can check anything in, but someone needs to verify it. At the companies I work with—large companies—IP reuse is the mantra, because every engineering manager will mandate a high degree of reuse so people are actually spending time on innovation, not on re-validating designs. There are some macro blocks I won’t touch because they are legacy functions, but once I shift process nodes I have to go back through and rerun all of my vectors to make sure everything is still clean before I even check it in. On checkout, I’m going to have to ask, ‘Did I touch this? Did I modify it? Can I modify it?’ And if I did, what did I do about the changes I made to it?
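The check-in discipline described above—asking "did I touch this?" and rerunning vectors before a modified block goes back into the repository—can be sketched as a simple gate: fingerprint the block’s contents and require re-validation whenever the fingerprint no longer matches the last signed-off one. A minimal sketch under assumed conventions, not a real reuse flow:

```python
import hashlib

def fingerprint(files):
    """Hash an IP block's file contents (name -> bytes) to detect modification."""
    h = hashlib.sha256()
    for name in sorted(files):      # sorted so the hash is independent of dict order
        h.update(name.encode())
        h.update(files[name])
    return h.hexdigest()

def needs_revalidation(files, signed_off_fingerprint):
    """If the block changed since sign-off, rerun its vectors before check-in."""
    return fingerprint(files) != signed_off_fingerprint

# A legacy block, signed off as-is at the last regression run
block = {"legacy_mac.v": b"module mac; endmodule"}
golden = fingerprint(block)

# Any edit to any file flips the gate and forces a regression rerun
touched = {"legacy_mac.v": b"module mac; /* tweak */ endmodule"}
```

The design choice is that the gate is content-based rather than honor-based: an engineer cannot forget (or decline) to flag a modification, which is what makes the reuse policy enforceable rather than aspirational.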
Sancheti: What you can standardize is the methods and the enforcement of quality. What you can’t necessarily standardize is one uniform measure of quality that applies equally to a USB core, a SerDes and a processor—at least I haven’t seen where you can uniformly establish one measure of quality. But the processes, the deliverables, the methodology and the transparency between the supplier and the consumer…
Kochpatcharin: I’ve got a problem with this because of what we see from TSMC’s standpoint: half of our wafers have third-party IP, and between 30% and 50% of our new tapeouts—actually now it’s more—have third-party IP. What we see is a lack of uniform validation of the checklists. TSMC has TSMC 9000, and the thing that makes it successful is that we have a team of 30 people who go through the checklist with the IP partners and make sure the customers understand what we are checking.