Experts At The Table: IP

First of three parts: Context with power, performance and area tradeoffs; who’s responsible when something doesn’t work; characterizing IP; fear of getting things wrong.

By Ed Sperling
Low-Power Engineering sat down to talk about IP with John Goodenough, vice president of design technology and automation at ARM; Simon Butler, CEO of Methodics; Navraj Nandra, senior director of marketing for DesignWare analog and mixed-signal IP at Synopsys; and Neil Hand, product marketing group director at Cadence. What follows are excerpts of that discussion.

LPE: Where are the problems with IP?
Nandra: Customers are asking for IP blocks in the leading-edge process technologies. You’ve got high-performance requirements for analog to reside in an SoC that is designed for digital performance. Our customers are asking us to follow all the digital scaling trends without sacrificing performance. On the soft IP side, there’s a lot more complexity in the functionality. There are requirements for PCI Express Gen 3 and USB 3.0. The complexity is increasing significantly. Plus, a lot of these things are standards-based, but customers still want IP that is differentiated on power, area and performance.
Butler: The complexity of the systems is increasing. Assembling the IP and managing all the different interfaces and the various deliverables for the IP is becoming a real challenge. As these complex SoCs begin to integrate third-party IP, as well as the IP developed in-house, there’s no one person who has a full understanding of all the deliverables. You may have a person who understands the analog space but not the RTL requirements. There are a lot of derived views. One of our customers has 108 views for a single IP. When it comes to promoting that IP up to the SoC level, they’re asking for automation around that and for an integrated verification platform that can gauge whether a particular change passes the consistency checks across all those views.
Hand: One of the big changes is the scope of what is expected to be covered in a piece of IP. As the amount of IP being used and the complexity increase, so does the scope of a particular piece of IP, both in terms of how much functionality it covers and the verification environments. A big part of that is that as you get more IP, you have to move up a level so each piece is more manageable. Otherwise the integration of the SoC becomes an intractable problem.
Goodenough: The main change we’re seeing is that IP is expected to operate on the bleeding edge of physics and software. We see a twin challenge to make sure the IP is validated, packaged and fit for purpose in those two domains. And we’re doing that in an environment that now has pace; the level of pace required in engineering with the IP consumers is the key differentiator. You’re concurrently developing the IP with your lead customers, who are on the edge of physics and on the edge of software. IP is becoming less of a nice little box and more of a concurrent engineering process. We see this trend where a lot of activity in IP re-use assumes a stable world, and the world is not stable. Things like change management—managing ECOs, configuration management, managing patch levels—that’s where all our focus is. We can define what RTL is. We can define what a piece of verification IP is. But there is never a stable definition because everything is evolving.

LPE: What you’re talking about is context. The context is more complex, right?
Hand: That’s correct. You may want to explore the PCB environment it’s in and do a signal-integrity analysis to make sure it all works. Other customers want it all to fit into a virtual system model to combine with the rest of their IP. Everything is becoming much more concurrent. The good news is it’s driving adoption of a lot of the EDA tools and technologies that have been out there for a while.
Nandra: When your customers are challenged with really short product cycles, they want the IP quickly—even when the technology is not stable. We’ve started designing 28nm and 20nm IP with very early versions of the PDKs. It’s a mini-context. You have to design in an environment where the stuff around your IP isn’t stable. When it gets into an SoC that’s another context, where you have to figure out noise and coupling and skew. And there’s a context above that, at the system level, where people have to figure out how the package and the lead frame relate to the IP and how that relates to the SoC. It’s almost like multi-context. But IP is at the lowest end of the food chain. If there’s a problem, you get the phone call first. A lot of times we find problems in the cable or the connector or the board, but we’re the ones who have to figure it out. The upside is we learn a lot about cables and connectors and boards, which is critical to our IP business.
Hand: If you look at who’s buying IP today, a lot of times it’s customers who never bought IP in the past. Now you’ve got standard interfaces where it doesn’t add value for the customer to build them themselves. What’s changed is that in the past the IP market was one step behind. Now it’s at the leading edge.

LPE: But not everything is always on and off. Sometimes it’s somewhere in between. How does that affect context?
Goodenough: One aspect of IP quality is whether it is functionally fit for purpose. The scope of environments you’re trying to validate for scales up. If you take big.LITTLE, you’re validating a multicore system that’s interacting in complex ways with big.LITTLE switches, hypervisors and operating systems on top of that. As an IP provider, you’re now anticipating the environment your IP will be deployed into. Otherwise, everyone will be pointing back to the IP provider if there’s a problem. If you don’t understand the context—complex software and physics environments—you don’t know whether it really is your problem. ARM works in partnership from the applications developers down to the foundry. A key part of IP is being able to understand the context and marshal the ecosystem, not just today but to what it’s going to be next year. When a big multicore system running the latest version of Android in someone’s SoC has just fallen over, whose problem is it? We’re putting a lot of emphasis on system debug and system finger-pointing. One of the biggest challenges on schedules is trying to triage the debug and find out where the problem is. It may be an SI problem on the board. It could be in the driver.
Hand: That’s what’s driving a lot of it. If a customer outsources a piece of IP, they’re also outsourcing their core expertise in that area. Who are they going to lean on for that expertise? It will be the IP provider. The IP provider does have to understand the whole context. You do have to become the expert.
Butler: Yes, you become the fall guy.
Hand: You are the expert, and you quickly have to get to the cause of the problem. If it’s your problem, the customer knows you will fix it quickly. If it’s not, the customer knows you’ll pinpoint the area where it actually is.
Butler: So how do you monetize that kind of expertise?
Hand: It depends on the context of what’s going on. With leading-edge IP, there’s a larger business agreement because you are assisting them with that. It was no different when verification IP started: when something died, the first assumption was that the verification IP was bad. This is no different.

LPE: Is IP really being characterized properly?
Butler: No. One of the problems we see when we look at the design methodologies inside big SoC houses is that they’re looking for a continuous-build approach to hardware design, because they have so many software and firmware variants they’re using to make their offering unique. What they’re finding is that just doing the validation is a huge problem.
Goodenough: Internally and externally, we see this as a configuration management problem. At one time, when you looked at configuration in an SoC, it was all about how to rapidly do X, Y or Z. Now the hardware is pretty much fixed. You’ve turned this piece off, you’ve tied this one off, and now it’s a different software stack in the mobile space.
Butler: And there is so much complexity in all these different levels that people are scared to release blocks because they worry they’re the ones who are going to break it. They don’t have visibility across all the various pieces. The tools are still catching up, particularly when it comes to hardware-software compatibility. It’s kind of a black art.
Nandra: Each customer has a different constraint-file setup. You have to ship those unique constraints to that customer. An interesting statistic is that it can take up to a month to download a library. Those databases are getting huge.
Goodenough: The file sizes are terabytes.
Nandra: The corner sets are becoming unique. You have constraints and corner sets and all these environments they’re looking at.

LPE: What’s the solution? Is it to provide more context or more pieces, such as subsystems?
Hand: It’s a combination of both. One part is that the pieces will get bigger as a natural evolution. The other is giving people tools to explore the context, whether it’s hardware or software or co-verification. A third part is a way of capturing the metadata that defines that IP within a different context. That way you can explore the architecture using the metadata that defines each level.
Butler: The boundaries are getting blurred, and the IP provider is becoming an extension of the design team. It’s starting to sound like an outsourced design environment.
Nandra: The customer is expecting you to be part of the design team until the product gets out the door.


