Experts At The Table: IP

Second of three parts: Changing dynamics of working with customers; risk vs. modification; context; blocks vs. subsystems; consolidation and the role of small IP developers; defining IP quality.

By Ed Sperling
Low-Power Engineering sat down to talk about IP with John Goodenough, vice president of design technology and automation at ARM; Simon Butler, CEO of Methodics; Navraj Nandra, senior director of marketing for DesignWare analog and mixed-signal IP at Synopsys; and Neil Hand, product marketing group director at Cadence. What follows are excerpts of that discussion.

LPE: Are we seeing a blurring of the lines between design teams because of the shift to more third-party IP?
Hand: Customers want off-the-shelf IP. But there's a big distinction between becoming part of their design team and working with them all the way through, versus working on an outsourced basis. Customers want predictable, proven IP.
Butler: It may take a month to deliver the library, and then there are all kinds of bugs. So you don’t ship the library anymore. You work in a common environment.
Hand: Most customers don’t have to deal with that level. But the changes you make to hard IP should be zero.
Nandra: In terms of the feature sets and configurations, when it comes to physical IP it’s a well defined set. The work is done by the IP vendor. They’re proving it on silicon. And then what we ship is a black box.

LPE: Isn’t all IP a black box?
Nandra: The softer it gets, the more configurations customers expect.
Hand: With soft IP such as a memory compiler, you essentially have a compiler for that memory. It's still delivered as code to a customer. They have to do the implementation, so it's more of a gray box.

LPE: But the trend is still away from touching that, right?
Hand: You still need to go through the physical implementation. You need to take their design context and optimize it or you won’t be able to hit the power, performance and area targets.
Goodenough: You need to supply the IP with the recipe to get people to the context. We provide soft IP and we provide the recipe to go through an implementation flow.
Nandra: The customer touches the IP, but the goal is to configure it.
Goodenough: They’re touching it in a carefully constrained way.
Hand: Some of these IP blocks have a huge number of configurations, so no two deliverables will be exactly the same.
Butler: And you don’t want to create part numbers for these things. It’s better to ship the IP with the tool that helps configure the IP.

LPE: One of the evolving trends in design these days is more rationalized use of resources—only what’s needed. A second trend is to do more exploration, comparing different IP and different implementations. Are those trends in sync?
Hand: It depends on the risk associated with configurability. On soft IP they want configurability because there’s a lower perceived risk. There will be some parts where if you touch the core there’s too much risk. There’s always a tradeoff between configurability and risk.
Goodenough: If you look at an ARM core, we'll let people modify the cache sizes and take a gross functional block and turn a SIMD accelerator on and off. If you go and play with a new bus topology for an SoC, you have to go validate it, but there's a lower perceived risk around that. One trend we do see, which our customers are pushing, is to provide larger subsystems. It's not just the core, but a core and a memory subsystem that's ready to go out with software that is known to run on it, including the big.LITTLE switches. They can adopt the correct risk profile that they want.

LPE: That’s basically managing context internally, right?
Goodenough: Yes. With the amount of effort customers are putting into validation, whether it's logical validation of systems or signoff, or going through the implementation floorplans, they're equating time to market with something that is known good.
Hand: Unless there’s a material impact. Your customers want to focus on their differentiation. If you can give them something that’s proven and working and not going to impact their differentiation, they’ll take it as it is. If they can turn a knob and lower power, then they want that knob to turn. If you look at SoCs today, about 80% of them are the same. What’s different is how they’re configured, how they’re balanced and how they’re mapped to their customer application. That’s the secret sauce they bring to the table.

LPE: What happens to the IP industry if we’re pushing into larger and larger subsystems that are more contextually aware?
Hand: There will be increasing consolidation. The cost associated with building IP is going up. You can't bring a small piece of IP to market that people will bank on for a 28nm or 20nm chip. That will be a natural process of maturing of the industry.
Goodenough: I agree. It's a scale problem—the number of people you're trying to supply while staying on the edge of physics and software. Only a company of scale can do that.
Butler: But the cost of entry for startups is down.
Goodenough: Yes, and you can still do an IP model as a one-off for one company because your context is constrained. You can still see a lot of innovation where there is a constrained problem. The question is how you scale from one to many. That’s a challenge for the classic IP industry.
Hand: It's similar to what happened in core EDA. Everyone said EDA startups were dead because the model wasn't scalable. EDA startups continue to happen, but once one gets to a point of scale, if it has viable technology, then one of the larger EDA companies acquires it. It will be similar for IP. Once an IP company becomes viable and needs scale, either it becomes a major player with a large investment, which is unlikely, or it will be bought up by another company.
Nandra: We see a lot of companies that want to go to a complete chip, and through that process realize they have a lot of valuable IP. They get on the radar of other companies that need that sort of function. Most companies don't start out as IP companies. They start out doing design services or with aspirations of becoming a chip company, and through that process they build a lot of interesting functions that are valuable to other people.
Goodenough: You see a lot of technical innovation in function.
Hand: If you look at interface IP, that’s one area where there has been a lot of consolidation. Five years ago there were a lot more companies doing standards-based IP. That’s shrinking rapidly. But there are other areas where there isn’t the level of standardization, such as analog front ends. There is a lot of innovation there.
Butler: OpenAccess was a good way of bringing EDA vendors onto a single platform. That drove innovation because it meant startups could be on the same database as Cadence and Synopsys and it was easier to plug their tools in. Will there be something similar in the IP world?
Goodenough: That was the original idea behind IP-XACT, which is a meta-data standard. There are some very successful things being done off the back of that, such as the ability to define register maps and take a lot of pain out of integration. IP-XACT is necessary but not sufficient by itself. You need other standards to glue the Legos together. But you also need to put your glue into a modeling environment. Which one do you use? Which synthesis flow do you use? There is diversity, which is legitimately driven because people are trying to optimize design points and cost structures.
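[The register-map integration Goodenough describes can be sketched in a few lines. This is an illustrative simplification: real IP-XACT (IEEE 1685) uses a namespaced XML schema, but the core idea—machine-readable register metadata that tools consume to automate integration—is the same. The element names and the `register_defines` helper below are hypothetical, not from the standard. —Ed.]

```python
# Sketch: turn a simplified IP-XACT-style register map into C #define lines,
# the kind of integration chore the metadata standard automates.
# NOTE: real IP-XACT uses the namespaced IEEE 1685 schema; this XML is a
# stripped-down illustration of the same structure.
import xml.etree.ElementTree as ET

IPXACT_SNIPPET = """
<component name="uart0">
  <addressBlock baseAddress="0x4000">
    <register><name>CTRL</name><addressOffset>0x00</addressOffset></register>
    <register><name>STATUS</name><addressOffset>0x04</addressOffset></register>
    <register><name>DATA</name><addressOffset>0x08</addressOffset></register>
  </addressBlock>
</component>
"""

def register_defines(xml_text):
    """Emit one #define per register: block base address plus register offset."""
    root = ET.fromstring(xml_text)
    block = root.find("addressBlock")
    base = int(block.get("baseAddress"), 16)
    prefix = root.get("name").upper()
    lines = []
    for reg in block.findall("register"):
        name = reg.findtext("name")
        offset = int(reg.findtext("addressOffset"), 16)
        lines.append(f"#define {prefix}_{name} 0x{base + offset:04X}")
    return lines

for line in register_defines(IPXACT_SNIPPET):
    print(line)
```

Because every vendor's tool can read the same metadata, the header generation, documentation, and verification views all stay consistent with a single source description.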
Hand: And a big piece of that is defining a quality standard. What is an acceptable quality for IP and how do you measure that? If customers can't quantify something it's seen as a risk. Going forward, the lack of a well-defined quality standard makes it hard for smaller companies to prove to their customers that their IP is worth buying.
Butler: Quality is an intangible thing. It’s not clear you’ll ever define it.
Goodenough: And there is no standard integration environment. Putting together a standard definition of quality is nearly impossible.
Hand: But even if you solve all those modeling and integration problems, there's still a question for a smaller player: how do you prove the quality of what you've got?
Goodenough: It’s a business-to-business transaction.
Hand: But then you have a small company dealing with a large company, which is putting its whole future up for grabs based on a piece of unproven IP. How do you prove it? As a small company, that’s a big problem.
Goodenough: That’s back to the trust issue. You trust the guy doing the IP because you’ve worked together for 20 years and they’ve done this before. ARM is a trusted company. We’ll stand behind it, no matter what it takes.
Hand: But it’s easier for ARM, Synopsys or Cadence to build that trust than three or four guys doing consulting and working in a shed.
Nandra: Most of the customers willing to take on a bigger risk are driven by cost. The actual cost is always more because something inevitably goes wrong.

LPE: A lot of this started with very standardized pieces. Those are no longer so standard. Are there new markets shaping up for IP?
Nandra: The biggest growth is in smartphones and tablets. These customers are driving the smaller technology nodes, and there’s lots of innovation at the fabs to get devices to work at these small feature sizes. There’s a combination happening between baseband chips and application processors. Customers are looking at combining the two. That makes a huge SoC, and we all have to work in technology that isn’t very friendly.
Butler: We see the blurring of boundaries. When a company is looking to interface with a vendor on the bleeding edge, there are so many revisions and so much churn as they nail down what the IP needs to be that they need a different way of interacting with a customer. Just downloading something from the Web site doesn't work anymore. You need visibility into the customer's environment and visibility into regressions in the test environment. One vendor has set up a portal to give their customers visibility into the IP that's being generated. That way the customers can figure out if the vendor is on track to have something within the promised four weeks or six weeks. It's having a way to bring the two teams together and build trust, without having to be on site or have VPN access, and to be able to abstract out the quality and the progress that's being made.
Goodenough: This is trust and concurrent engineering.