Bridging The IP Divide

Part 1: A lot has changed since the emergence of the IP-based methodology, which is now going through a major update.

The adoption of an IP-based model has enabled designs to keep filling the available chip area while allowing design time to shrink. But there is a divide between IP providers and IP users: an implicit, fuzzy contract about how the IP should be used, what capabilities it provides, and the extent of the verification that has been performed. IP vendors have been trying to formalize this as much as possible, but it usually ends up as a document written in English.

A limited number of standards have been created to deliver this information to IP users and to serve as the basis for assembly and verification tools. Few assembly tools have been successful so far. What is being done to help system integrators, and can we expect any significant changes or automation in the near future? The industry weighs in with its views.

The notions of IP and reuse have been around for more than 20 years at this point, so you would think there would be little controversy surrounding the subject. Hugh Durdan, vice president of product marketing in the IP Group at Cadence, lays out the well-understood rationale for the practice. “At a high level, the fundamental value proposition of third-party IP is efficiency. If you can invest your time and effort in the pieces that will provide differentiation, then that is what you should do. An IP provider has a well-proven, solid piece of design that can be purchased for a lot less than you could build it yourself, and without the associated risk.”

Reuse is not always that clean, however. A lot of the reuse comes from internal IP, which gets a lot messier. “The statistics do not do a good job of recognizing how much of the internally developed IP has to be modified,” says Drew Wingard, chief technology officer at Sonics. “That is often for legitimate reasons. It may be dealing with different effects on this design compared to the last one, so you could not have expected the developer to have considered it. It may be a new version of the spec that just came out, or needing to process enough frames of video to keep up with 4K. A lot of the time, blocks get changed for things that could have been contemplated but were not. There may be blocks that were optimized for the previous design, but requirements have changed, and that means more changes.”

Quality has always been important for third-party IP and yet there always seem to be complaints about bugs found in IP. “Whenever an IP block is delivered it is usually well defined and verified in a stand-alone manner,” says Prasad Subramaniam, vice president for design technology and R&D at eSilicon. “But they have no idea how the IP is going to be used, and every user is probably going to do something different. Most IP is standards-based and has well-defined protocols and interfaces, but there are still a lot of things that can go wrong when you connect that to the rest of the world. It is practically impossible for anyone to verify all of those combinations. Even though the IP may conform to the standard, there are other signals that are not covered by the standard interface.”

Mike Gianfagna, vice president of marketing at eSilicon, says the system integrator who is bringing IP together from multiple sources has the most difficult job. “They are dealing with the known problems, but it is the unknown problems that will bite you. In complex chips there are always unknowns that you will find out about the hard way. There is no advanced SoC that is not pushing the envelope. There is usually something that has never been done before.”

It is the notion of pushing the envelope that is often at odds with the notion of reuse. “IP reuse has never really taken off as one envisioned, despite progress being made on a number of fronts,” says Ranjit Adhikary, vice president of marketing for ClioSoft. “Examples of progress include new formats such as IP-XACT (IEEE 1685), and IP integration tools such as those from Duolog (now a part of ARM) or from Atrenta (now a part of Synopsys). IP management systems also have not worked very well and essentially have been used as a catalog for IPs within an enterprise. The plug-and-play concept that everyone envisioned has never really kicked off.”

In addition, integration is filled with uncertainty. “From the integrator’s perspective, you are learning new things as you do the integration,” says Subramaniam. “Even when the IP is well-documented, it is still difficult to understand everything about it and how to integrate it.”

Read the @#$% manual
How many people actually read the manual? “We are trying to put in place a lot of documentation and tools to make it easier for the customer to go through the integration tasks,” says Cadence’s Durdan. “When we provide IP we also provide a testbench. We provide our own verification IP along with the design IP so they can verify its integration into the system. We provide signal integrity models for board and package design. These make it easier for the customer, and the documentation contains a lot of guidelines for how to do things. But it is a very challenging task for the customer. I don’t think it will ever be easy.”

There also may be issues with the correctness of the documentation. “Historically, technical documentation has been the source of extremely useful, albeit often stale, information about the IP,” says Anupam Bakshi, CEO of Agnisys. “The biggest problem for IP integrators is not having information about the IP in an executable format. They are provided with a PDF file. That is a challenge and creates a communications gap.”
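
To make that concrete, here is a minimal sketch of what “executable” documentation can enable: a register map held as structured data (plain Python here, standing in for machine-readable formats such as IP-XACT or SystemRDL) from which a C header, or integration checks, can be generated mechanically. The block name, register names, and offsets are all hypothetical.

```python
# Hypothetical register map for an imaginary "uart0" block. In practice
# this data would come from a machine-readable spec such as IP-XACT or
# SystemRDL rather than hand-written Python; the point is that tools,
# not humans, consume it.
REGISTERS = [
    {"name": "CTRL",   "offset": 0x00, "reset": 0x0001},
    {"name": "STATUS", "offset": 0x04, "reset": 0x0000},
    {"name": "BAUD",   "offset": 0x08, "reset": 0x1458},
]

def emit_c_header(block: str, base_addr: int) -> str:
    """Generate C #defines for each register address from the description."""
    lines = [f"/* Auto-generated register map for {block} */"]
    for reg in REGISTERS:
        lines.append(
            f"#define {block.upper()}_{reg['name']}  0x{base_addr + reg['offset']:08X}"
        )
    return "\n".join(lines)

print(emit_c_header("uart0", base_addr=0x40000000))
```

The same description could just as easily drive firmware headers, verification sequences, and the datasheet, which is exactly the gap a PDF cannot close.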

There is little objection to finding ways other than documentation to transfer knowledge. “We want to explain to the integrator, at the IP, protocol, and system level, the impact of everything,” says Ralph Grundler, senior marketing manager for IP at Synopsys. “You can try to do that with documentation, but nobody wants to read that. So you need some pre-built examples, and you can communicate through those examples and talk about the different tradeoffs and optimizations.”

Part of that transfer of knowledge relates to interfaces that are becoming increasingly complex. “Going from a 5GHz to an 8GHz interface may totally change the way in which you transmit data and the way that the data is verified,” Grundler says. “You have to add functionality into the protocol. Consider PCI Express gen 2, which is very similar to gen 1. For gen 3 they had to rewrite the spec and the PHY design in order to meet the goals. They also had to look to the future and realize that they would have to get to 16GHz, which is the theoretical maximum, so they included some of the necessary functionality for that. All of a sudden this core gets a lot more complex and has to support a different protocol for these higher speeds, so the size of the design has doubled.”
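
The gen 2-to-gen 3 jump Grundler describes shows up even in back-of-the-envelope arithmetic. PCI Express gens 1 and 2 use 8b/10b encoding, while gen 3 moved to 128b/130b, which is how an 8GT/s link roughly doubles gen 2 throughput without doubling the line rate. The sketch below simply restates the published per-lane rates and encodings; it is illustrative arithmetic, not part of any spec.

```python
# Usable per-lane bandwidth = line rate (GT/s) x encoding efficiency.
# Gens 1/2 use 8b/10b encoding (80% efficient); gens 3/4 use 128b/130b.
GENERATIONS = {
    "gen1": (2.5,  8 / 10),
    "gen2": (5.0,  8 / 10),
    "gen3": (8.0,  128 / 130),
    "gen4": (16.0, 128 / 130),
}

for gen, (rate_gt_s, efficiency) in GENERATIONS.items():
    gbps = rate_gt_s * efficiency  # usable Gb/s per lane
    print(f"{gen}: {gbps:5.2f} Gb/s per lane  (~{gbps / 8 * 1000:.0f} MB/s)")
```

The encoding change is precisely the kind of protocol-level rework that forces a spec and PHY redesign rather than a simple clock bump.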

Too much configurability
Configurability is necessary for an IP block to be usable in enough designs to effectively spread the non-recurring engineering (NRE), but this also can create problems. “Making an IP block configurable for every different system is great, but it also creates a problem for the end customer if they are not familiar with the protocol,” explains Grundler. “How do they go about configuring it for their system? How do they integrate the IP that is created into their SoC? What we concluded is that we had to hire system-level people who had integrated our IP and had experience doing it, so we could understand what customers were going through.”

Cadence’s Durdan goes even further. “With some of our IP we went overboard on configurability, and now we are backing off from that and constraining the configuration space. It can be overwhelming for customers. They want a PCI Express controller, and we would give them a spreadsheet with 200 line items of configuration options to fill out. Even communicating what those knobs do and determining how to set them for their application became a very challenging task. What we are now doing is narrowing down the configuration space to the things that really count, and providing tools and the ability for customers to pre-configure and benchmark a number of options so they can find out what works best for them. Once they have decided on the configuration they want, we build that IP for them.”
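
One way to read “narrowing down the configuration space” is as an API problem: expose a handful of validated knobs instead of a 200-row spreadsheet, and reject illegal combinations before the IP is ever generated. A minimal sketch, with invented parameter names and legal values rather than any vendor’s actual options:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ControllerConfig:
    """Deliberately small, validated configuration surface (hypothetical)."""
    lanes: int = 4             # link width
    max_payload: int = 256     # bytes
    low_latency: bool = False  # trade gate count for latency

    def __post_init__(self):
        # Fail at configuration time, not after the IP has been generated.
        if self.lanes not in (1, 2, 4, 8, 16):
            raise ValueError(f"unsupported lane count: {self.lanes}")
        if self.max_payload not in (128, 256, 512):
            raise ValueError(f"unsupported max payload: {self.max_payload}")

cfg = ControllerConfig(lanes=8, low_latency=True)
print(cfg)  # ControllerConfig(lanes=8, max_payload=256, low_latency=True)
```

Every option that can be derived from the others, or defaulted sensibly, is one fewer question the integrator has to answer.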

Still, finding that right set of parameters can be a challenge, says Wingard. “What are the interfaces on your design? What protocol does it use? What is the data width? You can’t answer many of these questions without thinking about the system you are building. You can’t define the parameters needed to generate the IP block until you understand the chip. We ended up building a tool environment that blurs the boundary between configuring the network or the power manager versus integrating the chip.”

It appears that for some IP blocks, cooperation between vendor and customer has the best chance of success. “We have been growing up with these protocols and we know every nuance and detail,” says Grundler. “We are providing our intelligence to the problem from a systems point of view so that the customer no longer needs to worry about it. Instead, they just have to know what they want to achieve in terms of performance and the SoC requirements, and then how to integrate and test that protocol within their system. Instead of asking, ‘Do I want to use bypass or cut-through on PCI Express?’ the question becomes, ‘Can I afford this much latency to save gate count with similar performance?’ or ‘Do I need to reduce the latency to achieve this performance?’ We can discuss those tradeoffs and explain the options.”

Delegating parts of the chip to a third party also implies a transfer of knowledge and trust. “Engineers who buy and integrate IP just don’t have the expertise,” continues Grundler. “By us providing that as a service to customers and being able to customize it for their SoC, both parties can achieve their goals.”

Some IP can be connected through standard interfaces, and several blocks can be connected together to make subsystems. While this may ease the final integration challenge for the systems integrator, there are tradeoffs. “You can’t have a one-size-fits-all subsystem, so you need to have a programmable system where an individual can put together building blocks to create the subsystem,” says Subramaniam. “They have solved the problem by using standard interconnect interfaces such as AMBA or AXI, or an on-chip communication interface. It does make sense, but it does make compromises.”

Subramaniam explains the compromise: “It is a good architecture in that it gives you the flexibility of configuration and allows you to easily interface various blocks together to create a subsystem. However, there are times when a customer does not want to use that interface and wants to directly communicate with certain IP blocks. The reason they want to do that is because the overhead of the interface will impact the performance of their design. They want a more intimate interface, and this is more prone to error and more difficult to integrate. It is a tradeoff between ease of use, ease of integration, performance and implementation overhead.”
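
The tradeoff can be put in rough numbers. A standard interconnect interface spends extra cycles on protocol conversion and arbitration on every transaction; a direct connection avoids them at the cost of a custom, more error-prone integration. The cycle counts in this sketch are invented purely for illustration:

```python
# Toy latency budget for one read transaction. All cycle counts are
# invented; real numbers depend on the fabric, clocks, and protocol.
CLOCK_MHZ = 800
BRIDGED_CYCLES = 6   # protocol conversion + arbitration on a shared fabric
DIRECT_CYCLES = 1    # custom point-to-point connection

def latency_ns(cycles: int, clock_mhz: float = CLOCK_MHZ) -> float:
    """Convert a cycle count to nanoseconds at the given clock."""
    return cycles * 1_000 / clock_mhz

print(f"standard interface: {latency_ns(BRIDGED_CYCLES):.2f} ns per transaction")
print(f"direct connection:  {latency_ns(DIRECT_CYCLES):.2f} ns per transaction")
```

Whether those extra nanoseconds matter is exactly the ease-of-integration versus performance question Subramaniam describes.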

Part two of this article will address tools and standards that can be used to bridge the IP divide, as well as the implications of the emerging 2.5D integration methodology for IP.


