Bridging the IP Divide

Second of two parts: The creation of tools and standards for IP integration is progressing at a snail’s pace, but there is hope. New standards and fabrication methodologies may cause a disruption.


IP reuse enabled greater efficiency in the creation of large, complex SoCs, but even after 20 years there are few tools to bridge the divide between the IP provider and the IP user. The problem is that there is an implicit fuzzy contract describing how the IP should be used, what capabilities it provides, and the extent of the verification that has been performed. IP vendors have been trying to formalize this contract, but most of the time it ends up being a document written in English.

Part one described how the relationship between provider and integrator is changing for some of the blocks. The ability to correctly configure an IP block assumes a level of knowledge about the protocol or interface that the integrator may not have, or want to have. Instead, IP providers are translating technical features into the impact that making certain decisions would have on the complete system and providing the integrator with a semi-custom block tuned for their application.

There are also some blocks for which standards and tools are proving to be adequate. “Some IP is well encapsulated and manageable,” says Farzad Zarrinfar, managing director of the IP division at Mentor Graphics. “Memory IP is one example of this. Modeling is fairly straightforward, and from a front-end point of view the compilers generate everything, such as datasheets, Verilog models, SDF back-annotation timing and LEF models.”

Standard issues
But for most blocks, what should be a simple task of connecting the IP blocks together becomes a minefield of potential mistakes, silly errors and misunderstandings. To tackle this kind of problem, a standard called IP-XACT was created. IP-XACT was developed within the SPIRIT Consortium and defines an XML schema for meta-data documenting the IP. SPIRIT was merged into Accellera in 2010, and the last version created by SPIRIT became IEEE 1685-2009. A set of vendor extensions was published by Accellera in 2013 and errata published by the IEEE in 2014. Accellera is currently collecting requirements for the next version and working on a document to help promote adoption.
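For readers unfamiliar with the format, an IP-XACT description is ordinary XML, so the meta-data can be consumed by generic tooling. The sketch below uses a deliberately simplified, hypothetical component fragment (real IEEE 1685 files use the spirit:/ipxact: namespaces and a far richer schema) and standard-library Python to pull out the bus interfaces and register map an integrator cares about.

```python
# Minimal sketch: reading a simplified, IP-XACT-style component description.
# The XML below is illustrative only -- real IEEE 1685 files use the
# spirit:/ipxact: namespaces and a much richer schema.
import xml.etree.ElementTree as ET

COMPONENT_XML = """
<component>
  <name>uart_ip</name>
  <busInterfaces>
    <busInterface><name>apb_slave</name><busType>APB</busType></busInterface>
  </busInterfaces>
  <registers>
    <register><name>CTRL</name><addressOffset>0x00</addressOffset><size>32</size></register>
    <register><name>STATUS</name><addressOffset>0x04</addressOffset><size>32</size></register>
  </registers>
</component>
"""

root = ET.fromstring(COMPONENT_XML)
print("Component:", root.findtext("name"))

# List the bus interfaces an integrator would have to hook up.
for bus in root.iter("busInterface"):
    print("  bus interface:", bus.findtext("name"), "type:", bus.findtext("busType"))

# List the register map the firmware team would program against.
for reg in root.iter("register"):
    print("  register:", reg.findtext("name"),
          "offset:", reg.findtext("addressOffset"),
          "size:", reg.findtext("size"))
```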

But IP-XACT has not lived up to the industry’s expectation. “Standards, such as IP-XACT, help to some extent but they do not address the entire space of the IP,” says Prasad Subramaniam, vice president for design technology and R&D at eSilicon. “They are isolated aspects of the IP. The problem is that it is difficult to standardize something that is so diverse. Each IP is unique in its own way. Trying to come up with a standard that encompasses them all, which is what IP-XACT tries to do, is difficult. You can only attack certain aspects of the problem. There are many areas that are still left open, including how you control the IP.”

This has led to a reluctance in the industry to provide IP-XACT descriptions, and for vendors to provide tools and environments that can utilize the information. “IP-XACT and other mechanisms exist, but typically they do not have the infrastructure in place to receive that kind of information,” says Ralph Grundler, senior marketing manager for IP at Synopsys. “Not a lot of people are using IP-XACT, and while we output this information we have not found huge adoption.”

Some companies are trying to make it work but have problems. “When used, it does not solve the entire problem,” adds Anupam Bakshi, CEO of Agnisys. “Registers are there and pins are there so you can connect things. It does a good job for the things that it does, but there is complexity even in the register space, where so many special things can happen. To describe these things we need to use the vendor extension capability, and this is both good and bad. The good part is that people can put anything in there, and the bad part is that people can put anything in there.”

Bakshi observes that IP-XACT is more likely to be used in large companies that have spent the time and money to create internal flows, but does not see as much adoption within smaller companies.

There are other standards that should help with integration, but they too appear to have problems. “The widespread adoption of the Universal Verification Methodology (UVM) standard for both IP and chip verification has made some types of reuse more feasible,” explains Tom Anderson, vice president of marketing for Breker. “Unfortunately, the UVM falls short when it comes to stimulus reuse. If the IP block has a standard interface, such as USB or PCI Express, then perhaps the stimulus generator can be re-used in the full-chip environment. However, any IP interfaces that are ‘buried’ within the chip after integration can no longer leverage any active testbench components such as stimulus generators.”

Even though Anderson is critical of UVM as a whole, he does see that various parts of the testbench can be re-used. “Static elements of the IP environment, such as assertions, protocol monitors and scoreboards, can often be re-used as part of the chip-level verification environment once the block has been integrated into the full chip. The single most important thing that an IP vendor can do is to provide a protocol monitor, for example a set of assertions, for all inputs to the IP block. If the user violates the intended use of the block in any way, these protocol checks will flag the misuse. The IP vendor should also consider shipping internal and output assertions. These provide backup in case of an incomplete input checker and also help in diagnosis if the user triggers an actual bug in the IP block.”

One area of hope is for a newly released standard for describing power intent. “UPF 3.0 (IEEE 1801) may be the exception when it comes to IP integration and most companies have some kind of UPF flow in place,” says Grundler.

Tools are lacking
Most of the time, initial problems with standards get fixed once tools are created that expose them, which creates the necessary incentive for the standard to evolve. “The business for selling integration technology and tools is a difficult business,” says Drew Wingard, chief technology officer at Sonics. “Beach, Duolog, Magillem and a few others have existed, but that business is constrained by the total number of integrators on a chip project, and that number is pretty small. So the challenge for making an EDA business out of integration is that there are not enough seats. They will not pay a couple of hundred thousand dollars for a tool that only one or two people get to use.”

The problem is also evolving, perhaps faster than solutions can be found. “The tools continue to get better but the task continues to get harder,” points out Hugh Durdan, vice president of product marketing in the IP Group at Cadence. “The scale of the design that can be done in a 16nm device is huge.”

But what if a few design constraints could get most of the problem solved? “A system integrator tool would need to be able to connect pieces of IP together seamlessly, and most of the tools rely on known standards such as on-chip interfaces, bus interfaces and protocols,” says Subramaniam. “As long as you confine yourself to those standards, you can do a reasonable job, but that does not address all aspects of the IP. Because of this, the tools can never automate the job and it is very complex.”

Durdan agrees. “There have been things that are making it easier. An example is that a lot of IP is standardized on the ARM AHB bus as the interface to the IP. That makes it more plug and play than in the past where every piece of IP had a unique interface. IP suppliers provide discrete pieces of IP and it is an exercise for the customer to stitch them all together. We have made that easier by having common interfaces and providing IP-XACT descriptors, but it is still a lot of work.”

The more constraints that are placed on the system, the more likely it is that the problem can be solved. For example, an FPGA limits the architectural freedom, making the development of tools simpler and cheaper. “FPGA companies have built some of the most attractive, push-button automation tools,” points out Wingard. “But these companies make money from the end product, not from the tool. They have to give it away to sell the FPGA.”

Processor vendors are providing tools that will help interconnect their IP to other devices. Andes supplies Knect.me, which provides software stacks, tools and applications tuned to help customers create IoT-type devices. “We’re starting to see new categories that we’ve never seen before, such as the housekeeping market,” said Frankwell Lin, CEO of Andes. “So you have a core embedded in a part that you use to manage incoming information rather than booting up the whole system. This kind of category is new.”

Others also see value in ecosystems. “What is needed is a bridge between the IP developers and IP users to provide an ecosystem where designers can shop for, consume and produce IPs,” says Ranjit Adhikary, vice president of marketing for ClioSoft. “The ecosystem should enable designers to communicate with the IP developers and IP users in a transparent manner to better understand the suitability of an IP and to discuss any issues users may have faced.” This ultimately appears to confirm the conclusion that IP developers and consumers need to come closer together and cooperate in the customization and integration tasks.

Standards could be extended to make more of the task automated. “IP-XACT stops at the registers,” says Bakshi. “Even SystemRDL does nothing to say how to use those registers. We want to capture how the device is used, and this includes how the registers are programmed. It also includes additional signals and traffic on the interfaces. Register setting and configuration is a major part of it.”
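To make Bakshi’s point concrete, here is a hedged sketch of what capturing a programming sequence as data might look like, so that the same description can be rendered as firmware register writes or as testbench calls. The register names, offsets, values and the sequence format are all invented for illustration; they are not part of IP-XACT or SystemRDL.

```python
# Hypothetical sketch: a programming sequence captured as data, so one
# description can be rendered as firmware (C-style) code or as testbench stimulus.
# Register names, offsets and values are invented for illustration.
REG_MAP = {"CTRL": 0x00, "BAUD": 0x04, "STATUS": 0x08}

INIT_SEQUENCE = [
    ("write", "BAUD", 0x0068),    # set divider (illustrative value)
    ("write", "CTRL", 0x0001),    # enable the block
    ("poll",  "STATUS", 0x0001),  # wait until the ready bit is set
]

def render_as_firmware(base_addr, sequence):
    """Emit C-style register writes/polls from the abstract sequence."""
    lines = []
    for op, reg, value in sequence:
        addr = base_addr + REG_MAP[reg]
        if op == "write":
            lines.append(f"*(volatile uint32_t *)0x{addr:08X} = 0x{value:04X}; /* {reg} */")
        else:  # poll
            lines.append(f"while (!(*(volatile uint32_t *)0x{addr:08X} & 0x{value:04X})) ;")
    return "\n".join(lines)

def render_as_testbench_calls(sequence):
    """Emit bus-functional-model calls a verification environment might use."""
    return "\n".join(
        f'bfm.{op}("{reg}", 0x{value:04X});' for op, reg, value in sequence
    )

print(render_as_firmware(0x40000000, INIT_SEQUENCE))
print(render_as_testbench_calls(INIT_SEQUENCE))
```

The same abstract sequence could just as easily be rendered as documentation, which is the sort of horizontal reuse a specification-based flow is after.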

New hope
Perhaps the brightest hope is coming from an unexpected source. Accellera is working on a Portable Stimulus standard that will most likely utilize graph-based models. If you can get beyond the highly unsuitable name, the standard may promote not only a universal verification methodology but also an integration methodology. “Graph-based models that support portable stimulus are very helpful in fulfilling the need for a more formal definition of intent,” says Anderson. “They provide a common method for all IP vendors and IP users to document verification intent and enable reuse from block to chip to system, and from simulation to silicon. The flexibility of graphs means that it is easier for an IP vendor or IP user to adapt the verification environment as IP blocks are configured or customized.”
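As a toy illustration of the graph idea (this is not the Accellera syntax, and the action names are invented), verification intent can be captured as a graph whose nodes are actions and whose edges are legal orderings; any walk through the graph is a legal scenario that can, in principle, be retargeted from block level to chip level to silicon.

```python
# Toy sketch of a graph-based stimulus model (not actual Portable Stimulus
# syntax): actions are nodes, edges are legal orderings, and a random walk
# yields legal scenarios that can be retargeted across verification levels.
import random

GRAPH = {
    "reset":           ["configure"],
    "configure":       ["send_packet", "enter_low_power"],
    "send_packet":     ["send_packet", "check_status"],
    "enter_low_power": ["wake_up"],
    "wake_up":         ["send_packet"],
    "check_status":    [],  # terminal action
}

def generate_scenario(start="reset", max_len=8, seed=None):
    """Walk the graph from 'start', picking a legal next action at random."""
    rng = random.Random(seed)
    node, scenario = start, [start]
    while GRAPH[node] and len(scenario) < max_len:
        node = rng.choice(GRAPH[node])
        scenario.append(node)
    return scenario

# Each call produces one legal sequence of actions; the same graph could be
# exercised by a simulator at block level or by embedded code on silicon.
for i in range(3):
    print(generate_scenario(seed=i))
```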

Bakshi sees additional benefit. “Portable stimulus is going in the right direction. Ultimately, if the registers and sequences are defined for an IP block, then it will enable the higher level tools to use that information to help exercise them. So this becomes the necessary layer around the IP. It is not only the handoff to the IP integrator. It also provides horizontal reach across all of the layers of design. In a specification-based flow, the same specification gets used to construct the RTL, the firmware, verification etc.”

Disruption ahead
Just as a solution looks as if it may emerge, the problem may change again. 2.5D integration is getting much closer to reality and may transform the IP industry from one of soft IP and interfaces to a world where IP is not only hard, but pre-manufactured and integrated in much the same way as PCBs have integrated chips for the past decades, except using a silicon interposer.

“Once 2.5D becomes mainstream, we see opportunities for certain types of IP to become hardened into silicon and made available as tiles that can be integrated onto a substrate,” says Subramaniam. “The beauty is that it is silicon proven and the risk is completely eliminated. It also offers the ability to build IP in the technology that makes most sense for that IP.”

The real obstacle has been the cost structure. “Up until now it has only been suitable for very high-end devices,” says Durdan. “With some of the new technologies that are coming out, it is moving to a more attractive cost point, which will enable some new and unique designs to take off.”

2.5D may change many of the integration issues. “Many of the issues become more ‘political’ than technical,” says Tobias Bjerregaard, CEO of Teklatech. “The responsibility for the design of the individual dies and their integration on the package-level substrate typically lies in separate groups. It’s an over-the-wall approach to integration. This can create problems. For example, dynamic power issues caused by the accumulated effect of peak currents from multiple dies, which could have been handled at the chip level, become the problem of the package-level integration team. But the chip designers are meeting their targets and have no drive to reduce peak in-rush current further.”
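A back-of-the-envelope sketch of the effect described in the quote above, with all numbers invented: if several dies on one interposer hit their current peaks in the same cycle, the package-level supply sees the sum, whereas staggering the activity at chip level flattens the profile.

```python
# Back-of-the-envelope sketch (all numbers invented): aligned vs. staggered
# peak currents from three dies sharing one package-level supply.
die_current_profiles = [
    [0.2, 1.5, 0.3, 0.2],   # die A, current per time step (amps)
    [0.1, 1.2, 0.2, 0.1],   # die B
    [0.3, 1.8, 0.4, 0.3],   # die C
]
steps = len(die_current_profiles[0])

# Aligned: every die hits its peak in the same time step.
aligned = [sum(die[t] for die in die_current_profiles) for t in range(steps)]

# Staggered: each die's activity is shifted by one step (wrapping around),
# the kind of scheduling a chip-level team could apply if asked to.
staggered = [sum(die[(t - i) % steps] for i, die in enumerate(die_current_profiles))
             for t in range(steps)]

print("aligned peak   :", max(aligned))    # ~4.5 A seen by the package
print("staggered peak :", max(staggered))  # noticeably lower (~2.2 A here)
```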

It is clear that there is another chapter of this story that has yet to be written. The IP integration challenge has been there for 20 years and so far the problem has continued to move ahead of the standards and tools. Perhaps we are getting close to the point where a new type of company will have an economic desire to solve the problem, and the problem will be taken out of the hands of EDA.

Related Stories
Bridging The IP Divide Part 1
A lot has changed since the emergence of the IP-based methodology and it is currently going through a major update
IP Requirements Changing
More accessible data and hooks needed to integrate dozens of complex black boxes.
IP Risk Sharing
Who is responsible for automobile safety? If you want to play in the industry you have to take on some of the responsibility.



1 comment

Ouabache Designworks says:

I must disagree with the statement that IP-XACT has not lived up to the industry’s expectations.

Back in the ’80s the printed circuit board industry had a problem in that each tool vendor used their own proprietary formats, so that you could not take a design created by one vendor and use it on another vendor’s tool. So they came up with EDIF (Electronic Design Interchange Format) to solve this problem.

But EDIF failed because each vendor would export their designs in their own particular “Flavour” of EDIF that nobody else could read.

Twenty years later the SPIRIT Consortium comes along to solve the same problems for ICs and fails because they use the same solution as EDIF.

Did anybody in the industry really expect that changing the name from Vendor Flavours to Vendor Extensions was really going to fix anything?

The reason that both standards have failed is because both committees have failed to understand a fundamental principle of tool design.

There are two different approaches to problem solving. You can have an “a priori” solution or an “a posteriori” solution.

“a priori” means that you have complete knowledge of the problem from the beginning while “a posteriori” means that there are pieces of the puzzle that you will not know until later.

“a priori” is the best approach but it only works in very stable small data environments. When your design environment becomes a big data problem then you must switch over to an “a posteriori” solution.

Both standards fail because of issues that were not known at the time the standard was released. Doing a newer release will never work because with big data there are always new issues. They need to come up with an “a posteriori” solution to the problem.

The SPIRIT Consortium knows how to do this. Signal name management is already handled a posteriori in IP-XACT, so that you don’t wind up with a design that has six different ways to spell clock. They should apply these techniques to the rest of the standard.

John Eaton
