The Quest For A Better IP Integration Methodology

As the amount of IP inside an SoC grows, so does the need for a systematic way of integrating it—and checking it.


By Ed Sperling
With the amount of IP in SoC designs now hitting an estimated 70% to 90%, companies are scrambling to figure out a way to more consistently integrate that IP and to test that it will work as expected.

This is easier said than done, however, for a number of reasons:

  1. There are numerous types of IP, ranging from I/O to logic and memory.
  2. Not all IP is of equal quality.
  3. Not all IP is used the way it was intended, or even consistently from one chip to the next.
  4. Companies’ re-use of their own internally developed IP frequently doesn’t conform to any standards.

So far, standards efforts in this area have been relatively modest. The SPIRIT Consortium introduced IP-XACT to document IP and provide tools to access metadata, but that’s a far cry from a consistent methodology for integrating IP.
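IP-XACT describes each block in a vendor-neutral XML schema keyed by a vendor/library/name/version (VLNV) identifier. As a rough illustration of the kind of metadata access the standard enables, here is a minimal sketch that pulls the VLNV out of a hypothetical component description; the component contents are invented, and real IP-XACT deliverables are far larger, with bus interfaces, registers and file sets.

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical IP-XACT component (SPIRIT 1.5 namespace).
IPXACT_XML = """\
<spirit:component xmlns:spirit="http://www.spiritconsortium.org/XMLSchema/SPIRIT/1.5">
  <spirit:vendor>example.com</spirit:vendor>
  <spirit:library>peripherals</spirit:library>
  <spirit:name>uart_lite</spirit:name>
  <spirit:version>2.1</spirit:version>
</spirit:component>
"""

NS = {"spirit": "http://www.spiritconsortium.org/XMLSchema/SPIRIT/1.5"}

def vlnv(xml_text):
    """Extract the vendor/library/name/version tuple that IP-XACT
    uses to uniquely identify a piece of IP."""
    root = ET.fromstring(xml_text)
    return tuple(root.find(f"spirit:{tag}", NS).text
                 for tag in ("vendor", "library", "name", "version"))

print(vlnv(IPXACT_XML))  # ('example.com', 'peripherals', 'uart_lite', '2.1')
```

Documenting IP this way is useful, but as the article notes, a machine-readable catalog entry is only the first step toward an actual integration methodology.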

“In the old days all you had to do was characterize the IP,” said Jean-Marie Brunet, director of product marketing for model-based DFM and place-and-route integration at Mentor Graphics. “Now you try to create context with lithography and stress. You need to instantiate the IP in corner cases and the surrounding context. It’s random at this point, which means there is not a lot of predictability.”

That becomes even more critical at future nodes. At 20nm, for example, double patterning makes IP even harder to characterize and re-use. And fill at 28nm and 20nm can have an effect on density, which in turn affects min/max values. That also has an effect on IP.

“These are problems for the IP creator and the SoC integrator,” said Brunet. “You almost need a ring around every IP, but that blows the area. And double patterning is not done the same way from the IDM to the foundry, so you need a situational solution for each version.”

There also has to be a better way of defining what is good IP. A piece of IP that functions perfectly in one design may not function the same way in all designs because of issues ranging from noise—a problem that has been particularly acute for RF and some analog IP—to electromagnetic interference, physical stress and exactly how the IP is used.

“The big issue we’ve found is that different IP is being delivered in different states of readiness and quality with a different understanding of what it means to actually be IP,” said Neil Hand, group director for product marketing in Cadence’s new business group. “Today when you deliver IP you do some amount of generalized skeleton code, floor planning and I/O placement. But there is a lack of consistency in this.”

He noted that at 70% to 90% IP content in SoCs, any amount of overhead in making IP come together and work properly is unacceptable. “What’s needed is to unify the delivery of IP. After that, everything falls into place.”

Verifying quality
Behind that delivery is a need to have more consistent quality, which means the IP can be used under a variety of circumstances and still work as planned.

“Integration is an issue, but the bigger problem from a customer standpoint is to figure out which IP is good and which IP is not good,” said Gideon Intrater, vice president of marketing and applications at MIPS. “The risk is huge. What you’re looking for is IP that is isolated enough from the rest of the system. With sensitive analog or RF you still want to be able to drop it into the chip and have enough rules in place for using that IP. But you also have to consider that the more aggressive the process technology, the more IP you put in a chip and the more power and power rails, which are noise sources—all of that is going to impact how the IP behaves.”

IP certainly needs to be tested once it’s in a design, but it also needs to be tested and properly characterized well before that. Large IP vendors typically build reference designs using worst-case scenarios to test the limits of their products. With Synopsys’ DDR3 and DDR4, for example, the company has built the memory into what Navraj Nandra, Synopsys’ senior director of marketing for DesignWare analog and MSIP IP, calls “cheap and nasty packages.”
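The worst-case characterization the vendors describe amounts to sweeping an IP block across its process, voltage and temperature (PVT) corners and flagging any corner that violates spec. A toy sketch of that sweep follows; the corner values, the delay model and the timing budget are all invented for illustration, not taken from any vendor’s flow.

```python
from itertools import product

# Hypothetical PVT corners; real characterization uses foundry-supplied models.
PROCESS = ["ss", "tt", "ff"]   # slow-slow, typical, fast-fast
VOLTAGE = [0.9, 1.0, 1.1]      # supply, in volts
TEMP    = [-40, 25, 125]       # junction temperature, in degrees C

def delay_ns(p, v, t):
    """Toy delay model: slower at slow process, low voltage, high temperature."""
    base = {"ss": 1.3, "tt": 1.0, "ff": 0.8}[p]
    return base * (1.0 / v) * (1.0 + 0.001 * (t - 25))

SPEC_LIMIT_NS = 1.5  # invented timing budget

failures = [(p, v, t) for p, v, t in product(PROCESS, VOLTAGE, TEMP)
            if delay_ns(p, v, t) > SPEC_LIMIT_NS]
print(f"{len(failures)} failing corner(s): {failures}")
```

In a real flow the “model” is a full circuit simulation per corner, and the failing corners are exactly what ends up documented in the data sheets the large vendors produce.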

“What we don’t know is how the customer will implement IP inside an SoC,” Nandra said. “But there is a lot you can do to mitigate potential issues if you know what they are.”

The largest merchant IP vendors—ARM, Synopsys and MIPS—all use this method of testing all possible configurations and developing data sheets for problems that can erupt along the way. Jack Browne, senior vice president of sales and marketing at Sonics and a former executive at MIPS, said that once an IP company has more than 20 customers and has developed more than 5 to 10 products, it has figured out the quality issues. “As customers do their second and third transaction with an IP company, they’ve got the quality issues worked out on their side, too.”

Internally developed IP and most custom-built analog IP rarely have that kind of information available, however. And as companies attempt to move their existing IP to the next process node, or when they fall back on the old approach of dropping in IP blocks as they become available, problems can erupt that no one ever considered.

“The interconnect ends up being the sticky point in chips,” said Kurt Shuler, director of marketing at Arteris. “If you use Wide I/O to memory on a mobile phone you get better bandwidth, but the question that has to be answered is where you put everything. You need to floor plan all the IP blocks earlier. And often the people doing the interconnect and the people doing the IP don’t understand the IP inputs as well as they need to.”

Future directions
The question now is just how much IP will be sold pre-integrated as subsystems or even as complete die for use in 3D stacking.

“The methodologies for putting subsystems together and SoCs together are not all that different,” said Ajoy Bose, chairman and CEO of Atrenta. “There is some methodology in place today, even if it involves homegrown solutions and scripts. What’s more of a challenge is trying to fit your own ideas into an existing situation.”

That’s been the problem with commercial IP from the start. It’s possible to write IP specifically for an SoC design that is smaller in area, uses lower power and has no proximity issues because it is developed for a specific design. But getting the design out the door on time using only internally developed IP is impossible.

“Right now you create IP, sign off on that IP, you import the IP, validate it in an SoC and hand it off to implementation,” said Bose. “This is similar to what the enterprise software industry was doing with analysis, human resources and inventory. Then enterprise applications were created to connect all the software together into a single integrated package. We’re seeing the same trend in IP with the subsystem becoming more popular. It helps that the semiconductor companies are aligning themselves vertically, too. With each vertical they know the pieces that are used.”

In many cases this job will fall to value-chain producers such as eSilicon, Open-Silicon and Global Unichip, which are among the largest commercial IP integrators and testers. Kalar Rajendiran, eSilicon’s senior director of marketing, said his company has developed a four-step methodology for selecting, managing, integrating and testing IP. What’s important in this process is an understanding of how that IP performs in chips over time and for multiple customers.

“The really heavy lifting is in selecting the IP,” he said. “Choosing IP suppliers is very important. Once we qualify the IP we document it in a database with version control. We also audit the supplier’s methodology—what they use to develop and verify that IP—and we do a site visit to the IP supplier to meet with them. We’ve been doing this for 10 years. We have proof points about why not to go with certain suppliers. In some cases it’s because they cause problems for other industry players.”
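The four steps Rajendiran describes—select, manage, integrate, test—imply a qualification database keyed by supplier, block and version, with a record of how each block has fared in silicon. A hypothetical sketch of such a catalog follows; eSilicon’s actual database, audit criteria and field names are not public, so everything here is illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class IPRecord:
    """One qualified IP deliverable. Fields are invented for illustration."""
    supplier: str
    block: str
    version: str
    audit_passed: bool   # did the supplier's methodology audit pass?
    tapeouts: int = 0    # proven-in-silicon count across customers

@dataclass
class IPCatalog:
    records: list = field(default_factory=list)

    def qualify(self, rec):
        """Select and manage: only audited IP enters the version-controlled catalog."""
        if rec.audit_passed:
            self.records.append(rec)
            return True
        return False

    def proven(self, min_tapeouts=2):
        """Integration candidates: IP already proven in earlier chips."""
        return [r for r in self.records if r.tapeouts >= min_tapeouts]

catalog = IPCatalog()
catalog.qualify(IPRecord("VendorA", "ddr_phy", "3.2", audit_passed=True, tapeouts=5))
catalog.qualify(IPRecord("VendorB", "usb3", "1.0", audit_passed=False))
print([r.block for r in catalog.proven()])  # ['ddr_phy']
```

The point of the sketch is the gatekeeping: IP that fails the methodology audit never enters the catalog, and only silicon-proven blocks are offered up for integration—which is exactly the track record over time and across customers that the article says matters.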

At this point those kinds of capabilities are a competitive advantage, and problems with the integration and testing of IP loom large for many companies. That may change as the IP industry continues to consolidate and tools become available, but by then the problem also may be less about integration than about customization of IP for specific needs.