Approaching IP Quality From Many Angles

It takes an ecosystem to ensure IP quality. The IP providers, EDA tool vendors and foundries all play a role.


As SoC design complexity has increased, semiconductor design IP and the industry around it have grown in sophistication. This is great news for the users of that IP, whose demands for quality, reliability and other deliverables have also been on the rise.

Making sure users have what they need requires close collaboration between the semiconductor foundries, IP providers and, of course, EDA tool companies. But the engagement of the players in the ecosystem differs greatly across the spectrum of IP types, according to Cadence Fellow Chris Rowen. At one end of that spectrum is subtle, high-frequency analog design, such as PLLs, SerDes, DRAM cells and SRAM cells, which is intimately tied to the process parameters.

At the other end of the spectrum is pure logic IP, where process dependence is captured through the libraries, which sometimes are created in collaboration with the foundries. “You don’t have to talk with the foundries about how their NAND gates work. They just work. Somebody else already sweated that one,” he explained.

In the quest to address IP quality, there has been a necessary and largely successful step up in the investment and infrastructure IP providers are creating in order to give their customers something truly reliable to design in, Rowen observed. “It’s so important because as the number of IP blocks grows and the level of abstraction at which the customers use it goes up, the IP provider must step up dramatically in terms of the level of quality they provide. Quality means all of the dimensions of it: does it work properly, is it specified completely, is it robust across applications and process technology and environmental conditions and, increasingly, does it have all of the software around it that you would expect to make it useful?

“In the old days people may have thought of a quality standard which was sort of comparable to, ‘If I had a team of people designing this block of RTL, am I at least as good as what a random team of people might be able to create in terms of correctness?’ We’ve gone well beyond that because the number of times that a given piece of IP is going to be used, sold or licensed on the open market is typically much higher than for a given block. Therefore, to guarantee that it works under all circumstances in which it might be used is just a tougher standard, because it’s going to get used so much more.”

Also, because the SoC integrator increasingly is thinking about system-level, high-level and integration issues, they are less willing, and in some cases less able, to understand the details of what is happening inside a PHY circuit or a microprocessor pipeline, he said. This means they are less well equipped to do their own testing, debugging or evaluation of the quality of the IP that comes in, so they are much more heavily dependent on the provider delivering high quality.

Connected to this, IP users need to fully specify what they need and make sure that when they evaluate a piece of IP, whether a block or a subsystem, they look up front at all of the dimensions of the specification and ask for an unambiguous specification in all of the ways they care about. But this is genuinely hard to do, particularly for a new category of IP. “Suppose something used to be hardwired and it is now programmable. To have the team that is assessing it understand how the compilers need to work, what libraries are required, how it integrates into the debug environment, how exactly bring-up is going to work now with a programmable thing, and what kind of drivers you need? There are so many dimensions of how a piece of IP gets used that it’s actually surprising when a user has anticipated everything beforehand,” Rowen continued.

That said, he believes it is up to the IP provider to meet that specification and to ask the hard questions. What is it that you mean by this? Don’t you think you should be anticipating that? How are you going to do self-test? How are you going to do characterization? How are you going to program it? And how do you make sure that the customer is led to making the right specification? “If you get the specification right, then it’s a heck of a lot easier for the supplier to check all of those boxes. Almost all of the IP quality problems that I’ve encountered over the years have at their root someplace some misunderstanding either about how the IP was going to be used, or incompleteness in the specification of what was being ordered, much more often than the specification said that it would do X and instead it did Y under well-defined conditions.”

Not just a front-end issue
At the same time, Carey Robertson, director of product marketing for Calibre at Mentor Graphics, observed that IP quality is not just a front-end issue. Traditionally, the foundry sat in the middle between the IP providers and the EDA tool companies. The foundry had requirements for the EDA tools, design rule manuals and specs that it wanted all designs to adhere to, which the EDA vendors would use to develop their tools. The designer would then take their design and run the EDA tools with process-node-specific rule decks that adhered to the manufacturing requirements of that foundry.

“On the other side,” he explained, “the foundry would talk to the IP providers and say, ‘You are an IP provider, you want to be validated against my process or you want me to certify that you are good for this particular node? We’re going to have a series of requirements, and among those requirements you need to work with a verification tool and an extraction tool and an analysis tool against our plan of record to make sure that you are complying with the checks we are mandating for all designs.’ So it was a bit hands-off. The foundry was in the middle. They would certify the EDA tools, they would certify the IP vendors and in a perfect world everything would be fine.”

However, this started to break down. “About five or six years ago we started to find that we needed to collaborate better. It sounds good in theory that you have this DRC flow that every circuit designer should use, IP or SoC, and as long as everyone is DRC clean or verification clean then everything should work. But it doesn’t always work, for two reasons. One is what we call waivers. An IP vendor may have a waiver to say they have a special way of developing the transistor or a MOSFET, and that is their secret sauce. That is their core competency, and they’ve worked with TSMC and said this doesn’t adhere strictly to the design rules and asked if they would waive this error. We will stand behind the quality, etc.,” Robertson noted.

Second, if this ‘waived’ IP is inserted into a larger chip and there isn’t good handshaking, then the IP that was once clean now violates some design rule constraints at the top level. Yet that IP is frozen and no one is going to go back in and fix it. So whose problem is it? Is it the foundry’s problem to change its design rule manual, is it the IP provider’s burden to change its IP, or is it the customer’s problem to change how the pieces interact at the top level?

There are two ways to deal with the breakdown. One is support of a waiver flow that captures the necessary notations about the waivers in the design. The other is support of reliability checking within the Calibre PERC tool.
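The exact mechanics are tool-specific, and Calibre has its own formats for recording and honoring waivers, but the bookkeeping involved can be illustrated with a rough sketch. In the Python below, the rule names, result fields and waiver records are hypothetical rather than any tool’s actual format; the point is simply that violations waived inside the IP must be matched against the recorded waivers again when the block is checked in the context of the full chip.

```python
# Rough, hypothetical sketch of a DRC waiver flow (not Calibre's actual format):
# violations that match a recorded, foundry-approved waiver are set aside,
# while everything else still has to be resolved.
from dataclasses import dataclass

@dataclass(frozen=True)
class DrcResult:
    rule: str        # e.g. "M1.S.1" (illustrative rule name)
    cell: str        # cell in which the violation was flagged
    location: tuple  # (x, y) coordinates, in microns

@dataclass(frozen=True)
class Waiver:
    rule: str        # rule the foundry agreed to waive
    cell: str        # waiver applies only inside this IP cell
    reason: str      # documented justification

def apply_waivers(results, waivers):
    """Split DRC results into waived violations and outstanding ones."""
    keys = {(w.rule, w.cell) for w in waivers}
    waived, outstanding = [], []
    for r in results:
        (waived if (r.rule, r.cell) in keys else outstanding).append(r)
    return waived, outstanding

if __name__ == "__main__":
    results = [
        DrcResult("M1.S.1", "serdes_phy", (12.3, 4.1)),   # inside the IP: covered by a waiver
        DrcResult("M1.S.1", "chip_top", (220.0, 97.5)),   # at the top level: not covered
    ]
    waivers = [Waiver("M1.S.1", "serdes_phy", "custom device layout, foundry-approved")]
    waived, outstanding = apply_waivers(results, waivers)
    print(f"waived: {len(waived)}, still outstanding: {len(outstanding)}")
```

The same matching has to survive hierarchy: a waiver recorded against the IP cell must still be recognized when that cell is instantiated in the full-chip run, which is exactly the handshaking Robertson describes breaking down.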

Michael Buehler-Garcia, senior director of marketing for Calibre Design Solutions at Mentor Graphics, noted that IP has become more complex across multiple nodes, not just at advanced nodes, and that the number of checks has increased to include reliability checks.

Robertson added that, depending on the type of IP, designers can use the PERC tool for reliability verification, especially as some foundries now require these runs to validate that there aren’t any ESD issues, for instance.
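To make the ESD example concrete, the sketch below shows the kind of connectivity question such a reliability check asks: does every pad have a protection path to both supply rails? Real PERC rules are written in the tool’s own rule language against an extracted netlist; the simplified netlist model, net names and check here are hypothetical illustrations only.

```python
# Hypothetical, simplified analogue of an ESD connectivity check. Each ESD
# clamp is recorded as a (pad net, rail) pair, and the check flags any pad
# that lacks a clamp path to either supply rail.
clamps = [
    ("PAD_A", "VSS"),   # clamp between pad A and ground
    ("PAD_A", "VDD"),   # clamp between pad A and the supply
    ("PAD_B", "VSS"),   # pad B has no clamp to VDD
]
pads = ["PAD_A", "PAD_B"]

def check_esd_clamps(pads, clamps):
    """Return (pad, missing rails) for every pad lacking full ESD protection."""
    violations = []
    for pad in pads:
        rails = {rail for net, rail in clamps if net == pad}
        missing = sorted({"VDD", "VSS"} - rails)
        if missing:
            violations.append((pad, missing))
    return violations

print(check_esd_clamps(pads, clamps))   # [('PAD_B', ['VDD'])]
```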

From the foundry perspective, Dan Kochpatcharin, deputy director of the IP portfolio marketing program in the design infrastructure marketing division of TSMC, pointed out that IP quality checking has been the domain of custom tools and internally developed methodologies. “EDA tools have been focusing on integration and the signoff process of the SoC. However, we do start seeing changes in the offerings by EDA vendors. Customers now need to do more reuse, sharing and licensing of IP to maximize their resources and achieve faster time to market.”

TSMC has been one of the leaders in this area with its TSMC 9000 IP quality program, which it launched in 2000, and it has created many homegrown tools and utilities to ensure quality. These include its TSMC 9000 IP Tag specification, along with sanity checks of the layout and SRAM.

He explained that this changed with the introduction of its Soft IP Alliance program. “We started working with Atrenta to develop the Soft IP quality checklist and implementation plan, and signed up many IP partners. Customers also are adopting our Soft IP program to use within their own environments.”

At the end of the day, Kochpatcharin wants to see more EDA vendors providing tools specifically to validate and improve IP quality. “However, many EDA vendors see IP providers as customers, which makes it very difficult. The EDA providers forget that the majority of IP being reused is internal to the customers. The key target for EDA tool providers should be their mutual customers. When customers see that a tool was being used by IP providers to verify the IP, they will want to adopt it for internal use, too. This in turn will help drive adoption of their tools for chip-level verification.”


