How IP is characterized, verified and used can have a big impact on reliability and compatibility in a design.
Differentiating good IP from mediocre or bad IP is getting more difficult, in part because it depends on how and where the IP is used, and in part because even the best IP may work better in one system than another—even in chips developed by the same vendor.
This has been one of the challenges with IP over the years. In many cases, IP is poorly characterized, regardless of whether that IP was commercially or internally developed by a chipmaker. But as chips become more complex, subject to more interactions from multiple power domains and use cases, even the best intentions to characterize IP can go awry.
The IP itself is getting more complicated, as well. What was once used for a single function now has been combined with other IP to create subsystems. And while that provides benefits in terms of development time, it also raises new issues involving integration. So the first order of business in considering any piece of IP is whether the risk of including it is worth the benefit.
So what exactly is considered acceptable when it comes to IP? Arm, one of the largest makers of commercial IP, measures quality using many metrics, and what constitutes a “pass” varies at different points in a project, according to Peter Greenhalgh, vice president of technology and an Arm fellow. “However, our metrics and ‘pass’ requirements are broadly consistent across all of our IP. Our quality metrics are tied to functional verification, performance and deliverables. Each one of these areas breaks down into overlapping test sets that evaluate parts of the IP.”
For a CPU, functional verification includes architectural testing, in which the IP is compared against an architectural model, as well as directed testing and block- and unit-level constrained random testing, in which the IP is compared against a microarchitectural model of that block or unit. Several formal verification methodologies are used as well, along with OS booting, application testing on emulators and FPGAs, and multiple top-level random instruction sequence engines, which run on emulators and on silicon when and where it is available. In short, the goal is to have many overlapping functional verification methodologies, as that reduces the chances a bug can escape.
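As a rough illustration of the architectural-model comparison described above, the sketch below checks a DUT's retired-instruction trace against a reference model in lockstep. Everything here (the toy two-instruction ISA, the trace format, the function names) is a hypothetical stand-in for a real instruction set simulator and RTL trace, not Arm's actual methodology:

```python
# Toy lockstep check: compare each instruction the DUT retires against a
# reference ("golden") architectural model. The two-instruction ISA and all
# names are illustrative assumptions.

def golden_model(program):
    """Reference model: yields (pc, rd, value) for each retired instruction."""
    regs = [0] * 8
    for pc, (op, rd, rs1, rs2_or_imm) in enumerate(program):
        if op == "ADDI":
            regs[rd] = regs[rs1] + rs2_or_imm
        elif op == "SUB":
            regs[rd] = regs[rs1] - regs[rs2_or_imm]
        yield pc, rd, regs[rd]

def lockstep_check(dut_trace, program):
    """Flag the first retired instruction where the DUT diverges from the model."""
    for dut, ref in zip(dut_trace, golden_model(program)):
        if dut != ref:
            raise AssertionError(f"DUT retired {dut}, model expected {ref}")

program = [("ADDI", 1, 0, 5), ("ADDI", 2, 1, 3), ("SUB", 3, 2, 1)]
buggy_trace = [(0, 1, 5), (1, 2, 8), (2, 3, 4)]  # last value should be 3

try:
    lockstep_check(buggy_trace, program)
except AssertionError as e:
    print("caught divergence:", e)
```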
Performance verification takes a similar path, with multiple benchmarks and micro-tests run on the IP and compared with predicted performance from models that also cover power, frequency and area for the design, said Greenhalgh. “Deliverables—including integration testbenches, IP-XACT models and vectors to aid with chip-level power analysis—must be created and tested in different IP configurations to ensure the IP can be integrated effectively in the partner’s SoC. Each IP we produce has several milestones in its life, from the traditional alpha and beta, through what we call ‘lead access,’ which is suitable for test silicon, through to ‘release,’ which is where all testing has completed and we qualify the product as suitable for full production. What we consider a ‘pass’ for the functional, performance and deliverable quality metrics increases as the IP progresses through each milestone on its way to ‘release.’”
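The performance side of that flow can be pictured as a simple tolerance check of measured benchmark results against model predictions. A minimal sketch, in which the benchmark names, numbers and the 2% band are invented for illustration:

```python
# Sketch of a performance sign-off check: measured benchmark scores are
# compared with model predictions within a tolerance band.

predicted = {"dhrystone_dmips_per_mhz": 4.1, "stream_copy_gb_s": 12.0}
measured  = {"dhrystone_dmips_per_mhz": 4.0, "stream_copy_gb_s": 11.9}

TOLERANCE = 0.02  # allow 2% deviation from the performance model

for bench, expect in predicted.items():
    got = measured[bench]
    deviation = abs(got - expect) / expect
    status = "PASS" if deviation <= TOLERANCE else "FAIL"
    print(f"{bench}: predicted {expect}, measured {got}, "
          f"deviation {deviation:.1%} -> {status}")
```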
From the IP user’s perspective, quality is the deciding factor in whether a piece of IP will be adopted into a bigger design, because good IP saves designers time that can be spent on the parts of the project where they add unique value.
“The definition of quality is quite wide in this case,” said Zibi Zalewski, general manager of the hardware division at Aldec. “It indicates the IP is not only operating correctly and efficiently, according to industry standards or project requirements, but also that it has the ability to easily interconnect with other parts of the system and/or scale it to multiple units when needed.”
On the IP provider side, quality translates into providing all of the above. That requires complex verification methodologies, standardized interfaces, comprehensive documentation and, simply put, attention to the user’s experience. Some integration work is always needed, but IP quality determines how quickly that stage can be finished, Zalewski noted.
Another quality metric for IP is the availability of a comprehensive testbench suite that covers all functions and all configurations, ideally ready to integrate into a larger UVM design environment. “Here, a good practice denoting IP quality would be bundling emulation/prototyping-ready packages for seamless FPGA implementation during pre-silicon testing,” he said.
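One way to picture “covering all configurations” is a sweep that builds and smoke-tests every legal combination of build parameters. A minimal sketch, with hypothetical parameters and a placeholder simulation hook:

```python
# Hypothetical configuration sweep: every combination of build parameters gets
# at least a smoke test. PARAMS and run_simulation() are stand-ins for a real
# IP's generics and simulator invocation.

import itertools

PARAMS = {
    "DATA_WIDTH": [32, 64],
    "FIFO_DEPTH": [8, 16],
    "ECC_ENABLE": [0, 1],
}

def run_simulation(config):
    # Placeholder: a real flow would launch a simulator with these settings,
    # e.g. via `make sim DATA_WIDTH=64 FIFO_DEPTH=16 ECC_ENABLE=1`.
    return True

for values in itertools.product(*PARAMS.values()):
    config = dict(zip(PARAMS, values))
    assert run_simulation(config), f"Smoke test failed for {config}"
    print("OK:", config)
```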
Depends who is asking
Most of the time, the definition of IP quality depends on the vantage point. “If you are an R&D manager, IP quality means something,” said Tom Wong, director of marketing for design IP at Cadence. “If you are a global supply manager, IP quality means something else. If you are an SoC start-up, your measure of quality is quite different from that of an established fabless company. If you are designing IP in-house, then your considerations are very different than being a commercial IP vendor. If you are designing an automotive SoC, then we are in a totally different category. How about as an IP vendor? How do you articulate IP quality metrics to your customers?”
This varies greatly by the type of IP, as well. “When it comes to interface (hard) IP and controllers, if you are an R&D manager, your goal is to design IP that meets the IP specifications and PPA (power, performance and area) targets,” Wong said. “You need to validate your design via silicon test chips. This applies to all hard PHYs, which must be mapped to a particular foundry process. Controllers that are in RTL form—we call these soft IP—have to be synthesized into a particular target library in a particular foundry process in order to realize them in a physical form suitable for SoC integration. Of course, your design will need to go through a series of design validation steps via simulation, design verification, passing the necessary DRC checks, and so on. In addition, you want to see test silicon in various process corners to ensure the IP is robust and will perform well under normal process variations in production wafers.”
For someone in IP procurement, the measure of quality will be based on the maturity of the IP. This includes the number of designs that have taped out using the IP and the history of bug reports and subsequent fixes. “You will be looking at the quality of the documentation and the technical deliverables. You will also benchmark the supplier’s standard operating procedures for bug reporting and technical support, as well as delivery performance in prior programs. This is in addition to the technical teams doing their technical diligence,” Wong pointed out.
An in-house team designing IP for a particular SoC project will be using an established design flow and will have legacy knowledge of the last generation’s IP. The team may be required to design the IP with some reusability in mind for future programs, but such reusability requirements need not be as stringent or as broad as those of commercial IP vendors, because established metrics and procedures are likely already in place as part of the design team’s standard operating procedures. Often, new development starts from a prior design that has been proven in use, and that stable starting point helps the team achieve a quality outcome more easily.
Then, if designing for an automotive SoC, additional heavy lifting is required. “Aside from ensuring that the IP meets the specifications of the protocol standards and passes the compliance testing, you also must pay attention to meeting functional safety requirements. This means adherence to ISO 26262 requirements and subsequently achieving ASIL certification. Oftentimes, even for IP, you must perform the AEC-Q100-related tests that are relevant to IP, such as ESD (electrostatic discharge), LU (latch-up) and HTOL (high-temperature operating life),” Wong said.
This is a big change because IP quality used to be a localized metric, said Mike Gianfagna, vice president of marketing at eSilicon. “Did the IP have good documentation? Was it fully tested? Were good verification vectors and a detailed silicon report included? While those items are still very important, the pervasive use of third-party IP from multiple sources has highlighted another quality metric—interoperability and integration.”
The IP used in a design needs to be compatible with the rest of that design, in addition to meeting a range of quality metrics on a standalone basis. That includes a checklist of items such as a compatible metal stack, design for testability, and operating range. Reliability requirements and control interfaces also play an important part in whether the design team will face integration challenges, which may include how to stitch IP blocks together, Gianfagna said.
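That checklist lends itself to automation. The sketch below compares each IP block's metadata against an SoC's integration requirements; all field names and values are illustrative assumptions, not any vendor's actual format:

```python
# Sketch of an automated integration checklist: each IP block's metadata is
# checked against the SoC's requirements before integration.

SOC_REQUIREMENTS = {
    "metal_stack": "1P9M",
    "dft": "mbist+scan",
    "vdd_min": 0.72,  # volts; SoC supply range the IP must cover
    "vdd_max": 0.88,
}

ip_blocks = {
    "ddr_phy":  {"metal_stack": "1P9M", "dft": "mbist+scan",
                 "vdd_min": 0.70, "vdd_max": 0.90},
    "usb_ctrl": {"metal_stack": "1P8M", "dft": "scan",
                 "vdd_min": 0.72, "vdd_max": 0.88},
}

def check_ip(meta, req):
    """Return a list of integration issues for one IP block."""
    issues = []
    if meta["metal_stack"] != req["metal_stack"]:
        issues.append(f"metal stack {meta['metal_stack']} != {req['metal_stack']}")
    if req["dft"] not in meta["dft"]:
        issues.append(f"DFT support '{meta['dft']}' lacks '{req['dft']}'")
    if meta["vdd_min"] > req["vdd_min"] or meta["vdd_max"] < req["vdd_max"]:
        issues.append("operating range does not cover SoC supply range")
    return issues

for name, meta in ip_blocks.items():
    problems = check_ip(meta, SOC_REQUIREMENTS)
    print(name, "OK" if not problems else problems)
```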
Quality matching
It helps to match the quality of other IP being used by a chipmaker, which is particularly important in markets such as automotive. “It’s very easy for customers to say ‘zero defects,’ but when you work in reality, you know that all IP, all code, has bugs. The key is to minimize that as much as you can,” said Mick Posner, director of marketing for IP accelerated at Synopsys.
One way to approach quality is with specific configurations, which allow for very targeted testing and validation of a set configuration, because configurable IP usually has states that go uncovered when it is not configured in that mode. This can leave the user with the impression that the quality is not good. “By moving to a configured subsystem, fewer holes are exposed. Users want to see that designs are lint-free and CDC/RDC error-free. With verification, you’re up at 99.x% functional and code coverage, and when it extends to mixed-signal IP, characterization test chip reports are absolutely required.”
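Those numbers are the kind of thing a release “quality gate” script can enforce automatically. A minimal sketch, assuming the lint, CDC/RDC and coverage results have already been parsed out of tool reports:

```python
# Sketch of an automated release "quality gate". The report values are
# hypothetical numbers assumed to come from lint, CDC/RDC and coverage tools;
# the thresholds mirror the targets quoted above.

report = {
    "lint_errors": 0,
    "cdc_errors": 0,
    "rdc_errors": 0,
    "functional_coverage": 99.4,  # percent
    "code_coverage": 99.1,        # percent
}

GATES = {
    "lint_errors": lambda v: v == 0,
    "cdc_errors": lambda v: v == 0,
    "rdc_errors": lambda v: v == 0,
    "functional_coverage": lambda v: v >= 99.0,
    "code_coverage": lambda v: v >= 99.0,
}

failures = [name for name, ok in GATES.items() if not ok(report[name])]
if failures:
    print("RELEASE BLOCKED:", ", ".join(failures))
else:
    print("All quality gates passed.")
```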
Designing for the automotive sector requires considerably more verification, along with functional failure analysis, both of which are specific to a configuration, Posner said. “Automotive IP products go through the standard ASIL testing, FMEDA reports, functional analysis, third-party automotive certification of a block, as well as reliability data such as the suspected failure rate, and physical verification of the working conditions.”
IP quality includes RTL
IP quality stretches all the way to the RTL code.
“Customers want to make sure they can read RTL easily, that it can be easily followed, that it is well commented, that it synthesizes without warnings or errors, and that it is well-structured Verilog,” said Chris Jones, vice president of marketing at Codasip. “They also want support for various low power modes.”
This is particularly important for the RISC-V ISA, where quality concerns are popping up given that every physical implementation of the RISC-V ISA is different. “RISC-V is merely an ISA spec, and it’s up to each provider to implement that ISA spec as they see fit,” Jones said. “The quality of RTL deliverables varies widely within the RISC-V community. Part of that is expected, because some of these are academically contributed cores with limited support. But even a recently announced core that the developer [a fabless semiconductor company] patted itself on the back for open sourcing has a bug in its AXI bus.”
Quality is particularly important in the RISC-V world, where developers want to make sure they are getting IP from suppliers that have done their homework and delivered something solid, he added.
With so many variables to contend with, some have suggested that a standard for IP quality might make sense. To others, it doesn’t.
Ranjit Adhikary, vice president of marketing at ClioSoft, doesn’t believe that standards work in the case of IP quality because every company is different when it comes to requirements. “If there is an IP repository, people forget the snapshot of the IP design that was used for tapeout or for the test chip. Generally, what happens when people have to copy the data and sell an IP is they copy the wrong file to the wrong snapshot. Also, when an IP is downloaded, checks and balances must be in place. For example, when you’re uploading an IP, you want to make sure there’s no noise in the repository. You want to make sure you’ve uploaded the correct version of the IP.”
Having an interface to a data management system helps because the IP is then tied directly to the snapshot, which can be uploaded into the repository or used to keep building the design.
“When you do that, you want to be certain that all the relevant files have been uploaded, which can be done by defining the workflow,” Adhikary said. “Let’s say, for example, you’re the engineer that’s doing it. Then it comes to me, and I say I’m going to run some scripts and see if what you’ve taped out with is the correct version through reproducible results. Once you put these checks and balances in, which can be made automatic with scripts, you can approve the upload. The mantra is when you want an IP, you want the ability to find an IP easily, qualify it, re-use it, and publish it. When you have found an IP, you want to make sure it meets your requirements, so you need the ability to define the metrics you want, the parameters you want, identify the IPs, and compare them against the one or two shortlisted ones you want to look into. Once you do that, let’s say you get approved for a download. You can run lint checkers, for example, which will qualify the IP and tell you its status, whether any files are missing, what the IP is capable of, and whether what is actually being generated is the same or different. Those are things you can do, and there’s no shortcut.”
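The snapshot discipline Adhikary describes can be approximated with a checksum manifest that ties an IP package to a named snapshot, so a later copy or upload can be verified as complete and unaltered. A sketch, with an invented manifest format and directory names:

```python
# Sketch of snapshot integrity checks for an IP repository: a manifest of file
# checksums is recorded at tapeout, then any later copy is verified against it.

import hashlib
import pathlib

def build_manifest(ip_dir, snapshot_tag):
    """Record a SHA-256 checksum for every file in the IP snapshot."""
    root = pathlib.Path(ip_dir)
    files = {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }
    return {"snapshot": snapshot_tag, "files": files}

def verify_copy(ip_dir, manifest):
    """Return files that are missing or altered relative to the manifest."""
    current = build_manifest(ip_dir, manifest["snapshot"])["files"]
    missing = sorted(set(manifest["files"]) - set(current))
    changed = sorted(f for f in manifest["files"]
                     if f in current and current[f] != manifest["files"][f])
    return missing, changed

# Usage (illustrative): record at tapeout, verify before publishing.
# manifest = build_manifest("ddr_phy_v2", "tapeout_2024q1")
# missing, changed = verify_copy("ddr_phy_v2", manifest)
```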
Overloaded terminology
Just the term “quality” is overloaded due to associations with “Six Sigma” and other specific industry initiatives, suggested Tom Anderson, technical marketing consultant at OneSpin Solutions. “The term ‘IP integrity’ is broader in scope.”
Assuring the integrity of a design encompasses four critical dimensions—functional correctness, safety, security and trust. Functional correctness is the focus of traditional verification, ensuring that the design meets its functional specification. In the case of IP, this specification often involves a standard such as the USB 3.0 interface or the RISC-V instruction set architecture (ISA).
But functional correctness alone isn’t sufficient for many designs. “Safety-critical applications, such as mil-aero, embedded medical devices and self-driving cars, require that designs operate correctly in the field,” said Anderson. “Random errors such as alpha particle hits must not compromise design safety. Many types of IP are used for these applications, so the providers must account for safety, and the IP integrators must confirm this. In many of these same applications, the IP must not contain security vulnerabilities that could allow malicious actors to take control of chips containing the IP in the field. Both IP providers and IP integrators must screen designs for any accidental security holes.”
It is also critical to trust that IP blocks implement only what the functional specification defines, and nothing more.
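On the safety dimension, the random-error concern Anderson raises is often explored through fault injection. The toy sketch below flips a single bit in a parity-protected register and confirms the deliberately simple safety mechanism catches it; real IP would use ECC, lockstep cores or similar:

```python
# Toy fault-injection experiment: flip one random bit in a parity-protected
# register (standing in for an alpha-particle upset) and confirm the safety
# mechanism detects it. Even parity is simplistic but suffices for one flip.

import random

def parity(word):
    """Even parity bit over the word."""
    return bin(word).count("1") & 1

def inject_bit_flip(word, width=32):
    """Model a single-event upset by flipping one random bit."""
    return word ^ (1 << random.randrange(width))

stored_value = 0xDEADBEEF
stored_parity = parity(stored_value)

corrupted = inject_bit_flip(stored_value)
detected = parity(corrupted) != stored_parity  # any single flip changes parity

assert detected, "single-bit upset escaped the parity check"
print(f"flip detected: {stored_value:#010x} -> {corrupted:#010x}")
```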
For customers to feel an IP is low-risk, there also are absolute deliverables that must be met, including keeping the monetary investment in check, according to Farzad Zarrinfar, managing director of the IP Division at Mentor, a Siemens Business. “One of the things that we had in mind when we designed our IP infrastructure was a cloud-based computing system that allows us to do comprehensive IP verification without exposing customers to huge costs, so CapEx is dramatically reduced. Everything is in the cloud. We verify IP down to the transistor level using a variety of tools, like Calibre, for antenna effects, ERC, LVS and DRC. Those are the kinds of things that, if you don’t verify them, can cause problems and result in delivering low-quality IP.” Comprehensive parasitic extraction, with all resistance parameters considered, along with transistor-level and SPICE-level simulation of the design, is also performed.
Other user expectations include comprehensive timing, power and leakage analysis to generate the deliverables, including front-end and back-end views. Designs also should be verified at different levels of abstraction to make sure they remain equivalent, and there needs to be comprehensive floating-node checking. In addition, as voltages decrease, the variability of CMOS technology increases, which demands further Monte Carlo analysis. Here, electromigration and IR-drop analysis come into play.
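As a flavor of that Monte Carlo analysis, the sketch below samples threshold-voltage variation under a simplified first-order delay model and shows the delay spread growing as the supply voltage drops. The model and every number in it are illustrative assumptions, not a real process model:

```python
# Toy Monte Carlo sweep: sample threshold-voltage (Vth) variation and watch the
# gate-delay spread widen as the supply voltage (Vdd) drops.

import random
import statistics

def gate_delay(vdd, vth):
    """First-order delay model: delay grows as Vdd approaches Vth."""
    return vdd / (vdd - vth) ** 1.3

def monte_carlo(vdd, n=10_000, vth_nom=0.35, vth_sigma=0.03):
    samples = [gate_delay(vdd, random.gauss(vth_nom, vth_sigma))
               for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)

for vdd in (1.0, 0.8, 0.6):
    mean, sigma = monte_carlo(vdd)
    print(f"Vdd={vdd:.1f}V: mean delay {mean:.3f} (arb. units), "
          f"sigma/mean {sigma / mean:.1%}")
```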
“All of the data sheets that we generate, all the Verilog models that we generate, all the DEF timing, all the LEF for floorplanning, all the BIST models, all the way to GDS-II—everything must be accurate,” Zarrinfar said. “On top of that, in selected process nodes, test chips are built and combined with generated data in order to correlate with silicon data to make sure that they are aligned with each other.”
Conclusion
Each customer is responsible for making sure their design makes it through manufacturing successfully, but the amount of supporting documentation, analysis, verification, validation, and simulation needed for the various pieces of IP within that design is mind-boggling. Viewed from the ecosystem perspective, today’s SoCs with their high IP content are truly a team effort.
In such a fast-moving and complex world, there’s no conceivable way to create a single standard that would actually mean anything, because the techniques and methodologies would need to be agreed upon right down to quite low levels, and they vary for different IP, according to Arm’s Greenhalgh. That would be nearly impossible to achieve. And if a standard stayed at a high level of abstraction, the room for subjective interpretation would render it largely meaningless.
As such, IP quality will remain a moving target, varying by customer, by design, and by application. And that is not likely to change anytime soon.