IP’s Growing Impact On Yield And Reliability

Managing IP quality and compatibility is becoming more difficult at advanced nodes and in safety-critical markets.


Chipmakers are finding it increasingly difficult to achieve first-pass silicon with design IP sourced internally and from different IP providers, and especially with configurable IP.

Using poorly qualified IP and waiting for issues to surface during the design-to-verification phase just before tape-out poses high risks for design houses and foundries alike in terms of cost and time to market. Managing IP quality spans multiple dimensions, including simulations, DRC/LVS, IP traceability, conformance to standards, and configuration checks based on available formats. These issues are well understood, but as the amount of third-party IP in designs has grown, so has the number of potential interactions. As a result, quality has escalated to critical status.

“It has long been known how important IP quality and verification is, but the IP market has changed a bit,” said Randy Caplan, CEO of Silicon Creations. “Now you’ve got outside IP coming into the big companies developing chips, so it’s quite critical that the chip companies have full confidence in the IP. We can’t have IP being a limiting factor in yield.”

On top of this, the range of applications is expanding, including greater use of IP in mission-critical and safety-critical systems.

“There are people claiming Level 4 autonomous driving,” Caplan said. “There are more aerospace and outer-space applications these days. The whole world is dependent on their 4G and soon 5G networks working reliably. We can’t even stock a grocery store anymore without all of our SerDes working at 100% reliability. Further, we’ve got the process nodes advancing. It’s always been known to analog designers that you have to run extreme checks and find where your circuits fail and use architectures that are robust to process variation. But under very aggressive schedules, always under pressure to deliver, it’s tempting to cut corners and [the industry at large] always has to push back against that.”

It also requires overcoming a level of complacency. The foundries have done such a good job ensuring that silicon matches the models that some designers are not as concerned as they should be. “We’ve been pushing hard against that, and in the last couple of years with advanced finFETs and the rate of change in processing accelerating, it’s become much more critical,” he said.

This becomes increasingly apparent at 7nm and 5nm, where thinner dielectrics make thorough characterization much more important than in the past. Noise of all sorts can disrupt signal integrity and create problems post-manufacturing. As a result, IP providers must maintain a wide range of advanced EDA tools for high-sigma variation analysis and for measuring such things as electromigration and IR drop with self-heating, because IC integrity needs to go hand-in-hand with IP quality.

“The design must be functionally correct, safe, trusted, and secure, but the area and the scope is much bigger than it used to be,” said Vladislav Palfy, director of application engineering at OneSpin Solutions. “We used to worry whether a design worked correctly, but now the focus is shifting to whether the design not only does what it should, but it does not do what it should not do. That has become a very big topic, and it’s all interlinked.”

A functional bug can lead to unsafe behavior in some applications. “This can be from a random bug in the field, and especially for 7nm and 5nm, the radiation is getting more influential there, which can also lead to a trust issue,” said Palfy. “What happens if someone puts any malicious code into your design? It doesn’t even have to be malicious. It can just be a designer error. We’re humans. We do stupid things or we get complacent. We forget something that can leave a back door to access some secure part of the chip and cause real damage. This is a much bigger problem than it has been in the past.”

At the same time, IP quality depends a lot on the perspective of the company working with the IP, whether that is the IP provider or the systems house.

“To an IP management company, the definition of IP quality varies a bit,” observed Ranjit Adhikary, vice president of marketing at ClioSoft. “For a systems house it is more about getting the design done as quickly as possible, which means they need reliable IPs that have been taped out. They want to know where it has been taped out, and they want to know about the experience the person had in integrating the IP. For this, we created a knowledge base of the IPs — what problems they had, etc. — to create a bridge between the IP developer and the IP user. If it’s a third-party IP provider, the same thing applies. They want to be the first to market. They want to provide the IPs at a good quality. The variation definition varies. Established houses like Synopsys and Cadence are very stringent on quality. Smaller IP providers, in contrast, may try to cut corners because they want to be the first to market and they want to show that they have the product. Those companies will release it in batches with what issues have been fixed, so traceability also becomes a big issue because any way you look at it, you need to know what issues have been fixed. If you’re using an IP subsystem, for example, you need to know hierarchically what issues are open in it, what have been fixed.”

For some companies, IP quality comes down to whether the correct version of IP has been taped out. In other situations, such as analog and mixed-signal designs that are widely reused, designers must be able to make sure the right IP is selected, which speaks to their ability to properly qualify the IP.

“If I’m downloading the IP, I may want to run scripts, for example, and validate whether it is running correctly or not,” Adhikary said. “If I’m doing an analog design, I want to make sure the collateral is there. I can see the verification test suite, which is there, and the results that I got. I should be able to link it from a dashboard for easy access to it. That gives you the confidence of the quality of the IP, and how detailed the documentation is, including associated documents that discuss the issues of the IP and/or how well it worked for someone else, and so on.”
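The download-time validation Adhikary describes, checking that expected collateral is present and intact before an IP package is used, can be sketched in a few lines. This is a hypothetical illustration only; the manifest format, file names, and function name are assumptions, not any real IP vendor's system.

```python
import hashlib
import pathlib

def verify_ip_package(pkg_dir: str, manifest: dict) -> list:
    """Check that every file listed in the manifest exists in the package
    directory and matches its expected SHA-256 checksum.

    manifest maps relative file paths (e.g. docs, verification results,
    netlists) to hex SHA-256 digests. Returns a list of problem strings;
    an empty list means the package's collateral is complete and intact.
    """
    problems = []
    for rel_path, expected_sha in manifest.items():
        f = pathlib.Path(pkg_dir) / rel_path
        if not f.exists():
            problems.append(f"missing: {rel_path}")
        elif hashlib.sha256(f.read_bytes()).hexdigest() != expected_sha:
            problems.append(f"corrupt: {rel_path}")
    return problems
```

A script like this could run automatically on download and feed a dashboard, so the presence of the verification test suite and documentation is confirmed before anyone opens the design.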

Marc Greenberg, group director of product marketing for DDR, HBM, flash/storage, and MIPI IP at Cadence, agreed that the stakes with IP quality are getting higher—particularly at the most advanced process nodes.

“For the last several years, we’ve been producing advanced technology in the most advanced process nodes, including SerDes and DDR5, LPDDR5, GDDR6, etc., and then trying to do it at 7nm and below. The stakes are very high, and they get higher at these very advanced nodes as the cost of respins goes up. While we usually don’t cause respins, it’s a fact of life that respins happen sometimes in this industry. So we’re constantly being pushed to reduce the probability of having silicon problems. We’re always looking at ways of doing that using advanced tools. Along with this is the care and attention to make sure that we do everything we can to prevent an error that would cause a respin.”

Managing IP
Commercial and internal IP management systems are growing in use to help design teams track an increasingly complex array of details, particularly at advanced nodes where PDKs are regularly updated.

“Sometimes the model files will change or the electromigration rules will get more strict after the foundry gets more production data, so our designs are constantly splitting and sometimes merging and branching off,” said Caplan.

To account for this, IP providers have implemented design management systems to ensure that customers are getting the right version of IP. That isn’t just the latest version. It’s also the one from the same branch where they got their last version, because even if the design is perfect and all of the modern verification standards are followed, foundry updates are outside the user’s control and can still force design changes.
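The branch-aware selection described above can be sketched simply: rather than handing a customer the globally newest release, pick the newest release on the branch their last delivery came from. The release table, branch names, and PDK labels below are invented for illustration and do not reflect any actual vendor's versioning scheme.

```python
# Hypothetical branch-aware IP release selection. Each release is recorded as
# (branch, sequence_number, pdk_version). Branches split when a foundry PDK
# update changes model files or electromigration rules.
RELEASES = [
    ("n7-pdk1.1", 3, "PDK 1.1"),
    ("n7-pdk1.1", 4, "PDK 1.1"),   # newest release on the 1.1 branch
    ("n7-pdk1.2", 1, "PDK 1.2"),   # branched after a foundry PDK update
]

def next_release(last_branch: str) -> tuple:
    """Return the newest release on the customer's existing branch,
    not the newest release overall."""
    candidates = [r for r in RELEASES if r[0] == last_branch]
    if not candidates:
        raise ValueError(f"no releases on branch {last_branch}")
    return max(candidates, key=lambda r: r[1])
```

A customer already in production on the PDK 1.1 branch gets the latest 1.1-branch drop, while the PDK 1.2 branch remains isolated until that customer explicitly migrates.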

The same applies to tools, and it becomes particularly important in markets such as automotive. “Automotive safety requirements for a design come with a tool qualification kit, which also needs to be maintained and updated to the latest tool version,” said Palfy. “If there are any known issues, they have to be properly documented. The maintenance and the documentation are a lot of work.”

When meeting automotive reliability standards for the first time, senior engineers sometimes work for several months to certify a design according to automotive requirements. Re-certifying for a small design change or a process change can add several more months.

Process changes can be far more time-consuming because the flow can be certified, but not the actual tools, Palfy said. “You get a tool qualification kit that helps you certify your entire flow. But let’s say you want to add another tool to this flow. You have to do it all over.”

Another challenge for IP quality occurs when chips are composed in collaboration with partners, Adhikary said. “One company may build a base chip and then collaborate with different vendors, and each of them has a different modification that is needed. Tracking all of that together becomes a nightmare for them. We see that repeatedly from multiple customers. What sort of detailed documentation is available? How well did you integrate everything into the ecosystem? How do you build all that information into your own system and track this information, including certification documents, for example? If the chip is a partnership and the people who designed the chip leave the company, how do you make sure all of the data needed going forward is accounted for?”

As more of the semiconductor industry circles around the opportunities in the automotive ecosystem, these data points must all be captured to meet the requirements of the OEMs and Tier 1s. And that’s in addition to just following good design practices.

“There is some synergy with going through the ISO 9001 process for quality control that leads directly into ASIL-C site certification,” Caplan said. “These are all things you should be doing anyway. You should be documenting where each change came from, who did the changes, what the thought processes were behind the changes, and what is the risk assessment with each modification? These are good practices, and for healthy strong IP companies they should be following them, whether or not it’s going into automotive. We’ve found this is a good way to incentivize our team. It puts a real finish line there and a focus on what needs to be done. For management, it gives a backup to encourage the engineers to follow these important quality standards so that we all have the same goal of getting the chips into mass production reliably and safely. We’re all on the same team. It’s not about cutting corners or doing things faster. It’s about getting the chips into mass production.”

Additional support is coming from the automotive OEMs and Tier 1s themselves. “They are becoming increasingly aware of this, and we have customers who came to us because their supplier told them, you have to do this,” Palfy noted.

Foundries also are pushing for IP management tools, with an eye toward quality. “The foundries are, in fact, pushing their customers to use IP management tools because it’s much easier for them due to PDK version changes and documentation updates.”

What can go wrong?
Not having some type of IP management system to ensure quality can be disastrous. Though not widely discussed in detail, there are a number of recent examples, including one in which a systems company taped out a chip only to realize it had used the wrong version of the I/O specification. One wrong step can derail an entire project, and that error can involve methodology and design practices as well as technology.

Palfy noted that because of this increasing complexity from every side, his team has found bugs in designs that have been in the field for 15 years. “People just live with it and say, ‘It locks up every five years so we just replace it,’ but that’s a big cost. If the proper verification is done that can help. But there’s an added layer of complexity. Even if you document well, how do you know there is not anything in your design that should not be there?”

Caplan agreed. “We have IP that’s been in the field for 10 years, and this is a big debate in our company. How do we find that optimal point, because we’re always learning new things, we’re always improving? Our latest designs are clearly better verified with more advanced tools now than our designs that have been out there. Also, you can ask how many designs have been in silicon for 10 years. We’re designing for 10- to 15-year reliability, so with each design we send out now we do an analysis and we document that in the flow and say, ‘We looked at this, we decided this, it is on a million wafers, it’s rock solid, we’re not touching it. It’s more risky to touch it than to not touch it, so we leave it.’ That’s also documented in the automotive flow, that we’ve analyzed this and made an explicit decision not to touch it for ‘these’ reasons. In other cases we know when something is not going to pass 10-year electromigration. ‘We didn’t have self-heating tools 8 years ago, so we’re going to fix a few things, here’s why, and here’s why it’s low risk.’”

Conclusion
Managing IP and IP quality is becoming more difficult and more critical as that IP is used in new markets and in more complex environments with tighter tolerances. As a result, the burden on design teams is increasing, and the impact of decisions involving IP from multiple vendors is becoming more difficult to understand, track and, where necessary, fix.

Commercially available IP still has huge value for chipmakers. It is often state-of-the-art and can save time and headaches in the design process, but understanding all of the ramifications and staying current with all of the latest releases and potential interactions is a daunting task, even with the best of methodologies and tools. And it doesn’t appear to be getting any easier.



