Second of three parts: Defining IP; verification vs. implementation; power issues in re-use; causes for customers’ migraines.
Low-Power Engineering sat down to discuss issues in verification at 28nm and beyond with Frank Schirrmeister, director of product marketing for system-level solutions at Synopsys; Ran Avinun, marketing group director at Cadence; Prakash Narain, president and CEO of Real Intent; and Lauro Rizzatti, general manager of EVE-USA. What follows are excerpts of that conversation.
LPE: When we move to 28nm and beyond, will the companies leading the charge be doing the same kind of IP integration that companies in the mainstream are doing, or will they be doing their own IP?
Narain: There is going to be a lot of re-use of IP. But having said that, you don’t use IP just the way it is. There are always changes to be made. This is where it becomes difficult. You need to verify everything. If you had predesigned pieces and you put the same thing in every time, the verification problem would be much simpler. There are always customizations. But there has to be a lot of IP re-use or you would never be able to build designs with the number of gates we have now.
Avinun: There is IP re-use. The question is how much this helps them in the verification when it comes to hardware and software. I just met with a customer. Their work is brutal. They have a design cycle of six to nine months. If you’re a midsize company and you miss three months in your product cycle, it could kill the whole company. This is the pressure they have. From their point of view, they go to their customer and their customer says, ‘You need an ASIC at this cost and with these features. If you don’t have these features I may go to your competitor. If you deliver all of this on time, then maybe I will choose it.’ On one side they need to re-use the same IP, but they also have to differentiate themselves from their competitors. One of the problems they have is the software component. They used to be able to relate a problem to a specific piece of software. Today they’re using three different processors from three different companies with 12 processor cores. There is no way to know how to relate a problem to the root cause. The way to solve this is re-use and improving the verification environment. They’re improving the debug.
LPE: But rather than just killing one company, failure to verify properly can kill multiple companies, right?
Avinun: That’s correct.
Schirrmeister: And if you don’t verify properly, it can kill people. You have the mean flight attendant staring at you when you’re on your cell phone. It may not actually hurt anything, but that case has not been verified across all cell phones so you have to restrict things. But without IP, it would be impossible to design these devices in the first place. Standard IP will become more of a combination of blocks of IP. At that complexity level, you have to rely on individual subsystems to be verifiable, verified, and not interacting with each other. That leads to bigger components. Tensilica is licensing complete audio subsystems. It’s a combination of hardware and software in a pre-arranged and pre-verified fashion. The verification moves up. You have more components that you integrate at a higher level.
LPE: If IP doesn’t get re-used the same way, though, how does that affect verification?
Schirrmeister: If you have to make changes to IP every time you want to re-use it, it’s not IP. What you’re seeing a lot in the physical IP world is you pre-validate the physical side. You can’t touch that because it’s hardened. For star IP, you don’t want to modify it.
Narain: The key is the delta. If you’re changing 50% it’s not IP. If the delta is 3%, it’s still IP. But chips fail not just because of functional verification. They can fail because the implementation was faulty or the power management was bad. These are not functional issues. They involve methodology. When you’re taping out a 100 million-gate design in nine months, the pressure on things like rules checking is enormous. With issues like power and CDC (clock domain crossing), you need to make sure your timing constraints are correct. I just talked with someone who taped out a chip. He said, ‘I can’t certify all the timing constraints will work for every scenario, but this is the tenth rev of the IP and I assume it will work.’ But one of his chips did break down because of bad timing constraints. With a functional verification problem you are trying to contain that with a re-use methodology. With implementation, every chip is a totally new thing. It’s as fraught with risk as functional verification.
Avinun: But when I talk to customers I ask them where they spend most of their time and where they want to shrink the cycle. It’s still verification. When it comes to IP, it depends on which IP you’re looking at. If you just need to interface with PCI Express, the verification IP is taking care of a lot of this functionality. You need to advance yourself. We have people sitting on committees for standards that will be available in one or two years, because it takes a year or two to develop verification IP. But if you’re developing a camera sensor, you probably will need to innovate with every new design or you will not stay in business. If this is your main business, you’d better innovate. They may be able to re-use components of this core IP, but they have to change the IP, too.
Rizzatti: I’ve never heard anyone say they would take IP off the shelf and use it as plug-and-play. It never works. Never.
Schirrmeister: That’s why the qualification aspect is so important.
Rizzatti: There was a comment before about emulation. What’s changed is that in the past it was used by specific segments of industry. Today it’s being used across the board. We’re also seeing high-level testbenches being used with these. Five years ago this would have been the exception. Now it’s mainstream.
Avinun: Part of this is related to bring-up time of acceleration and emulation. It used to be an art to make it work.
Schirrmeister: Yes, I used an M250 in 1994 for multimedia. It was horrible. That has improved. But going back to the subject of IP, a lot of the effort we see is verifying how the system works together. Today you don’t verify the smaller components like multipliers and adders. If the curve continues, you can always revalidate the bigger components. It’s now about figuring out the higher level. Selling emulation is like selling insurance. If you don’t run enough test patterns your design may fail. I’ve seen risk curves. When do you have enough confidence to go to tapeout? This is augmented by people separating the pure functional verification—running all the test cases of a demodulation algorithm—and instead of doing this on dedicated hardware they’ve replaced that with a processor. Now you’re really more interested in the structural components of the processor and whether it connects to its environment correctly. Then you do more of the actual functional verification in software.
Narain: You started out by saying verification is an unbounded problem. It’s actually a large number of problems, and the more problems we can move from unbounded to bounded, the more efficient verification becomes. You’re putting more and more into bounded pieces. That’s where methodology can help. It used to be that clock domain crossings could be modeled in simulation and we would just run it. Today simulation is a very small part of CDC verification, which works independently and runs in parallel.
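To make the CDC point above concrete, here is a minimal toy sketch in Python. It is not how any commercial CDC tool works; it simply models a signal launched from one clock domain and sampled by an unrelated one, and counts how often the sample lands inside an unsafe window. The clock periods and window width are assumed values chosen for illustration. The point is that the hazard is statistical and easy for plain RTL simulation to miss, which is why it gets pulled out into a separate, bounded check.

```python
import random

# Toy illustration only: a signal launched from a 100 MHz domain is sampled by
# an unrelated ~77 MHz domain. Because the clock edges are asynchronous, the
# destination flop sometimes samples while the signal is still changing -- the
# hazard a CDC check flags, and one that functional simulation can easily miss.

SRC_PERIOD = 10.0   # ns, source clock (assumed value for illustration)
DST_PERIOD = 13.0   # ns, destination clock (assumed value)
UNSAFE_WIN = 0.5    # ns, setup/hold window around the data edge

def unsafe_samples(trials: int = 10_000) -> int:
    violations = 0
    for _ in range(trials):
        # Data changes on a random source-clock edge; destination phase is random.
        data_edge = random.randrange(0, 100) * SRC_PERIOD
        dst_phase = random.uniform(0, DST_PERIOD)
        # First destination edge after the data change.
        k = (data_edge - dst_phase) // DST_PERIOD + 1
        dst_edge = dst_phase + k * DST_PERIOD
        if dst_edge - data_edge < UNSAFE_WIN:
            violations += 1  # sampled inside the unsafe window: metastability risk
    return violations

if __name__ == "__main__":
    n = 10_000
    print(f"{unsafe_samples(n)} of {n} crossings sampled in the unsafe window")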