Biggest Verification Mistakes

With SoCs containing more processors and more software than ever before, it’s not just more verification that’s needed – it’s that the right verification is needed.

SoCs today have more processors and more embedded software than ever, including drivers and middleware just to get the hardware working. This, in turn, requires more and better verification. Add to that the fact that there is no single way to do verification, and it is easy to see how critical it is for hardware and software teams to embrace verification processes and methodologies from the earliest stages of the design process.

“Many companies underestimate what it takes to verify a chip,” observed Michael Sanie, senior director of verification marketing at Synopsys. “If you look at the way semiconductor design has grown, the verification part caught people by surprise. And it’s not everyone. Some companies figured out as early as 10 or 15 years ago that verification could be a strategic advantage. If you know how to do it well, you can invest in it and build infrastructure that gets you a business advantage. But not many companies have actually done that. I’ve seen the level of thinking of verification as a strategy for strategic advantage within maybe a handful of companies. The biggest driver behind that is time to market because people have realized time to market and getting chips out on time definitely gives an advantage.”

But verification also occurs largely at the end of the design cycle, and that affects its strategic value to a chipmaker. “You get rushed because of time, you don’t do enough because you’re rushed to get something out, or you create delays in the schedule because you want to do more verification,” said Sanie. “It’s always the bottleneck. Good verification doesn’t start after design. It should start as you are designing and architecting your design environment. The team should be thinking about verification at that point, not waiting until design is over.”

Another mistake people make is looking at tools and not the verification process itself, noted Harry Foster, chief verification scientist at Mentor Graphics. “It is tools and technologies, but it is also a set of steps we put in place to make it repeatable. It’s developing the appropriate skills within the organization to take advantage of the technology — that’s a problem a lot of organizations don’t address. Then, on top of that, it’s building metrics into the process so they can have visibility to determine if things are working or not. A lot of people want the perfect hammer yet they’re not stepping back to determine if they even need a hammer.”

Engineering teams also pay too much attention to the ‘how’ of verification versus the ‘what,’ he said. “I see that over and over again. In other words, somebody will focus on how they are going to create a UVM testbench and they don’t sit back and ask what they really need to verify. They miss that whole aspect. That comes down to the number one problem, which is this: it’s poor verification planning. It’s that simple.”
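To make the ‘what’ concrete before any testbench work begins, a verification plan can be captured as structured data that lists each feature, the approach chosen for it, and the metric that defines closure. The sketch below is only a minimal Python illustration, not any particular vendor’s or team’s format; the feature names, approaches, and targets are hypothetical.

```python
# Minimal sketch of a feature-oriented verification plan: capture *what* must
# be verified and which approach fits each item before writing testbench code.
# Feature names, approaches, and goals are hypothetical.
from dataclasses import dataclass

@dataclass
class PlanItem:
    feature: str    # what needs to be verified
    approach: str   # e.g. formal, UVM constrained-random, FPGA prototype
    metric: str     # how closure will be measured
    goal: float     # target value for that metric (1.0 = 100%)

verification_plan = [
    PlanItem("Bus protocol compliance", "formal property checking",
             "assertions proven", 1.00),
    PlanItem("DMA descriptor handling", "UVM constrained-random",
             "functional coverage", 0.98),
    PlanItem("Boot ROM + driver bring-up", "software-driven simulation/emulation",
             "use-case scenarios passing", 1.00),
]

for item in verification_plan:
    print(f"{item.feature}: verify with {item.approach}, "
          f"close when {item.metric} >= {item.goal:.0%}")
```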

It is like attempting to fly a 747 when all you needed was a Volkswagen, Foster said. “It is poor verification planning and lack of experience in understanding what is the best tool, the best technology, the best methodology for the class of design they are trying to verify.”

But this requires a specific skill set: the ability to step back, look at the design, and understand both what is needed and how to verify it, he said. “An organization needs a couple of things. They need a verification architect, and then they need the implementers — the folks who are putting together the plumbing, the infrastructure in the tools. A lot of organizations fail to put that in place. The verification architect is experienced in the sense that they can look at the problem and understand what they need to verify, and not only what they need to verify but which types of tools and technologies would be appropriate or inappropriate for verifying it. That does require some expertise. You need somebody who has more of an architectural view, certainly understanding design. A lot of organizations fail to recognize that, so they get a bunch of verification people and a lead, where the lead is just managing, and they don’t really have anybody who has that architectural perspective. There is a lot of skill there. You’ve got to deeply understand the design, but you also need to understand what types of approaches are appropriate for the type of technology,” he explained. This includes knowing what you are trying to accomplish from a verification perspective and how to measure that goal.

Don’t just throw resources at the problem
The ‘right’ verification path is fraught with complexities, and throwing resources at a problem is not always the best solution.

While a top-tier semiconductor company may have more resources to dedicate to a verification challenge, more resources do not always produce a better result. Charlie Kahle, CTO at design services provider Synapse Design, observed that for a big company such as Qualcomm with its modem chips, “That’s the enterprise that they own and are going to continue to own, and they are going to continue to develop around that. So they are going to put a lot of emphasis and a lot of money, therefore, at that part of the problem. In that case money and the size of the company go hand in hand and they will do everything — throw everything but the kitchen sink essentially at it — they will do as much as they can. But then you come back to the big companies where maybe it’s not the mainstream part. Maybe it’s a division of a big company, but they are not the mainstream part of the company. In that case they probably won’t be throwing as much at it and will be much more cost-conscious. They will look at where it makes sense. Maybe they just do those couple of blocks in the FPGA that are new and don’t go any further with that on the FPGA, and the rest is done with UVM.”

Synapse Design’s president and CEO noted that the company has seen many customers take a traditional approach to verification and stick with it for a long time. Often this approach means throwing too much manpower and too many resources at the problem, or buying too many tools, while still not reducing the risk factor. Instead, “We’d say, ‘Let’s look at it outside the box. Let’s look at what design you are doing. Let’s take it down into what makes sense to do in an FPGA, what makes sense to do in different categories.’”

Synapse has seen less-than-ideal verification situations even among the top 10 semiconductor companies, he said. “We have gone in and looked at their verification environment and told them how many holes they had and gone through the postmortem studies to determine why it did not work on the first try, why it did not work on the second try, how the bug went through and their fundamental flaws on designing the verification environment. It’s not just throwing bodies at the problem. Even though they threw money at the problem, they still did not structure it. It was not designed correctly in the first place. At the end of the day, if you want to get the chip right with the least amount of effort, it’s all about ROI. That’s why the verification setup needs to be completely different based on what you are trying to do.”

Along these lines, Cadence Fellow Mike Stellfox pointed out that a key part of verification planning is estimating the verification task properly. But this is still a bit of an art form. “This is one area that we see people struggle to put together a comprehensive verification plan that they can use to really track their progress and determine when they’re done because verification is all about risk management. You can’t ever completely verify something. You have to do your best job to mitigate the risk, and the best way to do that — like anything else — is capturing a plan and managing your progress toward that through metrics.”
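As a rough illustration of managing progress through metrics, the Python sketch below compares measured results against planned closure targets and flags what is still open. The metric names and numbers are made up for the example; in practice they would come from coverage databases and regression reports.

```python
# Sketch of tracking verification progress against planned closure metrics.
# Metric names and values are illustrative only.

plan_goals = {
    "code coverage":       0.95,
    "functional coverage": 0.98,
    "assertions proven":   1.00,
    "use-case scenarios":  1.00,
}

measured = {
    "code coverage":       0.91,
    "functional coverage": 0.83,
    "assertions proven":   1.00,
    "use-case scenarios":  0.60,
}

def report_progress(goals, results):
    """List metrics that have not yet reached their closure target."""
    open_items = {m: (results.get(m, 0.0), g)
                  for m, g in goals.items() if results.get(m, 0.0) < g}
    for metric, (actual, goal) in open_items.items():
        print(f"OPEN  {metric}: {actual:.0%} of {goal:.0%} target")
    return open_items

remaining = report_progress(plan_goals, measured)
print(f"{len(remaining)} metrics still open; the residual risk has to be assessed explicitly.")
```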

Another area of concern for Stellfox is the tremendous focus on UVM and constrained-random, coverage-driven verification, which has become the mainstream approach that tool vendors have been working on with customers for the last two years. “That kind of approach is really built for what I call bottom-up verification for IP, and maybe subsystem verification, where you are trying to verify the piece of hardware independent of any kind of specific SoC or application context.”

“In that kind of verification,” he continued, “you really want to try to exhaustively verify because you don’t know exactly how it’s going to be used. Those IPs could be sitting in many different chips. That’s applying well, but where I see a lot of mistakes is when you look at an SoC with the software. There hasn’t been a lot of focus there, and everybody’s just doing ad hoc approaches for the SoC integration and use-case verification, where you want to verify how all of those IPs work together in the context of a specific application.”
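The bottom-up style Stellfox describes can be pictured as randomized, constrained stimulus applied to one IP block with no assumptions about the surrounding SoC. The Python sketch below is only an illustration of that idea; the driver object, its send() method, and the constraint ranges are hypothetical stand-ins for what a UVM sequence or similar testbench component would do.

```python
# Illustrative sketch of bottom-up, constrained-random stimulus for a single
# IP block, independent of any SoC context. The driver API is hypothetical.
import random

def random_write_transaction():
    """Generate one randomized write, constrained to legal values."""
    return {
        "addr":  random.randrange(0, 0x1000, 4),   # word-aligned, in range
        "data":  random.getrandbits(32),
        "burst": random.choice([1, 2, 4, 8]),      # only legal burst lengths
    }

def run_constrained_random(driver, num_txns=1000, seed=1):
    """Drive many randomized transactions; checkers and coverage observe."""
    random.seed(seed)                              # reproducible regressions
    for _ in range(num_txns):
        driver.send(random_write_transaction())    # hypothetical driver call

# Usage, given some block-level driver object:
#   run_constrained_random(block_driver, num_txns=5000)
```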

Tom Anderson, vice president of marketing at Breker, agreed. “The biggest issue we see with SoCs is people assuming just because they’ve done a good job verifying the individual blocks, you just sort of plug them together and expect them to work. We had an early customer that had a big SoC – a processor and a whole bunch of different blocks – and they had actually done a pretty good job of verifying that the blocks were integrated correctly. What they never did was stream the blocks together into what we call a user scenario. This customer had a scenario where they had two blocks that were exchanging data, they were talking to each other on the bus. They’d both been verified independently but they never connected them together in a realistic simulation. They were backwards, so they had to have the first block write the data into memory, run a software algorithm to flip all the bits, and have the data go out to the second one. It was a very expensive software patch and it killed the performance in that particular operation.”
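A scenario-level check of the kind Anderson describes can be sketched abstractly: one block produces data into shared memory, another consumes it, and the end-to-end payload is compared. The Python models below are hypothetical stand-ins for RTL blocks driven in simulation, deliberately built with mismatched byte order to show what a use-case check catches that block-level verification does not.

```python
# Sketch of a use-case (scenario-level) check: block A writes a payload into
# shared memory and block B consumes it. Each block would pass its own
# unit-level checks, but the end-to-end comparison exposes the mismatch.

def block_a_write(memory, payload):
    """Producer: writes each word to shared memory as little-endian bytes."""
    for i, word in enumerate(payload):
        memory[i] = word.to_bytes(4, "little")

def block_b_read(memory, count):
    """Consumer: interprets the same words as big-endian (the mismatch)."""
    return [int.from_bytes(memory[i], "big") for i in range(count)]

memory = {}
payload = [0x11223344, 0xDEADBEEF]
block_a_write(memory, payload)
received = block_b_read(memory, len(payload))

if received == payload:
    print("scenario passed")
else:
    print("scenario check caught an end-to-end mismatch:",
          [hex(w) for w in received], "!=", [hex(w) for w in payload])
```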

Given the amount of embedded software in SoCs today, along with the need to verify user scenarios, all signs point to top-down, software-driven verification technologies as opposed to the bottom-up approaches that are prevalent today. How quickly those technologies become available for users and how fast they will be adopted remains to be seen.

Harry Foster’s top recommendations for verification:

  • The engineering team must lock down the API between the hardware and software (a sketch of this idea follows the list). “Failure to do that creates most of the problems we experience today. If we can agree on that, the teams can separate and go off and do their designs. As long as they adhere to the API – the communication between the hardware and software – the teams can then go off and independently do design and do verification of their own components, knowing that they will work because they haven’t violated the rules that had been established for the way hardware and software will communicate through the API.”
  • Do not just get the hardware and the software, throw it all together, and start verifying by trying to boot the OS or exercise a high-level function. Instead, check things incrementally. “I’ve got the hardware and the software: can the software talk to the appropriate aspects of the hardware? Partition the verification into the appropriate stages or phases, with the goal of determining what to verify at a particular phase or stage.”
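As a rough sketch of the first recommendation above, the hardware/software ‘API’ can be captured once as a shared register map that both teams code against, and early bring-up can check only that contract before attempting anything higher-level. The register names, offsets, reset values, and the read32 accessor below are hypothetical.

```python
# Minimal sketch: the hardware/software contract captured once as a shared
# register map, with a first-phase bring-up check against it. All names,
# offsets, and reset values are hypothetical.

REG_MAP = {
    "CTRL":   {"offset": 0x00, "reset": 0x0000_0000},
    "STATUS": {"offset": 0x04, "reset": 0x0000_0001},   # bit 0 = ready after reset
    "ID":     {"offset": 0x08, "reset": 0x1234_ABCD},
}

def check_reset_values(read32, base_addr=0):
    """Phase 1: can software see the hardware it expects at this address?"""
    failures = []
    for name, reg in REG_MAP.items():
        value = read32(base_addr + reg["offset"])        # hypothetical bus read
        if value != reg["reset"]:
            failures.append((name, hex(value), hex(reg["reset"])))
    return failures

# Example with a stubbed bus model standing in for the real design:
stub_bus = {0x00: 0x0000_0000, 0x04: 0x0000_0001, 0x08: 0x1234_ABCD}
print(check_reset_values(stub_bus.__getitem__))          # [] means phase 1 passes

# Later phases would exercise CTRL writes, interrupts, then full driver flows,
# only after this basic hardware/software handshake has passed.
```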

 


