Mixing It Up

Verification engines are being combined for better coverage; SoCs force changes based on time-to-market issues.


By Ann Steffora Mutschler
To enable the next level of productivity in the verification space, certain tools need to be combined and integrated in a very meaningful way.

The concept is far from new. It happened on the RTL-to-GDS front between synthesis and place-and-route, where the tools work very closely together and there is bidirectional collaboration. It also happened in the functional verification space six or seven years ago with the emergence of SystemVerilog and the creation of testbench tools that combined the requirements for coverage, assertions, design and testbench, explained Michael Sanie, director of verification product marketing at Synopsys.

What runs the show is the SoC. “How an SoC is primarily different, besides the flows and everything else, is that SoCs have a very tight market window,” Sanie said. “The schedule is king. The schedule makes pretty much all of the decisions. Before it was cost (not that cost is no longer important), but now schedule makes a lot of the decisions. They know they’re going to be late if they are not sure or can’t foresee the predictability of the timeline. Then they will spend more money and buy more tools. To them it’s more important to hit the time and make it predictable,” he said.

With this change and the focus on time to market, engineering teams are using many more tools, such as acceleration, emulation and FPGA-based prototyping, which are very diverse and completely different engines. And because of the time pressure, many engineering groups are looking for a solution that genuinely improves time to market, Sanie observed.

Although simulation is still the bread and butter of verification (it is where 90% of bugs are found), acceleration, emulation and prototyping also are being used, and the bring-up time for each of these methods is not trivial.

As it turns out, much of the work is repeated. “When running simulation, you do certain analysis before you run simulation that could then be applied at some level to emulation, etc., so we are creating a stronger integration long-term, working with industry leaders on R&D-level collaboration to create this vision. It takes time, but that is the direction the industry is going,” he said. The approach will include a common foundational debug technology.

Steve Brown, director of product management at Cadence, agreed that automation on top of the verification engines is critically important.

In addition to the technology challenges, and in order to meet incredibly short schedules, several of the “big boys with the big problems” are hiring engineering staff to perform tasks in parallel, which includes using these different engines at the most opportune times. However, this bumps up against the economics of all that staff, he said. “Buying the engines is one thing, but then having to hire specialists and have everybody trained and ramped up on the projects and choreographed with the project schedule is a real challenge.”

The big issue is those engineers have to do a lot of things by hand. “Some engineer somewhere who is designing the RTL will try to send pointers to the most current files, and then some poor guy has to pull it all together and try to make it work. That’s just from an RTL perspective. Then you bring in software and you’ve got two different species trying to communicate,” Brown added.

What’s new today is that there are opportunities to automate the manual work, he said. “We have an example that takes a description of part of the design and automates the work of creating and connecting a testbench of the design. You can configure the testbench to do a couple of things. You can do verification, which is pretty straightforward. You might think about creating a UVM testbench, but it’s automated.”
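To make the idea concrete, here is an illustrative sketch, not a description of Cadence's tool: a few lines of Python that take a small, hypothetical machine-readable description of a block's interface and emit a bare SystemVerilog testbench shell that instantiates the block and drives clock and reset. A real generator would work from richer metadata and layer UVM stimulus and checking on top.

```python
# Illustrative only: a hypothetical interface description for one block.
DESIGN = {
    "module": "axi_bridge",                  # invented block name
    "clock": "clk",
    "reset": "rst_n",
    "ports": [("in", "s_axi_awaddr", 32),    # (direction, name, width)
              ("out", "m_axi_awaddr", 32)],
}

def emit_testbench(desc):
    """Emit a plain SystemVerilog testbench shell that instantiates the
    block and drives clock/reset; stimulus would be layered on top."""
    decls = "\n".join(f"  logic [{w - 1}:0] {name};"
                      for _, name, w in desc["ports"])
    conns = ",\n".join(f"    .{name}({name})"
                       for _, name, _ in desc["ports"])
    return f"""module tb_{desc['module']};
  logic {desc['clock']} = 0, {desc['reset']} = 0;
{decls}

  {desc['module']} dut (
    .{desc['clock']}({desc['clock']}),
    .{desc['reset']}({desc['reset']}),
{conns}
  );

  always #5 {desc['clock']} = ~{desc['clock']};  // free-running clock
  initial #20 {desc['reset']} = 1;               // release reset
endmodule
"""

print(emit_testbench(DESIGN))
```

The value of automating even this much is less in the handful of lines themselves than in keeping the testbench mechanically in sync with the design description as it changes, rather than relying on someone to stitch it together by hand.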

Another use case automates performance analysis. “Here you have a design and some data about part of the design, such as the interconnect. The interconnect is getting really complex, to the point where it actually is as big as any other IP on an SoC, and it’s also the linchpin for the whole SoC. All the traffic has to go through it, and if there is congestion you may have a wonderful idea to have a quad-core processing system, but if you can’t pump all the bits through the interconnect then it didn’t really matter,” Brown explained. “People have to look at the interconnect and the system really carefully. Virtual platforms were useful for looking at the stack of software with the peripherals, but this new domain of interconnect traffic congestion, and all the other tricks that the interconnect IP providers are doing to help configure the interconnect to meet the particular traffic requirements of an SoC—this all has to be tested very carefully.”

If it isn’t tested carefully, the call on a phone will drop or some other function will fail. The software stack will fail in some corner case because the system wasn’t tested under all the different traffic loads.
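The underlying arithmetic is simple enough to sketch, even though the real work lies in modeling realistic traffic. Every number below is invented for illustration; the point is only that a quad-core design whose aggregate demand exceeds the fabric's bandwidth will show congestion no matter how capable the cores are.

```python
# Back-of-the-envelope version of the congestion question raised above.
# All figures are assumed for illustration; real analysis uses
# cycle-accurate traffic profiles, not static averages.

CORES = 4
PER_CORE_BW = 3.2   # GB/s each core wants to stream (assumed)
DMA_BW = 2.0        # GB/s of additional DMA traffic (assumed)
FABRIC_BW = 12.8    # GB/s the interconnect can actually carry (assumed)

demand = CORES * PER_CORE_BW + DMA_BW
utilization = demand / FABRIC_BW

print(f"aggregate demand {demand:.1f} GB/s, "
      f"fabric utilization {utilization:.0%}")
if utilization > 1.0:
    # Over-subscribed: some master will stall, which is exactly the kind
    # of corner case that only shows up under realistic traffic loads.
    print("interconnect is over-subscribed for this traffic mix")
```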

Already intertwined
On the RTL verification side, which includes structural analysis, formal analysis and simulation (either hardware-accelerated or software-based), these engines already are intertwined, said Pranav Ashar, chief technology officer at Real Intent.

One benefit of this is that it lends itself to a layered approach to verification. “Chips today are complex enough that you can’t do verification in one shot; you’ve got to do it through a number of steps, and the structural-formal-simulation type of characterization of the verification engines lends itself to a layered approach. You do the structural first, do the easy checks for fast turnaround, and maybe the designer uses it, and it hardens your RTL,” he said. Formal verification and simulation follow.

Second, the close ties between verification engines lend themselves to a more symbiotic approach, and there are different ways in which this happens. One way is that the structural analysis sets up the formal problem to solve, Ashar explained. “What we have seen so far in the formal community (the assertion-based verification community) is that formal is being put out there as a technology looking for an application. What the structural analysis does is analyze your RTL in the context of the specific problem you are solving. It generates an implicit verification specification that is part of your RTL. It also scopes the problem for formal verification so it doesn’t get out of hand.”
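As a loose illustration of how a structural pass can set up and scope the formal problem, the sketch below flags clock-domain crossings in a toy netlist model and turns each one into a narrowly scoped proof obligation. The netlist representation and signal names are invented for the example; a real tool derives this information directly from the RTL.

```python
# Toy model of structural analysis feeding formal verification.
# The netlist and signal names are invented for illustration.

NETLIST = {
    # driver signal -> (driver clock domain, receiver, receiver clock domain)
    "fifo_wr_ptr": ("clk_a", "sync_stage0", "clk_b"),
    "ctrl_mode":   ("clk_a", "core_cfg",    "clk_a"),
    "irq_pulse":   ("clk_b", "irq_sync",    "clk_c"),
}

def find_crossings(netlist):
    """Structural step: flag every signal whose driver and receiver
    sit in different clock domains."""
    return [(sig, src, dst)
            for sig, (src, _, dst) in netlist.items()
            if src != dst]

def scope_for_formal(crossings):
    """Turn each crossing into one small, targeted obligation for the
    formal engine, instead of one monolithic proof over the whole design."""
    return [{"check": "cdc_synchronized", "signal": sig,
             "from_clock": src, "to_clock": dst}
            for sig, src, dst in crossings]

for target in scope_for_formal(find_crossings(NETLIST)):
    print(target)
```

In this framing the structural pass both writes the implicit specification (which signals must be synchronized) and keeps each formal run small enough to converge.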

Finally, he said, you really can’t do justice to some verification problems today with any one of these engines alone, and in those cases the mixing of these engines is mandated.

Lots at stake
Brown noted that automation of the analysis is unprecedented. It’s not that engineering teams could not do this analysis before from a technical perspective, but that they didn’t have the time or the resources to sit down and create the testbench in as many permutations as they require.

In essence, what is being addressed is the human productivity barrier, which has been exacerbated by the complexity of the SoC, the dependency on the software stack, the need to cover all the corner cases, the substantial growth in the size of teams, and the shortening of schedules. “You put all that together and we are going to get these kinds of problems. We are still seeing incredibly expensive programs canceled because they didn’t architect for these corner cases of traffic. I just heard today of a major SoC supplier that spent all of last year working on a chip and then they eventually canceled it because they couldn’t close—not on verification, but on performance. These things are happening all over the place. They are the dirty laundry that people don’t talk about,” he said.


