Commoditizing Constraints

Can a vendor differentiate itself by the quality of its constraint solver? The answer may be different for SystemVerilog and Portable Stimulus.


Preparing articles for Semiconductor Engineering involves talking to a lot of people and then trying to fit their statements together in a way that is logical and fair. Sometimes a subject comes up in one of these calls that is not really on topic, but is still interesting. One such moment happened this week while I was doing research for the Verification 3.0 article.

The topic was constraint solvers. These are part of a testbench and guide the generation of the stimulus vectors used for simulation. Mark Olen, product marketing group manager for Mentor, A Siemens Business, said that in the early days it was not uncommon to hear users report spending roughly as much time in the constraint solver as in the simulator itself, a 50% overhead. He was happy to report that a lot of progress has been made since then, and the overhead is now often in the 10% range.
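For readers who have not built one of these testbenches, here is a minimal SystemVerilog sketch, with hypothetical class and field names, of what a constraint solver is asked to do. Every call to randomize() is a solver invocation, and that is where the overhead Olen describes accumulates.

```systemverilog
// Minimal constrained-random sketch (hypothetical names). Each call to
// randomize() asks the solver to find values for the rand fields that
// satisfy all active constraints simultaneously.
class bus_txn;
  rand bit [31:0] addr;
  rand bit [7:0]  len;

  // Keep accesses word-aligned and inside a legal window.
  constraint c_addr { addr % 4 == 0;
                      addr inside {[32'h1000:32'h1FFC]}; }

  // The legal burst length depends on the address, so the solver must
  // consider both variables together rather than picking each independently.
  constraint c_len  { addr + (len * 4) <= 32'h2000; }
endclass

module tb;
  initial begin
    bus_txn t = new();
    repeat (1000) begin
      if (!t.randomize()) $error("solver found no solution");
      // drive t onto the bus here; solver time accrues per randomize() call
    end
  end
endmodule
```

The more such cross-variable constraints a testbench piles up, the more of the wall-clock time goes to the solver rather than the simulator.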

He also said that “SystemVerilog constraint solvers are reasonably commoditized. They are bundled into the simulators and nobody pays extra for them. I can’t remember the last time we did a simulation benchmark using a constrained random environment. Every benchmark is a set of long directed tests and that is how simulators get benchmarked.”

Harry Foster, chief scientist for Mentor, added, “You will find less and less differentiation over time in terms of constraint solvers. We all get them from the same place, and in the past, differentiators were associated with the testcases you were running and tuning against. That is why you find less differentiation over time. As each constraint solver sees more and more cases, they get tuned until they become a commodity.”

Those are bold statements, so I had a quick chat with Adam Sherer, product management group director in the System & Verification Group at Cadence Design Systems, to see if he agreed. “Users do care about the quality of results when they are cranking every server 24/7/365 in their regression farms and they are not making forward progress. They add more tests and do not seem to get further along in coverage, and at that point they do look at quality of results.”

Sherer believes there is still room for solvers to differentiate themselves. He also doesn’t completely buy the notion that they all come from the same place. “University engines focus on the capability of the underlying solvers. They will build BDD engines and SAT engines and try to find efficiencies in data storage and execution, which are great forms of research. What they don’t have is a massive library of designs, where each sub-class of design has its own unique solution space and constraint sets. A good commercial engine has to face all of those.”

So, while he doesn’t fully agree, he doesn’t entirely disagree, either. Both companies are saying that if your solver sees enough designs it can get better, but there is presumably a limit to the improvements before performance levels off.

Another reason solver overhead may have come down is user education. In the early days of SystemVerilog, there was little guidance about how to construct good constraint sets. Today, all vendors supply collateral, white papers and other forms of guidance, much of it aimed at keeping the solver’s search space small. The sketch below illustrates the flavor of that advice.
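As a hedged, hypothetical illustration of the kind of rewrite such guidance recommends (not taken from any specific vendor paper): rather than asking the solver to search a huge space for values with a special property, randomize a small quantity and derive the rest.

```systemverilog
// Harder on the solver: it must search a 32-bit space for values that
// happen to satisfy the modulo constraint.
class slow_cfg;
  rand bit [31:0] addr;
  constraint c { addr % 1024 == 0;
                 addr < 32'h000F_0000; }
endclass

// Friendlier: randomize the small quantity directly and derive the rest,
// so the solution space shrinks to 960 easily enumerated values.
class fast_cfg;
  rand bit [9:0] page;
  bit [31:0]     addr;
  constraint c { page < 960; }  // 960 pages * 1024 bytes = 32'h000F_0000
  function void post_randomize();
    addr = page * 1024;         // reconstruct the aligned address
  endfunction
endclass
```

Both classes yield the same set of legal addresses, but the second confines the solver to a tiny search, which is exactly the sort of restructuring the white papers teach.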

What both vendors totally agree upon is that Portable Stimulus resets everything. New constraint solvers will be required and new approaches to solving the problem are needed. “At the SoC level the state space that we are verifying is irrationally large,” says Sherer. “Investing in constraint engines and making sure that we have solutions that get us to good verification, because that is the ultimate goal, is a big area of investment and focus.”

It may be a long time before Portable Stimulus solvers become a commodity. No vendor has huge libraries of complete SoC designs at its fingertips to use for optimization, and some of the engines may differ from those used in formal verification, meaning that every vendor has to start from a somewhat clean sheet of paper. That probably means we will see more benchmarks conducted on the quality of the solver.

Good luck to everybody – users and vendors alike.



2 comments

Tudor Timi says:

My recent experience after adding a lot more controllability to the random stimulus of a (probably) medium-sized TB has shown that about 90% of the time is spent solving constraints. If you’re just randomizing items independently of each other, each interface for itself, then you’re probably going to reach the 10% stated in the article, but it’s going to be very difficult to properly steer your stimulus in the directions you want. You then end up in the situation Adam Sherer described, of adding more tests and barely getting a bit more coverage.

I haven’t used Portable Stimulus yet, but I truly hope that it will help in this regard, by removing a lot of the “run and hope” factor that is a big part of constrained-random verification.

Brian Bailey says:

Thanks for providing the data point, Tudor. It is always great to hear from real users about their experiences.
