What’s wrong with my car, and why is that relevant to semiconductor design?
I recently had to take my car to the dealership because the gas-saving “auto-shut-off-while-stopped” feature wasn’t working. The dealer explained that it took two days to debug because the feature “touched on many systems” in the car. In the end, they realized the battery wasn’t fully charged and blamed it on my short 10-mile commute.
Whether that was an honest answer or an example of low-power design gone bad can be discussed over a beer. The more interesting point is how complicated the modern car has become, and how difficult it is to find a faulty part – in an otherwise known-good system. Now imagine trying to debug that same system when many parts thought to be good are failing due to hardware and/or software design errors. Oh, and you’re debugging on a system running at one-millionth of real-time speed. Now you know what it’s like to be an SoC verification engineer…
Looking back at my rapidly depreciating car, we start to understand why mechanics are referred to as “factory-trained technical specialists.” They have a toolbox full of diagnostic equipment (OBD-II reader, voltmeter, diagnostic manual) plus built-in diagnostics software that hopefully identifies faults. Verification engineers also have a toolbox (simulator, test bench, probes, crystal ball, Gentleman Jack) – only they don’t have the benefit of an otherwise known-working system, much less in-depth training and built-in diagnostics. That’s the real challenge facing verification engineers: a lack of deep design knowledge coupled with a lack of visibility into the design.
So how do we fix this? Well, training can help a little … but today’s SoCs are a conglomeration of many, many specialized IPs. No one can be expected to become an expert on all of them. What about built-in diagnostics? They can help – but only after the SoC is somewhat functional, and only after the tests are written. And unfortunately, unlike the car, we aren’t looking for simple faults. We’re looking for corner-case bugs that typically no one thought to test for. So diagnostics can help, but they aren’t the answer.
So where do we go now? We need a built-in solution that combines design knowledge with design visibility and can automatically detect when things go wrong – even if we don’t know what wrong looks like.
The good news: we know what right looks like, and from that we can infer what wrong looks like. That leaves design visibility. Fortunately, a technology for that – assertions – has been around for a long time.
Unfortunately, assertions alone fall short. They are typically created by hand, and even when created by the experts (if only they had the time), those experts focus only on what they think might fail. That leaves all the corner-case bugs no one thought to check for to rise to the surface at the worst possible time (usually right before annual bonuses are calculated). This lack of comprehensive assertions has, in fact, been the downfall of “assertion-based verification.”
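To make this concrete, here is a minimal sketch of what a hand-written assertion amounts to – a property check the engineer had to anticipate in advance. It’s written in Python for readability rather than a hardware assertion language such as SystemVerilog SVA, and the FIFO model and its depth are purely illustrative:

```python
# A hand-written assertion on a toy FIFO model (hypothetical example).
# The key limitation the article describes: the engineer must think of
# the property ("never overflow, never underflow") ahead of time.

class Fifo:
    def __init__(self, depth):
        self.depth = depth
        self.items = []

    def push(self, x):
        # Hand-written assertion: fires only on a failure the designer anticipated.
        assert len(self.items) < self.depth, "FIFO overflow"
        self.items.append(x)

    def pop(self):
        assert self.items, "FIFO underflow"
        return self.items.pop(0)

fifo = Fifo(depth=4)
for i in range(4):
    fifo.push(i)
print(fifo.pop())  # -> 0
```

The assertion catches overflow the moment it happens – but only because someone thought to write it. The corner cases nobody anticipated stay unchecked.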
Finally, there’s a new kid on the block that can help: automated assertion synthesis. Putting it all together, we can build a methodology that works as follows:

1. Run the design through its existing, known-good simulations.
2. Automatically synthesize assertions that capture the behavior observed in those runs – in effect, a machine-built model of what “right” looks like.
3. Carry those assertions forward into subsequent simulations, where they fire the moment the design steps outside that known-good state space.
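The synthesis step can be sketched with a toy example. Real assertion-synthesis tools mine rich temporal properties; the sketch below (all signal names and traces are hypothetical) only learns per-signal value ranges from known-good traces, then flags any later cycle that steps outside them – but the flow is the same: learn “right,” then detect “wrong”:

```python
# Toy sketch of automated assertion synthesis: mine simple per-signal
# invariants (observed min/max ranges) from known-good simulation traces,
# then check later traces against them. Production tools infer far richer
# temporal properties; this only illustrates the learn-then-check flow.

def mine_invariants(good_traces):
    """Learn the 'known-good' value range of each signal across all traces."""
    inv = {}
    for trace in good_traces:
        for cycle in trace:
            for sig, val in cycle.items():
                lo, hi = inv.get(sig, (val, val))
                inv[sig] = (min(lo, val), max(hi, val))
    return inv

def check_trace(invariants, trace):
    """Return violations: (cycle, signal, value) tuples outside the learned range."""
    violations = []
    for t, cycle in enumerate(trace):
        for sig, val in cycle.items():
            lo, hi = invariants.get(sig, (val, val))
            if not (lo <= val <= hi):
                violations.append((t, sig, val))
    return violations

good = [[{"req": 0, "credits": 4}, {"req": 1, "credits": 3}],
        [{"req": 1, "credits": 4}, {"req": 1, "credits": 2}]]
inv = mine_invariants(good)

# A later run where 'credits' underflows past anything seen in a good run:
bad = [{"req": 1, "credits": 4}, {"req": 1, "credits": -1}]
print(check_trace(inv, bad))  # -> [(1, 'credits', -1)]
```

Note that the miner never needed to be told what a credit underflow was – any departure from observed good behavior is flagged, which is exactly how corner cases no one thought to check for get caught.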
But what happens if the design steps outside the “known-good” state space? That means we need to investigate. Typically, one of three situations arises:

1. A coverage hole: the behavior is legal but was never exercised in the known-good runs, so the tests (and the synthesized assertions) need to be extended.
2. A configuration error: the test or environment is driving the design in a way it was never intended to be used.
3. A design bug: the design really is doing the wrong thing – often one of those corner cases no one thought to check for.
So there you have it… a new methodology that automatically detects coverage holes, configuration errors, and design bugs. And though the methodology may not give you a free cup of coffee while you wait for it to detect a problem, you won’t be charged $1,000 every time it does.