There are three aspects to verification. As an industry, how we balance these aspects has led to a quality crisis, wrapped in a debug loop, inside a constrained random test. It’s time for a rebalancing.
A new motivation for rebalancing came to me during a conversation I had a couple of weeks ago at the Agile Alliance Technical Conference. I had the chance to compare my day-to-day responsibilities with those of Lisa Crispin. Lisa is a software test expert who is very well regarded within the Agile development community. Think of her as a Harry Foster/Janick Bergeron type; someone people look to for guidance. In chatting with Lisa, I realized that while verification engineers and software testers fill similar roles within our respective teams, our effectiveness varies wildly based on how we set expectations.
I like to consider what’s expected of me – a verification engineer – as a combination of verified intent, verified implementation and corner case exploration; these are our three aspects of verification. Verifying intent generally happens in the first part of a feature’s lifetime, starting back at the analysis and definition phase. Exploratory testing to push the boundaries happens closer to when a feature is set to be released, after it’s thoroughly tested but before it’s shipped. In the middle is exhaustively verifying implementation, to ensure the RTL we write does what we intend it to do. Hardware verification tends to tackle these aspects in order, though good teams pay attention to all three continuously.
Lisa, a software tester, spends most of her time with intent and exploration. She works with developers as they’re fleshing out feature details before the coding starts. She offers input on usage model, definition and acceptance criteria. She’s also there to watch for holes in the usage model like missing features or interactions a customer might reasonably expect. For the exploration, Lisa writes and carries out exploratory charters, which are kind of like a set of what-if experiments, designed to target corner cases or potential problem areas.
Oddly, verifying implementation is a distant third on Lisa’s to-do list. I say it’s odd because verifying implementation tends to dominate my to-do list! I think the stark difference in focus boils down to how we set expectations. When it comes to implementation quality on Lisa’s team, the burden of proof belongs to the developers. Simply put, when someone writes code it’s expected to work. This is fundamentally opposite to the expectation we set in hardware development. As an industry, the burden of proof falls on the shoulders of the verification team. A buggy implementation is the expectation (which we then verify with our buggy testbenches).
On Lisa’s team, meeting expectations means using development practices that produce high quality code. Two practices worth noting are test-driven development (TDD) and pair programming. TDD and writing code in pairs are what save Lisa from the quagmire of implementation detail and let her focus on the high value exercises of testing intent and exploratory testing.
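For readers who haven’t seen TDD in action, here’s a minimal sketch of the rhythm in Python. The example (a saturating adder) is hypothetical, not from Lisa’s team; the point is the order of operations: the test comes first and encodes the intent, then just enough implementation is written to satisfy it.

```python
# TDD rhythm, sketched on a hypothetical saturating adder:
# 1. Red: write a failing test that pins down the intended behavior.
# 2. Green: write just enough implementation to make it pass.
# 3. Refactor: clean up, re-running the test as a safety net.

def test_saturating_add():
    assert saturating_add(2, 3) == 5          # ordinary case
    assert saturating_add(200, 100) == 255    # corner case captured up front

def saturating_add(a, b, limit=255):
    """Minimal implementation, written after the test: clamp the sum at limit."""
    return min(a + b, limit)

test_saturating_add()  # passes once the implementation is in place
```

Notice that the corner case (saturation) is pinned down before any implementation exists, which is exactly the intent-first posture described above.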
In effect, Lisa’s team has turned verification inside out relative to what we do. While she leans toward making her team’s product better, most of our time is consumed by the bug hunt.
Think what you will about TDD and pairing; but looking at the bug rates they produce, it’s hard to argue they aren’t effective. While they aren’t the norm in hardware development yet, I’m happy to say that hardware design and verification engineers are increasingly experimenting with both. To me it’s a valuable first step in turning our verification approach inside out, for the better.
Now if you’re uncomfortable with the hardware/software analogy or maybe you don’t buy into TDD or pairing, feel free to think in a different direction. But don’t ignore the paradigm we’ve created and what it produces! Ultimately, how we spend our time and money is decided by who assumes the burden of code quality. Today, we pass the buck. That’s wrong… and it’s holding us back.