Seamlessly gliding through abstraction will be a key breakthrough next year.
As 2015 rapidly comes to an end, the key question becomes what the next year will bring. Last year around this time, in my blog “The Next Big Shift In Verification”, I talked about software-driven verification as the next era of verification, following the eras of directed testing and High-level Verification Language (HVL) driven verification. I also referred to our System Development Suite as follows: “Key Cadence innovations in the System Development Suite include the joint infrastructure for Incisive simulation and Palladium acceleration, hot-swap capability for software-based simulation, and a unified front-end for Palladium emulation and Protium FPGA-based prototyping that allows one compiler to target multiple different fabrics. There is more to come in 2015.”
Leading the product management team that owns the related products of course helps with the accuracy of predictions, since we drive the roadmaps, but the connections I referred to had already been in place for a while. We have seen great customer adoption in 2015, and I venture to predict that 2016 will be another breakthrough year.
I summarized the key integration between verification engines—dynamic and static—earlier this year in my blog “Top 15 Integrating Points In The Continuum Of Verification Engines”. The graphic here shows the horizontal integration points between the dynamic verification engines in the System Development Suite.
Let’s pick a specific integration point—simulation and emulation. Several unique enabling capabilities stand out:
• A common compile between the engines that eases the transition between simulation and emulation, as well as the mix of RTL in simulation and emulation, resulting in what the industry calls “simulation acceleration”.
• The ability to merge coverage coming from all engines, leading to faster coverage closure.
• The ability to hot-swap between simulation and emulation, balancing software- and hardware-based execution, eventually enabling a “fast forward” to reach the point in time of interest faster.
• Accelerated Verification IP (AVIP), allowing users to migrate from simulation VIP to acceleration with AVIPs using a common library.
• The ability to combine traditional in-circuit emulation (ICE), which runs fastest, with traditional simulation acceleration, which allows advanced test-bench capabilities—a combination we call In-Circuit Acceleration. It essentially enables users to re-use their different environments, mix abstractions, and balance the use of engines by model availability.
All these integrations are mostly horizontal in nature as they combine engines for verification. There are some vertical aspects here, too—with “vertical” being characterized by mixing abstractions—because we are mixing transaction-level models (TLM) with register-transfer models (RTL), for example, in the hybrid execution of virtual platforms with RTL simulation and emulation.
Reaching even further vertically is the combination with gate-level and timing simulation. Broadcom describes an approach for gate-level acceleration. They were facing long runtimes for complex timing checks, taking several days just for chip initialization. They needed to execute tests at the full-chip level to check the interactions between blocks, as these are a common source of bugs. But the more they executed, of course, the slower the simulation became. They used the “horizontal” integration between Incisive and Palladium to hot-swap between the two. They used emulation to accelerate through the initialization phase with no timing annotations and then hot-swapped back to the Incisive simulator to run with timing once the interesting point in time was reached. Voila. They got through chip initialization in 45 minutes, which constituted a 100x speed-up over pure simulation.
Reaching down to gate-level timing shows nicely how abstractions span from TLM to gate level. Vertical integration goes even further and can, as I described in my blog “Why Implementation Matters To System Design And Software”, reach all the way to the .lib technology files, as we combine activity information from long runs in Palladium emulation with .lib-based power estimation from RTL using our Joules offering as part of the Cadence implementation flow.
What becomes clear is that users need to seamlessly glide through abstractions, and 2016 will be a key breakthrough year for that. The trends look somewhat different from what I had expected in the past. With growing complexity and the growing importance of software, I had always expected that system design using abstractions for all aspects of the system would become dominant. Instead, users seem to mix abstractions whenever possible.
RTL simulation gets faster every year. Emulation throughput increases from generation to generation. FPGA-based prototyping offers the highest speeds, and we have reduced its bring-up time into the week range, allowing it to be used much earlier than in the past. Bottom line, the increasing performance of the dynamic verification engines reduces the need to abstract more aspects of the system. Users can simply combine engines horizontally as needed and combine abstractions vertically, from TLM through RTL to gate level and even to the transistor/technology level by bringing in .lib files.
Closer integration of verification flows—both horizontally and vertically—will be a key trend in 2016.
Exciting times!
Happy Holidays and a healthy, prosperous 2016 to you!