What will be the next drivers of verification productivity growth?
In June 2015, I wrote the blog “Towards A Metric To Measure Verification Computing Efficiency” that introduced what we now refer to here at Cadence as the “productivity wheel” for verification payloads—the sequence of “build”, “allocate”, “run” and “debug” that is repeated thousands of times during a project. It was meant to set up the launch of the Palladium Z1 platform later that year, and we are now structuring most of the improvements in hardware-assisted verification and software development directly around these four axes.
[Figure: Verification Productivity Wheel]
The principle applies to all verification engines, well beyond pure emulation, and in thinking about 2019 verification trends, overall verification throughput is probably the key common characteristic in which we can expect significant growth and improvement.
Looking back at my predictions for 2018, I identified five key trends driving verification: security, safety, application specificity, processor ecosystems and System Design Enablement, all centered around ecosystems. Now that the year is almost over, the key verification highlights of 2018 do indeed fit into these five categories, as I reviewed in my Verification Reflections for 2018, with some exceptions and surprises.
Will any of those go away? No. They will intensify. Especially the processor ecosystem portion, for which I keep an extra batch of popcorn as it is fascinating to watch. And safety, security and application specificity will remain key drivers—the 5G, AI/ML, server and automotive domains will dictate very specific requirements.
So, it is worth looking a bit deeper for 2019 to understand where verification throughput comes from, “under the hood” so to speak, across the engines of formal, simulation, emulation and prototyping, all connected through a verification fabric. It boils down to four areas: scalable performance, unbound capacity including cloud enablement, smart bug hunting and multi-level abstractions.
For scalable performance of dynamic execution engines, 2018 saw the proliferation of faster single-core and multi-core simulation, the latter especially effective for gate-level and DFT workloads. Simulation goes together with the hardware-assisted engines of processor-based emulation and FPGA-based prototyping, which scale speed even further. Scaling happens from x86 to Arm-based servers to emulation processors to FPGAs. 2019 will bring improvements in core performance for all those engines, but also even smarter connection and balancing between the engines, building on what we have already seen. Users will increasingly be shielded from the location at which the verification payload actually executes. Can it wait until next Monday? Simulation will be fine, and the job will be pushed to simulation with scheduled priority. Do I need results within the next 24 hours to confirm that some regression failures have been fixed? Then we had better get access to an emulation or prototyping resource.
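To make that “where does the payload run” decision concrete, here is a minimal Python sketch of a deadline-driven dispatch heuristic. The engine categories come from the paragraph above, but the throughput figures and the Job structure are made up for illustration; this is not how any particular scheduler is implemented.

```python
from dataclasses import dataclass

# Hypothetical, order-of-magnitude throughput figures for illustration only;
# real numbers depend on design size, engine generation and workload mix.
ENGINE_SPEED_HZ = {
    "simulation": 1e2,    # design cycles per second
    "emulation": 1e6,
    "prototyping": 5e6,
}

@dataclass
class Job:
    name: str
    cycles_needed: float   # total design cycles the payload must execute
    deadline_hours: float  # when results are needed

def pick_engine(job: Job) -> str:
    """Pick the least scarce engine that still meets the deadline."""
    for engine in ("simulation", "emulation", "prototyping"):  # cheapest first
        runtime_hours = job.cycles_needed / ENGINE_SPEED_HZ[engine] / 3600
        if runtime_hours <= job.deadline_hours:
            return engine
    return "prototyping"  # fastest option; the deadline may still slip

# A regression that can wait until Monday lands on simulation; an urgent
# overnight check of a 50-billion-cycle payload needs emulation.
print(pick_engine(Job("weekend_regression", cycles_needed=1e7, deadline_hours=72)))  # simulation
print(pick_engine(Job("overnight_check", cycles_needed=5e10, deadline_hours=24)))   # emulation
```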
Unbound capacity for verification jobs is achieved by exploiting massive parallelism everywhere. We see scaling from single-core to multi-core in simulation, adding racks in emulation (our processor-based emulation system now runs designs of up to 4.6 billion gates monolithically at several customers) and adding boards in FPGA-based prototyping. Do expect innovative new ways of prototyping to overcome the traditional speed limitations of prototyping systems that are just connected via cables. And all of this scalability extends into the cloud, of course: more servers added for more simulation parallelism, peak emulation capacity added to enable more parallel execution at crunch time, and so on.
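The capacity arithmetic behind bursting into the cloud is simple. Here is a back-of-the-envelope Python sketch, with made-up regression and pool sizes, of deciding how many parallel workers a regression needs to finish within a wall-clock budget and how many of them must come from the cloud.

```python
import math

def workers_needed(num_tests: int, avg_test_hours: float,
                   wall_clock_budget_hours: float) -> int:
    """Workers required to finish inside the budget, assuming the tests
    are independent and load-balance evenly across workers."""
    total_hours = num_tests * avg_test_hours
    return math.ceil(total_hours / wall_clock_budget_hours)

# Hypothetical numbers: 20,000 half-hour tests, a 12-hour overnight window,
# and 200 on-premises workers available.
on_prem_workers = 200
needed = workers_needed(num_tests=20_000, avg_test_hours=0.5,
                        wall_clock_budget_hours=12)
cloud_burst = max(0, needed - on_prem_workers)
print(f"need {needed} workers, burst {cloud_burst} into the cloud")
# need 834 workers, burst 634 into the cloud
```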
Smart bug hunting comprises both smart collection of debug information and combinations of formal techniques with Verification IP, performance analysis and others to minimize the time it takes to fix defects. The data collection aspects include being much smarter about triggering data collection and optimizing the type of data being collected. It is simply not feasible to collect all data and decide later what to look at (we are talking tens to hundreds of gigabytes per second here), so being smart about data collection is key. Because of the massive amount of data to be handled, this area lends itself very well to techniques utilizing machine learning and artificial intelligence. Partners like Arm have talked about this area in the context of accelerating coverage closure, among other applications.
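As a toy illustration of triggered data collection, the Python sketch below keeps only a rolling pre-trigger window and dumps it, plus a few post-trigger samples, when an event of interest fires; everything else is discarded. The class and its parameters are hypothetical, but the principle is the one that makes emulator-scale debug data volumes manageable.

```python
from collections import deque

class TriggeredCapture:
    """Trigger-based debug-data collection: instead of streaming everything
    to disk, keep a rolling pre-trigger history and dump it (plus some
    post-trigger samples) only when an event of interest fires."""

    def __init__(self, pre_samples: int, post_samples: int):
        self.pre = deque(maxlen=pre_samples)  # rolling pre-trigger history
        self.post_remaining = 0
        self.post_samples = post_samples
        self.dumps = []                       # stands in for files on disk

    def sample(self, data, trigger: bool):
        if self.post_remaining > 0:           # finishing a capture window
            self.dumps[-1].append(data)
            self.post_remaining -= 1
        elif trigger:                         # event of interest fires
            self.dumps.append(list(self.pre) + [data])
            self.post_remaining = self.post_samples
        self.pre.append(data)

# Feed a long run; only the neighborhood of the triggering cycle is kept.
cap = TriggeredCapture(pre_samples=3, post_samples=2)
for cycle in range(100):
    cap.sample(f"state@{cycle}", trigger=(cycle == 50))
print(cap.dumps)  # [['state@47', ..., 'state@52']] -- 6 of 100 samples kept
```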
Multi-level abstractions have been talked about in the context of virtual platform hybrids for quite some time, and they span different engines as well. At the core of it, the window of time during which all components of a system, system on chip or subsystem are available at the same abstraction level (gate, RTL or TLM) is actually fairly short. In practice, at any specific point in a project, some portions of the design will be available only as TLM because the RTL has not been developed yet, others are at RTL and can be emulated or prototyped, and others may be available only as silicon, for which a “simulation model” may not exist. Beyond virtual platform hybrids, which are really intended to enable a shift left of software development, multi-level abstractions are also used in low-power analysis (RTL and gate) and in mixed-signal simulation (Spice, real number models, etc.). As such, multi-level abstraction is really the path towards what we call System Design Enablement, eventually dealing with systems of systems. With software, power and analog being key aspects of development and only growing in importance, multi-level abstraction will become even more important in 2019.
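To sketch what such a hybrid looks like at one point in a project, the small Python illustration below records each block at whatever abstraction level is currently available and maps each level to an execution engine. The block names and the mapping are invented for illustration, not taken from any product.

```python
# Hypothetical snapshot of a hybrid platform mid-project: each block is at
# whatever abstraction level exists today.
DESIGN_STATE = {
    "cpu_subsystem": "TLM",      # fast virtual-platform model; RTL not ready
    "gpu":           "RTL",      # can be emulated or prototyped
    "ddr_phy":       "silicon",  # exists only as a chip, no simulation model
}

# Illustrative mapping from abstraction level to execution engine.
ENGINE_FOR_LEVEL = {
    "TLM":     "virtual platform (host simulation)",
    "RTL":     "emulation or FPGA-based prototyping",
    "gate":    "emulation",
    "silicon": "in-circuit connection via speed adapter",
}

for block, level in DESIGN_STATE.items():
    print(f"{block}: {level} -> {ENGINE_FOR_LEVEL[level]}")
```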
True to my favorite William Gibson quote, “The future is already here, it’s just not evenly distributed”, we saw examples of all of this in 2018 already. Highlights included Samsung’s presentation on job scheduling across emulation at CDNLive, MicroSemi’s (now MicroChip’s) way of balancing FPGA prototyping and emulation for their needs, NVIDIA’s extension of “push-button prototyping” from Palladium Z1 to Protium S1, and Arm’s and Vayavya’s presentation on virtual platform hybrids with emulation at DAC. We can expect the pace of improvement in these four areas to pick up quite dramatically in 2019.
Exciting times ahead!