
Verifying AI Designs Thoroughly And Quickly

Why determinism, scalability, and virtualization are all key to AI verification.


You can’t turn around these days without seeing a reference to AI – even as a consumer. AI, or artificial intelligence, is hot due to the new machine-learning (ML) techniques that are evolving daily. It’s often cited as one of the critical markets for electronics purveyors, but it’s not really a market: it’s a technology. And it’s quietly – or not so quietly – moving into many, many markets. Some of those markets include safety-critical uses, meaning that life and limb can depend on how well it works.

AI is incredibly important, but it differs from many other important technologies in how it’s verified.

Three key requirements
AI/ML verification brings with it three key needs: determinism, scalability, and virtualization. None of these is an unusual hardware emulation requirement, but many other technologies need only two of the three. AI is the perfect storm that demands all three.

ML involves the creation of a model during what is called the “training phase” – at least in its supervised version. That trained model is then deployed in a device or in the cloud for inference, where it is put to use in an application.


Fig. 1: Application of AI/ML designs
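
To make the two phases concrete, here is a minimal sketch in plain NumPy. The linear model, synthetic data, and least-squares fit are illustrative stand-ins for a real network and training flow; the point is only the split between deriving a model and then applying it.

```python
# Minimal sketch of the two phases, using NumPy only. The linear model,
# synthetic data, and least-squares fit are illustrative stand-ins for a
# real network and training flow.
import numpy as np

rng = np.random.default_rng(seed=0)

# --- Training phase: derive a model from a set of labeled examples ---
X = rng.normal(size=(1000, 4))                 # 1,000 training samples, 4 features
true_w = np.array([0.5, -1.2, 0.3, 2.0])       # hidden relationship used to label the data
y = X @ true_w + rng.normal(scale=0.1, size=1000)
w, *_ = np.linalg.lstsq(X, y, rcond=None)      # the trained "model" is just the vector w

# --- Inference phase: the trained model is deployed and applied to new inputs ---
x_new = rng.normal(size=(1, 4))
print("prediction:", x_new @ w)
```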

The training is very sensitive. From a vast set of training examples, you’ll derive a model. Change the order of the training samples by even one, and you’ll get a different model. That different model may work just fine – that’s one of the things about ML: there are many correct solutions. Each may arrive at the same answer, but the path there will be different. AI training techniques include ways of ensuring that your model isn’t biased toward any one training set, but they all rely on a repeatable set of steps and patterns to produce consistent results – because you can’t verify a model that keeps changing.
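
As a rough illustration of that order sensitivity, the toy sketch below (plain NumPy, made-up data, and a one-pass stochastic-gradient fit standing in for real training) produces a different model when the same samples are visited in a different order, yet reproduces the identical model when the order and seed are held fixed.

```python
# Toy illustration of training-order sensitivity: plain NumPy, made-up data,
# and a one-pass stochastic-gradient fit of a linear model standing in for
# real training.
import numpy as np

def sgd_fit(X, y, order, lr=0.1):
    """One epoch of stochastic gradient descent, visiting samples in `order`."""
    w = np.zeros(X.shape[1])
    for i in order:
        err = X[i] @ w - y[i]
        w -= lr * err * X[i]
    return w

rng = np.random.default_rng(seed=42)            # fixed seed -> reproducible data
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.05, size=200)

w_a = sgd_fit(X, y, order=np.arange(200))           # original sample order
w_b = sgd_fit(X, y, order=np.arange(200)[::-1])     # same samples, different order

print("order changed, same model?", np.allclose(w_a, w_b))   # typically False
print("rerun with same order matches?",
      np.allclose(w_a, sgd_fit(X, y, order=np.arange(200))))  # True: repeatable
```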

Likewise, during verification, the test input patterns must remain consistent from run to run. If you feed an AI model in a networking application with, say, random internet traffic pulled from a live network using in-circuit emulation (ICE) techniques, you’ll never completely converge across design iterations, because you can’t compare results from run to run.

This drives the need for determinism.
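
One common way to get that run-to-run consistency is to record a stimulus set once and replay it on every run instead of pulling live traffic. The sketch below is purely conceptual; the file name, packet fields, and send_packet() hook are hypothetical placeholders for whatever the real testbench provides.

```python
# Conceptual sketch of record-and-replay stimulus. The file name, packet
# fields, and send_packet() hook are hypothetical placeholders for whatever
# the real testbench provides.
import json

def record_stimulus(packets, path="stimulus.json"):
    """Capture a stimulus set once, so every run sees exactly the same input."""
    with open(path, "w") as f:
        json.dump(packets, f)

def replay_stimulus(path, send_packet):
    """Drive the design under test with the identical sequence on every run."""
    with open(path) as f:
        for pkt in json.load(f):
            send_packet(pkt)

# Usage with a stand-in driver that just prints each packet:
record_stimulus([{"dst": "10.0.0.1", "len": 64}, {"dst": "10.0.0.2", "len": 128}])
replay_stimulus("stimulus.json", send_packet=print)
```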

AI models themselves involve large numbers of small computations, typically performed on a large array of small computing engines. Their data requirements differ from those of many other applications, changing the way storage is built and accessed. And while the computation for a given model may be done within a cluster, an application may have many such models, resulting in an overall fragmented design.
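
To picture how one model’s work fragments into many small pieces, here is a conceptual sketch of a matrix multiply broken into tiles, with each tile standing in for the job one small engine in a cluster might perform. The tile size and matrix shapes are arbitrary choices for the illustration.

```python
# Conceptual sketch of how one model's computation fragments into many small
# pieces: a matrix multiply split into tiles, each tile standing in for the
# work one small engine in a cluster might do. Sizes are arbitrary.
import numpy as np

def tiled_matmul(A, B, tile=16):
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N))
    for i in range(0, M, tile):          # each (i, j, k) block is one small
        for j in range(0, N, tile):      # unit of work that could be handed
            for k in range(0, K, tile):  # to a separate compute engine
                C[i:i+tile, j:j+tile] += A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile]
    return C

rng = np.random.default_rng(0)
A, B = rng.normal(size=(64, 48)), rng.normal(size=(48, 32))
print(np.allclose(tiled_matmul(A, B), A @ B))   # True: same answer, just fragmented
```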

During development, models can grow much larger as they’re optimized and trained on the full range of inputs they might see. This means that, over the course of a given project – and particularly when a new project builds on a past one – the verification platform must grow or shrink to accommodate the wide range of resources required over the projects’ lifespans, while minimizing any effect on performance.

This drives the need for scalability.

Finally, AI algorithms are new; there is no legacy. That means that, even if you wanted to use ICE, there are few sources of real data from older design implementations that could be used to validate a new implementation. This is all new stuff. As a result, we must build a virtual verification environment.

Even if we could use ICE, a virtual environment is still preferable. During debug, for example, you can’t stop an ICE source’s clock. You may stop looking at data, but the source keeps moving without you. By contrast, a virtualized data source is virtual in all regards, including the clock. So you can stop a design at a critical point, probe around to see what’s happening, and then continue from exactly the same spot. This also reinforces the determinism we’ve already seen we need.
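
A toy sketch helps show why a fully virtual clock matters for debug. The “design” below is just a counter, and the class is purely illustrative (not any real emulator API); the point is that design and stimulus advance only when the testbench steps them, so you can halt at a cycle, inspect state, and resume from exactly that cycle.

```python
# Toy illustration of a fully virtual clock. The "design" is just a counter
# and the class is purely illustrative (not any real emulator API): design
# and stimulus advance only when the testbench steps them.
class VirtualBench:
    def __init__(self, stimulus):
        self.cycle = 0
        self.stimulus = stimulus
        self.dut_state = 0                       # stand-in for design state

    def step(self):
        """Advance design and stimulus together by one virtual clock cycle."""
        self.dut_state += self.stimulus[self.cycle]
        self.cycle += 1

    def run_until(self, breakpoint_cycle):
        while self.cycle < breakpoint_cycle:
            self.step()

bench = VirtualBench(stimulus=[1, 2, 3, 4, 5])
bench.run_until(3)                               # "stop the clock" at a critical point...
print(bench.cycle, bench.dut_state)              # ...probe around: cycle 3, state 6...
bench.run_until(5)                               # ...then continue from the exact same spot
print(bench.cycle, bench.dut_state)              # cycle 5, state 15
```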

And so this drives the need for virtualization.

Emulation characteristics
These three requirements – determinism, scalability, and virtualization – align perfectly with Veloce emulators’ three key pillars.

Verification on Veloce emulators can be completely deterministic. Whether testing hardware or software, you can repeat runs over and over, probing hardware and single-stepping code until every aspect of your design’s behavior has been checked out.

Veloce emulators are scalable from 40 million to 15 billion gates. Whatever the size and complexity of your design and models, the platform is a right-sized resource for verification. As your design scales up, your emulation platform can scale in capacity without compromising performance, ensuring you can complete your verification on schedule.

The complete set of information needed for design verification on the emulator can be virtualized. Whether you leverage any of the many ready-built verification blocks or design your own, you have full visibility into the design and full freedom to control the execution of the verification suite. This includes debugging issues and precisely measuring important system-behavior parameters.


Fig. 2: Virtualization enables running AI/ML frameworks under performance benchmarks

In conclusion
Artificial intelligence and ML are forcing us to think in new ways about design and verification. Emulation, already important for many of the markets into which AI is moving, will be an even more important tool in ensuring that your AI-enabled projects get the verification they critically need, in a timeframe that gets you to market on time.


