A Machine Learning-Based Approach To Formality Equivalence Checking

Automatically determining the right verification strategy based on design characteristics that may present challenges.


By Avinash Palepu, Namrata Shekhar and Paula Neeley

After a long and hard week, it is Friday night and you are ready to relax and unwind with a glass of wine, a sumptuous dinner and a great movie. You turn on Netflix and you expect that it will not only have plenty of pertinent suggestions for you, but also the most appropriate one based on all the previous movies and shows that you have watched over time. After all, it is using machine learning!

Machine learning is the process that powers many of the services we use today: recommendation systems like those on Netflix, YouTube, and Spotify; search engines like Google and Baidu; social media feeds like Facebook and Twitter; voice assistants like Siri and Alexa; the list goes on. In each case, the platform collects as much data about you as possible (what genres you like watching, what links you click, and so on) and uses machine learning to make a highly educated guess about what you might want next. Once you notice the sheer ubiquity of machine learning, you realize how basic and intuitive the process is: find the pattern, predict the pattern, apply the pattern.

So now, is it unreasonable to expect learning-based systems like this in our field of semiconductor design, specifically formal equivalence checking? That is exactly what Formality Equivalence Checking R&D is developing. Simply put, we are creating an equivalence checking system that learns from data to make predictions and decisions. To understand how this learning-based decision-making process is expected to deliver massive productivity gains, let us first characterize the current challenge in the equivalence checking process.

Synopsys’s synthesis solutions, Design Compiler NXT and Fusion Compiler, provide a broad spectrum of optimization techniques, such as retiming, multibit banking, and advanced datapath optimization, that designers want to take advantage of to achieve maximum QoR. These optimizations cause “hard verifications” that stress equivalence checking technologies.

Hard verifications can arise for a number of reasons, but they all come down to one key factor: the two designs being compared are highly dissimilar in structure, even though they perform the same function. When the designs are very dissimilar, the computational cost of proving them equivalent can grow exponentially. The structural differences themselves have many sources: high-level optimization of datapath expressions; different operator architectures between the two designs; don’t-care conditions in the reference design being exploited for aggressive logic optimization in the implementation; large error-correcting networks creating global redundancies; and so on.
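
To make the structural-dissimilarity point concrete, here is a minimal Python sketch (our illustration, not anything from Formality) of two functionally equivalent but structurally different multiply-accumulate blocks, checked by brute force. The exponential blowup of the exhaustive check is exactly why formal tools need smarter solver strategies than simulation.

```python
# Two implementations of the same function (a multiply-accumulate),
# structured very differently, the way a synthesized netlist can
# diverge from its RTL reference.

def reference(a: int, b: int, c: int, width: int = 8) -> int:
    mask = (1 << width) - 1
    return (a * b + c) & mask

def implementation(a: int, b: int, c: int, width: int = 8) -> int:
    # Shift-and-add multiplier, as a datapath optimizer might produce.
    mask = (1 << width) - 1
    acc = c
    for i in range(width):
        if (b >> i) & 1:
            acc += a << i
    return acc & mask

def brute_force_equivalent(width: int = 4) -> bool:
    # Exhaustive comparison: 2**(3*width) input vectors. The cost grows
    # exponentially with width, which is why formal tools rely on solver
    # strategies rather than enumeration to prove equivalence.
    for a in range(1 << width):
        for b in range(1 << width):
            for c in range(1 << width):
                if reference(a, b, c, width) != implementation(a, b, c, width):
                    return False
    return True

print(brute_force_equivalent(4))  # True, but already 4,096 vectors at 4 bits
```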

Formality is solving this problem with a multi-dimensional strategy that combines:

  1. Cutting-edge solver strategies,
  2. Distributed processing that can massively parallelize the strategies for verification, and finally,
  3. A machine learning-based predictive approach that can identify the right solver strategy out of the box.

Formality uses the design topology to partition the reference and implementation designs into smaller, independent verification tasks. When Formality distributed processing (DPX) is enabled, these tasks are dispatched to solvers operating in a distributed processing environment. Verification is performed by a variety of “strategies,” or configurations of the solvers that in practice have been shown to produce faster verifications or allow hard verifications to be resolved. Over time, Formality has developed a large arsenal of such strategies, which are applied in a static ordering until either the logic is verified or Formality reaches its prearranged time-out limit.
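
As a rough mental model, the static ordering described above behaves like the following Python sketch. It is not Formality’s implementation; the strategy names and the dispatch function are hypothetical placeholders.

```python
# A minimal sketch of static strategy ordering: each partitioned
# verification task tries a fixed list of solver strategies in turn
# until one is conclusive or the time budget runs out.

import time

STRATEGY_ORDER = ["default", "bdd_heavy", "sat_heavy", "datapath_rewrite"]

def run_strategy(task, strategy: str, budget_s: float) -> str:
    """Stand-in for dispatching one solver configuration to a DPX worker.
    Would return 'verified', 'failing', or 'inconclusive'."""
    return "inconclusive"  # placeholder result

def verify_task(task, total_budget_s: float = 3600.0) -> str:
    deadline = time.monotonic() + total_budget_s
    for strategy in STRATEGY_ORDER:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return "timeout"  # prearranged time-out limit reached
        result = run_strategy(task, strategy, remaining)
        if result in ("verified", "failing"):
            return result  # conclusive; no need to try further strategies
    return "inconclusive"  # no strategy resolved the task within budget
```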

On the vast majority of designs, a few strategies outperform the rest and are reliable choices to top the static ordering. Harder verifications, however, call for alternative strategies. Choosing the best alternate strategy is not always obvious, and a more serious problem with static ordering is that applying the wrong strategy to any part of a design is costly: verification may run for hours and remain inconclusive. Conversely, choosing the right strategy the first time can yield huge performance gains.

This is the insight behind Formality’s new machine learning approach to dynamic strategy selection within DPX. A supervised learning model is trained to recognize characteristics of a design that may lead to hard verifications or otherwise require the use of alternate strategies. The model provides recommendations, which Formality can use to update the ordering of strategies it applies.
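
In spirit, dynamic selection could look something like the sketch below: a classifier trained on historical outcomes scores each candidate strategy for a partition, and the predicted ranking replaces the static order. This is our hedged illustration using scikit-learn; the features, labels, and training data are synthetic placeholders so the example runs end to end, and Formality’s actual model is not described here.

```python
# A sketch of ML-driven strategy ranking, not Formality's code.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

STRATEGIES = ["default", "bdd_heavy", "sat_heavy", "datapath_rewrite"]

def rank_strategies(model, features):
    # predict_proba yields one probability per strategy class; try the
    # most promising solver configuration first.
    probs = model.predict_proba(np.asarray(features).reshape(1, -1))[0]
    return [model.classes_[i] for i in np.argsort(probs)[::-1]]

# Synthetic stand-in for historical data: one row of design
# characteristics per solved task, labeled with the winning strategy.
rng = np.random.default_rng(0)
X = rng.random((200, 5))              # 5 hypothetical numeric features
y = rng.choice(STRATEGIES, size=200)  # strategy that solved each task
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

print(rank_strategies(model, rng.random(5)))
```

Ranking by predicted probability, rather than taking a single hard prediction, preserves a fallback order: if the top-ranked strategy proves inconclusive, the scheduler can continue down the list just as it does with static ordering.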

Formality researchers Namrata Shekhar and Paula Neeley have identified a number of design characteristics that may present challenges to verification efforts. More than 30 such characteristics have already been selected, and they continue to experiment to find more. These characteristics form the feature set on which the model is trained.
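
The article does not enumerate those characteristics, so the sketch below invents a handful of plausible ones purely to show how mixed design traits could be flattened into the numeric feature vectors a model trains on; every name here is an assumption.

```python
# Hypothetical design characteristics for one verification partition.

from dataclasses import dataclass

@dataclass
class PartitionStats:
    gate_count: int               # size of the partitioned cone of logic
    datapath_operator_count: int  # multipliers, adders, shifters, etc.
    retimed: bool                 # registers moved across logic by synthesis
    multibit_banked: bool         # flops merged into multibit registers
    dont_care_fraction: float     # share of don't-care conditions exploited

def to_feature_vector(s: PartitionStats) -> list[float]:
    # Flatten mixed-type characteristics into the kind of numeric vector
    # a classifier (like the one sketched above) can consume.
    return [
        float(s.gate_count),
        float(s.datapath_operator_count),
        1.0 if s.retimed else 0.0,
        1.0 if s.multibit_banked else 0.0,
        s.dont_care_fraction,
    ]
```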

A variety of models were tested, and the model ultimately chosen exhibits state-of-the-art performance, scalability, and efficiency in compute time and memory. Current prototypes have shown nearly 95% accuracy on test sets, and in the future, Formality researchers will use customer-specific training data to further adapt strategy recommendations to each customer’s needs.

This is a very exciting time for Formality. The supervised learning-based approach we have invested in is showing tremendous predictive success, which means we are on the threshold of delivering fantastic productivity gains to our customers running equivalence checking.

Let us explore this a little more. Designers using third-party logic equivalence checking tools typically go through multiple laborious iterations during the setup and verification phases. Without a learning-based predictive approach, they have no choice but to try different, disjointed recipes of solvers in the hope that one will resolve verification. In reality, these recipes may or may not achieve conclusive verification. This journey, which requires deep expert user intervention, can take days to weeks to converge on a single block.

What we hear from our designers is that in an average project, a block goes through about 50 synthesis runs. They would like to run more often, but the limiting factor is the long or inconclusive equivalence checking runs with third-party tools. Because equivalence checking tends to become the bottleneck, designers are ultimately forced to switch off optimizations so those tools can successfully verify the design.

In the above scenario, engineering managers may be able to quantify the runtime penalties in machine hours and budget for them, but it is nearly impossible to assess the man hours of expert user intervention required to achieve equivalence checking signoff. Because that metric is so hard to pin down, managers choose to be risk-averse and end up sacrificing their QoR goals for the sake of verifiability.

Imagine now a Formality ML-based system that rides on powerful parallelization with distributed processing, scaling to multiple workers. Imagine a system that removes the need for expert user intervention by accurately identifying the right recipe to solve the problem. Imagine this done out of the box, with no setup iterations and no verification restarts. Imagine the machine-hour and man-hour productivity gains. Imagine!


