Does It Take A Catastrophe?

There are four domains of verification that must be addressed in mixed-signal chips.

What makes a company search for new verification methods and tools?

Sometimes organizations change proactively, because they are wise and want to avoid problems; but sadly, more often it is a catastrophe that forces change. This was the case with a large U.S. supplier of safety-critical and high-reliability ICs. After a failed chip, it finally moved from simply verifying the analog and digital parts of its chips separately to a “UVM for mixed-signal” methodology using an integrated, single-kernel analog/digital verification tool from Mentor Graphics. This organization then found a number of critical bugs in its next chip that, had they escaped into production, would have killed the chip’s functionality.

But wait! What are they doing using UVM (Universal Verification Methodology), a widely used digital technique, on a mixed-signal chip? How does this help with the analog portion of the chip?

UVM is a metric-driven approach that uses constrained random stimulus generation and other techniques to achieve verification closure on digital designs. The good news is that it has been extended to analyze functional coverage of mixed-signal designs by adding assertion properties for analog signals. This is what we call “UVM-MS.” UVM-MS coupled with an integrated analog/digital verification tool has already amassed an impressive and growing number of successes like the one cited above. In fact, leading organizations are using not only UVM-MS but also a veritable arsenal of techniques and tools to tame the mixed-signal beast in all its forms — from IP to full chip.
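To make the idea concrete, here is a rough sketch of the three ingredients UVM-MS brings together: constrained-random stimulus, an assertion on an analog quantity, and functional coverage over the analog stimulus space. It is written in plain Python rather than the SystemVerilog/UVM code an actual testbench would use, and the pll_model() function, the supply and frequency ranges, and the lock-time limit are hypothetical placeholders, not a real design or flow.

    import random

    def pll_model(vdd, freq_mhz):
        """Hypothetical stand-in for a simulated analog block; returns a lock time in microseconds."""
        return 2.0 + 0.5 * abs(1.2 - vdd) + 0.001 * freq_mhz

    coverage = {"low_vdd": 0, "nom_vdd": 0, "high_vdd": 0}
    failures = []

    for _ in range(1000):
        # Constrained-random stimulus: supply and reference frequency within legal ranges.
        vdd = random.uniform(1.08, 1.32)        # volts
        freq_mhz = random.uniform(50.0, 200.0)  # megahertz

        lock_time_us = pll_model(vdd, freq_mhz)

        # Analog "assertion": the lock time must stay within the (hypothetical) spec limit.
        if lock_time_us > 2.2:
            failures.append((vdd, freq_mhz, lock_time_us))

        # Functional-coverage bins over the analog stimulus space.
        if vdd < 1.14:
            coverage["low_vdd"] += 1
        elif vdd > 1.26:
            coverage["high_vdd"] += 1
        else:
            coverage["nom_vdd"] += 1

    print("coverage:", coverage)
    print("assertion failures:", len(failures))

The point is not the arithmetic; it is that the same metric-driven loop digital teams already trust can be closed over analog behavior.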

But first, they have to straighten out their thinking and treat design and verification as different yet concurrent steps. The goal of design is to implement the requirements, and the goal of verification is to prove that the requirements have been implemented.

Further, there are four domains of verification, each of which must be addressed for mixed-signal chips:

  1. Functional verification to prove the implementation works to specification.
  2. Parametric verification to confirm that the specification’s numerical requirements, such as gain, jitter, noise, and frequency, have been achieved (a simple sketch of one such check appears after this list).
  3. Implementation verification to ensure the IC meets its “real-world” analog requirements, including thermal, substrate coupling, and electrostatic discharge.
  4. Reliability verification to prove that the functional and parametric requirements will be met over the prescribed life and expected conditions of the chip. This domain includes areas like power dissipation, negative bias temperature instability, and time-dependent gate oxide breakdown.
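As a flavor of what a parametric check looks like, the sketch below measures the gain of a synthetic amplifier waveform and compares it against a hypothetical 20 dB, plus or minus 1 dB, datasheet requirement. In a real flow the waveforms would come from the analog simulator and the limits from the specification; everything here is an illustrative assumption.

    import math

    def rms(samples):
        return math.sqrt(sum(s * s for s in samples) / len(samples))

    # Synthetic waveforms; in a real flow these would come from the analog simulator.
    t = [i / 1000.0 for i in range(1000)]
    vin = [0.010 * math.sin(2 * math.pi * 10 * x) for x in t]   # 10 mV input
    vout = [0.098 * math.sin(2 * math.pi * 10 * x) for x in t]  # roughly 9.8x larger output

    gain_db = 20 * math.log10(rms(vout) / rms(vin))
    spec_db, tol_db = 20.0, 1.0  # hypothetical datasheet numbers

    assert abs(gain_db - spec_db) <= tol_db, f"gain {gain_db:.2f} dB is outside the spec window"
    print(f"measured gain: {gain_db:.2f} dB (spec {spec_db} +/- {tol_db} dB)")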

Lastly, sophisticated organizations hell-bent on avoiding delayed tape-outs and chip re-spins are becoming modeling experts. They have embraced the use of event-driven models (including real number modeling) of analog hardware in their digital simulations. They create continuous-time models (behavioral models) to use with their analog simulations, and they have moved from simple analog-digital co-simulation to an integrated, single-kernel analog/digital verification tool that embraces sophisticated concepts like UVM and UPF (Unified Power Format).

These verification tools must also be flexible enough to allow top-down design and bottom-up verification of multimillion-gate AMS SoC designs, and they must elegantly enable multiple combinations of languages and verification algorithms. The importance of selecting the right AMS verification tool cannot be emphasized enough, for only through automation can design and verification engineers build on previous verification investments.
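The real-number-modeling idea itself is simple enough to show in a few lines. The sketch below uses Python as a stand-in for the SystemVerilog real-number model an actual flow would use: an RC low-pass filter is reduced to a discrete-time update evaluated only at digital clock events, rather than being solved by an analog engine. The filter, time constant, and clock rate are illustrative assumptions.

    class RCFilterRNM:
        """First-order RC low-pass filter as a real-number model, updated only at clock events."""

        def __init__(self, tau_s):
            self.tau_s = tau_s
            self.vout = 0.0

        def on_clock(self, vin, dt_s):
            # One discrete-time step of dVout/dt = (Vin - Vout) / tau -- good enough for
            # functional verification, even though it is nowhere near SPICE-accurate.
            self.vout += (vin - self.vout) * (dt_s / self.tau_s)
            return self.vout

    # Drive the model from an event-driven loop, the way a digital simulator would.
    filt = RCFilterRNM(tau_s=1e-6)
    dt = 1e-7  # 10 MHz clock period, an illustrative assumption
    for _ in range(50):
        vout = filt.on_clock(vin=1.0, dt_s=dt)  # step input applied at time zero
    print(f"output after 50 clock events: {vout:.3f} V (settling toward 1.0 V)")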
Is it time to rethink your mixed-signal verification methods and tools, or will you wait for your next catastrophe?


