Validating LVF data for accuracy and correctness is a key factor in achieving timing closure and silicon success.
Variation modeling has evolved over the past several years from a single derating factor that represents on-chip variation (OCV), to Liberty Variation Format (LVF), today’s leading standard format that encapsulates variation information in timing libraries (.libs).
LVF data is considered a requirement for advanced process nodes at 22nm and below. At the smallest process nodes, such as 7nm and 5nm, timing attributes such as delays and constraints may change by as much as 50%-100% of the nominal delay due to variation. This means that incorrect LVF values, if not identified and fixed, will likely cause timing closure issues and potential silicon failure.
Since LVF characterization involves Monte Carlo analysis of the library cell SPICE netlists, most of today's characterization methodologies utilize approximations to make runtime feasible for production schedules. As a result, validating LVF data for accuracy and correctness has become a key factor in achieving timing closure and silicon success.
Mentor, a Siemens Business, provides comprehensive, closed-loop verification for LVF data. Mentor's Solido Analytics uses machine learning for broad coverage of LVF data, automatically identifying outliers and potential issues within the entire data set. Solido Variation Designer completes the verification loop by running full Monte Carlo-equivalent verification on the problem areas identified by Solido Analytics.
Variation modeling in .libs has been in use in the semiconductor industry for more than a decade. The main purpose of variation modeling is to account for local silicon differences between what is drawn by circuit layout designers using EDA tools and what is physically manufactured using the photolithography process. Because there is no feasible way to model these effects deterministically today, all of these effects are cumulatively lumped into “on-chip variation” (OCV) values that add pessimism into timing libraries.
Timing library variation modeling methods have changed significantly over the years, due to the increasing OCV impact at each smaller process node. While a single derating factor to add pessimism worked for larger process nodes such as 130nm and above, today's advanced process nodes at 22nm and smaller require a finer-grained assignment of variation values.
For these smaller process nodes, Liberty Variation Format (LVF) has emerged as the leading standard format used to encapsulate variation information for standard cells, memories and custom macros. If you have an advanced process node library, chances are that your variation modeling for delays and constraints is described in an LVF .lib.
LVF is an extension to the Liberty format that adds statistical variation information to timing measurements.
Nominal timing libraries contain numerous lookup tables that include timing information such as cell delays, transition times and setup and hold constraints for all cells in the library. LVF extends that information with additional tables for early and late statistical variation (sigma) values of each measurement.
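For illustration, a heavily simplified sketch of what such an arc might look like in an LVF .lib is shown below. The group and attribute names (e.g., ocv_sigma_cell_fall, sigma_type) follow the LVF extension, but the template name and all numeric values here are hypothetical:

```
pin (Y) {
  timing () {
    related_pin : "A";
    /* Nominal delay table: index_1 = input slew, index_2 = output load */
    cell_fall (delay_template_2x2) {
      index_1 ("0.01, 0.05");
      index_2 ("0.001, 0.004");
      values ("0.020, 0.034", "0.027, 0.041");
    }
    /* LVF extension: late sigma for the same arc, used to add pessimism
       during setup analysis */
    ocv_sigma_cell_fall (delay_template_2x2) {
      sigma_type : "late";
      index_1 ("0.01, 0.05");
      index_2 ("0.001, 0.004");
      values ("0.002, 0.004", "0.003, 0.005");
    }
  }
}
```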
During static timing analysis (STA) of digital designs, the timing tool includes the LVF sigma value to add pessimism to the timing path. As a simplistic example, for a timing arc of a combinational inverter, when the STA tool calculates the path delay for setup timing, it will add the value from the late sigma LVF table to the cell delay. This increases pessimism for that timing path.
The following describes what happens during the cell delay calculation in our simplistic example:
In setup timing mode: cell fall delay with LVF = nominal cell fall delay + LVF cell fall delay sigma (late)
In practice, STA tools are typically set to multiply the LVF sigma value by 3, to take into account a 3-sigma variation when performing timing analysis.
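A minimal Python sketch of this arithmetic is shown below. The function name, variable names, and numbers are illustrative; real STA tools perform table lookups and interpolation per arc rather than a single scalar calculation:

```python
def derated_delay(nominal_delay, sigma_late, sigma_early, n_sigma=3.0, mode="setup"):
    """Apply an LVF sigma to a nominal cell delay, as an STA tool might.

    For setup (late) analysis, the late sigma is added to make the path
    slower; for hold (early) analysis, the early sigma is subtracted to
    make the path faster. n_sigma=3.0 models a 3-sigma variation target.
    """
    if mode == "setup":
        return nominal_delay + n_sigma * sigma_late
    else:  # hold
        return nominal_delay - n_sigma * sigma_early

# Example: a 20 ps cell fall delay with a 1.5 ps late sigma
print(derated_delay(20.0, sigma_late=1.5, sigma_early=1.2))  # 24.5 ps for setup
```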
With the introduction of Moments, LVF also contains other statistical measurements: standard deviation, the mean shift from nominal, and skewness (Figure 1 and Figure 2). This enables more accurate modeling of the statistical distribution of each measured value, especially for non-Gaussian distributions.
Figure 1: LVF .libs with Moments contain the standard deviation values for each measured entry
Figure 2: LVF .libs with Moments also include higher-order statistical moments (e.g., skewness) for each measured entry
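For a concrete picture of what these moment values describe, here is a hedged Python sketch that computes them from a set of Monte Carlo delay samples. The sample data is synthetic, and a production characterization tool's extraction flow will differ:

```python
import numpy as np
from scipy.stats import skew

def lvf_moments(mc_delays, nominal_delay):
    """Return (mean_shift, std_dev, skewness) for one Monte Carlo sample set.

    mc_delays: delays from Monte Carlo SPICE runs of a single arc/slew/load point.
    nominal_delay: the corresponding nominal .lib delay value.
    """
    mc_delays = np.asarray(mc_delays)
    mean_shift = mc_delays.mean() - nominal_delay   # shift of the mean from nominal
    std_dev = mc_delays.std(ddof=1)                 # sample standard deviation
    skewness = skew(mc_delays)                      # asymmetry of the distribution
    return mean_shift, std_dev, skewness

# Example with a synthetic, slightly skewed delay distribution (values in ps)
rng = np.random.default_rng(0)
samples = 20.0 + rng.lognormal(mean=0.0, sigma=0.4, size=3000)
print(lvf_moments(samples, nominal_delay=21.0))
```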
At advanced process nodes such as 7nm and 5nm, timing attributes such as delays and constraints may change by as much as 50%-100% of the nominal delay due to the variation component from LVF .libs. This means incorrect LVF data can easily invalidate your chip's timing analysis, regardless of the accuracy of your nominal timing .libs.
As shown above, LVF variation models contain a large amount of statistical variation information, unlike nominal value timing models. Because of that, characterizing LVF models requires Monte Carlo analysis, resulting in a lengthier characterization process. Using brute-force Monte Carlo to characterize all LVF values in a .lib would result in thousands of Monte Carlo SPICE simulations for each table entry, increasing characterization runtime by several orders of magnitude. For obvious reasons, this is not a feasible approach for characterizing entire .libs.
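A back-of-envelope calculation makes the scale clear. All of the counts below are hypothetical, but they are representative of why brute-force characterization is impractical:

```python
# Back-of-envelope count of brute-force Monte Carlo SPICE simulations
# for one PVT corner of one library. All numbers are illustrative.
cells = 1_000            # standard cells in the library
arcs_per_cell = 8        # timing arcs per cell (delays and constraints)
table_entries = 7 * 7    # slew x load entries per lookup table
mc_samples = 3_000       # Monte Carlo samples per entry for a stable 3-sigma estimate

total_sims = cells * arcs_per_cell * table_entries * mc_samples
print(f"{total_sims:,} SPICE simulations")  # 1,176,000,000 per corner
```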
To make LVF characterization feasible, characterization tools use various techniques, such as netlist reduction and sensitivity-based approximations, to reduce runtime. These approximations reduce runtimes to "only" 5x-10x of nominal timing characterization runtimes. However, they often also introduce inaccuracies into the resulting .libs, leading to incorrect STA results and potentially causing silicon failure.
One common source of inaccuracy in LVF data is the difference between brute-force and approximated Monte Carlo results in the long tail of distributions (Figure 3). LVF data is typically measured at 3 sigma (3σ). For long-tail distributions, even with a brute-force Monte Carlo approach, a given difference in sigma corresponds to a larger difference in output value (e.g., delay or constraint). Therefore, any inaccuracy introduced by approximations during characterization amplifies these differences in output value, producing much larger errors in the final LVF data.
Figure 3: Inaccuracies in long tail values used for LVF data can lead to timing differences and potential silicon failure
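This tail-amplification effect can be reproduced with a short, hedged Python sketch: for a skewed (long-tail) distribution, the true 3σ-equivalent quantile sits well beyond the Gaussian estimate of mean + 3·sigma, so any approximation error in the tail translates into a large delay error. The distributions below are synthetic:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
p3 = norm.cdf(3.0)  # cumulative probability at +3 sigma (~0.99865)

# Gaussian-like vs long-tail (lognormal) synthetic delay distributions (ps)
gauss = rng.normal(loc=25.0, scale=2.0, size=200_000)
tail = 20.0 + rng.lognormal(mean=1.4, sigma=0.45, size=200_000)

for name, d in [("gaussian", gauss), ("long-tail", tail)]:
    empirical = np.quantile(d, p3)           # true 3-sigma-equivalent delay
    gaussian_est = d.mean() + 3.0 * d.std()  # mean + 3*sigma approximation
    print(f"{name}: empirical={empirical:.2f}, mean+3*std={gaussian_est:.2f}")
```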
A comprehensive and reliable methodology to validate LVF data is crucial in today's design flows. Without this step, the design team can be exposed to faulty or noisy LVF values that may sway timing results by 50%-100% outside of production-accurate ranges, resulting in timing closure issues, schedule delays, and potential silicon failure.
A key step in effective variation modeling for standard cells and custom macros at advanced process nodes is a highly reliable validation methodology for the variation models. The verification methodology should have broad coverage to account for contributors to variation effects across all process, voltage and temperature (PVT) corners, and should also provide full Monte Carlo-equivalent verification for any problem areas.
Traditional validation methods, such as performing brute-force Monte Carlo on randomly selected LVF data points, are extremely slow; providing adequate coverage for entire LVF .libs containing gigabytes of data is not feasible this way. Rule-based checks can detect structural and syntax issues reasonably well, but are generally unreliable at detecting outliers or faulty/noisy LVF data values.
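To illustrate that gap, the hedged sketch below implements the kind of simple rule-based bounds check such flows use; the thresholds and data layout are hypothetical. It catches a structurally invalid (negative) sigma, but a noisy-yet-plausible value passes unnoticed:

```python
def rule_based_checks(nominal, sigma, max_sigma_ratio=0.5):
    """Flag structurally suspicious LVF entries; bounds are illustrative.

    nominal, sigma: parallel lists of nominal delays and LVF sigmas.
    Returns the indices of entries that violate the simple rules.
    """
    flagged = []
    for i, (nom, sig) in enumerate(zip(nominal, sigma)):
        if sig < 0:                                    # sigma must be non-negative
            flagged.append(i)
        elif nom > 0 and sig > max_sigma_ratio * nom:  # implausibly large variation
            flagged.append(i)
    return flagged

# The noisy-but-plausible sigma at index 1 slips through; only the
# negative sigma at index 2 is caught.
print(rule_based_checks(nominal=[20.0, 22.0, 24.0], sigma=[1.5, 9.0, -0.1]))
```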
Unlike traditional methods, Mentor’s Solido products provide comprehensive, closed-loop verification for LVF by leveraging machine learning methods. Solido Analytics uses machine learning to analyze a library’s full variation model data set across all PVTs, automatically identifying outliers and potential issues. Solido Variation Designer completes the verification loop by deep-diving into potential problem areas and running full Monte Carlo-equivalent verification on those data points (Figure 4).
The entire process achieves accuracy equivalent to running brute-force Monte Carlo, but with 100X fewer simulations (i.e., it runs 100X faster), and works at 3-sigma and higher sigma targets.
Figure 4: Closed-loop LVF validation: Solido Analytics provides full-coverage analysis, finding outliers in a "sea" of billions of values; Solido Variation Designer provides Monte Carlo- and SPICE-accurate results across all PVTs
Variation modeling using LVF allows chip designers to encapsulate statistical variation data to supplement nominal timing values, and is required for advanced process nodes at 22nm and below.
Many approximations are used during LVF characterization due to the long runtimes required for Monte Carlo analysis. These approximations can produce inaccurate or incorrect LVF data, which in turn may cause timing closure issues and silicon failure. Therefore, LVF validation is a crucial step in design flows that use LVF.
Mentor provides comprehensive, closed-loop verification for LVF data. Mentor's Solido Analytics uses machine learning for broad coverage of LVF data, automatically identifying outliers and potential issues within the entire data set. Solido Variation Designer completes the verification loop by running full Monte Carlo-equivalent verification on the problem areas identified by Analytics.