
Improving Library Characterization Quality And Runtime With Machine Learning

Increasingly specialized process technologies mean it’s time to look at new library characterization flows.


By Megan Marsh and Wei-Lii Tan

Today’s semiconductor applications, ranging from advanced sensing applications, IoT, and edge computing devices to high-performance computing and dedicated AI chips, constantly push the boundaries of attainable power, performance, and area (PPA) metrics. The race to design and ship these innovative devices has resulted in a focused, time-to-market-driven effort to improve total design schedule turnaround time while using the most suitable process technology and IP for the job.

Library characterization plays a key role in the drive towards achieving higher PPA metrics in less design time. A large portion of chip area is dedicated to digital logic, memories, I/O, and custom IP that are designed and implemented with static timing analysis-based digital methodologies, which rely on Liberty models from characterized libraries. Therefore, the ability to characterize libraries efficiently and accurately across all intended process, voltage, and temperature (PVT) conditions is a critical requirement for full-chip and block-level design flows.

The characterization challenge
Using different processes and libraries for different semiconductor applications is hardly a new concept. However, recent years have seen a rise in specialized process technologies serving different needs, such as leading-edge FinFET nodes for ultra-high-performance computing, FD-SOI for low-power and IoT applications, and other specialized process technologies or variants for automotive, medical, and other applications. This specialization, compounded by application-specific library components and custom IP, leads to a significant increase in the number of libraries, library components, operating PVTs, and types of data being characterized.

The typical library characterization flow (Figure 1) involves running SPICE simulations on all library components (such as standard cells, custom blocks, and memories) across a set of PVT conditions that fully cover the intended operating conditions. This can require 10 to 100 million SPICE simulation runs for the entire library. The output of characterization is a set of Liberty (.LIB) models that fully encapsulate properties such as timing, power, and noise for each of the library components.
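To make the scale concrete, the short Python sketch below enumerates the simulation jobs implied by a deliberately tiny, hypothetical library: every cell is simulated at every PVT corner, at every point of its input-slew/output-load timing grid. The cell names, corner values, and table sizes are illustrative assumptions, not data from any real library.

```python
from itertools import product

# Hypothetical library contents: a few cells, a few PVT corners, and a
# 7x7 input-slew / output-load grid per timing table (all illustrative).
cells = ["INVX1", "NAND2X1", "DFFX1"]
corners = [("ss", 0.72, 125), ("tt", 0.80, 25), ("ff", 0.88, -40)]  # (process, Vdd, temp C)
slew_steps, load_steps = 7, 7

jobs = []
for cell, (proc, vdd, temp) in product(cells, corners):
    for slew_idx, load_idx in product(range(slew_steps), range(load_steps)):
        # Each job corresponds to one SPICE simulation measuring one table
        # point of one arc for one cell at one PVT corner.
        jobs.append((cell, proc, vdd, temp, slew_idx, load_idx))

print(f"{len(jobs)} SPICE simulations for {len(cells)} cells x {len(corners)} corners")
```

Even this toy setup produces several hundred jobs; production libraries with thousands of cells, dozens of corners, and multiple timing arcs per cell are how the total climbs into the tens of millions of simulations.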


Figure 1: The traditional library characterization flow.

The traditional method of library characterization has served the industry well for decades. Unfortunately, today’s library characterization and validation workflows have become increasingly expensive in computation and engineering effort due to the complexity and volume of characterized data. As characterization needs exceed the scalability of traditional methodologies, the risk of schedule delays, incomplete verification of characterized results, and re-spins due to chip failures grows.

The main challenges faced by characterization teams today can be categorized into five major types:

  • Total characterization runtime/throughput
  • Accuracy or quality of characterized results
  • Incremental PVT corner characterization
  • Liberty model validation
  • Debugging and fixing

Total characterization runtime/throughput
Characterizing a new library requires millions of simulations and often takes weeks to months to complete, even with a reasonably large compute cluster. This puts considerable strain on hardware resources and, more importantly, lengthens design tape-out schedules. If library teams cannot keep up with the schedule demands of multiple design tape-outs, there is a real risk of library characterization becoming a bottleneck in the production schedule.
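A back-of-the-envelope calculation shows why such runs stretch into weeks even on a large cluster. The figures below are assumptions for illustration, not benchmark data.

```python
# All numbers are assumed for illustration.
total_sims      = 100_000_000   # SPICE simulations for the full library
avg_sim_seconds = 30.0          # average runtime per simulation
cluster_slots   = 2_000         # concurrently available compute slots

cpu_hours       = total_sims * avg_sim_seconds / 3600
wall_clock_days = cpu_hours / cluster_slots / 24

print(f"{cpu_hours:,.0f} CPU-hours, about {wall_clock_days:.0f} days of wall-clock time")
```

Under these assumptions the run consumes roughly 830,000 CPU-hours, or about 17 days of pure simulation time, before queueing overhead, failed-run re-characterization, and verification passes are added.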

Accuracy or quality of results
In many situations, characterization simulations simply cannot be run at production settings all the time. For example, large analog circuits characterized over many PVTs, or the many memory configurations generated by a memory compiler, might require that characterization be achieved partly by simulation and partly by a combination of interpolation and applied margins. The downsides of this approach are potential timing or functional errors due to inaccuracies or, more commonly, overdesign due to margins, leading to non-optimal power/performance/area tradeoffs.
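The sketch below illustrates that interpolate-and-margin pattern for a single table entry: the delay at an uncharacterized supply voltage is estimated from two simulated corners and then padded with a guard-band before being written to the .lib. The voltages, delays, and margin are made-up values used only for illustration.

```python
def interpolate_delay(v_lo, d_lo, v_hi, d_hi, v_target):
    """Linearly interpolate cell delay between two characterized voltages."""
    frac = (v_target - v_lo) / (v_hi - v_lo)
    return d_lo + frac * (d_hi - d_lo)

# Two simulated corners (voltage in V, delay in ns); values are illustrative.
d_estimate = interpolate_delay(0.72, 0.152, 0.88, 0.097, v_target=0.80)

margin = 1.08                     # 8% pessimistic guard-band (assumed)
d_model = d_estimate * margin     # value written to the Liberty table

print(f"interpolated: {d_estimate:.4f} ns, margined: {d_model:.4f} ns")
```

The margin protects against underestimating delay, but every margined entry adds pessimism that downstream implementation tools must design around, which is exactly the overdesign and PPA cost noted above.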

Incremental PVT corner characterization
A common issue faced by full-chip/block-level design implementation and signoff (product) teams is the lack of Liberty models at a particular PVT corner. This is especially common when the team is using externally supplied libraries and has to request additional corners from its supplier. Alternatively, product teams might choose to characterize the new PVT corner themselves, but this requires matching the library provider’s characterization environment in order to characterize the additional corner correctly. Either approach incurs a hefty schedule cost, measured in weeks or more of additional turnaround time.

Liberty model validation
Downstream static timing analysis (STA)-based tools operate under the assumption that the Liberty models are “golden.” Therefore, characterized Liberty model files must be verified for accuracy and correctness by the library team. First-generation Liberty verification tools perform static, rule-based checks, which can only detect what they are programmed to find. This allows many critical, potentially design-breaking issues to remain hidden, resulting in poor PPA metrics or re-spins due to chip failure.
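For contrast, here is what a first-generation, rule-based check looks like in spirit: a hand-coded rule that flags timing tables whose delay does not increase monotonically with output load. The table values are synthetic, and a real checker would parse them from the Liberty file rather than hard-code them.

```python
def check_monotonic_vs_load(delay_table):
    """delay_table[slew_idx][load_idx] -> delay (ns); return rule violations."""
    violations = []
    for s, row in enumerate(delay_table):
        for l in range(1, len(row)):
            if row[l] < row[l - 1]:
                violations.append((s, l, row[l - 1], row[l]))
    return violations

example_table = [
    [0.021, 0.034, 0.055, 0.090],
    [0.025, 0.038, 0.036, 0.098],   # dip at load index 2 violates the rule
]
for s, l, prev, cur in check_monotonic_vs_load(example_table):
    print(f"non-monotonic delay: slew row {s}, load column {l}: {prev} -> {cur}")
```

The limitation is exactly the one described above: a value can satisfy every coded rule and still be wrong, so plausible-looking but inaccurate entries slip through.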

Debugging and fixing
The old adage goes, “knowing is half the battle.” More accurately, knowing is only half the battle. For library validation, the other half consists of the time- and effort-intensive process of tracing issues back to their source, finding other clusters of issues related to the source problem, and fixing all bad data points in the characterized library. A typical library debugging session today involves parsing text-based error/warning entries in log files to pinpoint error sources, or developing high-maintenance in-house debugging tools for that purpose. Depending on the complexity of the errors encountered, Liberty verification can take up 50% to 80% of the total library production schedule.
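A minimal sketch of that log-scraping style of debugging, assuming a hypothetical log format (real characterization tools differ): error and warning lines are pulled out of the logs with a regular expression and counted per cell, which points at where to start digging but does little to identify the root cause.

```python
import re
from collections import Counter, defaultdict

# Hypothetical log-line format used only for this sketch.
LOG_LINE = re.compile(r"(WARNING|ERROR):\s+(\w+)\s+cell=(\S+)\s+corner=(\S+)")

def summarize_log(lines):
    """Count warning/error messages per cell across all corners."""
    by_cell = defaultdict(Counter)
    for line in lines:
        match = LOG_LINE.search(line)
        if match:
            severity, kind, cell, _corner = match.groups()
            by_cell[cell][f"{severity}:{kind}"] += 1
    return by_cell

sample = [
    "ERROR: NON_CONVERGENCE cell=DFFX1 corner=ss_0p72v_125c",
    "WARNING: NEGATIVE_DELAY cell=DFFX1 corner=ff_0p88v_m40c",
    "ERROR: NON_CONVERGENCE cell=DFFX1 corner=ss_0p72v_m40c",
]
for cell, counts in summarize_log(sample).items():
    print(cell, dict(counts))
```

Tracing each bucket back to its root cause, and finding every related bad data point across the library, remains manual work, which is where much of that 50% to 80% goes.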

A machine learning-powered approach to library characterization and verification
The Solido Machine Learning Characterization Suite (MLChar) from Mentor, a Siemens Business, uses production-proven machine learning methods to accelerate library characterization and verification, and employs information visualization (InfoVis) methods to streamline library debugging (Figure 2). The two main components of MLChar are:

  • MLChar Generator: uses machine learning (ML) techniques to accelerate library characterization by 2X-4X and enables “instant” generation of additional PVT corners after initial characterization.
  • MLChar Analytics: provides next-generation library validation and debugging; outlier detection using an ML engine, combined with an information visualization approach to debugging, enables Liberty verification in hours instead of weeks (a conceptual sketch of the outlier-detection idea follows below).
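To illustrate the outlier-detection idea in concept only (this is not the MLChar algorithm; a production engine learns far richer models from seed data), the sketch below fits a simple trend across PVT corners for one table entry and flags values that sit far outside it, using a robust median/MAD cutoff so a single bad point cannot mask itself. All data is synthetic, with the 0.85 V value deliberately corrupted.

```python
import numpy as np

# One timing-arc table entry characterized across supply voltages (synthetic).
vdd   = np.array([0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90])
delay = np.array([0.200, 0.185, 0.170, 0.155, 0.140, 0.155, 0.110])   # ns

# Fit a simple trend model across corners; a real ML engine would learn a much
# richer, multi-dimensional model from trusted seed data.
coeffs    = np.polyfit(vdd, delay, deg=1)
residuals = delay - np.polyval(coeffs, vdd)
med = np.median(residuals)
mad = np.median(np.abs(residuals - med))

for v, d, r in zip(vdd, delay, residuals):
    if abs(r - med) > 3 * mad:                  # 3x MAD cutoff (assumed)
        print(f"suspect value: {d:.3f} ns at {v:.2f} V deviates from the corner trend")
```

Unlike a fixed rule, this kind of check does not need to be told in advance what a specific failure looks like; it flags anything that behaves unlike the rest of the data.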


Figure 2: MLChar Generator workflow for producing production-accurate new PVT corners from existing seed data.
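As a toy illustration of the seed-data idea in Figure 2 (not the product’s method), a model trained on already-characterized corners can predict values at a new corner without launching new SPICE runs. Here a simple polynomial fit stands in for the trained ML model, and all numbers are synthetic.

```python
import numpy as np

# Seed corners: delay for one arc characterized by SPICE at four supply
# voltages (synthetic values, in ns).
seed_vdd   = np.array([0.60, 0.70, 0.80, 0.90])
seed_delay = np.array([0.210, 0.172, 0.141, 0.112])

model = np.polyfit(seed_vdd, seed_delay, deg=2)   # stand-in for a trained ML model
new_vdd = 0.75                                    # corner not in the seed set
predicted = np.polyval(model, new_vdd)

print(f"predicted delay at {new_vdd:.2f} V: {predicted:.3f} ns")
```

The value of doing this with production-grade models is that additional corners become available almost immediately rather than after another multi-week characterization pass.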

In design methodologies serving today’s aggressive chip design and production schedules, this new advancement in characterization technology empowers teams to tape out designs faster and with less schedule volatility, and reduces the risk of re-spins required to fix post-tape-out bugs.

Interested in learning more? Please read our whitepaper: Improving Library Characterization Quality and Runtime with Machine Learning.


