Time To Rethink Memory Chip Design And Verification

Changes are needed to improve the reliability of memories and shorten the design time.


It’s no secret that semiconductor development grows more challenging all the time. Each new process technology node packs more transistors into each die, creating more electrical issues and making heat dissipation harder. Floorplanning, logic synthesis, place and route, timing analysis, electrical analysis, and functional verification stretch electronic design automation (EDA) tools to their limits. Hundreds of clock and power domains are required to provide flexibility and accommodate IP blocks from diverse sources. On top of all this, system-on-chip (SoC) designs add parallel processing and embedded software to the mix.

Designing leading-edge memory chips involves many of these same issues while also presenting additional challenges to development teams. Discrete memory chips are usually among the earliest and largest product types developed for each new node. Discrete memory vendors pursue technology scaling aggressively and relentlessly to maintain performance and cost leadership. However, the slowing of Moore’s Law means that smaller geometries alone are no longer enough to keep pace with market requirements. Vendors must also innovate constantly, with new architectures and increasingly faster interfaces.

Numerous demanding applications put pressure on memory vendors and drive product requirements. Cloud-based high-performance computing (HPC) and big data applications soak up as much memory as they can in the quest for performance. Artificial intelligence (AI), particularly machine learning (ML), is vital to making use of this data in novel and powerful ways. Cloud-based AI applications require high-density, high-bandwidth, multi-port memories to consolidate and analyze data from many sources. Edge AI and Internet of Things (IoT) devices benefit from smaller chips with high power efficiency.

There is probably no better-known example of a demanding application than the modern automobile. Advanced driver assistance systems (ADAS) and the emerging “brave new world” of fully autonomous vehicles have massive AI/ML requirements. They have a corresponding need for large, fast memories that remain extremely reliable even in the harsh conditions of the road environment. Natural language processing (NLP) is another well-known use for ML, improving its accuracy over time based on many real-world examples. Finally, smartphones and tablets, especially those using 5G technology, require memories that combine high performance with the lowest possible power consumption to prolong battery life.


[Figure omitted. Source: Synopsys]

All these demanding applications, with their common underlying factors, create many challenges for memory designers. Some long-standing challenges are exacerbated, while the latest technology nodes have introduced additional issues. The challenges can be classified into three broad categories.

Scaling

The first is scaling of technology, performance, and capacity. Application and market requirements compel the design team to constantly deliver higher performance, higher power efficiency, and higher capacity in the memory chips they develop. These requirements drive the migration to new nodes and demand more innovative design techniques. They also translate directly into requirements for EDA tools to handle larger and more complex designs. The tools must provide higher capacity and faster runtimes. They must also support designers in their quest for better power, performance, and area (PPA) while maintaining a high quality of results (QoR).

Silicon reliability

The second set of challenges involves silicon reliability, which has become a much larger factor for mission-critical applications such as autonomous driving, aerospace and defense, and medical devices. The new nodes and novel architectures required to meet memory PPA targets for these applications introduce a high degree of risk that, if not handled correctly during development, can compromise reliability. To minimize this risk at the intersection of innovation and certainty, the design team must employ advanced modeling and verification techniques that bridge both the technology-to-design gap and the design-to-silicon gap, ensuring that the manufactured silicon performs as predicted. They must also develop and enforce the use of robust process design kits (PDKs) to drive accurate design enablement.

On a broader scale, the reliability of memory chips requires full lifecycle management, going beyond pre-silicon design to production manufacturing and even deployment in the field. Analysis of variability and reliability is required to ensure design robustness across a wide set of operating conditions and to minimize defect escapes. Functional safety verification is required to meet ISO 26262 compliance for the automotive market and related standards for other applications. Efficient yield management must ensure that sufficiently reliable silicon can be produced while still meeting cost targets. On-silicon monitoring provides data throughout the test process and, if the end user agrees, the chip can be monitored throughout its operating lifetime. Analytics performed on this data can be used to tweak the manufacturing process or the design itself to maintain or improve reliability.
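
To make the variability analysis point above more concrete, here is a minimal sketch, in Python, of the kind of Monte Carlo estimate a team might run to gauge per-cell failure probability across a few operating corners. The margin model, coefficients, corner values, and the zero-margin failure threshold are all illustrative assumptions, not data from any real process, product, or tool.

```python
import random

# Illustrative-only margin model: treat a bit cell's read margin (in mV) as a
# function of local threshold-voltage mismatch, supply voltage, and temperature.
# The coefficients and the 0 mV failure threshold are assumptions for this sketch.
def read_margin_mv(vth_sigma_mv, vdd_v, temp_c):
    vth_shift = random.gauss(0.0, vth_sigma_mv)   # random local Vth mismatch
    supply_term = 400.0 * (vdd_v - 0.4)           # margin grows with supply voltage
    temp_term = -0.3 * (temp_c - 25.0)            # margin shrinks at high temperature
    return supply_term + temp_term - abs(vth_shift)

def failure_rate(corner, samples=100_000):
    """Monte Carlo estimate of the per-cell failure probability at one corner."""
    failures = sum(
        1 for _ in range(samples)
        if read_margin_mv(corner["vth_sigma_mv"], corner["vdd_v"], corner["temp_c"]) <= 0.0
    )
    return failures / samples

# A few representative operating corners (values are illustrative).
corners = [
    {"name": "low-V / hot",   "vth_sigma_mv": 25.0, "vdd_v": 0.65, "temp_c": 125.0},
    {"name": "typical",       "vth_sigma_mv": 25.0, "vdd_v": 0.75, "temp_c": 25.0},
    {"name": "high-V / cold", "vth_sigma_mv": 20.0, "vdd_v": 0.85, "temp_c": -40.0},
]

for corner in corners:
    print(f"{corner['name']:>14}: estimated cell failure rate = {failure_rate(corner):.2e}")
```

In practice this sort of analysis runs on SPICE-level device models with far larger sample counts or importance sampling, since memory bit cells must reach failure rates well below what a simple 100,000-sample run can resolve; the sketch only shows the shape of the calculation.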

Memory development turnaround time

Finally, there is increased emphasis on memory development turnaround time (TAT), given the need to cater to a variety of applications in highly competitive end markets. Design teams need to “shift left” in their overall memory development effort in order to satisfy market requirements with bespoke chips. Design technology co-optimization (DTCO), which historically has not been used in memory development, must be adopted so that designers can rapidly explore and co-optimize new technologies and design techniques before “hardening” these choices. Faster verification runtimes and early visibility into electrical and reliability effects during the design phase are required to accelerate design verification. “Digitization” of memory design, by infusing proven, fast, and highly efficient digital design and verification methodologies, allows design teams to further accelerate design and signoff. Design-test and design-packaging co-optimization must be adopted to accelerate design, test, and package signoff.
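
As a rough illustration of the DTCO idea described above, the sketch below enumerates a small space of hypothetical technology and design options and ranks the combinations with a made-up PPA figure of merit before any choice is “hardened.” The option names, metrics, and weights are invented for illustration; real DTCO flows work on actual process assumptions, extracted design metrics, and far richer cost models.

```python
from itertools import product

# Hypothetical technology options and memory design options, each tagged with
# made-up relative metrics (performance, power, area) normalized to a baseline.
technology_options = {
    "node_A": {"perf": 1.00, "power": 1.00, "area": 1.00},
    "node_B": {"perf": 1.15, "power": 0.90, "area": 0.85},
}

design_options = {
    "baseline_array":      {"perf": 1.00, "power": 1.00, "area": 1.00},
    "banked_array":        {"perf": 1.10, "power": 1.05, "area": 1.08},
    "low_power_periphery": {"perf": 0.95, "power": 0.80, "area": 1.02},
}

def ppa_score(tech, design, w_perf=1.0, w_power=1.0, w_area=0.5):
    """Toy figure of merit: reward performance, penalize power and area."""
    perf = tech["perf"] * design["perf"]
    power = tech["power"] * design["power"]
    area = tech["area"] * design["area"]
    return w_perf * perf - w_power * power - w_area * area

# Explore technology and design choices jointly, as DTCO advocates, rather than
# freezing the technology first and tuning the design afterward.
ranked = sorted(
    (
        (tech_name, design_name, ppa_score(tech, design))
        for (tech_name, tech), (design_name, design)
        in product(technology_options.items(), design_options.items())
    ),
    key=lambda row: row[2],
    reverse=True,
)

for tech_name, design_name, score in ranked:
    print(f"{tech_name} + {design_name}: score = {score:+.3f}")
```

The only point the sketch makes is that technology and design choices are evaluated together before any of them is frozen, which is what distinguishes DTCO from a sequential technology-then-design flow.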

Meeting all these challenges and requirements cannot be accomplished with a patched-together collection of point tools. Memory designers need a comprehensive and holistic EDA solution set that caters to their requirements across the entire memory development lifecycle. It is time for EDA vendors to rethink solutions development for today’s modern and complex memory designs.


