
Memory Evolution Drives Requirements For Design Technology Co-Optimization

Optimizing memory at advanced nodes requires designing it in the context of the underlying process technology.


By Ricardo Borges and Anand Thiruvengadam

As new technology nodes have become available, memory has been one of the most aggressive semiconductor applications to adopt advanced process technology. The relentless demand by users of electronic devices for more memory has ensured that investments in new nodes and processes would be quickly repaid by massive sales volumes. As each new node came online, memory capacity grew dramatically in accord with Moore’s Law, and performance per watt increased due to Dennard scaling. Memory designers could adopt new technologies with confidence that their resulting products would be denser and faster. The custom nature of memory design meant that handcrafting of new cells, cell arrays, and the sensing and control circuits on the periphery was required, but the results were fairly predictable.

This is not to say that memory design has been easy; there have been many important innovations beyond scaling for new nodes. It is hard to imagine today’s electronic devices without the multiple generations of double data rate (DDR) technology or content-addressable memory (CAM) for caches. However, the design and innovation of new memories was largely independent of process development. Design teams developed models for scaled devices and interconnects while the process was being qualified, so memory design could proceed in parallel. This enabled rapid adoption of new technologies and ensured that memories could remain on the leading edge of semiconductor development.

With today’s deep submicron technologies, life has become more complicated for memory designers. Much closer cooperation between the design and process teams is necessary to continue the required improvements in memory density and performance. There are several factors and trends driving this evolution:

  • The slowing of Moore’s Law means that memory designers can no longer count on regular, predictable benefits from scaling alone.
  • The end of Dennard scaling has led to early design/architectural optimization, more detailed optimization of physical layout design rules, and development of new process recipes.
  • The slowing of supply voltage scaling and the increasing effect of leakage currents have limited reductions in device power at new nodes.
  • Bitline and wordline parasitics have a growing effect in DRAM arrays, and the need to maintain sufficiently high storage capacitance drives higher-aspect-ratio capacitor structures and the integration of materials with higher dielectric constants (a back-of-the-envelope sketch follows Figure 1).
  • DRAM scaling has become more challenging due to cell capacitance, cell contact resistance, and row hammer effects.
  • Scaling of DRAM and NAND periphery is increasingly impacted by process variability, which reduces the design margin for the sensing circuits.
  • The number of layers in 3D NAND devices has grown to around 200 and is projected to exceed 500, as shown in Figure 1, bringing new challenges in the high-aspect-ratio etching processes used to define the vertical channels and driving research into process techniques that improve channel conductivity.


Fig. 1: Growth in number of layers in NAND memories.
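
To see why higher dielectric constants and taller pillars matter, here is a back-of-the-envelope sketch that estimates storage-cell capacitance with a crude parallel-plate approximation of the capacitor sidewall. Every geometry and material number in it is an illustrative assumption, not a value from any real process.

# Back-of-envelope estimate of DRAM storage-cell capacitance using a
# parallel-plate approximation of the cylinder sidewall. All numbers
# are illustrative, not calibrated to any process.
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def cell_capacitance_fF(radius_nm, height_nm, t_diel_nm, k):
    """Crude C = k*eps0*A/t with A = cylinder sidewall area (one surface)."""
    area = 2 * math.pi * (radius_nm * 1e-9) * (height_nm * 1e-9)  # m^2
    c = k * EPS0 * area / (t_diel_nm * 1e-9)                      # farads
    return c * 1e15                                               # femtofarads

# SiO2-like dielectric vs. a high-k film, same geometry
for k, label in [(3.9, "SiO2-like"), (20.0, "high-k")]:
    print(f"{label:9s} k={k:4.1f}: {cell_capacitance_fF(20, 1000, 5, k):5.2f} fF")

# Doubling the pillar height (higher aspect ratio) scales C linearly
print(f"high-k, 2x height:  {cell_capacitance_fF(20, 2000, 5, 20.0):5.2f} fF")

Even with these rough numbers, switching from an SiO2-like film to a high-k film and doubling the pillar height each multiply the achievable capacitance, which is the scaling pressure described in the list above.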

All these effects have produced a technology-design gap that has resulted in suboptimal devices and process recipes, suboptimal memory performance, and late-stage design changes that increase time to market (TTM). Minimizing this gap requires optimization of materials, processes, and device structures, and this need will grow even stronger with emerging memory technologies such as resistive random-access memory (RRAM), phase-change memory (PCM), magnetoresistive RAM (MRAM), and ferroelectric RAM (FeRAM).

Effective memory design can only be accomplished through much closer collaboration between process and circuit development, known as design technology co-optimization (DTCO). A memory DTCO flow must simulate the impact of process variability on the critical high-precision analog circuits in the memory periphery, such as the sense amplifiers. This flow must include the following phases:

  • Transistor modeling – technology computer-aided design (TCAD) simulates the fabrication process with its variability sources, followed by simulation of the transistor electrical characteristics and generation of data for subsequent extraction of a SPICE model.
  • Parasitic extraction – a 3D representation of the circuit is created, using as inputs a description of the interconnect process flow and a layout of the circuit element (for example, a sense amplifier), and is fed to a parasitic field solver that extracts a circuit netlist annotated with RC parasitics.
  • SPICE simulation – the SPICE model and annotated netlist are simulated, with variability modeling capabilities used to produce variation-aware circuit metrics (a minimal Monte Carlo sketch of this phase follows the list).
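
As a minimal illustration of the SPICE simulation phase, the sketch below runs a Monte Carlo loop over a toy sense-amplifier offset model: the input-pair threshold voltages are drawn from a normal distribution standing in for the TCAD-derived variability sources, and the offset distribution is compared against an assumed read margin. A production flow would sweep the extracted SPICE models in a simulator such as PrimeSim rather than use this closed-form stand-in.

# Toy Monte Carlo for sense-amplifier input offset. The threshold-voltage
# sigma stands in for TCAD-derived variability sources; a production flow
# would sweep extracted SPICE models in a circuit simulator instead.
import random
import statistics

random.seed(7)

SIGMA_VT_MV = 15.0      # assumed per-device Vt sigma (mV)
MARGIN_MV   = 40.0      # assumed bitline signal that must exceed the offset
N_TRIALS    = 100_000

offsets = []
for _ in range(N_TRIALS):
    # Offset of a latch-type sense amp is dominated by the Vt mismatch
    # of the cross-coupled input pair: delta_vt = vt1 - vt2.
    vt1 = random.gauss(0.0, SIGMA_VT_MV)
    vt2 = random.gauss(0.0, SIGMA_VT_MV)
    offsets.append(vt1 - vt2)

sigma_offset = statistics.stdev(offsets)        # ~ sqrt(2) * SIGMA_VT_MV
fails = sum(abs(o) > MARGIN_MV for o in offsets)

print(f"offset sigma: {sigma_offset:.1f} mV")
print(f"read failures: {fails} / {N_TRIALS} ({100 * fails / N_TRIALS:.3f}%)")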

This flow develops a virtual process development kit (PDK) that enables early and rapid design exploration before wafers in the new process are available. The tight fusion of TCAD and SPICE technology provides design enablement with high-quality models that can be further refined when wafers are available and fabrication data can be gathered. The layout can be created early from virtual PDKs, with power, performance, and area (PPA) assessed from both pre-layout and post-layout netlists. Design closure is enabled by moving optical proximity correction (OPC) simulation, along with lithography rule check (LRC) and debug, into the layout process. The result is true lithography-aware custom memory design capable of handling the latest deep submicron nodes and emerging memory technologies.
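
To make the pre-layout versus post-layout distinction concrete, the following sketch compares a timing estimate that ignores wire parasitics with one that includes a distributed RC wire, using the Elmore delay approximation. The driver, wire, and load values are placeholders, not numbers from any PDK.

# Elmore-delay comparison of a pre-layout vs. post-layout view of a
# bitline-style net. Values are placeholders, not from any real PDK.

def elmore_delay(r_driver, c_load, r_wire=0.0, c_wire=0.0, segments=50):
    """Elmore delay of a driver + uniform RC wire + lumped load (seconds)."""
    delay = r_driver * (c_wire + c_load)      # driver sees all downstream C
    r_seg, c_seg = r_wire / segments, c_wire / segments
    for i in range(1, segments + 1):
        # capacitance of segment i sees the wire resistance up to node i
        delay += i * r_seg * c_seg
    delay += r_wire * c_load                  # load sees the full wire R
    return delay

R_DRV, C_LOAD = 5e3, 2e-15          # 5 kohm driver, 2 fF sense-node load
R_WIRE, C_WIRE = 20e3, 80e-15       # extracted bitline parasitics (assumed)

pre  = elmore_delay(R_DRV, C_LOAD)                   # parasitics ignored
post = elmore_delay(R_DRV, C_LOAD, R_WIRE, C_WIRE)   # parasitics included
print(f"pre-layout:  {pre * 1e12:7.1f} ps")
print(f"post-layout: {post * 1e12:7.1f} ps  ({post / pre:.0f}x slower)")

On a long, lightly loaded net such as a bitline, the extracted parasitics can dominate the delay, which is why PPA conclusions drawn from pre-layout netlists alone can be badly misleading.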

Synopsys provides a memory DTCO solution that meets all these requirements, and more. As shown in Figure 2, the heart of this flow is Synopsys PrimeSim™ SPICE, a high-performance SPICE simulator for analog, RF, and mixed-signal designs, including memories. The transistor modeling phase uses Synopsys Sentaurus Process, which simulates the transistor fabrication steps, Synopsys Sentaurus Device, which simulates transistor performance, and then Synopsys Mystic to extract SPICE models from the TCAD output. The SPICE netlist is generated by Synopsys Process Explorer process emulation and the Synopsys Raphael FX resistance and capacitance extraction tool.


Fig. 2: Synopsys DTCO flow for memory sense amplifiers.

The Synopsys DTCO solution also includes a data-to-design workflow that enables fab data to be directly consumed by the SPICE and FastSPICE simulators from PrimeSim Continuum for quick design PPA assessments. This allows process technologists and design engineers to skip the compact model extraction step, which can be cumbersome and time-consuming for non-standard process technologies. Design engineers can perform a more complete PPA assessment, with either the traditional DTCO flow or the data-to-design workflow, using early-layout and post-layout simulations in Synopsys Custom Compiler, Synopsys PrimeWave Design Environment, and the Synopsys PrimeSim Continuum simulators.


Fig. 3: Synopsys data-to-design flow with TCAD-to-SPICE direct link.
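
As a minimal sketch of the data-to-design idea, assume measured I-V data is available on a bias grid; rather than fitting a compact model, the device can then be represented by direct interpolation of the measured currents. The synthetic data below merely stands in for fab measurements, and no actual PrimeSim interface is shown.

# Sketch of a lookup-table device model built directly from measured
# I-V data, skipping compact-model extraction. The "measurements" here
# are synthetic stand-ins for fab data; a real flow would read wafer data.
import bisect

# Bias grid and measured drain current ID[i][j] at (VGS[i], VDS[j]), in amps.
VGS = [0.0, 0.3, 0.6, 0.9, 1.2]
VDS = [0.0, 0.4, 0.8, 1.2]
ID = [[1e-12 * (1 + 10 * g) * min(d, 0.8) for d in VDS] for g in VGS]  # fake data

def table_id(vgs, vds):
    """Bilinear interpolation of the measured Id grid (a table model)."""
    i = max(1, min(bisect.bisect_left(VGS, vgs), len(VGS) - 1))
    j = max(1, min(bisect.bisect_left(VDS, vds), len(VDS) - 1))
    tg = (vgs - VGS[i - 1]) / (VGS[i] - VGS[i - 1])
    td = (vds - VDS[j - 1]) / (VDS[j] - VDS[j - 1])
    lo = ID[i - 1][j - 1] * (1 - td) + ID[i - 1][j] * td
    hi = ID[i][j - 1] * (1 - td) + ID[i][j] * td
    return lo * (1 - tg) + hi * tg

# The simulator can now query the device at arbitrary bias points.
print(f"Id(vgs=0.75, vds=0.6) = {table_id(0.75, 0.6):.3e} A")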

In summary, memory design is facing new challenges as devices move to smaller nodes and incorporate novel emerging technologies. Designing independently of process development will no longer yield optimal results, so a technology-aware design development process is required. The Synopsys DTCO flow provides the industry’s most complete TCAD tools, best-in-class SPICE simulators, and a modern and open design environment. For more information, click here.

Anand Thiruvengadam is a Director of Product Management & Marketing and heads the Solutions and Go-to-Market functions for the memory market segment at Synopsys. Anand's professional experience spans high-tech product strategy, product marketing, product management, business development, and engineering. Prior to Synopsys, he worked at PricewaterhouseCoopers (PwC) as a management consultant, focusing on strategic and operational transformation initiatives for various enterprises in the consumer electronics, networking, storage, enterprise software, and semiconductor industries. Prior to PwC, Anand held engineering leadership roles at leading semiconductor companies.


