How eMRAM Addresses The Power Dilemma In Advanced-Node SoCs

IoT and edge devices are shifting away from traditional memory technologies.


By Rahul Thukral and Bhavana Chaurasia

Our intelligent, interconnected, data-driven world demands more computation and capacity. Consider the variety of smart applications we now have. Cars can transport passengers to their destinations using local and remote AI decision-making. Robot vacuum cleaners keep our homes tidy, and smartwatches can detect a fall and call emergency services. With high-volume computations comes greater demand for high memory capacity and an absolute necessity to reduce system-on-chip (SoC) power, especially for battery-operated devices.

As more sources generate data, that data needs to be processed and accessed swiftly, especially for always-on applications. Embedded Flash (eFlash) technology, a traditional memory solution, is nearing its end, as scaling it below 28nm is prohibitively expensive. In response, designers of IoT and edge-device SoCs are seeking a low-cost, area- and power-efficient alternative to support the growing appetite for memory.

Embedded Magneto-Resistive Random Access Memory (eMRAM) emerged about two decades ago, but it is now seeing a resurgence in adoption thanks to its high capacity, high density, and ability to scale to smaller geometries. In this article, we'll look closer at how IoT and edge devices are driving the shift away from traditional memory technologies, why eMRAM is taking off now, and how Synopsys is helping ease the design process with eMRAM.

How is the memory landscape changing?

While memory is ubiquitous in our smart everything world, the memory technology landscape is changing quickly, with power becoming a key selection criterion. High-performance computing, cloud, and AI applications must conserve dynamic power, while mobile, IoT, and edge applications are primarily concerned with leakage current. Moving to smaller process technologies typically provides power, performance, and area (PPA) benefits; however, dynamic and leakage power scale differently at smaller nodes. As a result, traditional memory technologies that have long served many designs reliably, but consume significant amounts of energy, are proving inadequate for advanced-node SoCs in space-constrained applications such as IoT and edge devices.

eFlash has been a conventional and prominent source of high-density, on-chip non-volatile memory (NVM) for years. However, eFlash is too taxing on the system power budget for small, battery-powered applications. What’s more, the cost of enabling Flash technology beyond 28nm is quite high, limiting the ability of design teams to move to advanced technology nodes.

The semiconductor industry has continued researching different NVM solutions, like spin-transfer torque MRAM (STT-MRAM), phase-change RAM (PCRAM), and resistive RAM (RRAM). One particular type—eMRAM—has emerged as an ideal fit for the demands of many advanced-node SoCs.

How eMRAM meets the need for low-power memory

Unlike conventional embedded memories such as SRAM and Flash, which store information as an electric charge, eMRAM stores data in the magnetic orientation, or spin state, of its storage element. This spintronic element is built from ferromagnetic and non-magnetic layers that form a magnetic tunnel junction (MTJ). The MTJ holds its polarization when power is removed, retaining the stored data and consuming far less system-level power. Compared with SRAM, eMRAM offers a smaller area, lower leakage, higher capacity, and better radiation immunity. As a result, a single die can pack more memory with eMRAM, or a design using eMRAM can be smaller than an SRAM-based equivalent with the same amount of memory. And compared with options like PCRAM and RRAM, eMRAM is less sensitive to high temperature, provides better production-level yields, and offers higher endurance, sustaining many read/write cycles while retaining data over many years. Major fabs already have 22nm FinFET-based eMRAM in production.
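To make the standby-power difference concrete, a rough back-of-envelope comparison helps. The sketch below (in Python) estimates how much of a small battery a 1 Mb always-on SRAM buffer would consume through leakage in a year, versus an eMRAM array that can be power-gated between accesses because the MTJ retains data without power. The battery capacity and leakage currents are illustrative assumptions, not characterized figures for any specific process.

    # Illustrative standby-energy comparison for an always-on 1 Mb buffer.
    # All numbers are assumptions for illustration, not characterized data.

    BATTERY_MAH = 220.0          # small coin-cell capacity, mAh (assumed)
    HOURS_PER_YEAR = 24 * 365

    # Assumed idle leakage: SRAM must stay powered to retain data,
    # eMRAM can be fully power-gated because the MTJ keeps its polarization.
    sram_leakage_ua = 15.0       # uA for a 1 Mb SRAM array (assumed)
    emram_gated_ua = 0.1         # uA residual for gated eMRAM plus control logic (assumed)

    def standby_mah_per_year(leakage_ua: float) -> float:
        """Charge drawn from the battery per year by idle leakage, in mAh."""
        return leakage_ua * 1e-3 * HOURS_PER_YEAR  # uA -> mA, then multiply by hours

    for name, leak in [("SRAM", sram_leakage_ua), ("eMRAM (power-gated)", emram_gated_ua)]:
        mah = standby_mah_per_year(leak)
        print(f"{name:>20}: {mah:7.1f} mAh/year "
              f"({100 * mah / BATTERY_MAH:5.1f}% of a {BATTERY_MAH:.0f} mAh battery)")

With these assumed values, the always-powered SRAM consumes more than half the battery per year on leakage alone, while the gated eMRAM draws a fraction of a percent. This captures only the standby side of the trade-off; eMRAM write energy and latency are typically higher than SRAM's, which is why it complements rather than replaces working memory.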

Fig. 1: A unified eMRAM solution can lead to lower latency and interface power.

eMRAM has roots in MRAM technology, which has been around for decades. When processors moved from 28nm down to 22nm and prevailing memory technologies could no longer scale to keep pace, that inflection point led to MRAM being rediscovered in its embedded form, eMRAM. eMRAM offers clear advantages for space- and power-constrained applications like IoT and the edge. Over time, as its speeds improve, eMRAM could broaden its reach to become a universal memory resource. Automotive designs, for instance, rely on MCUs that need embedded memory, traditionally eFlash. At 22nm and below, eMRAM offers a reliable option at automotive temperature grades. Industrial and other high-performance embedded applications could also benefit from moving to eMRAM.

Addressing eMRAM design challenges

While eMRAM presents attractive advantages, designers should also know what they must address when working with this type of memory. For one, magnetic immunity needs to be accounted for. For memory designers, this involves characterizing the MRAM's immunity level, measured in gauss or oersted, and communicating that spec to their chip design customers. Any elements near the chip that can generate a magnetic field, such as inductor coils, can impact eMRAM performance, so chip designers must place those elements at a sufficient distance from the eMRAM. Chip packaging with a magnetic shield can also protect the eMRAM in end devices with strong magnetic fields, such as refrigerators.
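As a rough illustration of that spacing analysis, the sketch below treats a nearby power inductor as a magnetic dipole and estimates its on-axis stray field at several distances, then compares the result against an assumed immunity rating. The dipole moment, distances, and immunity limit are hypothetical values chosen only to show the cubic fall-off with distance; real keep-out rules come from the foundry's and IP vendor's characterized specifications, and the far-field dipole approximation breaks down very close to the coil.

    import math

    MU0 = 4 * math.pi * 1e-7     # vacuum permeability, T*m/A
    TESLA_TO_GAUSS = 1e4

    def dipole_field_gauss(moment_a_m2: float, distance_m: float) -> float:
        """On-axis far-field of a magnetic dipole: B = (mu0 / 4*pi) * 2m / r^3."""
        b_tesla = (MU0 / (4 * math.pi)) * 2 * moment_a_m2 / distance_m ** 3
        return b_tesla * TESLA_TO_GAUSS

    # Hypothetical numbers for illustration only.
    inductor_moment = 7e-5        # A*m^2, assumed effective moment of a small power inductor
    immunity_limit_gauss = 50.0   # assumed eMRAM immunity rating during operation

    for distance_mm in (1, 2, 5, 10):
        field = dipole_field_gauss(inductor_moment, distance_mm * 1e-3)
        status = "OK" if field < immunity_limit_gauss else "too close"
        print(f"{distance_mm:3d} mm: {field:8.2f} G  -> {status}")

Because the field falls off with the cube of distance, even a few millimeters of additional spacing can bring a marginal placement comfortably inside an immunity budget.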

Read operations are particularly sensitive, and concurrent write activity can disturb them. Error-correcting codes (ECC) can help lower failure rates by compensating for the process variation that leads to reliability issues.
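A quick probability sketch shows why ECC is so effective against random, independent bit errors. Assuming a hypothetical raw bit-error probability per access and a standard SECDED code over a 64-bit word (72 stored bits, correcting any single-bit error), the word is only lost when two or more bits flip at once, which is dramatically less likely. The error rates below are made-up illustrative values, not characterized eMRAM figures.

    from math import comb

    def word_error_no_ecc(p_bit: float, data_bits: int = 64) -> float:
        """Probability that an unprotected data word has at least one bit error."""
        return 1 - (1 - p_bit) ** data_bits

    def word_error_secded(p_bit: float, code_bits: int = 72) -> float:
        """Probability of an uncorrectable word with SECDED (fails on 2+ bit errors)."""
        p0 = (1 - p_bit) ** code_bits                                    # zero errors
        p1 = comb(code_bits, 1) * p_bit * (1 - p_bit) ** (code_bits - 1)  # exactly one error
        return 1 - p0 - p1

    for p in (1e-6, 1e-8):                      # assumed raw bit-error probabilities
        print(f"raw p_bit = {p:.0e}: "
              f"no ECC {word_error_no_ecc(p):.2e}, "
              f"SECDED {word_error_secded(p):.2e}")

With an assumed raw error probability of 1e-6 per bit, the unprotected 64-bit word fails roughly once in 16,000 accesses, while the SECDED-protected word fails on the order of once in 400 million, several orders of magnitude better for a modest storage overhead.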

Magnetic shields and ECC are two techniques that help address the challenges of designing with eMRAMs. For long-lasting endurance and reliability of on-chip implementations of eMRAM, built-in self-test (BIST), repair, diagnostic solutions, and a robust silicon qualification methodology can go a long way. Time to market is another important consideration. For faster turnaround time of eMRAM designs, designers can turn to compiler IP that can quickly compile eMRAM hard macros.
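While production BIST engines are hardware IP with vendor-specific algorithms, the basic idea behind a memory March test is straightforward. The sketch below runs a classic March C- sequence over a small simulated memory and reports miscompares. It is a generic, software-level illustration of the concept, not a representation of any commercial BIST implementation, and the stuck-at fault model used to exercise it is purely hypothetical.

    def march_c_minus(mem) -> list:
        """Run a March C- test over a bit-oriented memory model.

        Returns (address, expected, read) tuples for every miscompare.
        Each element is (address order, operations), where an operation
        is ('r', expected_value) or ('w', value_to_write).
        """
        n = len(mem)
        failures = []
        elements = [
            (range(n),             [("w", 0)]),              # up:   w0
            (range(n),             [("r", 0), ("w", 1)]),    # up:   r0, w1
            (range(n),             [("r", 1), ("w", 0)]),    # up:   r1, w0
            (range(n - 1, -1, -1), [("r", 0), ("w", 1)]),    # down: r0, w1
            (range(n - 1, -1, -1), [("r", 1), ("w", 0)]),    # down: r1, w0
            (range(n),             [("r", 0)]),              # up:   r0
        ]
        for addresses, ops in elements:
            for addr in addresses:
                for op, value in ops:
                    if op == "w":
                        mem[addr] = value
                    elif mem[addr] != value:                  # read and compare
                        failures.append((addr, value, mem[addr]))
        return failures

    class StuckAt1(list):
        """Toy memory model with a hypothetical stuck-at-1 cell at address 5."""
        def __setitem__(self, addr, value):
            super().__setitem__(addr, 1 if addr == 5 else value)

    print("clean memory failures: ", march_c_minus([0] * 16))
    print("faulty memory failures:", march_c_minus(StuckAt1([0] * 16)))

Running the sketch reports no failures for the clean array and flags the stuck cell at address 5 three times, once in each element that expects to read a 0 back; a real BIST engine would hand such results to repair logic to map in redundant rows or columns.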

Achieving faster turnaround time of reliable, low-power memory designs

As a longtime developer of memory solutions, Synopsys provides a variety of solutions to help accelerate the development of high-quality eMRAM, including:

  • Synopsys eMRAM Compiler IP, which provides a configurable memory IP solution, with options to optimize instance size and different features like an ECC scheme. The IP is designed to deliver just-in-time compilation of eMRAM hard macros within a few minutes, reducing turnaround time and accelerating time to market.
  • Synopsys Self-Test and Repair (STAR) Memory System, which provides a full suite of test, repair, and diagnostic capabilities for eMRAM, optimizing test time without sacrificing test coverage. Configurable memory BIST and repair algorithms mitigate MRAM defects.
  • Synopsys STAR ECC Compiler IP, which improves in-field reliability by enabling multi-bit error detection and correction. The IP can also be used to maximize manufacturing yield despite the stochastic nature of eMRAM technology.
  • Synopsys Silicon Lifecycle Management Family, which provides insight into the silicon to allow tweaks to performance levels or margins for better operation.

Indeed, memory will continue to be integral to every electronic device and system we use. eMRAM is poised to take on even more significant roles in next-generation, high-performance embedded applications by delivering high capacity, low power, and process technology scaling.

For more information, visit Synopsys Foundation IP.

Bhavana Chaurasia is a staff product line manager in the Synopsys Solutions Group.


