The Next New Memories

A new crop of memories in R&D could have a big impact on future compute architectures.

Several next-generation memory types are ramping up after years of R&D, but there are still more new memories in the research pipeline.

Today, several next-generation memories, such as MRAM, phase-change memory (PCM) and ReRAM, are shipping to one degree or another. Some of the next new memories are extensions of these technologies. Others are based on entirely new technologies or involve architectural changes, such as near- or in-memory computing, which bring the processing tasks near or inside of memory. Pushing any of them out of R&D involves overcoming a number of technical and business hurdles, and it’s unlikely that all of them will succeed. But some are especially promising and potentially targeted to replace today’s DRAM, NAND and SRAM.

Among the next new memory types are:

  • FeFET or FeRAM: A next-generation ferroelectric memory.
  • Nanotube RAM: In R&D for years, nanotube RAM is targeted to displace DRAM. Others are combining carbon nanotube logic and next-generation memories on the same device.
  • Phase-change memory: After shipping the first PCM devices, Intel is readying a new version. Others may enter the PCM market.
  • ReRAM: Future versions are positioned for AI apps.
  • Spin-orbit torque MRAM (SOT-MRAM): A next-generation MRAM targeted to replace SRAM.

There are additional efforts pushing in the vertical direction. For example, some are developing 3D SRAM, which stacks SRAM on logic as a potential replacement for planar SRAM.

While some new memory types are finally shipping, the jury is still out on what comes next. “We are starting to see these emerging or next-gen memories finally gaining more traction, but they are still in the early development stages,” said Alex Yoon, senior technical director at Lam Research. “SOT and FeRAM are promising. However, whether it is needed or not will be more determined by economics.”

Current and future next-gen memories face other challenges. “There’s an explosion of new memory types with new materials, storage concepts, and materials technology,” said Scott Hoover, principal yield consultant at KLA. “This presents significant challenges in the areas for material and structural characterization. It is very possible that the cadence of technology advancement and fundamental understanding will be gated by our ability to characterize, measure, control and improve unique materials and structures.”

All told, the current and future next-gen memories may find a niche, but they won’t dominate the landscape. “Emerging memory is not expected to impede significantly on existing NAND or DRAM markets over the next 5-10 years as stand-alone products,” Hoover said.

Replacing SRAM
Today’s systems integrate processors and graphics with several tiers of memory and storage, often referred to as the memory/storage hierarchy. In the first tier of today’s hierarchy, SRAM is integrated into the processor for fast data access. DRAM, the next tier, is separate and used for main memory. Disk drives and NAND-based solid-state storage drives (SSDs) are used for storage.
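As a rough mental model of that hierarchy, the sketch below lists the tiers with order-of-magnitude latencies and volatility. The figures are illustrative assumptions, not vendor specifications.

```python
# Rough mental model of the memory/storage hierarchy described above.
# Latency figures are order-of-magnitude assumptions for illustration,
# not vendor specifications.

MEMORY_HIERARCHY = [
    # (tier,                 technology,  approx. latency, volatile)
    ("on-chip cache",        "SRAM",      "~1 ns",         True),
    ("main memory",          "DRAM",      "~10-100 ns",    True),
    ("solid-state storage",  "NAND SSD",  "~10-100 us",    False),
    ("bulk storage",         "HDD",       "~1-10 ms",      False),
]

def describe(hierarchy):
    """Print each tier, fastest (and most expensive per bit) first."""
    for tier, tech, latency, volatile in hierarchy:
        kind = "volatile" if volatile else "non-volatile"
        print(f"{tier:22s} {tech:10s} {latency:12s} {kind}")

if __name__ == "__main__":
    describe(MEMORY_HIERARCHY)
```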


Fig. 1: Emerging Memories for Pervasive Data and Compute. Source: Applied Materials

DRAM and NAND are struggling to keep up with the bandwidth and/or power requirements in systems. DRAM is cheap, but it consumes power because its cells must be constantly refreshed. DRAM is also volatile, meaning it loses data when the power is shut off. NAND, meanwhile, is cheap and non-volatile—it retains data when the system is shut down. But NAND and disk drives are slow.

So for years, the industry has been searching for a “universal memory,” one that combines the speed of DRAM with the non-volatility of flash and could replace both. The contenders are MRAM, PCM and ReRAM. The new memories make some bold claims. For example, STT-MRAM features the speed of SRAM and the non-volatility of flash with unlimited endurance. Compared to NAND, ReRAM is faster and bit-alterable. And so on.

Today, though, the industry is still looking for a universal memory. “For technology developers, we’ve been imagining that one day, some type of universal memory or killer memory will be able to replace SRAM, DRAM and flash at the same time,” said David Hideo Uriu, product marketing director at UMC. “Next-generation memories are still not able to replace any of the traditional memories, but they can combine the traditional strengths of memories to fulfill the demand for niche markets.”

For some time, MRAM, PCM and ReRAM have been shipping, mostly for niche markets. So DRAM, NAND and SRAM remain the mainstream memories.

But in R&D, the industry is working on several new technologies, including a potential SRAM replacement. Generally, processors integrate a CPU, SRAM and a variety of other functions. The SRAM closest to the CPU, called Level 1 (L1) cache, holds the instructions and data the processor needs most urgently. When the processor requests something that isn’t in L1, the result is a cache miss, so processors also integrate second- and third-level caches, called L2 and L3.

SRAM-based L1 cache is fast, with latencies of less than a nanosecond. But SRAM also occupies too much space on the chip. “SRAM is facing challenges in terms of the cell size. As you scale and go to 7nm, the cell sizes are 500F²,” said Mahendra Pakala, managing director of the memory group at Applied Materials.
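To see why cache latency and hit rates matter so much, a rough average memory access time (AMAT) calculation helps. The sketch below uses hypothetical hit rates and latencies purely for illustration; none of the figures come from the vendors quoted in this article.

```python
# Back-of-the-envelope average memory access time (AMAT) through the
# L1/L2/L3/DRAM hierarchy described above. All hit rates and latencies
# are hypothetical round numbers used only for illustration.

def amat(cache_levels, dram_latency_ns):
    """cache_levels: list of (hit_rate, latency_ns) from L1 down to L3."""
    total_ns, p_reach = 0.0, 1.0
    for hit_rate, latency_ns in cache_levels:
        total_ns += p_reach * latency_ns   # every access reaching this level pays its latency
        p_reach *= (1.0 - hit_rate)        # fraction that misses and falls through
    return total_ns + p_reach * dram_latency_ns  # remaining misses go all the way to DRAM

# Hypothetical example: sub-ns L1, slower but larger L2 and L3, then DRAM.
levels = [(0.90, 0.5), (0.80, 3.0), (0.70, 12.0)]
print(f"Average access time: ~{amat(levels, dram_latency_ns=80.0):.2f} ns")
```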

For years, the industry has been looking to replace SRAM, and there have been several possible contenders. One of them is spin-transfer torque MRAM (STT-MRAM), which, as noted above, combines SRAM-like speed with the non-volatility of flash and unlimited endurance.

STT-MRAM is a one-transistor architecture with a magnetic tunnel junction (MTJ) memory cell. It uses the magnetism of electron spin to provide non-volatile properties in chips. The write and read functions share the same parallel path in the MTJ cell.

Everspin already is shipping STT-MRAM devices for SSDs. In addition, several chipmakers are focusing on embedded STT-MRAM, which is split into two markets—an embedded flash replacement and cache.

In the first case, STT-MRAM is gearing up to replace embedded NOR flash in chips. In the second, it is targeted to displace SRAM, at least for L3 cache. “STT-MRAM is evolving for denser embedding into SoCs, where its smaller cell size, lower standby power requirements, and non-volatility offer a compelling value proposition against the much larger and volatile SRAM used as common on-board memory and last-level cache,” said Javier Banos, director of marketing for advanced deposition and etch at Veeco.

But STT-MRAM isn’t fast enough to replace SRAM for L1 and/or L2 cache. There are some reliability issues as well. “We believe for STT-MRAM, the access times will saturate around 5ns to 10ns,” Applied’s Pakala said. “When you go L1 and L2 cache, we believe you need to go to SOT-MRAM.”

Still in R&D, SOT-MRAM resembles STT-MRAM. The difference is that SOT-MRAM integrates an SOT layer under the MTJ. Switching of the magnetic free layer is induced by injecting an in-plane current into that adjacent SOT layer, according to Imec.

“When you switch STT-MRAM, you need to push current through the MTJ,” said Arnaud Furnemont, memory director at Imec. “In SOT-MRAM you have two paths, one for the write and one for the read. The read is like STT. You read through the MTJ. The write is not through the MTJ. This is a big benefit because then you can cycle the device and optimize it to have longer life times. The second big advantage is the speed.”
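The access-path difference Furnemont describes can be captured in a toy model. The sketch below is purely conceptual and is not Imec’s implementation; the stress counter simply illustrates why routing the write current around the MTJ, as SOT-MRAM does, is attractive for endurance.

```python
# Toy model of the access-path difference described above. This is a
# conceptual sketch, not Imec's implementation; the stress counter just
# illustrates why writing around the MTJ (as SOT-MRAM does) helps endurance.

class STTCell:
    """STT-MRAM: write and read currents both pass through the MTJ."""
    def __init__(self):
        self.state = 0
        self.mtj_write_stress = 0

    def write(self, bit):
        self.state = bit
        self.mtj_write_stress += 1   # write current crosses the tunnel barrier

    def read(self):
        return self.state            # low-amplitude read current also crosses the MTJ


class SOTCell:
    """SOT-MRAM: writes use an adjacent SOT track, reads go through the MTJ."""
    def __init__(self):
        self.state = 0
        self.mtj_write_stress = 0    # the tunnel barrier is never written through

    def write(self, bit):
        self.state = bit             # in-plane current in the SOT track flips the free layer

    def read(self):
        return self.state            # read path is the same as STT


for cell in (STTCell(), SOTCell()):
    for i in range(1000):
        cell.write(i & 1)
    print(type(cell).__name__, "write-induced MTJ stress events:", cell.mtj_write_stress)
```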

Today, the biggest problem with SOT-MRAM is that, without an external magnetic field to break the switching symmetry, it switches to the intended state only about 50% of the time, which is why it’s still in R&D. “Compared to SRAM, SOT-MRAM can have potential advantages such as higher density and lower power consumption due to its non-volatility,” UMC’s Uriu said. “SOT-MRAM needs to be implemented into cost-effective applications with willing customers.”

To address the problem, Imec has developed a “field-free switching” SOT-MRAM. Imec embeds a ferromagnet in the hardmask, which shapes the SOT track. This enables fast switching at low power.

SOT-MRAM isn’t ready yet. In fact, it will take two or more years before the industry determines whether it’s viable.

Meanwhile, in R&D, work is underway on other potential SRAM replacements, namely 3D SRAM. In 3D SRAM, SRAM dies are stacked on the processor and connected using through-silicon vias (TSVs).

3D SRAM shortens the interconnect distance between the processor and SRAM. Time will tell if 3D SRAM is a viable approach.

DRAM contenders
As with SRAM, the industry has been trying for years to replace DRAM. In today’s compute architectures, data moves back and forth between the processor and DRAM. At times this exchange adds latency and increases power consumption, a problem sometimes called the memory wall.

DRAM has fallen behind in bandwidth requirements. Plus, DRAM scaling is slowing at today’s 1xnm node.

“Our applications require a lot of memory. This problem has become worse with machine learning applications. They require a lot of memory,” said Subhasish Mitra, professor of electrical engineering and computer science at Stanford University. “If you could put all the memory on a chip, life would be great. You wouldn’t have to go off chip to DRAM and spend a lot of energy and time trying to access memory. So we have to do something about it.”

There are a number of options here—sticking with DRAM, replacing DRAM, stacking DRAM into high-bandwidth memory modules, or moving to a new architecture.

The good news is that DRAM isn’t standing still, and the industry is migrating from today’s DDR4 interface standard to next-generation DDR5 technology. For example, Samsung recently introduced a 12Gb LPDDR5 mobile DRAM device. At a data rate of 5,500Mb/s, the device is 1.3 times faster than LPDDR4 chips.
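As a quick sanity check on those figures, the per-pin rate implies roughly the LPDDR4-class rate it is being compared against, and a peak per-channel bandwidth can be estimated if a channel width is assumed. The 16-bit channel width below is an assumption for illustration, not a figure from Samsung.

```python
# Quick sanity check on the cited LPDDR5 figures. The per-pin data rate
# and 1.3x speedup come from the article; the 16-bit channel width is an
# assumption used only to show how per-pin rates translate into bandwidth.

lpddr5_per_pin_mbps = 5500            # Mb/s per pin, per the article
speedup_vs_lpddr4 = 1.3               # per the article

implied_lpddr4_mbps = lpddr5_per_pin_mbps / speedup_vs_lpddr4
print(f"Implied LPDDR4-class rate: ~{implied_lpddr4_mbps:.0f} Mb/s per pin")

channel_width_bits = 16               # assumed LPDDR channel width
peak_gb_per_s = lpddr5_per_pin_mbps * channel_width_bits / 8 / 1000
print(f"Peak bandwidth per x16 channel: ~{peak_gb_per_s:.1f} GB/s")
```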

Soon, though, OEMs will have other memory choices besides DDR5 DRAMs. A working group within JEDEC (JC-42.4) is developing a new DDR5 NVRAM spec that eventually will enable OEMs to drop the various new memory devices into a DDR5 socket without modification. “The NVRAM specification encompasses carbon nanotube memory, phase-change memory, resistive RAM and theoretically magnetic RAM,” said Bill Gervasi, principal systems architect at Nantero. “We are unifying all the architectures.”

This spec could make it easier to use a new memory type in systems, and it gives OEMs a path toward eventually replacing DRAM.

Still, it’s difficult to replace both DRAM and NAND. They are cheap, proven, and can handle most tasks. In addition, they both have roadmaps for future improvements. “NAND has 5-plus years and 3-plus generations to go. DRAM will slowly scale for the next 5 years,” said Mark Webb, principal at MKW Ventures Consulting. “We have solid new memories that are actually available and shipping. These will grow and augment, not replace, DRAM and NAND.”

One new memory type is gaining steam, namely 3D XPoint. Introduced by Intel in 2015, 3D XPoint is based on a technology called PCM. Used in SSDs and DIMMs, PCM stores information by switching a chalcogenide material between its amorphous and crystalline phases.

Intel is shipping SSDs based on 3D XPoint, but the company was late with the technology. “I put together a forecast in 2015 based on an assumption that Intel was going to be shipping the DIMMs by 2017. They ended up not doing that until 2019,” said Jim Handy, an analyst at Objective Analysis.

Nonetheless, built around a two-layer stacked architecture, Intel’s 3D XPoint device comes in 128-gigabit densities using 20nm geometries. “It’s a great persistent memory, but it’s not replacing NAND or DRAM,” MKW’s Webb said.

Now, Intel and Micron are developing the next version of PCM, which will appear in 2020. The next-generation 3D XPoint is expected to be based on 20nm process technology, but it may have four stacked layers instead of two, according to Webb. “We would expect it to be twice the density. Today, it’s 128Gbit. We are expecting 256Gbit for the next generation,” he said.

There are other scenarios. In the future, Objective Analysis’ Handy sees 3D XPoint staying as a two-layer device, but moving to 15nm feature sizes. Time will tell.
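A first-order, purely geometric comparison of the two scenarios (more layers at 20nm versus a shrink to 15nm) looks like this. Round numbers only, not product specifications.

```python
# First-order, purely geometric comparison of the two scaling scenarios
# above: doubling the layer count at 20nm versus keeping two layers and
# shrinking to 15nm. Round numbers only, not product specifications.

base_density_gbit = 128          # current 3D XPoint die, per the article
base_layers, base_feature_nm = 2, 20

# Scenario 1: four layers at the same 20nm feature size (Webb's expectation)
four_layer_gbit = base_density_gbit * (4 / base_layers)

# Scenario 2: two layers, shrink from 20nm to 15nm (Handy's expectation)
shrink_gbit = base_density_gbit * (base_feature_nm / 15) ** 2

print(f"Four layers at 20nm: ~{four_layer_gbit:.0f} Gbit")
print(f"Two layers at 15nm : ~{shrink_gbit:.0f} Gbit")
```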

While PCM is ramping up, other technologies such as ferroelectric FETs (FeFETs) are still in R&D. “In FeFET memory cells, a ferroelectric insulator is inserted into the gate stack of a standard MOSFET device,” explained Stefan Müller, chief executive of Ferroelectric Memory (FMC).

“Compared to the standard dielectric HfO2 in use today, ferroelectric HfO2 shows a permanent dipole moment, which changes the threshold voltage of the transistor in a nonvolatile manner,” Müller said. “By appropriate choice of read out voltages, either a high current or a low current flows through the transistor.”
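Müller’s description of the read mechanism can be reduced to a toy model: the stored polarization shifts the transistor’s threshold voltage, and a read voltage chosen between the two threshold levels yields either a high or a low current. The voltages below are hypothetical values for illustration only.

```python
# Toy model of the FeFET read mechanism Mueller describes: the stored
# polarization shifts the transistor's threshold voltage, and a read
# voltage chosen between the two threshold levels yields a high or low
# current. All voltages are hypothetical values for illustration only.

VT_LOW  = 0.3   # threshold voltage for one polarization state, volts (assumed)
VT_HIGH = 1.1   # threshold voltage for the opposite state, volts (assumed)
V_READ  = 0.7   # read voltage chosen between the two threshold levels

def fefet_read(stored_bit, v_read=V_READ):
    """Return True if the cell conducts (high current) at the read voltage."""
    vt = VT_HIGH if stored_bit else VT_LOW
    return v_read > vt           # the transistor turns on only if Vread exceeds Vt

print("bit=0 ->", "high current" if fefet_read(0) else "low current")
print("bit=1 ->", "high current" if fefet_read(1) else "low current")
```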

FMC and others are developing embedded and standalone FeFET devices. An embedded FeFET would be integrated into a controller. A standalone device may become a new memory type or a DRAM replacement. “FeRAM is a good alternative, which uses far less energy than DRAM. But endurance needs to be improved,” Lam’s Yoon said.

It’s unclear what direction FeFETs will go, but there are some challenges here. “Memory cells based on ferroelectric HfO2 can show data retention beyond 250°C, cycling endurance >10¹⁰ cycles, write/read speed in the 10ns regime, fJ energy consumption, and scalability to beyond finFET technology nodes,” FMC’s Müller said. “The challenge currently is to merge these metrics into one memory device, and in parallel into arrays of millions of memory cells, and each of these memory cells has to perform more or less identically.”

Meanwhile, for years, Nantero has been developing carbon nanotube RAMs for embedded and DRAM-replacement apps. Carbon nanotubes are cylindrical structures, which are strong and conductive. Still in R&D, Nantero’s NRAMs are faster than DRAM and nonvolatile like flash. But this is taking longer than expected to commercialize.

Fujitsu, the first customer for NRAMs, is expected to sample parts in 2019 with production slated for 2020.

Carbon nanotubes are moving in other directions. In 2017, DARPA launched several programs, including 3DSoC. MIT, Stanford and SkyWater are partners in the 3DSoC program, which aims to develop monolithic 3D devices that stack ReRAM on top of carbon nanotube logic. ReRAM is based on the electronic switching of a resistor element.

Still in R&D, the technology isn’t a DRAM replacement. Instead, it falls under the so-called compute-in-memory category. The goal is to bring the memory and logic functions closer to alleviate the memory bottleneck in systems.

“You have to think about going to the third dimension,” Stanford’s Mitra said. “Otherwise, how are you going to put everything on a chip?”

Currently, the 3DSoC device is a two-layer 3D structure, which places ReRAM on carbon nanotube logic. A four-layer device is due by year’s end. The goal is to bring up production and provide multi-project wafer runs by 2021.

Recently, the group transferred the technology to SkyWater. The foundry vendor plans to make the devices using a 90nm process on 200mm wafers. “The 3DSoC architecture includes tiers of carbon nanotube-based transistors. They are made in both n and p types to make a CMOS transistor technology,” said Brad Ferguson, CTO of SkyWater. “That can be combined with other tiers of ReRAM memory, which would include a CNT-based access transistor.”

In the fab, carbon nanotubes are formed using a deposition process. The challenge is that nanotubes are prone to variations and misalignments during the process.

“The key challenges that we see and have paths to overcome include three primary things. The first is purity of the carbon nanotubes. There is a lot of variability in carbon nanotubes in the source material. Part of the program is improving the purity of the source material such that we get single-wall semiconducting carbon nanotubes with high purity,” Ferguson said. “The second and third challenges relate to integration as a transistor. That’s variability and stability of the transistor performance.”

The technology is intriguing—if it works. “The fact is that we can scale this technology down after demonstrating this on 90nm. That’s combined with the stated goal of this program, which is to outperform 7nm planar technology. This means if the program is successful, it could reset node scaling on a different curve in terms of complexity, performance and cost,’’ he added.

AI memory
In the works for years, ReRAM once was touted as a NAND replacement. But NAND has scaled further than previously expected, causing many to reposition ReRAM.

Today, some are working on embedded ReRAM. Others are developing standalone ReRAM for niche-oriented applications. Longer term, ReRAM is expanding its horizons. It’s targeted for AI apps, a DRAM replacement, or both.

One ReRAM company, Crossbar, is developing a standalone device that could potentially displace DRAM. This involves a crossbar-like architecture with ReRAM and logic.

“After talking to customers, especially in data centers, the biggest pain point is DRAM. It’s not NAND. It’s DRAM because of the energy consumption and cost,” said Sylvain Dubois, vice president of strategic marketing and business development at Crossbar. “For high-density standalone applications, we are targeting DRAM replacement in data centers for read-intensive applications. At 8X the density of DRAM and about 3X to 5X cost reduction, this provides great TCO reduction, along with massive energy savings in hyperscale data centers.”

Crossbar’s ReRAM technology also is targeted at machine learning. Machine learning typically involves a neural network, which crunches large amounts of data, identifies patterns, and learns which features are most important.

ReRAM is targeted for even more advanced apps. “There are great opportunities to use ReRAM in novel ways such as analog computing and neuromorphic computing, but this is more at the research phase,” Dubois said.
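One reason ReRAM keeps coming up in this context is the analog compute-in-memory idea: weights are stored as cell conductances in a crossbar, and driving the rows with voltages while summing the currents on the columns performs a matrix-vector multiply in a single step. The sketch below is a generic illustration of that principle with made-up values, not a description of Crossbar’s or Weebit’s products.

```python
# Generic sketch of analog compute-in-memory on a ReRAM crossbar, with
# made-up values. Not a description of Crossbar's or Weebit's products.
# Weights are stored as cell conductances; driving the rows with voltages
# and summing the currents on each column performs a matrix-vector
# multiply in one step (Ohm's law per cell, Kirchhoff's law per bitline).

def crossbar_mvm(conductances, voltages):
    """conductances: rows x cols (siemens); voltages: one per row (volts).
    Returns the column (bitline) currents in amps, i.e. analog dot products."""
    cols = len(conductances[0])
    currents = [0.0] * cols
    for g_row, v in zip(conductances, voltages):
        for j in range(cols):
            currents[j] += g_row[j] * v   # I = G * V, summed down each column
    return currents

# Hypothetical 3x2 weight array (conductances in siemens) and input voltages.
G = [[50e-6, 10e-6],
     [20e-6, 40e-6],
     [ 5e-6, 30e-6]]
V = [0.2, 0.1, 0.3]
print([f"{i * 1e6:.2f} uA" for i in crossbar_mvm(G, V)])
```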

Neuromorphic computing also uses a neural network, but here advanced ReRAM is attempting to replicate the brain in silicon. The goal is to mimic the way the brain moves information using precisely timed pulses, or spikes, and there is much research underway in this area, particularly on the materials front.

“The big question is what needs to be done to really enable it,” said Srikanth Kommu, executive director of the semiconductor business at Brewer Science. “There is a lot of research around whether materials can make a difference in this area. Right now, we’re not sure.”

There are two aspects to materials. One involves speed and durability. The second involves manufacturability and defectivity, both of which affect yield and ultimately cost. “A lot of this is based on tolerances and defectivity,” said Kommu. “If defectivity is 100, you need 70% improvement every two years.”

Interest in neuromorphic architectures is growing with the adoption and spread of AI/ML for both power and performance reasons. Leti and ReRAM startup Weebit Nano recently demonstrated a form of neuromorphic computing—they performed object recognition tasks in systems.

The demo used Weebit’s ReRAM technology, running inference tasks using spiking neural network algorithms. “Artificial intelligence is expanding rapidly. We are seeing applications in face recognition, autonomous vehicles, and use in medical prognosis, just to name a few domains,” said Coby Hanoch, chief executive of Weebit.

Conclusion
STT-MRAM has also been proposed as a DRAM replacement. But neither STT-MRAM nor the other new memories are expected to displace DRAM or NAND anytime soon.

Still, current and future generations of memories are worth watching. To date, they haven’t disrupted the landscape. But they are making a dent against the incumbents in the ever-changing memory market. “We are at a place with emerging memory technologies where the race is not yet won,” Objective Analysis’ Handy said.
