Sorting Out Next-Gen Memory

A long list of new memory types is hitting the market, but which ones will be successful isn’t clear yet.


In the data center and related environments, high-end systems are struggling to keep pace with the growing demands in data processing.

There are several bottlenecks in these systems, but one segment that continues to receive an inordinate amount of attention, if not part of the blame, is the memory and storage hierarchy.

SRAM, the first tier of this hierarchy, is integrated into the processor for cache to enable fast data access. Then, DRAM, the next tier in the hierarchy, is used for main memory. And finally, disk drives and NAND-based solid-state storage drives (SSDs) are used for storage.

In 2007, SSDs entered the data center. SSDs helped reduce the growing latency gap between DRAM and disk drives in systems by a factor of 10, according to Forward Insights. But with an ongoing explosion of data, there is roughly a 1,000X latency gap between DRAM and NAND in today’s systems.
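The scale of those gaps can be illustrated with rough, order-of-magnitude latency figures. A minimal sketch (the numbers below are illustrative assumptions, not vendor specifications):

```python
# Rough, order-of-magnitude access latencies for each tier (illustrative assumptions).
LATENCY_NS = {
    "SRAM cache": 1,            # ~1 ns
    "DRAM": 100,                # ~100 ns
    "NAND SSD": 100_000,        # ~100 us
    "Disk drive": 10_000_000,   # ~10 ms
}

dram = LATENCY_NS["DRAM"]
# The gap SSDs are meant to fill, relative to DRAM:
print(LATENCY_NS["NAND SSD"] // dram)     # 1000  -- the ~1,000X DRAM-to-NAND gap
print(LATENCY_NS["Disk drive"] // dram)   # 100000 -- the far larger DRAM-to-disk gap
```

Even after SSDs closed the worst of the DRAM-to-disk gap, three orders of magnitude still separate DRAM from NAND, which is the opening storage-class memory aims at.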

“The market is now trying to consume more bits at a faster rate,” said Jon Carter, vice president of storage emerging memory products at Micron Technology. “We’ve innovated the network, storage stack and the software stack. Now, the bottleneck is the underlying memory.”

In fact, for years, the industry has been searching for a new memory type that can solve the latency gap. In theory, the technology would have the performance of DRAM and the cost and nonvolatile characteristics of flash.

Some refer to that as storage-class memory. As before, the next-generation candidates include MRAM, phase-change memory, ReRAM and even carbon nanotube RAMs.

Today, some of these technologies are shipping or sampling in the market. But some will never make it due to cost and technical factors. “Emerging memories have been in the lab for a decade,” said Gregg Bartlett, senior vice president of the CMOS Platforms Business Unit at GlobalFoundries. “Emerging memories have always offered a lot of promise. But the question is, can you manufacture them? Now, it is making a transition into mainstream technology.”

So, which technologies ultimately will prevail? There are no simple answers. To help OEMs get ahead of the curve, Semiconductor Engineering has taken a look at the status of the new memory types and the challenges ahead.

Wanted: Universal memory
Years ago, the industry hoped to develop a so-called universal memory, in theory a single device that could replace DRAM, flash and SRAM. At that time, the universal memory candidates were the same technologies vying for dominance today: MRAM, phase-change memory and ReRAM.

But because of the complexity and soaring I/O requirements in systems, there was no single next-generation memory type that could replace existing memory. And conventional memory scaled much further than previously thought, pushing out the need for the next-generation memory technologies.

Today, though, there is room for a new memory type, thanks to the growing latency gap in systems. A new technology won’t replace everything, but it could play a role in specific applications and would co-exist with today’s memories.

“I would expect these advanced memories to first find homes in applications that recognize or leverage one of their unique advantages,” said David Fried, chief technology officer at Coventor. “I would also expect there to be several different technologies that win in the end, just as SRAM, DRAM and NAND have coexisted for generations.”

Others agree. “With the IoT, self-driving cars and cloud storage, the appetite for a different class of memories is exploding,” said Harmeet Singh, vice president of dielectric etch products at Lam Research. “We expect the next-generation memories will co-exist with other memories, with 3D NAND being the workhorse for nonvolatile memory.”

But not all companies, or all of the newfangled memory types, will succeed. “There is only a certain amount of room for these technologies,” said Er-Xuan Ping, managing director of memory and materials within the Silicon Systems Group at Applied Materials.

3D XPoint, a next-generation memory type, could find a place for certain applications, according to Ping. “The other memories have a chance, but they need to pick up the pace to catch up,” he said.

And in the near term, it’s unlikely that the next-generation technologies will replace existing memory. “Today’s memory has been highly optimized in terms of the economics. DRAM is cheap for its functionality. NAND is too,” he said. “Anything new to replace that in terms of cost is very difficult.”

3D XPoint versus Z-SSD
Meanwhile, there are several ways to reduce the latency gap in systems. One approach is to develop faster SSDs using next-generation NAND and/or a new memory type.

Today, planar and 3D NAND are used in SSDs. Planar NAND is running out of steam at the 1xnm node, prompting the need for a replacement technology. That replacement is 3D NAND, which resembles a vertical skyscraper in which horizontal layers are stacked and then connected using tiny vertical channels.

SSDs have been moving into PCs. But the real action is taking place in the enterprise, where SSDs are displacing traditional disk drives. Disk drives for high-end servers have rotation speeds of 10,000 and 15,000 RPM.

“In general, NAND flash pricing continues to decrease year-over-year,” said Ryan Smith, director of NAND product marketing at Samsung Semiconductor. “The price points have become attractive enough for SSDs to displace 10K and 15K drives rapidly.”

NAND does have some drawbacks, however. Data must be programmed, or written, into small blocks in the device. The data can then be retrieved and read, but it can’t be updated in place. To update it, the entire block must be erased and re-written.
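That erase-before-rewrite constraint can be sketched in a few lines. This is a deliberately simplified model (real NAND programs individual pages within much larger erase blocks, and controllers hide this behind wear leveling):

```python
class NandBlock:
    """Toy model of a NAND erase block: locations can be programmed once,
    then the whole block must be erased before any of them can be rewritten."""

    def __init__(self, pages=4):
        self.pages = [None] * pages  # None = erased, i.e. writable

    def program(self, page, data):
        if self.pages[page] is not None:
            raise RuntimeError("page already programmed; erase the block first")
        self.pages[page] = data

    def erase(self):
        # Erase is block-granular: every page is cleared at once.
        self.pages = [None] * len(self.pages)

blk = NandBlock()
blk.program(0, "v1")
# Updating page 0 in place fails -- the whole block must be erased first.
try:
    blk.program(0, "v2")
except RuntimeError:
    blk.erase()          # wipes all pages, not just page 0
    blk.program(0, "v2")
```

It is exactly this erase cycle, which stalls updates and wears out cells, that bit-alterable memories such as ReRAM and MRAM avoid.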

At times, this process is too slow, at least for some high-end applications, according to experts. As a result, the market is ripe for a new memory technology that can help speed up the system.

For this, there are several candidates, such as 3D XPoint, ReRAM and now Z-SSD. Co-developed by Intel and Micron, 3D XPoint is supposedly up to 1,000 times faster and has up to 1,000 times greater endurance than NAND, and is 10 times denser than conventional memory.

3D XPoint fills the gap between NAND and DRAM. Intel will incorporate a 3D XPoint device in an enterprise SSD and a DIMM for servers. Meanwhile, Micron will use the device for an enterprise SSD. “3D XPoint is an enterprise product,” Micron’s Carter said. “You will see it in the high end of the SSD space. It will drive mission-critical workloads.”

3D XPoint faces some challenges, however. First, Intel and Micron will need to develop an entire new ecosystem, such as the controllers, for the technology. Then, Intel and Micron will need to drive down the costs for 3D XPoint, but this won’t happen overnight. “I’m expecting it to be a money-losing proposition for the first couple of years after it comes out,” said Jim Handy, an analyst with Objective Analysis. “And if it doesn’t have pricing below DRAM, then it’s not going to make any sense in the memory hierarchy.”

And what exactly is 3D XPoint? So far, Intel and Micron haven’t disclosed many details about the technology.

Initially, a 3D XPoint device will come in a 128-gigabit density. The first devices consist of two stacked layers connected by perpendicular conductors. At each intersection sits a memory cell paired with a separate switching element. Each memory cell stores a single bit of data.

Based on various calculations, 3D XPoint is a 19nm technology, according to Seshubabu Desu, a memory expert and chief technology officer at 4DS, a ReRAM startup. Desu recently gave a presentation in which he provided insights into the new memory types.

“(Both Intel and Micron) insist that 3D XPoint is not phase-change memory,” Desu said. “They do say the switch, or the selector, is an ovonic threshold switch. That’s a chalcogenide-based material.”

An ovonic switch, a two-terminal device, is reminiscent of phase-change memory, sometimes called PCM. PCM stores information in the amorphous and crystalline phases. It can be reversibly switched with an external voltage.

All told, 3D XPoint incorporates the characteristics of both phase-change and ReRAM. “(3D XPoint) is a flavor of PCM,” Desu said.

3D XPoint could be scaled from 19nm to 15nm, but pushing it to 10nm and beyond remains challenging, according to experts. “The maximum density you can get if you use the current lithography techniques is around maybe 4 or 6 stacks. That gives you 512 gigabits,” Desu said.

Meanwhile, in response to 3D XPoint, Samsung recently rolled out a new technology called Z-SSD. Based on NAND, Z-SSD is targeted for high-end enterprise SSDs. Z-SSD incorporates a new circuit design and controller, enabling 4X lower latency and 1.6X better sequential reading than existing high-end SSDs.

Samsung is exploring various next-generation memory types in the lab, but the company said that it makes more sense to develop a new and faster version of NAND for high-end applications. “The reason why we picked a NAND-based technology is because it’s a proven technology,” Samsung’s Smith said. “We wanted to pick something that was in production and already had the efficiencies built in. So in other words, it has the cost structure that the market wants.”

Still, Samsung has yet to disclose the details behind Z-SSD. “Z-SSD is Samsung’s counter to 3D XPoint,” said Alan Niebel, president of Web-Feet Research. “Z-SSD uses a form of V-NAND (3D NAND), which is probably TLC with shorter bit lines, enabling this to be low-latency NAND.”

Embedded memory battle
Samsung, according to Niebel, is working on other memory types. “They will ship (Z-SSD) first,” he said. “And later, they will bring out MRAM and ReRAM as the infrastructure is able to support them.”

Others also are working on ReRAM, which is supposedly the successor to 3D NAND. ReRAM is nonvolatile and based on the electronic switching of a resistor element material between two stable resistive states. ReRAM delivers fast write times with more endurance than today’s flash.

ReRAM is a difficult technology to develop. In addition, 3D NAND will likely extend further than previous expectations, pushing out the need for ReRAM as a possible replacement for NAND.

Initially, though, ReRAM and other next-generation memory types are moving into the embedded market. Today, the embedded market is dominated by traditional flash memory. Embedded flash is used in microcontrollers (MCUs) and other devices.

The mainstream market for embedded flash is at 40nm and above, although the industry is beginning to migrate towards smaller geometries. “The focus on the development side is on 28nm,” said Walter Ng, vice president of business management at UMC.

There are other changes as well. “There is a set of customers using nonvolatile memory in the MCU space. They have to hit a particular time to market. In general, we are driving with those customers on the more traditional solutions. But they also have an interest in some of the other and more unique solutions,” Ng said.

For embedded, the new solutions include ReRAM, MRAM and carbon nanotube RAMs. In 2013, Panasonic shipped the world’s first ReRAM for embedded applications. It integrated a 180nm ReRAM device with an 8-bit controller.

More recently, Adesto is shipping a ReRAM-based technology called conductive bridging RAM (CBRAM). Targeted for the EEPROM replacement market, Adesto’s latest CBRAMs have 50 to 100 times lower power than comparable memory products.

Another company, Crossbar, will shortly ship an 8-megabit ReRAM for the embedded market. Built on a 40nm process, Crossbar’s ReRAM is based on a 1T1R (one transistor/one resistor) technology.

“Embedded ReRAM is much faster and has lower power than existing flash technologies,” said Sylvain Dubois, vice president of marketing and business development at Crossbar. “It’s a bit-alterable memory. You don’t have to erase a full block and program a full page to update your memory.”

Crossbar’s next chip is a 1-gigabit ReRAM, based on a 28nm process. “It could be for high-density embedded as well as standalone applications,” said George Minassian, chief executive of Crossbar. “It will go after NOR.”

Fig. 1: Crossbar’s ReRAM architecture uses non-conductive, amorphous silicon as the host material for metallic filament formation. Source: Crossbar

Then, in another development, Fujitsu recently licensed Nantero’s NRAM, a nonvolatile RAM based on carbon nanotubes. NRAM is claimed to be faster than DRAM and nonvolatile like flash, with essentially zero power consumption.

Today, the industry continues to develop traditional embedded flash, but it is also working on the next-generation technologies. So will the next-generation memory types displace traditional embedded flash? “I don’t believe, nor does anybody else feel, that a switch will be thrown overnight and we will go from some of the traditional solutions to some of these other solutions,” UMC’s Ng said.

What about MRAM?
Meanwhile, momentum is building for a second-generation MRAM technology called spin-transfer torque MRAM (STT-MRAM). MRAM uses the magnetism of electron spin to provide non-volatility. It promises the speed of SRAM and the non-volatility of flash with virtually unlimited endurance.

Several companies are working on STT-MRAM, although Everspin is still the only vendor shipping it. Everspin refers to its technology as ST-MRAM. Its latest ST-MRAM is a 256-megabit device with a DDR3 interface, based on a 40nm process from its foundry partner, GlobalFoundries.

Fig. 2: STT-MRAM memory cell and dynamic spin motion. Source: University of Minnesota, College of Science & Engineering.

Everspin’s new device could solve a major problem. In simple terms, SSDs use a DRAM-based buffer to help speed up the system. But if the system loses power, the data in that buffer is at risk. To guard against power loss, SSDs also incorporate capacitors, but this adds cost to the system.

To solve that issue, ST-MRAM could be incorporated in the write-buffer socket in the SSD. ST-MRAM is a persistent, nonvolatile memory. “Then, you can eliminate the need for these banks of capacitors,” said Joe O’Hare, director of product marketing at Everspin.
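The advantage of a persistent write buffer can be sketched as follows. This is a toy model and assumes nothing about Everspin’s actual controller design; it only contrasts volatile and nonvolatile buffering behavior:

```python
class WriteBuffer:
    """Toy SSD write buffer. A DRAM-based buffer loses unflushed data on
    power loss; a nonvolatile (e.g., MRAM-style) buffer retains it."""

    def __init__(self, nonvolatile):
        self.nonvolatile = nonvolatile
        self.pending = []   # writes buffered but not yet flushed to NAND

    def write(self, data):
        self.pending.append(data)

    def power_loss(self):
        if not self.nonvolatile:
            self.pending.clear()  # volatile DRAM contents vanish

dram_buf = WriteBuffer(nonvolatile=False)
mram_buf = WriteBuffer(nonvolatile=True)
for buf in (dram_buf, mram_buf):
    buf.write("block A")
    buf.power_loss()

print(dram_buf.pending)  # []          -- lost without backup capacitors
print(mram_buf.pending)  # ['block A'] -- survives the power loss
```

With a nonvolatile buffer, the capacitor bank that protects the DRAM buffer becomes unnecessary, which is the cost saving Everspin is targeting.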

Everspin is also developing a 1-gigabit ST-MRAM, based on a 28nm process from GlobalFoundries. At the same time, GlobalFoundries is developing an embedded version of Everspin’s ST-MRAM technology.

Initially, GlobalFoundries’ eMRAM technology will be offered on its 22nm FD-SOI platform. “We will do both code storage and working data storage,” GlobalFoundries’ Bartlett said. “We will eventually be able to do very large arrays of L3 cache with a footprint smaller than 6T SRAM.”

To be sure, there are several applications for MRAM. “STT-MRAM has several clearly-visible, high-volume and low-hanging fruits,” said Barry Hoberman, chief executive of Spin Transfer Technologies, a developer of STT-MRAM Technology. “I see great opportunities in storage, mobile, automotive and IoT. STT-MRAM is well-suited for both stand-alone and embedded applications.

“In the very long-term, I do think that the MRAM bit-cell will become smaller and less expensive than the DRAM capacitor cell. But until that happens, DRAM will continue to have a place where ultimate low-cost and density are critical, and non-volatility, low power, and instant ‘on’ are not valued,” Hoberman said. “Until that day, however, STT-MRAM will be an excellent candidate to replace DRAM in places where volatility, refresh power, and refresh duty cycle are burdensome for system designers.”

So will STT-MRAM ever replace DRAM? Not in the near term. For one thing, the cheapest alternative may simply be reconfiguring or extending the life of existing DRAM technology.

Marvell’s approach is to improve the functionality of DRAM by adding Final Level Cache, automatically loading code as needed and purging code that isn’t needed. “This is optimized to improve the caching rate by up to 99%,” said Sheng Huang, Marvell’s director of technical marketing. “You do this by taking advantage of an algorithm, which allows us to design a smaller lookup table.”

Rambus likewise is trying to eke more out of DRAM by altering the basic data architecture. “The bottleneck is moving the data,” said Steven Woo, vice president of solutions marketing at Rambus. “It’s better to move the processing to the data, which is smaller. We need to think about how to address more modern bottlenecks.”

Still, there are plenty of next-generation memory types here already, and more are on the way. Time will tell which ones will prevail—and which ones will not.

Related Stories
Memory Hierarchy Shakeup
3D NAND Market Heats Up
What’s Next For DRAM?
The Future Of Memory Part 3
Part 3: Security, process variation, shortage of other IP at advanced nodes, and too many foundry processes.
The Future Of Memory Part 2
Part 2: The impact of 2.5D and fan-outs on power and performance, and when this technology will go mainstream.
The Future Of Memory Part 1
Part 1: DDR5 spec being defined; new SRAM under development.

  • witeken

    “3D XPoint could be scaled from 19nm to 15nm, but pushing it to 10nm and beyond remains challenging, according to experts.”

    You can call yourself an expert all you want, but how can those so-called experts know anything about 3D XPoint if they don’t even know what it is? Or more precisely, they do know what it is (PCM with a switch), but they know nothing about its characteristics since IMFT developed it secretly. The only thing we know is that IMFT last year said they can scale it (1) by scaling it according to Moore’s Law, (2) by stacking more layers, although that’s not as cost-effective and easy as with NAND, and (3) by going to 2 bits or more per cell.

  • everest333

    “3D XPoint is supposedly up to 1,000 times faster and has up to 1,000 times greater endurance than NAND, and is 10 times denser than conventional memory.”

    Actually, sure, they use the “up to” clause. However, these “up to” claims are apparently way off given the latest claims from Intel, and it’s also broken to date…


    “Lets start out with their statements from launch on July 28, 2015. Do pay attention to the numbers, they will matter later, starting with this slide from the launch.”

    “4x is really close to 10x, right

    Hmmm, 10x the density just became 4x the density, not good but that 4x number is backed up by multiple presentations SemiAccurate has seen from IDF2015 to weeks ago. This covers both public and NDA documents, all now stick to a number that is not 10x the density. ”

    “Data does not say 1000x, closer to 10x I think

    Actually if you are Intel, it is easy enough to top a 100x backpedaling if you are talking about broken technologies like Xpoint/Optane. Take a look at the endurance numbers in the launch slide, that would be another 1000x claim, and in the IDF2016 slide, that would be a 3x claim, both vs NAND so direct apples to apples comparisons, no wonder Apple isn’t going to use this. In case you are wondering, 3 goes into 1000 333 times.”


    Clearly the only real option is to use the Everspin and GlobalFoundries team-up for embedded ST-MRAM. Do you want 22nm SOI devices with embedded memory? Options, if you as the end consumer want to actually buy any advanced kit.

    We should really be seeing a GlobalFoundries/Samsung collaboration on a generic ST-MRAM High Bandwidth Memory v3, and of course Samsung Wide I/O HBM configurations that also have DDR4+ direct wide I/O paths to the processor, ready for use with mobile UHD2/8K SoCs in 2018/19, to be really innovative…

    • everlasting333

      Apple isn’t going to use it because they are not smarter than Google w/o Steve Jobs.

    • witeken

      I won’t deny that Intel has used a lot of hyperbole with 3D XPoint, but SemiAccurate is not a good source. SemiAccurate has a proven tendency to spin the facts to make things look more dramatic. In any case, we can definitely say that Intel screwed up the time to market. They were very, very confident last year that it would hit the market this year, which seems impossible right now. I mean, they literally got a question during Q&A asking if this wasn’t some technology that is still five years away, which they denied. But where is it?

      That concerns me most right now. I want to see products which can be reviewed by credible sources, and then we can draw conclusions.

      • everest333

        Sure, “SemiAccurate has a proven tendency to spin the facts,” but in this case he just uses the existing Intel-provided facts and statements, so it’s pretty clear his points stand on this one…

        And again, as it relates to my liking of MRAM for a long time now, as it can do SRAM on a massive scale… there’s the relationship with Everspin and their publicly stated results, repeated by him with a bit of flair and his tendency to gloat when he’s proven right, of course…

        Per SemiAccurate: “The short version is that Everspin’s perpendicular magnetic tunnel junction (pMTJ) Spin Torque MRAM is now in production as discrete chips at GF.

        Just for fun Everspin built a PCIe SSD out of those 256Mb chips to show off their capabilities.

        It may be only 1GB in size but this PCIe SSD, effectively an FPGA and memory, was pushing some pretty astounding numbers.

        How does 1.5M IOPS at 4K block size strike you? What if I told you it was 100% writes? “

        • everlasting

          Facts is our major concern. Why either semiaccurate or everspin are your focus instead of fullaccurate and neverspin?

  • XeviousDeathStar

    The Marvell and Rambus approaches seem best.

    A small and expensive low latency smart Cache that processes small requests immediately and larger requests with a small delay; since large requests delayed a little suffer from latency a smaller percentage of the total time spent for the access.

    Nothing worse than waiting a long time for a short answer, especially if it holds up subsequent requests.

    Lengthy transfers in (read) can be dealt with in the Program by finding something else to do.

    Long writes are only unfortunate when met with a subsequent read (like putting a book on the shelf only to discover it must be refetched due to insufficient foresight).

    The simplest example was a cache on an HDD. While not ‘smart,’ it certainly sped up accesses enormously.

  • Mark LaPedus

    Hi XeviousDeathStar. Thanks for those ideas. Any other promising technologies out there?