As DRAM scaling runs out of steam, vendors are looking at alternative packaging, new memory types, and new architectures.
The DRAM business has always been challenging. Over the years, DRAM suppliers have experienced a number of boom and bust cycles in a competitive landscape. But now, the industry faces a cloudy, if not an uncertain, future.
On one front, DRAM vendors face a downturn in 2016 amid a capacity glut and falling product prices.
But despite the business challenges, Micron, Samsung and SK Hynix are still moving full speed ahead in the scaling race and hope to break the 20nm barrier this year or next. Yet DRAM scaling is nearing the end, as the technology is expected to run out of steam in the 1xnm node regime.
There are other challenges for DRAM makers and their customers. For some time, DRAMs have been unable to keep up with the bandwidth requirements in today’s systems, prompting the need for a replacement technology. For this, the industry has been developing a range of solutions, such as 3D DRAMs and various next-generation memory types.
The problem? OEMs must continue to use planar DRAM for the foreseeable future, and for good reason: there is still no replacement technology that matches the speed, density and cost of planar DRAM.
“There are several technology candidates out there, but none of them are at the point right now where you can really say they will be the clear replacement,” said Charles Slayman, technical leader of engineering at network equipment giant Cisco Systems.
The same holds true for NAND flash, which also continues to scale. “Flash and DRAM will stick around quite a while,” Slayman said. “DRAM is not standing still. It continues to make progress, but not at the breakneck speeds it used to. There will be some gains from the process node shrinks, although those are getting more difficult.”
In any case, OEMs can go down several paths to obtain parts for main memory. To help OEMs, Semiconductor Engineering has taken a look at the status of the following memory technologies: planar DRAM, 3D or stacked DRAM, and the next-generation memory types.
Planar DRAM
In a system, the traditional memory hierarchy is straightforward. SRAM is integrated into the processor for cache. DRAM is used for main memory. And disk drives and solid-state storage drives are used for storage.
Each system type has a different DRAM requirement. Mobile OEMs, for example, want fast, low-power and inexpensive DRAMs. “The networking side looks more at getting a lower latency,” Slayman said. “Cost-per-bit and size of the memory is more important on the server space.”
The big market for DRAM used to be the PC. But as the PC business continues to shrink, the demand for DRAMs has shifted towards cell phones and servers.
Citing a slowdown in the cell-phone and server markets, Mike Howard, an analyst with market research firm IHS, projects that the DRAM business will reach $40 billion in 2016, down 10% from 2015.
The DRAM market could fall by as much as 20% this year, with little or no sign of an upturn appearing on the horizon, Howard said. “It’s not clear if or when the rebound will occur,” he said.
Capital spending in the DRAM segment is projected to fall by 20% to 30% this year, according to other analysts. “DRAM investments will lower as memory pricing continues to weaken,” said Takuji Tada, senior manager of corporate strategy and marketing at KLA-Tencor.
Besides the business issues, there are four main technical challenges with DRAM: power consumption, bandwidth, latency, and scaling. The DRAM itself is based on a one-transistor, one-capacitor (1T1C) cell structure, with the cells arranged in a rectangular, grid-like pattern.
In simple terms, a voltage is applied to the transistor in the DRAM cell. This, in turn, charges the storage capacitor. Each bit of data is then stored in the capacitor.
Over time, the charge in the capacitor leaks away once the transistor is turned off. So the stored data in the capacitor must be refreshed every 64 milliseconds, which in turn causes unwanted power consumption in systems.
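To put the refresh burden in rough numbers, here is a minimal sketch that spreads the 64ms retention window across the 8,192 refresh commands a typical JEDEC DRAM issues per window; the tRFC figure (the time each refresh occupies the device) is an assumed, typical value for an 8Gb DDR4 die, not something quoted in this article.

```python
# Rough sketch of DRAM refresh overhead, using assumed, typical
# DDR4 values (not figures from this article).

RETENTION_MS = 64          # retention window mentioned above
REFRESH_COMMANDS = 8192    # refresh commands per window (JEDEC convention)
TRFC_NS = 350              # assumed tRFC for an 8Gb DDR4 die

# Average interval between refresh commands (tREFI), ~7.8 us
t_refi_us = (RETENTION_MS * 1000) / REFRESH_COMMANDS
# Fraction of time the device is busy refreshing instead of serving accesses
busy_fraction = (TRFC_NS / 1000) / t_refi_us

print(f"avg refresh interval: {t_refi_us:.2f} us")
print(f"busy refreshing: {busy_fraction:.1%} of the time")
# Roughly 4.5% of the time the device is unavailable, and every refresh
# burns power whether or not the system ever touches that data.
```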
There are other issues. In a mobile device, the memory bandwidth requirements have increased by a factor of 16 from 2009 to 2014, according to ARM. Yet the latency, or the delay in transmitting data from the processor to the DRAM, has remained relatively constant.
One key latency metric is tRC, the row cycle time, or the minimum interval between activating successive rows in the same bank. “The processors can run faster and faster,” Cisco’s Slayman said. “But if you look at the tRC over the last two decades, DRAM has only sped up by a factor of 2X. So the DRAM isn’t much faster, but the bandwidth (requirements) continue to grow.”
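By way of illustration, the back-of-the-envelope sketch below compares that roughly 2X latency gain with the bandwidth gain over the same two decades. All figures are assumed, ballpark values rather than numbers from this article.

```python
# tRC is the minimum delay between activating successive rows in the
# same bank. Assumed, ballpark figures for illustration only:
TRC_NS_MID_1990S = 90.0   # roughly an EDO/early-SDRAM era part
TRC_NS_TODAY = 45.0       # roughly a modern DDR4 part

print(f"latency gain: ~{TRC_NS_MID_1990S / TRC_NS_TODAY:.0f}x")

# Peak bandwidth over the same stretch went from ~0.5 GB/s
# (PC66 SDRAM on a 64-bit bus) to 25.6 GB/s (DDR4-3200).
print(f"bandwidth gain: ~{25.6 / 0.5:.0f}x")
# Bandwidth scaled more than an order of magnitude faster than the
# latency of actually opening a row -- Slayman's point in miniature.
```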
To address the problem, the industry several years ago developed a memory interface technology called double data rate (DDR), which transfers data on both the rising and falling edges of the clock, or twice per clock cycle.
For PCs and servers, the industry is making the transition from the DDR3 standard to DDR4 DRAM. DDR4 DRAMs have data rates up to 25.6 GB/s, twice as fast as DDR3 DRAM. In the mobile space, OEMs are moving from LPDDR3 to LPDDR4 DRAMs, the low-power versions of DRAM. LPDDR4 also has a data rate up to 25.6 GB/s.
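Those rates fall straight out of the double-data-rate arithmetic. The sketch below uses the standard top JEDEC speed grades (DDR3-1600 and DDR4-3200) as assumed examples:

```python
# Peak bandwidth of a double-data-rate interface: two transfers per
# clock cycle across the full bus width.

def peak_gb_per_s(clock_mhz: float, bus_bits: int = 64) -> float:
    """Peak bandwidth in GB/s for a DDR interface at a given clock."""
    transfers_per_s = 2 * clock_mhz * 1e6     # two transfers per cycle
    return transfers_per_s * (bus_bits / 8) / 1e9

print(f"DDR3-1600: {peak_gb_per_s(800):.1f} GB/s")    # 12.8 GB/s
print(f"DDR4-3200: {peak_gb_per_s(1600):.1f} GB/s")   # 25.6 GB/s, 2x DDR3
```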
Today, DDR4/LPDDR4 DRAMs are ramping up in the market, but the technology may not be fast enough. In fact, some OEMs want faster memory with more bandwidth.
Then what? Now, in some corners, the industry is looking at the next-generation interface technologies, DDR5 and LPDDR5. “There is a need for a replacement for DDR4,” Slayman said. “Servers and routers do need a lot of memory. They may need something like a DDR5-type solution with as high a density as possible.”
The specs and timing for DDR5/LPDDR5 are unclear, and DDR5/LPDDR5 may never happen at all. In fact, DDR4/LPDDR4, or perhaps DDR5/LPDDR5, may be the end of the road for planar DRAM, and for good reason: the DRAM will soon stop scaling.
Today, Samsung is ramping up the world’s most advanced DRAM, a line of 20nm parts. Micron and SK Hynix are also working on 20nm DRAMs.
Going forward, suppliers hope to scale the DRAM by two or three more generations in the 1xnm regime, with the nodes referred to as 1xnm, 1ynm and 1znm. “1xnm is anything between 16nm and 19nm,” said Er-Xuan Ping, managing director of memory and materials within the Silicon Systems Group at Applied Materials. “1ynm is defined as 14nm to 16nm. 1znm is defined as 12nm to 14nm.”
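Ping’s shorthand amounts to a simple lookup, sketched below. Note that the boundary nodes (16nm and 14nm) straddle two classes as quoted, so the sketch arbitrarily assigns them to the larger class:

```python
# Decoding the 1x/1y/1z node shorthand per Ping's definitions above.
# Ranges checked in order, so boundary values land in the larger class.
NODE_CLASSES = [("1xnm", 16, 19), ("1ynm", 14, 16), ("1znm", 12, 14)]

def node_class(nm: float) -> str:
    for name, lo, hi in NODE_CLASSES:
        if lo <= nm <= hi:
            return name
    return "outside the 1x/1y/1z regime"

for node in (18, 15, 13):
    print(f"{node}nm -> {node_class(node)}")   # 1xnm, 1ynm, 1znm
```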
Scaling the DRAM to 1znm is possible, but going beyond that is unlikely. “The transition from 20nm to 1xnm in DRAM will involve several process and integration challenges,” said Yang Pan, chief technology officer for the Global Products Group at Lam Research.
Economics also plays a role in the equation. “There are one or two generations left, but it’s taking longer each time to go from step to step,” IHS’ Howard said. “As DRAM gets down below 20nm, and into the 15nm range, it starts to bring up some interesting economic questions. For example, when does it make economic sense to stop shrinking the cell? We are really running up to the physical limits of how far we can scale and maintain any signal integrity.”
So how far will DRAM scale? Jeongdong Choe, senior technical fellow at TechInsights, said: “One more scaled generation might be possible such as 18nm. 15nm will be a challenging node.”
All told, DRAM will run out of steam and stop scaling sometime within the next decade. “Ten years from now, people won’t invest in the DRAM for the next shrink,” Applied’s Ping said. At that point, DRAM makers will continue to churn out DRAMs, but they will likely be legacy parts based on 1xnm-and-above geometries, he said.
3D DRAM
Given the uncertainties with planar DRAM, OEMs are also looking at other options, namely stacked memory or 3D DRAM. 3D DRAM stacks memory dies on top of one another and connects them using through-silicon vias (TSVs). 3D DRAM provides high bandwidth, but the parts are somewhat expensive.
There are several types of 3D DRAMs. Samsung, for one, is selling a DDR4-based DRAM stack. The stack is connected using TSVs and sold in the form of a module.
Another company, Micron, is shipping a 3D DRAM called the Hybrid Memory Cube (HMC). Then there is another 3D DRAM technology called High Bandwidth Memory (HBM). The first version, dubbed HBM1, is aimed at both high-end and portable systems. The next-generation HBM, called HBM2, is initially targeted at high-end systems.
Samsung recently introduced the world’s first device based on HBM2. Featuring 256 GB/s of bandwidth, Samsung’s 4-gigabyte device consists of four 8-gigabit dies. The dies are stacked and connected using more than 5,000 TSVs.
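Those headline numbers are consistent with the standard HBM2 interface figures. In the sketch below, the 1,024-bit bus width and 2Gb/s per-pin rate are assumptions taken from the HBM2 spec rather than values quoted in this article:

```python
# Sanity-checking Samsung's HBM2 figures from the paragraph above.

DIES = 4
GBIT_PER_DIE = 8
capacity_gb = DIES * GBIT_PER_DIE / 8           # gigabits -> gigabytes
print(f"capacity: {capacity_gb:.0f} GB")         # 4 GB

PINS = 1024          # assumed: HBM2 interface width in bits
GBPS_PER_PIN = 2.0   # assumed: HBM2 per-pin data rate in Gb/s
bandwidth_gb_s = PINS * GBPS_PER_PIN / 8
print(f"bandwidth: {bandwidth_gb_s:.0f} GB/s")   # 256 GB/s
```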
“By mass producing next-generation HBM2 DRAM, we can contribute much more to the rapid adoption of next-generation (high-performance systems),” said Sewon Chun, senior vice president of memory marketing at Samsung.
Both HMC and HBM are promising, but neither technology is a direct replacement for planar DRAM. “You can’t do a direct replacement of a planar DRAM with HBM and HMC without re-architecting your CPU design,” Cisco’s Slayman said.
“HMC is more a serial channel. Now, you have to design your processor around serial (technology),” Slayman said. “It might have more bandwidth than a DRAM DIMM, but it has more latency.”
HMC has some advantages, however. “Any processor supplier, or systems company, could buy the HMC from Micron and put it on the board. Then they could buy the processor and put it on the PCB. So you are putting the DIMMs and the CPUs on the board. That’s the same model as we follow today,” he said.
HBM also provides high bandwidth, but its supply chain is more complex. For example, AMD recently rolled out a graphics chip based on a 2.5D technology and HBM. To enable this product, SK Hynix provides the HBM memory. UMC is doing the front-end TSV work, while ASE provides the backend assembly services.
“(HBM) drives a different business model,” Slayman said. “It’s a solution where you kind of have to integrate the processor together with the DRAM in the same package. The question with HBM is who is really the supplier? Is it the DRAM companies that are supplying the HBM stack? Or is it the processor companies that integrate this HBM into their processor package? It gets the processor design companies engaged into more of the package, assembly and test process than what they are used to. But if this all happens, there are clear technical advantages to the HBM solution.”
Next-generation memories
Meanwhile, after numerous delays, a new wave of next-generation, nonvolatile memories is here. Two technologies have been touted as potential DRAM replacements: spin-transfer torque MRAM (STT-MRAM) and ReRAM.
“(STT-MRAM is) fast and durable, but it has a long way to go,” Slayman said.
Then, there is 3D XPoint, a ReRAM-like device from Intel and Micron. “Potentially, it could be a replacement for flash, but we don’t know enough about it,” he said.
Of the two technologies, STT-MRAM is the leading candidate to replace DRAM one day. 3D XPoint can do some but not all DRAM functions, according to analysts. “From a physics point of view, STT-MRAM can do DRAM functions, because of its endurance. But the density and cost has to be on par (with DRAM),” Applied’s Ping said. “That will take time. I am confident that STT-MRAM will eventually compete with DRAM.”