More nodes and alternative memories are in the works, but schedules remain murky.
DRAM makers are pushing into the next phase of scaling, but they are facing several challenges as the memory technology approaches its physical limit.
DRAM is used for main memory in systems, and today’s most advanced devices are based on roughly 18nm to 15nm processes. The physical limit for DRAM is somewhere around 10nm. There are efforts in R&D to extend the technology, and ultimately to displace it with new memory types.
So far, however, there is no direct replacement. And until a new solution is in place, vendors will continue to scale the DRAM and eke out more performance, albeit in incremental steps at the current 1xnm node regime. And at future nodes, some but not all DRAM makers will make a big transition from traditional lithography to extreme ultraviolet (EUV) lithography for production in the fab.
With or without EUV, DRAM vendors face higher costs and other challenges. Nevertheless, DRAM is a key part of the memory/storage hierarchy in systems. In the first tier of the hierarchy, SRAM is integrated into the processor for fast data access. DRAM, the next tier, is used for main memory. And disk drives and NAND-based solid-state storage drives (SSDs) are used for storage.
The DRAM industry is a huge but tough market. DRAM vendors are in the midst of a downturn amid price pressures in the market. Yet OEMs still want faster DRAMs with more bandwidth to keep pace with the onslaught of new data-intensive applications, such as 5G and machine learning.
In response, DRAM vendors are moving toward a new and faster bandwidth spec. But suppliers are no longer scaling or shrinking the DRAM at the traditional pace, which has been roughly 30% at each node. In fact, DRAM scaling is slowing, which impacts area density and cost. In DRAM, the nodes are designated by the half-pitch of the active area, or body, of the memory cell.
Today, vendors are shipping three advanced DRAM product generations at the 1xnm node regime. These three DRAM generations aren’t given a numerical node designation. The industry refers to them as simply 1xnm, 1ynm and 1znm.
Then, in R&D, vendors have three more scaled generations of DRAM on the roadmap, all at the 1xnm node regime. Those are called 1anm, 1bnm and 1cnm. 1anm DRAMs are slated for 2021 or sooner.
All told, DRAM is making only modest gains in scaling and remains stuck at the 1xnm node regime. But contrary to popular belief, DRAM hasn’t run out of steam. “We’re not done. We don’t think the roadmap is completely closing. But it’s slowing down,” said Debra Bell, senior director of DRAM product engineering at Micron Technology. “We have a clear line of sight for a few years. And then, we have ideas for beyond that. We are in discussions and evaluating that now.”
Still, the industry faces several challenges in scaling this memory, and it’s unclear whether DRAM can scale beyond 10nm. Nonetheless, there is a frenetic amount of activity in the arena.
The DRAM landscape
Amid a prolonged downturn in the memory market, worldwide DRAM sales are expected to reach $62 billion in 2019, down from $99.4 billion in 2018, according to IC Insights. The overall IC market is expected to decline by 12.9% in 2019, according to VLSI Research.
Today, though, the memory business is showing signs of a recovery. “In DRAM, we show a snap back next year,” said Handel Jones, chief executive of IBS. “What’s happening is that the prices are stabilizing.”
Additionally, DRAM content continues to grow in systems, such as servers and smartphones. The average DRAM content for smartphones will increase from 3GB in 2018 to 4GB in 2019, according to Micron. The growth is driven by the explosion of AI, data and video, which requires more memory to help store and transfer the information in systems.
In the DRAM market, meanwhile, Samsung is the leader with a 45.5% share in the second quarter of 2019, followed by SK Hynix (28.7% share) and Micron (20.5%), according to TrendForce. Several Taiwan DRAM vendors have a tiny share.
In 2019, China’s DRAM vendors will enter the market, but they won’t be a factor for some time. One domestic vendor, ChangXin Memory Technology, is expected to ramp up DRAM by year’s end. At some point, Tsinghua Unigroup hopes to enter the DRAM business. Another domestic hopeful, JHICC (also known as Fujian Jinhua Integrated Circuit Co.), went under.
Nonetheless, DRAM is a key building block in systems. DRAM is fast and cheap, but it also has some drawbacks. DRAMs, as well as SRAMs, are volatile memory technologies, meaning they lose their data when the power is shut off. In comparison, flash memory is nonvolatile, which means it retains data when the system is off.
DRAM itself is based on a one-transistor, one-capacitor (1T1C) memory cell architecture. The data is stored as a charge in the capacitor, which is designated as either “0” or “1.” The transistor controls the access to the data.
“DRAM’s tiny one capacitor-one transistor design makes it ideal for packing numerous memory cells into a small area to achieve high density and high storage capacity. In fact, billions of DRAM cells can be squeezed onto a single memory chip,” explained Alex Yoon, a senior technical director at Lam Research, in a blog.
DRAM cells are organized in a set fashion. “The cells are arranged in a row and have a bit line structure that connects into a memory address called a word line. The address provides a means of identifying a location for data storage, and the word line forms an electrical path allowing all the memory cells on that row to be activated at the same time for storage (write) or retrieval (read),” Yoon said.
Fig. 1: Single memory cell and array. Source: Lam Research
Over time, though, the capacitor will leak or discharge when the transistor is turned off. So the stored data in the capacitor must be refreshed every 64 milliseconds, which consumes power in systems.
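To put the refresh requirement in rough numbers, here is a minimal sketch that spreads the 64ms window across a hypothetical bank; the row count and per-refresh busy time are illustrative assumptions, not figures from any particular device.

```python
# Rough illustration of DRAM refresh (device parameters below are assumptions).
REFRESH_WINDOW_MS = 64.0     # retention window cited in the text
ROWS_PER_BANK = 8192         # assumed number of rows in a bank
REFRESH_BUSY_NS = 350.0      # assumed time a bank is busy per refresh command

# To cover every row within the window, a refresh command is issued roughly
# every (window / number of rows):
interval_us = REFRESH_WINDOW_MS * 1_000 / ROWS_PER_BANK          # ~7.8 us

# Fraction of time the bank is unavailable because it is refreshing:
overhead = (ROWS_PER_BANK * REFRESH_BUSY_NS * 1e-9) / (REFRESH_WINDOW_MS * 1e-3)

print(f"refresh command every ~{interval_us:.1f} us")
print(f"refresh overhead ~{overhead:.1%} of bank time")          # ~4.5%
```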
It’s also becoming more difficult to scale or shrink the DRAM cell at each node. “With DRAM, geometric lateral scaling continues, but it is slowing and materials innovation will be needed for further scaling as with 3D NAND,” said Gill Lee, managing director of memory technology at Applied Materials, in a blog.
Scaling the capacitor is one roadblock. “In cell capacitor scaling, the aspect ratio is a challenge,” Micron’s Bell said. “Another key scaling challenge for DRAM is charge sharing from the capacitor to the digit line. It’s a combination of your timing specs, how much time you have to move the charge onto the digit line, and then how long can you make the digit lines. All of that factors into scaling and the challenges of scaling.”
DRAM is based on a stacked-capacitor architecture, in which the capacitor sits above, and connects to, a recessed-channel array transistor.
The capacitor is a vertical, cylindrical-like structure. Inside the cylinder, the capacitor incorporates a metal-insulator-metal (MIM) material stack. The insulator is based on a zirconium dioxide high-k material, which enables the structure to maintain its capacitance at low leakage.
In the DRAM flow, the transistor is made first, followed by the capacitor. At each node, the goal is to maintain or increase the volume inside the cylindrical capacitor. But at each node, the capacitor shrinks, which may result in less volume inside the structure. That equates to less cell capacitance in the storage capacitor.
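A back-of-the-envelope estimate illustrates why the cylinder dimensions matter. The sketch below treats the MIM stack as a parallel-plate capacitor wrapped around the cylinder sidewall; the effective k value, insulator thickness and cylinder dimensions are assumed values, not data from any vendor.

```python
import math

# Back-of-the-envelope DRAM cell capacitance (all numbers are illustrative).
# The MIM stack is treated as a parallel-plate capacitor wrapped around the
# sidewall of the cylinder: C = eps0 * k * area / thickness.
EPS0 = 8.854e-12      # vacuum permittivity, F/m
K_EFF = 30.0          # assumed effective k of a ZrO2-based stack
T_INS = 5e-9          # assumed insulator thickness, m

def cell_capacitance_fF(diameter_nm, height_nm):
    """Approximate capacitance (fF) of a cylindrical storage capacitor."""
    area = math.pi * (diameter_nm * 1e-9) * (height_nm * 1e-9)   # sidewall area
    return EPS0 * K_EFF * area / T_INS * 1e15

# Shrinking the diameter at a new node cuts capacitance unless the cylinder is
# made taller (a higher aspect ratio) or the dielectric stack improves.
print(cell_capacitance_fF(diameter_nm=40, height_nm=1500))   # ~10 fF
print(cell_capacitance_fF(diameter_nm=30, height_nm=1500))   # ~7.5 fF
```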
At 20nm, the industry hit a roadblock in capacitor scaling. In response, Samsung developed a new honeycomb capacitor cell layout technology starting at 20nm.
Traditionally, the tiny round capacitor cells are placed side-by-side on the surface of a structure. In contrast, Samsung staggered the cells on the surface, which resembles a honeycomb layout. That enables taller capacitors with larger diameters. With the same dielectric material, the cell capacitance of the honeycomb structure is 21% larger than the previous versions.
To make these structures in the fab, Samsung used 193nm immersion lithography and a self-aligned double-patterning (SADP) process. In the flow, holes are patterned on the surface and then etched. The process is repeated. A metal is deposited, followed by high-k materials using atomic layer deposition (ALD).
Scaling DRAM
Using these and other techniques in the fab, Samsung as well as Micron and SK Hynix scaled the DRAM and moved beyond 20nm.
It hasn’t been easy. For example, patterning the capacitor holes with good alignment is challenging. It’s also difficult to etch the capacitors at high aspect ratios. “Both ALD and dry etching are difficult,” said Jeongdong Choe, an analyst at TechInsights. “But very thin and uniform high-k dielectric deposition is becoming more important on the scaled down DRAM cell array.”
Starting in 2016, vendors moved to the 1xnm node regime, where suppliers had three DRAM products on the roadmap (1xnm, 1ynm and 1znm). Originally, the 1xnm node was defined as a DRAM with 17nm to 19nm geometries, 1ynm was 14nm to 16nm, and 1znm was 11nm to 13nm.
Today, some vendors have relaxed the scaling specs, creating some confusion in the market. Some DRAMs fall within these specs, while others don’t. On top of that, the DRAM cell sizes differ slightly from vendor to vendor, hovering around 6F². The cell size equals six times the square of the feature size (F).
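As a quick illustration of what a 6F² cell implies for raw array density, the snippet below uses a hypothetical feature size and ignores peripheral circuitry and array efficiency.

```python
# What a 6F^2 cell implies for raw array density (feature size is hypothetical).
F_NM = 16.0                                  # assumed feature size, nm
CELL_FACTOR = 6                              # ~6F^2 cells, per the text

cell_area_nm2 = CELL_FACTOR * F_NM ** 2      # 6 * 16^2 = 1,536 nm^2 per bit
bits_per_mm2 = 1e12 / cell_area_nm2          # upper bound: array cells only

print(f"cell area ~{cell_area_nm2:.0f} nm^2")
print(f"raw array density ~{bits_per_mm2 / 1e9:.2f} Gbit/mm^2 (before periphery and array efficiency)")
```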
All told, vendors are moving down the 1xnm node regime in incremental steps, sometimes nanometer by nanometer. Even so, suppliers are still able to reduce the die size to some degree.
In 2016, Samsung shipped the industry’s first 1xnm DRAM, which is an 18nm device. The 8Gbit part is 30% faster and consumes less power than the 2xnm device. It also incorporates the DDR4 interface standard. Double-data-rate (DDR) technology transfers data twice per clock cycle in the device. DDR4 operates up to 3,200Mbps.
Today, meanwhile, DRAM vendors are ramping up devices at the next node — 1ynm. Generally based on 15nm processes and above, 1ynm DRAMs will represent the bulk of the shipments this year. “By the end of this year, 70% of Samsung’s GB volume will be 1ynm,” IBS’ Jones said.
SK Hynix recently introduced a 16Gbit 1ynm DRAM, which doubles the density over the previous 8Gbit version. The device also incorporates the new DDR5 interface standard.
Initially, DDR5 supports 5,200Mbps, about 60% faster than DDR4. DDR5 will support up to 6,400Mbps.
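Those per-pin rates translate directly into channel bandwidth. The short sketch below assumes a standard 64-bit data bus per channel; actual module organizations vary.

```python
# Peak channel bandwidth from the per-pin transfer rate (64-bit bus assumed).
def peak_bandwidth_gbs(transfer_rate_mbps, bus_width_bits=64):
    """GB/s = transfers/s * bits per transfer / 8 bits per byte."""
    return transfer_rate_mbps * 1e6 * bus_width_bits / 8 / 1e9

print(f"DDR4-3200: {peak_bandwidth_gbs(3200):.1f} GB/s")   # 25.6 GB/s
print(f"DDR5-5200: {peak_bandwidth_gbs(5200):.1f} GB/s")   # 41.6 GB/s
print(f"DDR5-6400: {peak_bandwidth_gbs(6400):.1f} GB/s")   # 51.2 GB/s
```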
Others also are shipping DDR5 DRAMs. The mobile version is called LPDDR5. DDR4 is still the mainstream technology, but DDR5/LPDDR5 will be needed for several reasons.
Over the years, processor vendors have moved to multicore CPU architectures. Yet the memory bandwidth per core is barely keeping up.
OEMs want DRAM with faster data transfer rates. That’s where DDR5 fits in. “This is where you can get bandwidth and capacity. We want to be able to scale that with the CPU cores. Think about CPU core count. It has gone up about 8x in the last decade. Obviously, memory has to be in lockstep to keep up with that for overall compute performance,” said Jim Elliott, senior vice president of sales and marketing at Samsung, during a recent presentation.
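A purely hypothetical comparison illustrates the pressure Elliott describes: if core counts grow faster than channel counts and data rates, bandwidth per core stagnates or falls. The channel counts, rates and core counts below are illustrative only, not tied to any particular CPU or platform.

```python
# Hypothetical bandwidth-per-core comparison (all numbers are illustrative).
def gbs_per_core(channels, gbs_per_channel, cores):
    return channels * gbs_per_channel / cores

old = gbs_per_core(channels=4, gbs_per_channel=12.8, cores=8)    # older server
new = gbs_per_core(channels=8, gbs_per_channel=41.6, cores=64)   # newer server

print(f"older system: ~{old:.1f} GB/s per core")   # ~6.4
print(f"newer system: ~{new:.1f} GB/s per core")   # ~5.2
```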
Meanwhile, the next battleground is the 1znm node. Micron was the first vendor to ship a 1znm DRAM, followed by Samsung and SK Hynix. These devices are based on either the DDR4 or DDR5 spec.
Each vendor claims to have the leadership position at 1znm. But not all parts are alike and the scaling specs differ. “There is a lot of marketing going on right now,” IBS’ Jones said.
Beyond 1znm, vendors have three more scaled generations of DRAM on the roadmap (1anm, 1bnm, and 1cnm). Suppliers haven’t revealed the details for these parts, which are still at the 1xnm node regime.
Vendors are taking different paths at 1anm and beyond. At those nodes, the features are smaller with more mask layers. To simplify the process, the DRAM industry for the first time will insert EUV into production.
SK Hynix, for one, plans to use EUV at 1anm, which is due out in 2021. “Samsung completed EUV testing on DRAM at 1z. However, they will not use EUV for 1z mass production. Instead, they may be able to use it for 1a or 1b mass products,” TechInsights’ Choe said.
Using 13.5nm wavelengths, an EUV scanner patterns features at 13nm resolutions. But EUV is a complex technology that has taken longer than expected to put into production.
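The quoted 13nm figure is consistent with a simple Rayleigh-criterion estimate; the numerical aperture and k1 used below are assumptions typical of current 0.33-NA scanners, not numbers from the article.

```python
# Rayleigh-criterion estimate of EUV single-exposure resolution.
# NA and k1 are assumptions; only the wavelength comes from the text.
WAVELENGTH_NM = 13.5   # EUV wavelength cited in the text
NA = 0.33              # assumed numerical aperture
K1 = 0.32              # assumed process factor

resolution_nm = K1 * WAVELENGTH_NM / NA
print(f"~{resolution_nm:.1f} nm half-pitch")   # ~13 nm, matching the quoted figure
```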
Recently, though, Samsung and TSMC have put EUV into production at the 7nm logic node, with 5nm in R&D. DRAM is next in line for EUV. “With EUV, you get better pattern fidelity. The more these mask layers are stacked, the fuzzier the image you get,” said Dan Hutcheson, chief executive of VLSI Research.
Not all are moving to EUV, though. At advanced DRAM nodes, Micron plans to extend 193nm immersion lithography and SADP to 1bnm. For 1cnm, quadruple patterning is in the works.
“We are continuing to evaluate EUV,” Micron’s Bell said. “We do believe that our pitch multiplication process is more than competitive. We don’t see the intercept immediately for EUV.”
This isn’t a big surprise. Micron is known for extending a given lithography technology as long as possible. “They’ve learned how to be extremely frugal with their tools and how to get more life out of them,” VLSI’s Hutcheson said. “They push what they have harder than anybody.”
It will take more than EUV to scale the DRAM. Today’s 1T1C DRAM may extend for another few years, but it may run out of steam at 12nm to 10nm.
So the industry is looking at ways to scale the DRAM beyond 10nm with a 4F² cell size. “The vertical gate, as well as a capacitorless 1T DRAM cell, are candidates for 4F²,” TechInsights’ Choe said.
There are several challenges here, especially with vertical gate channel transistors, which resemble a 3D-like structure. “The issues are wordline-to-wordline coupling and bitline-to-bitline coupling,” said Dongsoo Woo, principal engineer at Samsung, in a recent presentation.
DRAM replacements?
For years, meanwhile, the industry has been developing several next-generation memory types that could replace DRAM and flash.
Today, vendors are shipping phase-change memory (PCM), ReRAM and STT-MRAM. Other memory technologies are in R&D.
The next-generation memories are fast, nonvolatile and provide unlimited endurance. But these new memories also rely on exotic materials and complicated switching mechanisms, so they have taken longer to develop. Plus, the new memory types are more expensive.
Each new memory type is different. PCM stores information in the amorphous and crystalline phases. STT-MRAM uses the magnetism of electron spin. ReRAM works by changing the resistance of materials.
Today, PCM and STT-MRAM devices are used in select parts of SSDs. They are also used in place of DRAM in some, but not all, parts of the system. So, it’s safe to say they haven’t exactly replaced DRAM.
“At this time, we cannot see any next-gen type of memory that can directly replace DRAM,” said David Hideo Uriu, product marketing director at UMC. “We do see an SRAM replacement through the use of MRAM. But for the goal of a persistent DRAM replacement, we can only see a ‘hybrid cached’ DRAM/MRAM component.”
STT-MRAM itself is making progress. “MRAM technology will continue to improve and move closer to the goal of persistent memory. MRAM is the closest technology to match the speed and performance of DRAM,” Uriu said. “Given the near DRAM speed of reading data, some applications may be able to use it as a replacement for some DRAM. Again, in its ‘hybrid’ form, DRAM will be used to cache the MRAM storage areas and improve performance to be a DRAM replacement in some applications.”
Conclusion
To be sure, the next-generation memory types are promising. But these products are still in the early days.
Until then, DRAM is alive and well, and it likely will stick around — at least for the foreseeable future. But exactly how long remains a big unknown.
EUV doesn’t work on DRAM patterns even at 2x nm.
It may be worth interviewing some of the practitioners who have used PCM and STT-MRAM in place of SSDs (or DRAM) about their experiences and opinions.
Default patterning for Samsung DRAM is crossed spacer patterning (SADP). It hasn’t changed and shouldn’t change for a while.
Great article. Does current DRAM use FinFET or planar FETs for the cell access transistor? And what about the periphery?
DRAM uses a round channel shape (the opposite of a finFET) to increase gate length instead of width.
Samsung was first with such a 3D transistor (the recessed channel transistor) in its 90nm technology.
Qimonda took the next step in its first stacked-capacitor technology: it buried the array transistor and gave it a full metal TiN/W gate. The poly that remains in use for the periphery transistors serves as the bitline in the cell array. This cut the bitline capacitance in half, so the array size could be doubled and the number of sense-amp rows reduced, which gives at least a 10% density advantage at the same technology node.
This is the not-so-famous buried wordline technology, which is still standard in the DRAM world. Unfortunately, it came too late for Qimonda, which ran out of money before it could ramp the process. They stuck too long with trench capacitor technology, which was hard to shrink.