New architectures, technology and manufacturing approaches will extend planar memory at least two or three more generations.
At a recent event, Samsung presented a paper that described how the company plans to extend today’s planar DRAMs down to 20nm and beyond.
This is an amazing feat. Until very recently, most engineers believed DRAMs would stop scaling at 20nm or so. Instead, Samsung is ramping up the world’s most advanced DRAMs—a line of 20nm parts—with plans to go even further. Micron and SK Hynix soon will ship similar parts.
Going forward, suppliers hope to extend the DRAM for two or three more generations in the 1xnm node regime. Then, the DRAM may stop scaling around the mid-teens, although these parts will remain relevant for the next decade.
Still, it’s no simple task to extend the DRAM beyond 20nm. Samsung, for example, had to develop several new technologies, including an air-spacer and a new cell layout scheme, dubbed the honeycomb structure (HCS). It also devised a flow using today’s 193nm immersion lithography and multi-patterning.
“For the first time, 20nm DRAM has been developed and fabricated successfully without extreme ultraviolet lithography using the honeycomb structure and the air-spacer technology,” said J.M. Park, a principal engineer at Samsung. “These low-cost and reliable schemes are promising key technologies for the 20nm technology node and beyond.”
Besides innovation, it will also take money to scale the DRAM. In total, it costs about $600 million to install 10,000 wafer starts per month of leading-edge DRAM capacity, according to Pacific Crest Securities. In comparison, it costs $450 million to install leading-edge NAND capacity, according to the firm.
In any case, DRAM suppliers face some challenges to scale the technology beyond 20nm. To help the industry get ahead of the curve, Semiconductor Engineering has assembled a list of some of the more challenging process steps for advanced DRAM.
What is DRAM?
In today’s systems, the memory/storage hierarchy is straightforward. SRAM is integrated into the processor for cache. DRAM is used for main memory. Disk drives and solid-state storage drives are used for storage.
The DRAM itself is based on a one-transistor, one-capacitor (1T1C) cell structure. The cells are arranged in a rectangular, grid-like array. In simple terms, to write a cell, the access transistor is switched on and a voltage representing the data value is driven onto the bit-line, which in turn charges the storage capacitor. Each bit of data is thus stored as charge in the capacitor.
Over time, the charge in the capacitor leaks away when the transistor is turned off. So, the stored data in the capacitor must be refreshed, typically every 64 milliseconds.
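To get a rough feel for why refresh is needed at all, the back-of-the-envelope sketch below estimates how quickly a stored bit decays. The cell capacitance, allowable voltage droop and leakage current are assumed, illustrative values, not figures from any of the vendors quoted here.

```python
# Illustrative DRAM cell retention estimate (assumed values, not vendor data).
# A cell stores charge Q = C * V; leakage current I drains it while the access
# transistor is off, so the cell must be refreshed before the voltage droops
# below what the sense amplifier can still resolve.

C_CELL = 20e-15      # storage capacitance, ~20 fF (assumed)
V_DROOP = 0.3        # allowable voltage droop before sensing fails, volts (assumed)
I_LEAK = 100e-15     # worst-case cell leakage, ~100 fA (assumed)

retention_s = C_CELL * V_DROOP / I_LEAK   # t = C * dV / I
print(f"Estimated retention: {retention_s * 1e3:.0f} ms")
# ~60 ms, on the order of the 64 ms refresh window
```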
The industry has managed to scale the DRAM for decades. But soon, the DRAM will run out of steam, as it is becoming more difficult to scale the 1T1C cell.
Beyond 20nm, the DRAM is expected to scale for two or three more iterations in the 1xnm regime, referred to as 1xnm, 1ynm and 1znm. “1xnm is anything between 16nm and 19nm. 1ynm is defined as 14nm to 16nm. 1znm is defined as 12nm to 14nm,” said Er-Xuan Ping, managing director of memory and materials within the Silicon Systems Group at Applied Materials.
Scaling the DRAM to 1ynm is possible, but 1znm is uncertain. Going beyond 1znm is remote, based on the current technology and economic trends. “Scaling down beyond 10nm is regarded to be impossible in current planar type DRAM structures,” said Chang Yeol Lee, a research fellow at SK Hynix.
There are other issues. It’s no secret that DRAM suffers from power, latency and bandwidth issues. DRAM isn’t the only problem in the memory/storage hierarchy. For example, the processor can gain access to data through the SRAM at fast speeds. But SRAM is power hungry and takes up too much space. “In SRAM arrays, you have many transistors sitting idle and leaking off the power supply,” said Srinivasa Banna, a fellow and director of advanced device architecture at GlobalFoundries.
Mask challenges
In any case, what are the challenges in the mask shop and the fab? In the DRAM process flow, photomask manufacturing is one of the first steps. As before, lithography determines the mask type and specs.
For patterning, DRAM vendors will extend today’s 193nm immersion lithography and multi-patterning to 20nm and beyond, and for good reason: EUV will likely miss the window at these nodes.
This, however, presents some challenges in the mask shop. “The challenges in mask making for DRAM and logic are different, but equally difficult,” said Leo Pang, chief product officer and executive vice president at D2S.
“Mask patterns for logic are Manhattan patterns, while DRAM patterns contain non-orthogonal angled lines. The current variable shaped beam (VSB) mask writing tools that use Manhattan rectangular shots require significantly more and smaller shots to write angled lines, resulting in much longer write times, and worse CD uniformity and dose margin,” Pang said. “However, DRAM typically relies on one pattern that is repeated a lot. So it becomes less difficult once the pattern is done correctly.”
There are several ways to address the problem. Model-based mask data preparation is one enabler. It utilizes overlapping shots to reduce the shot count, which improves the write time by roughly 50%, he said.
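To see why angled lines are so expensive on a VSB writer, here is a toy shot-count model. The dimensions and shot sizes are made up for illustration, and the staircase approximation is a simplification, not D2S’s actual mask data preparation algorithm.

```python
import math

# Toy model: shots needed to write a line on a variable shaped beam (VSB) tool.
# Assumed numbers; real shot counts depend on the writer, resist and MDP software.

def manhattan_shots(length_nm: float, max_shot_nm: float = 2000.0) -> int:
    """An axis-aligned line can be covered by a few large rectangular shots."""
    return math.ceil(length_nm / max_shot_nm)

def angled_shots(length_nm: float, step_nm: float = 20.0) -> int:
    """An angled line must be approximated by a staircase of small rectangles,
    roughly one shot per step along the line (edge placement sets the step)."""
    return math.ceil(length_nm / step_nm)

line = 10000.0  # a 10um line on the mask (assumed)
print("Manhattan shots:", manhattan_shots(line))   # ~5
print("Angled shots:   ", angled_shots(line))      # ~500
# Overlapping-shot, model-based MDP cuts the angled-line shot count, which is
# where a write-time improvement on the order of 50% can come from.
```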
“The mask industry has been talking about adopting curvilinear patterns in the mask data preparation and processing flow for two years because of inverse lithography technology. But the adoption and change is still slow,” he said. “The DRAM angled line requirement will give the industry another reason to adopt curvilinear patterns.”
Patterning challenges
After the mask is made, it is shipped to the fab. The mask is placed in a lithography tool. Then, the tool projects light through the mask, which, in turn, patterns the images on a wafer.
Starting at 30nm, DRAM vendors have used optical lithography and multi-patterning. “Memory companies have been doing multi-patterning much longer than logic companies,” said David Abercrombie, advanced physical verification methodology program manager at Mentor Graphics.
“The big difference in memory design is that almost everything is hand crafted. They don’t rely on automation to do the decomposition. They craft the decomposition by hand, relying more on litho simulation than simple design rules to create an optimal decomposition solution,” Abercrombie said. “This allows them to push the technology farther and faster than logic designers. Because of the ability to rely on hand-optimized, multi-patterned solutions, they will be able to be successful at smaller dimensions without EUV. It does require close process integration with the design, however.”
Still, there are two patterning challenges for DRAM. “The first is to ensure the multi-patterning process works to print small/dense cell features, or memory bits, without losing much quality of critical dimension (CD) uniformity and exceeding the misalignment budget,” said David Wang, senior OPC product engineer at Mentor Graphics. “The second area is the aggressive etch process in conjunction with lithographic steps during multiple patterning of random circuitry patterns.”
Others agreed. “Patterning 1xnm half-pitches and contacts without EUV will surely be painful. It requires long and tedious work to hold CD uniformity and alignment quality,” SK Hynix’ Lee said.
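For context, the sketch below works through why multi-patterning is unavoidable at these pitches, using the standard Rayleigh resolution estimate. The k1 and NA values are assumed, typical numbers, not figures from Mentor or SK Hynix.

```python
# Rough resolution arithmetic for 193nm immersion plus pitch splitting.
# Assumed k1 and NA; actual process windows vary by fab and layer.

WAVELENGTH_NM = 193.0
NA = 1.35          # typical immersion scanner numerical aperture
K1 = 0.28          # aggressive single-exposure k1 (assumed)

half_pitch_single = K1 * WAVELENGTH_NM / NA          # Rayleigh: HP = k1 * lambda / NA
print(f"Single exposure : ~{half_pitch_single:.0f} nm half-pitch")      # ~40 nm
print(f"After SADP      : ~{half_pitch_single / 2:.0f} nm half-pitch")  # ~20 nm
print(f"After SAQP      : ~{half_pitch_single / 4:.0f} nm half-pitch")  # ~10 nm
# Reaching 20nm-class and 1xnm DRAM features with 193i therefore means SADP or
# beyond, and every pitch-splitting step adds CD-uniformity and overlay pressure.
```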
DRAM cell
In the flow, the next hard part is the fabrication of the DRAM cell. “The DRAM has two parts. One is the peripheral circuit, which looks like logic. Basically, you have the sense amps and circuitry functions,” Applied’s Ping said. “The other part is the array, where you build the 1T1C memory cell.”
In the DRAM flow, the transistor is made first, followed by the capacitor. Today’s DRAMs use a buried channel array transistor (B-CAT) structure and a bulky finFET active structure.
Making the transistor is hard, but scaling the capacitor is the biggest challenge in DRAM production. The capacitor is a tall, cylinder-like structure, which is hollow in the initial phases of fabrication.
At each node the capacitor shrinks, which leaves less volume inside the structure. That equates to less cell capacitance in the storage capacitor.
But at each node, the goal among DRAM vendors is to maintain or increase the volume inside the capacitor. To accomplish this, they are making the cylindrical structure taller at each generation and rethinking the cell layout. For example, Samsung devised a new honeycomb cell layout, which differs from the traditional square layout scheme. In the traditional square scheme, the layout consists of a multitude of small rectangles. Each rectangle has four DRAM cells, two on the top row and two on the bottom, which are aligned and separated by a given pitch.
In Samsung’s HCS, the DRAM cells are also placed in rows, but instead of being aligned in parallel, adjacent rows are staggered, resembling a honeycomb.
The pitch between adjacent cells is 7.5% larger in the HCS than in the square structure, according to Samsung’s Park. In addition, the diameter of a storage capacitor is 11% bigger than that of the square structure. All told, HCS enables a larger and taller capacitor. “With the same dielectric material, the (cell capacitance) of the HCS is 21% larger than that of the square structure,” Park said.
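The pitch gain Park cites falls out of simple packing geometry. The sketch below compares square and hexagonal (honeycomb) arrangements at the same cell density; the numbers are pure geometry with a normalized cell area, not Samsung’s actual layout data.

```python
import math

# Compare nearest-neighbor spacing for square vs hexagonal (honeycomb) packing
# at the same area per cell. Pure geometry, not actual Samsung layout data.

area_per_cell = 1.0   # normalized cell footprint

# Square grid: each cell occupies pitch^2, so pitch = sqrt(area).
pitch_square = math.sqrt(area_per_cell)

# Hexagonal grid: each cell occupies (sqrt(3)/2) * pitch^2.
pitch_hex = math.sqrt(area_per_cell / (math.sqrt(3) / 2))

gain = (pitch_hex / pitch_square - 1) * 100
print(f"Hexagonal pitch is ~{gain:.1f}% larger at equal density")  # ~7.5%
# The extra pitch leaves room for a wider (and taller) storage capacitor,
# which is where the reported capacitance gain comes from.
```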
Typically, three mask layers are required to make an HCS at 20nm. In contrast, Samsung has developed a self-aligned double patterning (SADP) scheme that requires only one mask layer. In the flow, the HCS shapes are formed using a spacer deposition and etch process. “Then, another spacer deposition defines a single-pitched HCS with triangle-shaped contacts,” Park said.
In the fab, though, this presents some daunting challenges. To fabricate the capacitor, an etch tool etches or drills a vertical hole at precise high aspect ratios. “Right now, it’s at least 40:1,” Applied’s Ping said. “It’s moving to 50:1 to 60:1.”
Then, inside the vertical capacitor structure, a metal-insulator-metal (MIM) material stack is formed. In the stack, the insulator is based on a high-k material, which enables the structure to maintain its capacitance at low leakage.
Today’s MIM stack is sometimes called the ZAZ capacitor, for its zirconium oxide/aluminum oxide/zirconium oxide dielectric stack. Zirconium dioxide is the current high-k material. “The industry would like to have a better oxide than zirconium oxide in terms of the dielectric constant,” Ping said. “You can improve the dielectric constant. That basically makes extra room for less area.”
The problem? The industry must continue to use the ZAZ capacitor for now, because next-generation high-k materials are not ready for prime time. They are complex and suffer from high leakage.
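To see why the etch aspect ratio and the dielectric constant are so tightly coupled, the sketch below estimates the capacitor height needed to reach a target cell capacitance for a simplified cylindrical MIM capacitor. The target capacitance, diameter, dielectric thickness and k values are assumed, illustrative numbers, not Samsung’s or Applied’s data.

```python
import math

# Estimate the storage-capacitor height needed for a target cell capacitance,
# treating the MIM capacitor as a simple cylinder: C ~ eps0 * k * (pi*D*H) / t.
# All dimensions and k values below are assumed, illustrative numbers.

EPS0 = 8.854e-12          # permittivity of free space, F/m

def required_height_nm(c_target_f, k, diameter_nm, t_dielectric_nm):
    area = c_target_f * (t_dielectric_nm * 1e-9) / (EPS0 * k)   # electrode area, m^2
    return area / (math.pi * diameter_nm * 1e-9) * 1e9          # height in nm

C_TARGET = 10e-15   # ~10 fF per cell (assumed)
D = 40.0            # capacitor diameter, nm (assumed)
T_DIEL = 5.0        # dielectric thickness, nm (assumed)

for name, k in [("ZrO2-based (ZAZ)", 25.0), ("hypothetical higher-k", 50.0)]:
    h = required_height_nm(C_TARGET, k, D, T_DIEL)
    print(f"{name:22s} k={k:4.0f} -> height ~{h:.0f} nm, aspect ratio ~{h / D:.0f}:1")
# With k~25 the hole ends up well over a micron deep (aspect ratio ~45:1, in the
# 40:1 to 60:1 range cited above); doubling k roughly halves the required height.
```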
Resistivity
At each node, meanwhile, the resistivity is increasing in the DRAM, especially in the bit-line. In a DRAM array, there are bit-lines and word-lines. Bit-lines are connected to the transistor. They are the wires that move data to and from the cell.
“Bit-line capacitance will need to be reduced to maintain an acceptable signal-to-noise ratio,” said Yang Pan, chief technology officer for the Global Products Group at LAM Research. “This will drive adoption of lower-k spacer materials or the use of air-gaps to replace the dielectric between the bit-lines.
“Increasing cell resistance is another issue as the DRAM cell continues to scale. To reduce external resistance and channel resistance, new integration methods such as contact area optimization, vertical gate, and buried metal bit-lines are being explored,” Pan said.
To solve the bit-line capacitance problem, Samsung has devised an air-spacer technology. In the flow, the bit-line is formed and then sandwiched between a sacrificial material and a silicon nitride layer.
Then, the sacrificial material is removed using an isotropic etch, forming a gap, or air-spacer. As a result, the parasitic bit-line capacitance is reduced by 34%, according to Samsung’s Park.
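A simplified series-dielectric model shows the direction of the effect. The layer thicknesses and k values below are assumed for illustration; the 34% figure above is Samsung’s reported result, and the actual reduction depends on how much of the total bit-line capacitance the air gap replaces in the real structure.

```python
# Simplified bit-line-to-bit-line coupling capacitance with nitride vs air spacers.
# Model the gap between adjacent bit-lines as dielectric layers in series:
# C per unit area ~ eps0 / sum(t_i / k_i). Assumed thicknesses and k values.

EPS0 = 8.854e-12   # F/m

def series_capacitance_per_area(layers):
    """layers: list of (thickness_nm, k). Returns F/m^2 for the stack."""
    return EPS0 / sum((t * 1e-9) / k for t, k in layers)

K_SIN, K_OXIDE, K_AIR = 7.5, 3.9, 1.0   # assumed dielectric constants

nitride_spacer = [(4, K_SIN), (12, K_OXIDE), (4, K_SIN)]   # thicknesses in nm (assumed)
air_spacer     = [(4, K_AIR), (12, K_OXIDE), (4, K_AIR)]

c_sin = series_capacitance_per_area(nitride_spacer)
c_air = series_capacitance_per_area(air_spacer)
print(f"Coupling capacitance drops ~{(1 - c_air / c_sin) * 100:.0f}% in this toy model")
# The model only covers line-to-line coupling; the overall parasitic reduction
# Samsung reports (34%) reflects the full bit-line capacitance budget.
```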
Process control
As before, the DRAM must undergo various inspection and metrology steps. “In a planar, non-vertical architecture, lateral scaling is the predominant challenge,” said Lena Nicolaides, vice president and general manager for the Swift Division at KLA-Tencor. “Lateral scaling will drive the need for higher-resolution inspection capability to capture smaller yield killer defects.”
Process control also involves other challenges. “The transition from 20nm to 1xnm in DRAM will involve several process and integration challenges. With the greater number of process steps and higher aspect ratio features, reducing process variation and improving process control are absolutely critical to achieve desired CD uniformity and pattern fidelity,” Lam’s Pan said. “Finally, managing processing cost efficiencies is more important than ever.”