Memory Design At 16/14nm

What you know may no longer be the best information. Such is the case when thinking about memories in a finFET world.

As we get older, our memory may start to fade, but that is not an option for embedded memory. Chips contain increasing amounts of memory, and for many designs memory consumes more than half of the total chip area.

“At 28nm we saw a few people with greater than 400Mbits of memory on chip,” says Prasad Saggurti, product marketing manager for Embedded Memory IP at Synopsys. “At 16nm, a greater number of chips have 500Mbit and some are talking about more than 1Gbit of memory on chip.”

Many forces are causing the amount of memory to increase. “The simple way to increase the throughput in a graphics engine is to increase the pipeline,” says Anand Iyer, director of product marketing for Calypto. “But increasing the pipeline means the memory needs to be increased to avoid stalls.” And of course, larger chips contain more processors that want to run greater amounts of software. But when was the last time that software engineers were concerned about reducing their memory footprint?

The migration to 16/14nm with finFETs means that some aspects of memory design have to be rethought. “A finFET transistor has drawbacks for conventional design techniques,” points out Hem Hingarh, vice president of engineering for Synapse Design. “The quantized transistor width limits flexible bit-cell design using conventional techniques, so it reduces the design space and increases the failure probability when bit-cells are not optimally sized.”

Design is a set of tradeoffs. Charlie Cheng, chief executive officer of Kilopass Technology, says that in addition to the quantization problems, “some of the more novel bit-cells require different side-wall engineering and split gates, and for SRAM, the N versus P ratios will be difficult to port to finFET.” But let’s not forget the advantages. “The finFET, on the other hand, is more stable and therefore has slightly less random dopant fluctuation than 20nm planar devices. From a variation perspective, it’s slightly better.”
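To make the quantization concrete, here is a toy calculation, a sketch under assumed fin counts rather than foundry data. Where a planar designer could tune transistor widths almost continuously, a finFET designer chooses integer fin counts, so an SRAM cell’s beta ratio is limited to a handful of discrete values:

```python
# Toy sketch of fin quantization (fin counts are illustrative assumptions,
# not foundry data). Planar widths could be tuned almost continuously;
# finFET drive strength comes in integer multiples of one fin, so the SRAM
# beta ratio (pull-down drive / pass-gate drive) takes only discrete values.

FINS = (1, 2, 3)                       # plausible fin counts per device

finfet_ratios = sorted({pd / pg for pd in FINS for pg in FINS})
print(f"{len(finfet_ratios)} achievable beta ratios:")
print([round(r, 2) for r in finfet_ratios])
# -> [0.33, 0.5, 0.67, 1.0, 1.5, 2.0, 3.0] -- a handful of choices where a
# planar designer had a near-continuous range.
```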

Hingarh adds to the advantages, noting “the finFET has better control over the channel due to several gates acting on that channel. This reduces the source-drain leakage current and suppresses the short-channel effects.”

Saggurti notes that “going to 16nm finFET meant that we had to learn finFET design and while not inherently difficult, it is just different.” As an example, Saggurti says, “In CMOS we were taught not to have more than four transistors stacked together, so a four-input NAND gate was the maximum. Now, with finFET, a stack is actually a good thing, so a gate with more than four inputs may be encouraged.”

Pushing the envelope
Memory design does not follow the same rules as logic. “The SRAM bit-cell typically utilizes smaller-than-minimum-size transistors in order to realize higher density,” says Hingarh. While this sizing is usually done by the fab, it requires careful design, and it is only the start of pushing things to the limit.

One of the first challenges with the design of new bit-cells is accurate characterization. “Accuracy is critical for characterization, verification and signoff, mainly due to reduced Vdd and the impact of process variations,” points out Bruce McGaughy, chief technology officer and senior vice president of engineering for ProPlus Design Solutions. “Device characteristics and physical behavior become even more complicated at these new nodes, so the margins that are left for designers are shrinking.”
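As a rough illustration of why that accuracy matters, the following Monte Carlo sketch uses an invented Gaussian read-margin model; none of the numbers come from real characterization:

```python
import random

# Brute-force Monte Carlo over an assumed Gaussian read-margin model (the
# 0.55 V requirement and 40 mV sigma are invented for illustration, not
# characterization data). Real sign-off targets bit fail rates near 1e-9
# and would need importance sampling; the point is how fast margin shrinks.

def fail_rate(vdd, trials=100_000, sigma=0.04):
    """Fraction of sampled cells whose toy read margin goes negative."""
    nominal_margin = vdd - 0.55        # hypothetical margin requirement
    return sum(random.gauss(nominal_margin, sigma) < 0
               for _ in range(trials)) / trials

for vdd in (0.8, 0.7, 0.6):
    print(f"Vdd={vdd:.1f} V -> estimated bit fail rate {fail_rate(vdd):.4%}")
# 0.8 V is essentially clean; 0.6 V fails ~10% of samples in this toy model.
```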

But memory designers continue to push the limits. “With finFET we get a one-time large drop in leakage, which means that while we still worry about designing in a low-leakage manner, dynamic power has become a bigger issue,” Saggurti explains. Dynamic power is CV²f, where C is the capacitance, V is the voltage and f is the frequency. “Voltage is the key and the bit-cells that come from the foundries are not designed to run at low enough voltages. We lower the voltage even further and then use things such as read and write assist circuitry.”
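A quick back-of-envelope calculation with that formula shows why voltage is the lever worth pulling (component values below are arbitrary illustrations):

```python
def dynamic_power(c, v, f):
    """Dynamic power P = C * V^2 * f (activity factor folded into f)."""
    return c * v**2 * f

# Illustrative values only: 1 pF switched capacitance at 1 GHz.
base = dynamic_power(1e-12, 0.9, 1e9)  # nominal 0.9 V supply
low = dynamic_power(1e-12, 0.7, 1e9)   # same cell at a lowered supply
print(f"0.9 V -> 0.7 V cuts dynamic power by {1 - low / base:.0%}")  # ~40%
```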

Yield, test and repair
As the memories get larger, test times increase. “Memory test has scaled nicely with process nodes,” says Kilopass’ Cheng. “The reason is that memories are regular, structured circuits, so there are ways to scale and make the testing easier and faster.”

“GalPat may be a very thorough test algorithm, but today the run times would be prohibitive,” points out Saggurti. “Alternative algorithms are used, some developed by Synopsys and others that are defined by the customer. There is also a new set of finFET-specific algorithms that are provided.”
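The run-time gap is easy to quantify. A GalPat-style test revisits every other cell for each base cell, so its operation count grows quadratically with array size, while a march test such as March C- performs a fixed number of operations per cell. A rough count, with the array size chosen purely for illustration:

```python
# Why GalPat run times blow up: for each base cell it revisits every other
# cell, so operations grow as O(n^2); a march test such as March C- performs
# a fixed 10 operations per cell, O(n).

def galpat_ops(n):
    # Per base cell: one write, then a read of each other cell interleaved
    # with a re-read of the base cell.
    return n * (1 + 2 * (n - 1))

def march_c_minus_ops(n):
    return 10 * n

n = 4 * 1024 * 1024                    # a 4 Mbit array (illustrative)
print(f"GalPat:   {galpat_ops(n):.2e} operations")        # ~3.5e13
print(f"March C-: {march_c_minus_ops(n):.2e} operations")  # ~4.2e7
```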

There are some additional test concerns with finFETs. “FinFETs represent a fundamental change to the underlying structure of the transistor,” explains Hingarh. “Test and failure analysis improvements are of particular importance as finFET critical dimensions are, for the first time, significantly smaller than the underlying node size. This has led to growing concern over increased defect levels as well as increased yield challenges.”

Fault models and detection techniques developed for planar transistors are not sufficient to cover finFET related defects in embedded memories. “Fabs usually provide defect data but designers have to implement an optimal set of test and repair algorithms,” says Hingarh. “Early results showed that finFET-based memories are more prone to dynamic faults.”

Should we expect lower yields? “I hesitate to put causality between yield and geometry,” says Saggurti. “It is a new process and this will initially have yield issues because of the complexity of the process. Once it stabilizes and the foundry understands how to control the process and the variability, then yield will go up. Good defect density can be achieved even in finFET designs. The foundries are doing a good job getting this under control.”

Hingarh points out that “during early design and production phases, we have to provide larger redundancy and repair capabilities (such as cumulative multi-corner and in-system repair), along with efficient volume diagnostics and yield analysis capabilities.”

“Memory redundancy is definitely needed and expanding,” says Joseph Reynick, director of design-for-test solutions at eSilicon. “This includes both hard and soft repair support and programmable BIST algorithms after tapeout.”

But there are a couple of key questions that need to be asked, notes Saggurti. One is whether engineers are testing everything. A second, related question is whether they can repair everything. “Are they hard defects from process failures or soft errors? In previous generations we would have seen single-bit failures, which could be fixed with redundant columns, but now we are seeing more than single-bit failures and we need both column and row redundancy.”
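As a sketch of what combined row-and-column repair allocation might look like, consider the toy pass below; the greedy policy and the allocate_repair helper are hypothetical, not any vendor’s repair-analysis algorithm:

```python
from collections import Counter

# Hypothetical repair-allocation pass: rows with multi-bit failures consume
# spare rows, since one spare column cannot fix them; the remaining failing
# columns fall back on spare columns.

def allocate_repair(fail_bits, spare_rows, spare_cols):
    row_counts = Counter(r for r, _ in fail_bits)
    repaired_rows = {r for r, cnt in row_counts.most_common(spare_rows)
                     if cnt > 1}
    leftover_cols = {c for r, c in fail_bits if r not in repaired_rows}
    return repaired_rows, leftover_cols, len(leftover_cols) <= spare_cols

fails = [(3, 7), (3, 9), (3, 12), (10, 5), (200, 5)]  # (row, col) failures
print(allocate_repair(fails, spare_rows=1, spare_cols=2))
# Row 3 (a multi-bit failure) takes the spare row; one spare column remaps
# column 5 and fixes the remaining two fails -> ({3}, {5}, True)
```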

Cheng adds that “memory repair is a fairly common strategy, and in advanced technologies a 5% repair area reserve is not uncommon. Furthermore, ECC is a must, because while repair can fix hard failures, they are not that common. What is more common are soft errors caused by environmental disturbances such as alpha particles.”
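SECDED (single-error-correct, double-error-detect) codes are the usual ECC choice for embedded SRAM. Here is a minimal sketch using Hamming(7,4) plus an overall parity bit on a 4-bit word; production memories use wider codes such as (72,64), and this toy encoder is for illustration only:

```python
# Minimal SECDED sketch: Hamming(7,4) plus an overall parity bit, protecting
# a 4-bit word. It shows single-error correction of a simulated soft error.

def encode(d):                         # d: four data bits
    p1 = d[0] ^ d[1] ^ d[3]            # covers codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]            # covers positions 2,3,6,7
    p4 = d[1] ^ d[2] ^ d[3]            # covers positions 4,5,6,7
    cw = [p1, p2, d[0], p4, d[1], d[2], d[3]]
    return cw + [sum(cw) % 2]          # overall parity makes it SECDED

def decode(cw):
    c = cw[:7]
    s = ((c[0] ^ c[2] ^ c[4] ^ c[6])
         | (c[1] ^ c[2] ^ c[5] ^ c[6]) << 1
         | (c[3] ^ c[4] ^ c[5] ^ c[6]) << 2)   # syndrome = error position
    odd = sum(cw) % 2                  # overall parity fails on odd # flips
    if s and odd:                      # single-bit error: correctable
        c[s - 1] ^= 1
        return [c[2], c[4], c[5], c[6]], "corrected"
    if s and not odd:                  # two flips: detect, don't correct
        return None, "double error detected"
    return [c[2], c[4], c[5], c[6]], "clean"   # (or parity-bit-only flip)

word = encode([1, 0, 1, 1])
word[4] ^= 1                           # simulated alpha-particle upset
print(decode(word))                    # -> ([1, 0, 1, 1], 'corrected')
```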

Automotive and other high reliability systems are imposing additional demands on soft errors. “Today, the methodology to evaluate memories is ad hoc,” admits Calypto’s Iyer. “Redundancy does play a big part, depending on the end markets that these designs serve. For example, some automotive standards call for deliberate redundancies to be built into the design for safety.”

The area taken up by embedded memory is one of the growing forces behind architecture change. “Solutions such as HBM2 (second-generation high-bandwidth memory) provide very large bandwidth and capacity at considerably lower power, system cost and latency compared to other external memories in ASICs,” points out Javier DeLaCruz, senior director for product strategy at eSilicon. “Luckily, the additional HBM2 memory in the package does not have the negative yield impact that additional embedded memory or additional DDR memory would have, given that the dies are able to share the repair functionality.”

Concluding remarks
How much does a typical designer need to know about memories and finFETs? “When we deliver embedded memory IP, we have done the worry for the designers,” says Saggurti. “We make sure that they are designed well and that they can be manufactured well and there are no inherent problems related to designing with these memories. We worry so that chip designers don’t have to.”

But just as Synopsys and other companies learned to take advantage of the differences that finFETs make in the design process, the same applies to much of the logic, as well. “We relearned layout styles and design styles, and there are things that are different than in the past,” concludes Saggurti.


