Memory Directions Uncertain

Experts at the Table, part 1: The world of memories is changing rapidly but it is not yet clear which approaches will become mainstream. Do we need a new memory?


Semiconductor Engineering sat down with a panel of experts to find out what is happening in the world of memories. Taking part in the discussion are Charlie Cheng, chief executive officer at Kilopass Technology; Navraj Nandra, senior director of marketing for analog/mixed-signal IP, embedded memories and logic libraries at Synopsys; Scott Jacobson, business development within sales and marketing at Cadence; and Frank Ferro, senior director for product development in the interfaces and memory division of Rambus. What follows are excerpts of that conversation.

SE: What are the issues and problems that you or your customers are having today when it comes to memory?

Ferro: Memory interfaces and high-speed signal interfacing are the biggest issues. There are lots of discussions around high-speed SerDes interfaces. Everyone wants to talk about the challenges of moving from 11 to 28Gb/s. There are also lots of different process nodes. Do you want that in 16 or 28nm, or perhaps using FD-SOI? This means there are a lot of things we have to get involved in.

Jacobson: I come from the verification side and the verification of memory models. Our customers face challenges daily because of the plethora of new opportunities in the 3D memory space. Jumping from 2D to 3D is multi-faceted – there are technical issues, process issues, business and management issues, risk management. All of these are presenting so many challenges and choices that customers are saying, ‘We see the field ahead of us but give us a direction. My application does this, I am doing these kinds of algorithms, I need this kind of parallelism and bandwidth – give me some guidance about what the tradeoffs are.’

Nandra: We are having interesting discussions about what is happening after DDR4. It is a single-ended interface running at very high speeds (3200Mb/s), and there is a strong opinion that it will be difficult to build anything after that because the technical challenges are mounting. DDR4 is for PC or server type applications. The mobile market has also pushed the bandwidth requirement, and we are actually seeing LPDDR requiring higher speeds than DDR. People are asking us for LPDDR4 3200 before they are asking for DDR4 3200. There are also competing technologies such as Hybrid Memory Cube, High Bandwidth Memory, and Wide I/O 1 and 2, all of which require TSVs for 2.5D and 3D integration.
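For context, the peak bandwidth implied by these data rates is simply the per-pin rate multiplied by the interface width. A rough sketch, assuming typical channel widths (a 64-bit DDR4 DIMM and a 32-bit LPDDR4 channel, neither of which is a figure from the discussion):

```python
# Back-of-the-envelope peak bandwidth for the data rates mentioned above.
# Channel widths are illustrative assumptions, not taken from the panel.

def peak_bandwidth_gb_s(data_rate_mt_s: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: transfers/s * bits per transfer / 8 bits per byte."""
    return data_rate_mt_s * 1e6 * bus_width_bits / 8 / 1e9

print(peak_bandwidth_gb_s(3200, 64))  # DDR4-3200 on a 64-bit DIMM      -> 25.6 GB/s
print(peak_bandwidth_gb_s(3200, 32))  # LPDDR4-3200, one 32-bit channel -> 12.8 GB/s
```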

Cheng: The biggest problem on the server side is the DIMM interface. It is a treadmill to go from 11 to 28 to 50Gb/s. If you can stack the memory, there is some messiness, but it is possible. The problem in server and data center architectures is that there is a DIMM in the middle, and it is a huge kluge. DIMM is the reason why DRAM does not have a role in the future if you extrapolate long enough. 3D and SerDes are just patches, and DDR4 and 5 are patches. The problem is that DRAM is too slow. In mobile, the problem looks like a milk farm. Cows take a long time to milk, and then you attempt to deliver the milk using a lot of Ferraris, as fast as you can, to get the data to the processor. This doesn’t work. The fundamental problem is that the cows take a significant amount of time. You can develop better cars, electric cars that use less energy, but basically we need a new memory, one that is large enough, on die. I disagree that mobile needs a new interface. There is no zero-power Ferrari. The industry just needs new memory.

Ferro: I agree that there are a lot of technologies today that are patches. One conclusion is that embedded DRAM solves a lot of these problems. If you can put it all on chip you have the speed, but nobody can do embedded DRAM. Wide I/O fills in between what we have today and embedded DRAM, but then we have the physical design and manufacturing challenges of the TSVs. DRAM has been architected for servers, but if you look at mobile, it has different requirements and nobody is architecting memory for that environment. You feel like you have to take DRAM, manufactured at the cheapest node and driven by the server/PC market, and build around that. If we designed a DRAM for the mobile market it could be different. We have been living with the latency and performance of the server DRAM.

Jacobson: There is something else that has changed in the past few years—the division of needs between the various market segments and their growth rates. There is the feature phone segment of China and India versus smart phones and the ultimate high-capacity, high-performance world we live in here. If you look at the consumption of memory into those segments, they are each driving different choices. Sometimes, they are going to older technologies for risk management reasons. This is forcing a different point of view when talking about supporting the kinds of memory that the end customer needs. You have to look at the market first and then go into the other technologies.

SE: As chips have gotten larger, we have brought more memory on-chip. We seem to be at an inflection point. Today, memory takes up 50% of the chip area, 50% of the power and has a huge impact on system performance. Is there a change in mentality that memory has become so important that it has to be considered a major part of the design?

Nandra: We have a lot of experience building memory compilers from 250nm down to 10nm, which we are working on now. There is a bifurcation in the different types of memory compilers required for high density and high speed. Consider the innovations that have to be put into the compilers, into the bit-cell technology, into FinFET technology, into TSMC 16nm. Memory compilers are building devices running down to 0.5 volts, and this enables massive amounts of SRAM to be embedded onto the chip. There are technology challenges in supporting all of the memory functions, such as read and write circuitry. There is lots of innovation going on in compiler technology.
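To make the compiler idea concrete, the hypothetical sketch below shows the kind of knobs a compiler-generated SRAM instance exposes to the SoC designer; the names and numbers are invented for illustration and are not from any particular vendor's tool.

```python
# Hypothetical illustration of the parameters a memory compiler exposes.
# All names and values are invented for this example.

from dataclasses import dataclass

@dataclass
class SramInstance:
    words: int          # number of addressable entries
    bits: int           # word width in bits
    column_mux: int     # column-mux ratio, trading aspect ratio against speed
    vdd_min: float      # lowest supported supply voltage, in volts

    def capacity_kbits(self) -> float:
        return self.words * self.bits / 1024

# A high-density instance biased toward low-voltage operation.
macro = SramInstance(words=16384, bits=32, column_mux=8, vdd_min=0.5)
print(f"{macro.capacity_kbits():.0f} kbit macro, Vdd down to {macro.vdd_min} V")
```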

Jacobson: One side effect of smaller feature nodes and FinFETs has been amplified signal integrity problems. It is all about chasing maximum performance or maintaining your cost. With higher speeds and smaller dimensions the tolerances are closer to threshold, and signal integrity is becoming a driver for customers and the choices that they make. Can they slow things down and gain a cost advantage? Can they lower risk and still manage to get more memory bandwidth? In 3D memories you have aspects of yield and TSVs that have to be dealt with. These are business management decisions about which memory technologies should be pursued from an economics point of view. What are the risks, what are the rewards? There are so many choices, each with different challenges. The tall skinny world is driving DDR approaches to be faster and faster, versus short and parallel approaches, such as Wide I/O 2, where you have wider, slower interfaces. These are corner points in a cost/performance tradeoff for a particular type of design and the risks you are willing to take.
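The tall-skinny versus wide-slow tradeoff comes down to the same bandwidth arithmetic: a narrow interface has to run far faster per pin to match a wide one. A small sketch, with assumed widths and rates chosen for illustration rather than quoted from any standard:

```python
# How fast must a narrow interface run per pin to match a wide, slow one?
# The widths and the wide interface's rate are assumed for illustration.

wide_width_bits, wide_rate_mt_s = 512, 800   # Wide I/O-style: many slow pins
narrow_width_bits = 64                       # DDR-style: few pins

wide_bw_gb_s = wide_width_bits * wide_rate_mt_s * 1e6 / 8 / 1e9        # 51.2 GB/s
required_rate_mt_s = wide_bw_gb_s * 1e9 * 8 / narrow_width_bits / 1e6  # 6400 MT/s per pin

print(f"wide/slow bandwidth: {wide_bw_gb_s:.1f} GB/s")
print(f"narrow interface needs {required_rate_mt_s:.0f} MT/s per pin to match it")
```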

Cheng: Memory compilers really concentrate on SRAM and not DRAM. So when we talk about 50% of area and power, we are talking about the largest blocks of memory, and the two largest blocks are indeed built very differently. Consider a processor’s L3 cache. It is the largest memory block on the die, but it only gets used 2% of the time. If usage gets to 5%, it is because your L1 and L2 caches are horrible. Someone should be fired. So if it is 1% or 2% of the time, it means that it has to wake up quickly and go to sleep quickly, and during the time it is awake you don’t care about how much power it burns. The other kind of memory is the network buffer. For a 10G Ethernet port, the amount of buffer you need is on the order of 20MB, and this either needs to be on the die, or at least a good fraction of it does. The problem is that it is always on because the packets keep coming in. The optimizations for that are very different. It is not about wake and sleep. It is about efficient processing. Memory compilers don’t really go beyond 8 or 16 Mbits, so both of these are really custom memories. There is a supply problem here because there aren’t many companies that build these as one-off custom designs. ASIC and chip companies end up having to do this a lot.
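The roughly 20MB figure for a 10G port is consistent with a bandwidth-delay-product sizing rule (line rate multiplied by the round-trip time the buffer must absorb). A quick check, where the round-trip time is an assumed value chosen only to reproduce that order of magnitude:

```python
# Bandwidth-delay-product estimate for a packet buffer.
# Only the 10 Gb/s line rate comes from the discussion; the RTT is assumed.

line_rate_bits_per_s = 10e9   # 10G Ethernet
rtt_s = 16e-3                 # assumed round-trip time the buffer must cover

buffer_bytes = line_rate_bits_per_s * rtt_s / 8
print(f"buffer ~ {buffer_bytes / 1e6:.0f} MB")   # ~20 MB
```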

Nandra: You are starting to see some innovations in high-density memories. The power management and control area is also interesting. The idea of reducing the supply voltage to keep power down has got to the point where you can’t reduce it any further, so then you have to look at the activity levels and managing latency and wake up time. You see those algorithms starting to appear in the compilers.
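One common shape for such an algorithm is a break-even test: put a memory bank into a sleep or retention state only if the predicted idle interval saves more leakage energy than the wake-up costs, and only if the wake-up latency fits the access-latency budget. A minimal sketch with made-up parameters, not any vendor's compiler logic:

```python
# Minimal break-even sleep heuristic; all parameter values are invented.

def should_sleep(predicted_idle_s: float,
                 leakage_w: float,
                 wake_energy_j: float,
                 wake_latency_s: float,
                 latency_budget_s: float) -> bool:
    """Sleep only if the leakage energy saved over the idle window exceeds the
    wake-up energy cost, and the wake-up latency fits the latency budget."""
    energy_saved_j = leakage_w * predicted_idle_s
    return energy_saved_j > wake_energy_j and wake_latency_s <= latency_budget_s

# Example: a bank predicted to stay idle for 50 microseconds.
print(should_sleep(predicted_idle_s=50e-6,
                   leakage_w=5e-3,          # 5 mW leakage while awake (assumed)
                   wake_energy_j=100e-9,    # 100 nJ to wake the bank (assumed)
                   wake_latency_s=200e-9,   # 200 ns wake latency (assumed)
                   latency_budget_s=1e-6))  # 1 microsecond budget (assumed) -> True
```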

Jacobson: This is where we are also seeing alignment in customer concerns. Depending upon who you talk to, they have different sets of concerns. This is based on the application space, and not all memories are right for all applications. We don’t have a one-to-many approach anymore. Geography creates another cut, so we have a matrix of requirements.


