Tougher Memory Choices

Experts at the table, part 2: As demand shifts from performance to power, questions surface about whether Wide I/O 2 will pick up steam, or whether we will see the advent of DDR5 and LPDDR5.


In part 1 of this roundtable, the participants talked about the investments being made in memory technologies, the role that memories play in system security, and the tools support for optimizing memory architecture. Taking part in the conversation are Herbert Gebhart, vice president of interface and system solutions in the Memory and Interfaces Division of Rambus; Bernard Murphy, chief technology officer for Atrenta; Patrick Soheili, vice president and general manager for IP Solutions and vice president for business development at eSilicon; and John Koeter, vice president of marketing for the solutions group of Synopsys. What follows are excerpts of that conversation.

SE: We have talked about on-chip choices, but what happens when we have to go off chip? DRAM architectures have been driven by the PC and large-scale computing market and not by the SoC market.

Gebhart: That is right. The DRAM market was driven by the needs of the PC. That defined things such as the data widths, the chunk sizes and the types of accesses that you would make. These are not necessarily optimal for other applications. However, DRAM economics have made it difficult for memory makers to come out with a lot of different products optimized for different types of SoCs. We are seeing movement in areas such as LPDDR, which has new features specific to portable products. We are also seeing some interest in Wide I/O memory now.

Koeter: It has been a sea change. We have been a leading supplier of DDR interfaces for seven or eight years, and in the last two years there has been a lot of change. It used to be all about the PC, and now it has flipped. It is all about what mobile needs, such as LPDDR4.

Soheili: 2.5D is going to be really big. Stacking memories using an interposer is an interesting way to get power out of the system and performance into the system. It offers bandwidth in terms of terabytes and a reduction in interface issues, which is where much of the power is consumed. It is not an answer for everybody, but many applications are going to need this kind of performance. Today, the price/performance ratio is not quite there, but there is a lot of activity here.

Murphy: Once again, this is a security issue. People are hacking the external interface of the memory, and there are companies pipelining AES cores into the DDR banks so that the data is encrypted between the memory and the processor. If you go to a 2.5D or 3D architecture, then you no longer have to do this.

Gebhart: It is a lot more difficult with Wide I/O, where you would need to get to all of those bits in order to see what is going on.

Koeter: Soheili is a little more bullish than I am. As an IP supplier, I can say the first generation of Wide I/O was not a huge market success. The second generation is coming to market. For mainstream adoption it is all about the cost point, and today the cost point has not been reached.

Soheili: It is the IP side of the business that we are excited about, not production.

Gebhart: Wide memory bandwidth needs this type of solution so that you can handle signal integrity efficiently, and the monolithic, high-end GDDR interface just isn't fast enough and uses too much power. In the LPDDR2 days, the interface power was about 50% of the DRAM power, and 25% of the overall memory subsystem power. You could save a lot of power by doing something like Wide I/O once the cost economics are there. At the same time, apps processor companies have become a lot more efficient about how they use the DRAM interface after they realized how power hungry it was. Memory controllers are much better at only turning on the bus at full speed when they absolutely need to.

Soheili: We have mobile customers that we sell die to today, and that would not have happened 5 or 10 years ago. This is just another step in the evolution of that die: bringing everything closer together and limiting a lot of the signal integrity issues that you otherwise might have had.

SE: Why is the speed of Wide I/O so slow? What is the limiter?

Koeter: That was one of the factors that limited the success of the first generation. People learned from the experience, and the second generation will address that challenge.

Gebhart: It is not 5X faster, but it is a few X faster. The Z height and power savings are what will drive people toward it. There are 2 billion devices that could use that kind of improved performance today, but they need it to get cheaper before they can afford it. Getting a lower cost out of the interface at the same speed may be more important than higher performance.

Soheili: I am sure that Qualcomm and Nvidia would not agree. They keep wanting to make it faster.

Gebhart: Sure they do, but the next 2 billion devices don’t need that.

Koeter: The fastest growing segment of the market is the low-cost smart phone market.

SE: We all obsess about smart phones, but what about the other end of the scale? What does memory look like in an IoT type of device?

Koeter: It depends on the type of IoT. If we exclude the cloud side and focus on things such as wearables, the sweet spot is around 55nm, plus or minus a node. Our fab partners are putting more emphasis into embedded flash. If it is less than a couple of Mbytes of memory, you do not need to have any external memory or interfaces.

Soheili: We are seeing new types of NVMs here, for exactly this type of reason—fast access, lower power, smaller die sizes.

Gebhart: We are starting to see interest in ReRAM, which is a new non-volatile memory and is only a few mask layers on top of a standard bulk CMOS process. That will be of great interest to IoT devices. A device can sense and store that information until it gets a chance to upload its data, consuming no power to retain it. There are several candidates out there today, but none is quite ready for prime time yet. It always seems as if it is coming and never quite gets there, and it seems as if the materials needed are difficult to use and control.

Soheili: The other thing we see with wearables is that you have to hit a certain form factor, so footprint and aspect ratios suddenly become important. And with power, you really have to go way beyond where we are today. There are lots of new challenges, and we need architectural ways to respond to some of them.

Murphy: It is an interesting topic because storage has to stick around when the rest of the IoT goes to sleep.

Koeter: Typically, there is an always-on block for which you use a lot of design techniques to make that very low power. But you do have to ping the external world to see if there is anyone out there.

Soheili: It is a matter of risk, as well. The older technologies and nodes have less risk, and so many will stick with SRAM-based systems. But there is a price to pay for that in terms of size, power, access, etc.

Koeter: 40nm is still a pretty aggressive node for NVM and embedded flash applications. While it has some challenges, it also has aggressive wafer pricing, so there is no one right answer.

SE: What changes will we see in memories in the next 12 to 18 months?

Koeter: There will be a lot of innovation in embedded flash driven by IoT. The successors to DDR will be very interesting.

Gebhart: This could be Hybrid Memory Cube or High Bandwidth Memory. Even Wide I/O 2 could see the light of day in 18 months.

Koeter: Will there be a DDR5 or LPDDR5? I haven’t heard a lot of talk about that, but I wouldn’t rule that out.

Soheili: Over the past year or so, it has been all about power. It used to be about speed and performance, so now innovation has to come from saving more power. The demand to extend battery life will be fierce. This is the case even with servers.

Murphy: Smart phone vendors are now thinking much more aggressively about security and that will impact a lot of things, including memories.