An Insider’s Guide To Planar And 3D DRAM

Cisco Systems looks at DRAM and the trade-offs.


Semiconductor Engineering sat down to talk about planar DRAMs, 3D DRAMs, scaling and systems design with Charles Slayman, technical leader of engineering at network equipment giant Cisco Systems. What follows are excerpts of that conversation.

SE: What types of DRAM do network equipment OEMs look at or buy these days?

Slayman: When we look at DRAM, we look at it for networking applications and for server applications. Those have slightly different requirements. Networking cares more about low latency. Cost-per-bit and memory capacity are more important in the server space.

SE: What are some of the challenges that OEMs face today?

Slayman: We have to be much more creative now to squeeze extra performance, reliability or features out of a design. It's no longer a matter of simply shrinking the die and getting something faster and cheaper.

SE: What else?

Slayman: What's happening in memory is the same thing that is happening everywhere else with Moore's Law. You don't get everything for free anymore. You have to make trade-offs. The trade-offs that are most important depend on whether you are building a supercomputer, a blade server or a core router. And, of course, in the mobile space, the trade-offs depend on whether you are building tablets, cell phones or IoT devices.

SE: Let’s first talk about DRAM and scaling. How far will planar DRAM scale?

Slayman: DRAM vendors are gung ho and marching down into sub-20nm. They may even get two generations in, like 1xnm and 1ynm. A 1znm? I am not sure about that.

SE: How long will planar DRAM last?

Slayman: It's possible the planar DRAM roadmap could extend into the next decade, to the mid-2020s.

SE: Do DRAM suppliers have an advantage in terms of patterning over the logic vendors?

Slayman: The advantage they have over the logic guys is that only a couple of layers of the DRAM need to be really tight. Unlike logic, which has a whole bunch of tight-pitch layers, the DRAM guys can afford to keep pushing planar technology shrinks below 20nm and still be cost-effective.

SE: Is multi-patterning cost-effective for DRAM?

Slayman: It's painful, but it's still cost-effective. The cost of doing it more than pays off in the added density.

SE: Let’s move to the devices. The industry is moving from DRAMs based on the DDR3 interface standard to DDR4. What are some of the design implications?

Slayman: We will move from DDR3 to DDR4, because DDR4 has better performance and features. The die shrinks that happen within DDR4 don't really improve its performance. What they do is move the device into higher speed bins. So the die shrinks end up being advantageous to the DRAM suppliers, because they get better yields in the higher-performance bins. Then, they can charge a premium.
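The economics behind that are easy to sketch. In the back-of-the-envelope calculation below, the bin splits and prices are invented purely for illustration; they show why shifting yield toward a faster bin pays off even though the design itself is unchanged.

```python
# Why a die shrink pays off through speed bins.
# Bin fractions and prices below are invented for illustration only.

dies_per_wafer = 1000

# Hypothetical bin splits: (fraction of good dies, price per die in $).
before_shrink = {"DDR4-2133": (0.70, 4.00), "DDR4-2400": (0.30, 5.00)}
after_shrink  = {"DDR4-2133": (0.30, 4.00), "DDR4-2400": (0.70, 5.00)}

def wafer_revenue(bins):
    """Revenue per wafer: sum of (bin fraction * bin price) over all bins."""
    return dies_per_wafer * sum(frac * price for frac, price in bins.values())

print(f"Before shrink: ${wafer_revenue(before_shrink):,.2f} per wafer")  # $4,300.00
print(f"After shrink:  ${wafer_revenue(after_shrink):,.2f} per wafer")   # $4,700.00
```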

SE: Where does cost fit in?

Slayman: A lot of people think PC vendors want low-cost DRAM and the server guys can pay a little more. The opposite is true. There are so many DRAMs in a server design that DRAM dominates the cost of the server. So, the server guys are extremely sensitive to DRAM cost. They want reliability, but they also want low cost.

SE: What’s after DDR4?

Slayman: It now looks like there will perhaps be a DDR5 to replace DDR4 at some point. There was a question about what would happen there, but it may indeed come to fruition. So, the DRAM roadmap for servers continues on with DDR5. And, of course, there will be stacking of those DDR5 die. Some of the density increase will come from the process node, and some will come from multi-die packaging using TSV technology.

SE: Why do we need DDR5?

Slayman: Servers and routers need a lot of memory. They may need something like a DDR5-type solution with the highest possible density.

SE: Where is the industry in terms of DDR5?

Slayman: It’s still in the talking stages.

SE: Let’s talk about 3D DRAM. There is the Hybrid Memory Cube (HMC) and High Bandwidth Memory (HBM). Where do they fit in?

Slayman: They are taking different approaches. Each of the technologies will have its place.

SE: What about HMC?

Slayman: HMC provides huge bandwidth, but it comes at an added cost because of the serial links. Maybe at the high end, where performance is necessary, HMC will fit into that space. With HMC, the link is no longer a wide parallel path. It's a serial link. It might have more bandwidth than a DRAM DIMM, but it also has more latency.
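A rough comparison makes the trade-off concrete. The figures in this sketch are assumptions chosen for illustration (a DDR4-2400 channel with a 64-bit data path, and a single full-width HMC link with 16 lanes per direction at 15 Gb/s), not vendor specifications.

```python
# Back-of-the-envelope: parallel DIMM channel vs. HMC serial link.
# All figures are illustrative assumptions, not vendor specifications.

# DDR4 DIMM channel: wide, parallel, comparatively low latency.
dimm_rate_mts = 2400       # DDR4-2400, in mega-transfers per second (assumed)
dimm_width_bits = 64       # 64-bit data path (72 bits with ECC)
dimm_gbs = dimm_rate_mts * dimm_width_bits / 8 / 1000   # 19.2 GB/s peak

# HMC link: narrow and serialized, so every access also pays a
# serialize/packetize/deserialize latency cost that a DIMM avoids.
hmc_lanes = 16             # full-width link, lanes per direction (assumed)
hmc_lane_gbps = 15         # Gb/s per lane (assumed)
hmc_gbs = hmc_lanes * hmc_lane_gbps / 8                 # 30 GB/s per direction

print(f"DDR4 DIMM channel: {dimm_gbs:.1f} GB/s peak")
print(f"HMC link:          {hmc_gbs:.1f} GB/s per direction")
```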

SE: What about HBM?

Slayman: HBM looks like a DIMM-on-a-chip. Instead of getting 72 bits off a DIMM by ganging up DRAMs in parallel, you are putting all those I/Os into this HBM stack. But the problem is that this stack will not solder directly onto a PC board. So with that stack, you somehow have to integrate it into your CPU package design. And that's a whole new business model.
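The arithmetic behind "DIMM-on-a-chip" is simple. Here is a minimal sketch, assuming a typical 72-bit ECC DIMM built from x8 devices and a first-generation HBM stack with eight 128-bit channels; the exact organization varies by product.

```python
# How a DIMM reaches 72 bits, and how HBM folds that width into one stack.
# Device organization is typical/illustrative; real products vary.

# An ECC DIMM gangs narrow DRAM devices in parallel into a wide data path:
device_width = 8                   # x8 DRAM devices (assumed)
data_devices, ecc_devices = 8, 1   # 64 data bits plus 8 ECC bits
dimm_width = (data_devices + ecc_devices) * device_width
print(f"DIMM data path: {dimm_width} bits")                    # 72 bits

# First-generation HBM puts a much wider interface inside one package:
hbm_channels = 8                   # independent channels per stack
hbm_channel_width = 128            # bits per channel
print(f"HBM stack: {hbm_channels * hbm_channel_width} bits")   # 1024 bits

# An interface that wide only works across a silicon interposer, not
# PC-board traces, which is why the stack moves into the package.
```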

SE: How will HBM work?

Slayman: AMD is the pioneer in getting HBM into production. They have designed a graphics chip around this first-generation HBM. They will buy the HBM stack from Hynix and put it into their graphics processor package. In theory, it should be very fast, because the memory is sitting right next to the graphics processor.

SE: What are the challenges with that?

Slayman: The supply chain infrastructure has to be developed.

SE: So will 3D DRAMs solve the memory bandwidth bottleneck or not?

Slayman: TSV will help feed DDR DRAM as well as HBM and HMC. It's a matter of time until the TSV ecosystem works itself out to where it is a viable technology. Then, it will start to take some of the load off the process shrinks and the Moore's Law problems we are facing.


