The Future Of Memory (Part 2)

Experts at the table, part 2: The impact of 2.5D and fan-outs on power and performance, and when this technology will go mainstream.


Semiconductor Engineering sat down to discuss future memory with Frank Ferro, senior director of product management for memory and interface IP at Rambus; Marc Greenberg, director of product marketing at Synopsys; and Lisa Minwell, eSilicon’s senior director of IP marketing. What follows are excerpts of that conversation. Part one of this roundtable appears under Related Stories below.

SE: Where is high-bandwidth memory being used? Is that largely for the cloud?

Ferro: Yes. Whenever you’re doing 2.5D, there are performance advantages, but the cost is still high for now.

Minwell: Yes, for now. But over time there are different approaches in packaging we’re looking at. By the year 2020, it might be more cost-effective for customers.

SE: Such as organic interposers?

Minwell: Yes, or a build-up. And fan-out packaging. Given the way we’re looking at it, where we are today with interposer technology, and how long it’s taken us to get to where we are, you’re looking at 2020 before we finally get to that point.

SE: How much of the performance and power improvement is coming from the processor versus the memory?

Ferro: The move from DDR3 to DDR4 provided 25% power savings. The core voltage of the DRAM and the interface voltage both dropped. Those kinds of efficiencies are there. HBM helps because you’re actually running at a slower rate, 2GHz for HBM2. You have a lot of signals, but per signal the power is lower. Fundamentally, though, there isn’t anything that’s new and low power. We’re scaling what exists now.
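
As a rough illustration of the voltage effect Ferro describes, switching power scales roughly with the square of the supply voltage. The 1.5 V and 1.2 V rails below are the nominal DDR3 and DDR4 figures, not numbers quoted in the discussion:

```python
# Back-of-the-envelope check: dynamic switching power scales roughly with V^2.
v_ddr3, v_ddr4 = 1.5, 1.2  # nominal supply rails, in volts
print(f"DDR4/DDR3 V^2 ratio: {(v_ddr4 / v_ddr3) ** 2:.2f}")  # 0.64
# The switching component alone drops by roughly a third; since not every part of
# the system scales with voltage, a ~25% net saving is in the right ballpark.
```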

Greenberg: We’re seeing with HBM much less power versus DRAM.

SE: Because of a bigger pipe?

Ferro: There are multiple things. An important one is the distance. If you’re driving a channel through a silicon interposer versus a DIMM, with the DIMM you need more power to get across the board. With an interposer it’s a shorter distance.

Minwell: On a board that’s 4 to 6 millimeters.
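
A minimal sketch of why the shorter reach matters, assuming a simple lumped C·V²·f model; the trace lengths and capacitance per millimeter are illustrative assumptions, not figures from the panel:

```python
# Dynamic I/O power per signal, modeled crudely as C * V^2 * f, where the trace
# capacitance grows with channel length. All numbers are illustrative assumptions.

def io_power_mw(trace_mm: float, cap_pf_per_mm: float, v_swing: float, freq_ghz: float) -> float:
    c_total_f = trace_mm * cap_pf_per_mm * 1e-12           # total capacitance, farads
    return c_total_f * v_swing**2 * freq_ghz * 1e9 * 1e3   # milliwatts per signal

# Same signaling rate and swing; only the channel length differs.
board_to_dimm = io_power_mw(trace_mm=50, cap_pf_per_mm=0.15, v_swing=1.2, freq_ghz=2.0)
interposer    = io_power_mw(trace_mm=5,  cap_pf_per_mm=0.15, v_swing=1.2, freq_ghz=2.0)

print(f"board/DIMM channel: ~{board_to_dimm:.1f} mW per signal")   # ~21.6 mW
print(f"interposer channel: ~{interposer:.1f} mW per signal")      # ~2.2 mW
```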

SE: A key issue that everyone is grappling with right now is more data. Memory plays a big role here. You need to get the data in and out, but you need to do it efficiently. Where do you see the memory architectures changing, if at all?

Greenberg: If you’re going to save power, it’s not a one-shot deal. You look at whether you can dynamically change frequency. Can you turn on and off the I/O features that allow you to reach high frequency? Can you change the termination, keep the buses short, keep the memory close, and control what’s going out to external memory and what isn’t? There are also low-power modes for memories. You have to take everything. It’s the sum of all steps.

Ferro: If you look at DDR, there isn’t much in the way of power savings going on except with LPDDR. As we move into servers, people are looking at ways to save power and to adopt some of those LPDDR features into DDR going forward.

Minwell: When you have multiple SerDes on chip and you’re looking at multiple data paths on the package to DRAM, the amount of power that consumes is astronomical. Where you’re looking for a lot of bandwidth, that’s where you get these huge savings by going to HBM. You can cut the power significantly, in some cases down to 1.5 picojoules per bit. There are also some new architectures out there that are looking to communicate with external memory chips like HBM, but they are not necessarily using the same HBM PHY approach. They require the memory providers themselves to modify the control logic to be able to communicate with the chip. It significantly reduces the amount of power in the connectivity from the ASIC to the memory stack. There is a JEDEC group, 16, which is working on that.
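
A quick way to relate an energy-per-bit figure to interface power is to multiply by the bandwidth. The 1.5 pJ/bit figure is from Minwell’s comment; the 256 GB/s bandwidth (roughly an HBM2 stack’s peak) and the 6 pJ/bit SerDes comparison point are assumptions added for illustration:

```python
# Convert interface energy-per-bit to power at a given sustained bandwidth.

def interface_power_watts(energy_pj_per_bit: float, bandwidth_gbytes_per_s: float) -> float:
    bits_per_second = bandwidth_gbytes_per_s * 1e9 * 8
    return energy_pj_per_bit * 1e-12 * bits_per_second

print(f"HBM-class link at 1.5 pJ/bit:       {interface_power_watts(1.5, 256):.1f} W")  # ~3.1 W
print(f"SerDes link at an assumed 6 pJ/bit: {interface_power_watts(6.0, 256):.1f} W")  # ~12.3 W
```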

SE: We’ve been looking at this largely from the standpoint of the data center. What happens in smaller devices, like a phone or an IoT device? What changes there?

Minwell: What is happening is that those technologies are being held up by flash and non-volatile memory. They’re stuck at the older technology, while the rest of the communications are down at the 10nm level. The struggle is being able to manage all of the different pieces of technology, from analog to external memory, and being able to bring them down to the smaller technologies. That’s another example where packaging can help. That’s the concept of tiles: analog IP and flash that are proven at an older technology can communicate with a chip that may be at the forefront of technology with much more integration, and now they can be packaged together.

Ferro: Right now LPDDR is more expensive than DDR. Traditionally you could plot it: DDR comes out, and LPDDR always lags a bit from a cost standpoint. There wasn’t a memory designed just for the mobile market. There was DDR, and then it was adapted for mobile. That’s still the way it is today. They’re taking the highest-volume memory and building off of that. Latency doesn’t change much, and it probably will be the same for DDR5. I don’t see a mobile-specific memory being developed.

Greenberg: In the IoT space, one of the choices may be 8 gigabits of 3200-rate memory. And you’re going to use that with a thermostat? So what do you do instead? People are looking back at much older technology from 10 to 20 years ago, like parallel NOR devices and standalone SRAM. But those devices are getting so old that no one really wants to use them. So people are looking at putting all of that onto a serial peripheral interface, an SPI bus. These cost much less than a dollar. They’re in 8-pin packages. They’re a very good memory technology for that type of IoT application.
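
For a sense of how simple the SPI path Greenberg mentions is, here is a minimal sketch that reads the JEDEC ID and the first few bytes from a generic SPI NOR part. It assumes a Linux host with the py-spidev package and the flash on bus 0, chip-select 0; a real thermostat-class MCU would use its own SPI driver, but the bus transactions look the same:

```python
import spidev

spi = spidev.SpiDev()
spi.open(0, 0)                # bus 0, chip-select 0 (board-specific assumption)
spi.max_speed_hz = 1_000_000
spi.mode = 0

# JEDEC "Read Identification" command (0x9F) returns manufacturer and device ID
# on most SPI NOR flash parts.
resp = spi.xfer2([0x9F, 0x00, 0x00, 0x00])
print("JEDEC ID bytes:", [hex(b) for b in resp[1:]])

# Standard "Read Data" command (0x03) with a 24-bit address, clocking out 16 bytes.
addr = 0x000000
cmd = [0x03, (addr >> 16) & 0xFF, (addr >> 8) & 0xFF, addr & 0xFF] + [0x00] * 16
data = spi.xfer2(cmd)[4:]
print("first 16 bytes:", [hex(b) for b in data])

spi.close()
```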

SE: We have a number of inflection points in the market, and people are looking at what’s right for a specific application.

Minwell: Yes. There is one more type we didn’t discuss, which is MRAM. That’s finally getting adopted and making its way into manufacturing. That will play a role. It’s going to target these mobile systems. This was first thought of at Motorola in 2000. Now, 16 years later, it’s finally manufacturable. It has been a long road.

SE: Is it because the cost has dropped for MRAM, or other memory technologies are no longer appropriate?

Minwell: It’s a combination of both. There is a market need and the product is reliable. You can build the technology, but you have to prove reliability. That’s been a large burden for MRAM.

SE: What does MRAM bring to the table we haven’t seen before?

Ferro: It gives you better latency than flash, but it’s still worse than DRAM.

SE: And 3D XPoint is somewhere in there, as well, right?

Greenberg: Yes, between DRAM and flash.

SE: Will we ever have a universal memory?

Greenberg: People talk about XPoint being the universal memory, but we don’t have a commercial product yet. It could be…maybe.

Minwell: I would second the maybe.

Greenberg: You still don’t have the endurance of other technologies. There is nothing out there with DRAM type of endurance other than DRAM. That’s one of the big issues. And there is nothing out there with the cost of NAND. Speed-wise, it’s really hard to get anything faster than DRAM. You can go to SRAM, but it’s more expensive.

SE: There has been a lot of talk about moving the data closer to the memory. How much will that save in terms of power, performance and cost?

Ferro: There is a fundamental bottleneck between the CPU and memory, and what we’re trying to do is open up that bus. So how do you get higher performance and higher efficiency out of that bus? We’re not requiring a new memory or CPU, but how do we get more performance out of what’s there? And how do we make that bus more efficient?

Greenberg: The concept of having logic alongside memory is still very new. If you have to XOR something with something else, bringing that data all the way from DRAM and sending it back is a long way for that data to go. So if you could send an instruction to the DRAM that said, ‘Take the memory in location X, and XOR it with this value I’m going to give you now,’ then I only have to move one set of data in one direction. And I didn’t have to bring it all the way to the CPU and back again. Logically we can say that makes sense. But how do we create an architecture that really can take advantage of that? That’s a question.

Minwell: It’s working with the DRAM providers, as well. Obviously they need to be able to support those types of interfaces and potentially bring logic on board, as well.

Greenberg: Think of this as x = x + 1. Do I want to take the data and bring it all the way to the CPU just to add 1 to the value? Maybe not, but we’re not there yet.
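
A purely conceptual sketch of the near-memory idea Greenberg and Minwell are describing: instead of reading a value to the CPU, modifying it, and writing it back, a small command travels to the memory. The `NearMemory` class and its operation set are hypothetical; no standard DRAM command set exposes anything like this today:

```python
class NearMemory:
    """Toy model contrasting conventional read-modify-write with in-memory ops."""

    def __init__(self, size: int):
        self.cells = bytearray(size)

    # Conventional path: data crosses the memory bus in both directions.
    def read(self, addr: int) -> int:
        return self.cells[addr]

    def write(self, addr: int, value: int) -> None:
        self.cells[addr] = value & 0xFF

    # Hypothetical in-memory operations: only a small command and operand travel one way.
    def xor_at(self, addr: int, operand: int) -> None:
        self.cells[addr] ^= operand & 0xFF

    def add_at(self, addr: int, operand: int) -> None:
        self.cells[addr] = (self.cells[addr] + operand) & 0xFF


mem = NearMemory(1024)
mem.write(0x10, 41)

# Conventional "x = x + 1": the data crosses the bus twice.
x = mem.read(0x10)
mem.write(0x10, x + 1)

# Near-memory "x = x + 1": only the command and the operand cross the bus.
mem.add_at(0x10, 1)

print(mem.read(0x10))  # 43 after both increments
```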

Related Stories
The Future Of Memory (Part 1)
Part 1: DDR5 spec being defined; new SRAM under development.
The Future Of Memory (Part 3)
Security, process variation, shortage of other IP at advanced nodes, and too many foundry processes.
What’s Next For DRAM?
As DRAM scaling runs out of steam, vendors begin looking at alternative packaging and new memory types and architectures.
New Memory Approaches And Issues
What comes after DRAM and SRAM? Maybe more of the same, but architected differently.
ReRAM Gains Steam
New memory finds a lucrative niche between other existing memory types as competition grows.


