Experts at the table, part 3: Security, process variation, shortage of other IP at advanced nodes, and too many foundry processes.
Semiconductor Engineering sat down to discuss future memory with Frank Ferro, senior director of product management for memory and interface IP at Rambus; Marc Greenberg, director of product marketing at Synopsys; and Lisa Minwell, eSilicon’s senior director of IP marketing. What follows are excerpts of that conversation. To view part 1, click here. Part 2 is here.
SE: What’s the next big market for advancements in memory?
Minwell: Deep learning. They’re looking at how to do all of these algorithmic modifications of the data, and assessing the data to get the right answer for deep learning. That’s the area that will drive it.
Ferro: But it still comes down to knowledge of the application. When you have better systems that know the data types you’re using, you’re going to get much more efficiency out of your memory. Even looking back at my early days in processing, we still look at things from a scalar level when we program. It’s one thread, and we really don’t use multiple threads effectively. There are some programmers who have three or four threads running simultaneously, but it’s not intuitive. So we’re locked into this CPU to memory architecture from a programming standpoint. That hasn’t changed. We’re just trying to build efficiencies around that model.
SE: We’ve discussed SRAM and DRAM. The next piece, moving out from the processor, is storage, and the big question there is how robust it is. What is better than a spinning disk?
Ferro: Nothing beats it from a cost per bit right now, and it probably won’t for a long time. Even if you look at data centers, the percentage of flash there is small.
Minwell: I agree.
Greenberg: It’s still a cost function. It’s almost like endurance comes second. You can deal with endurance in several different ways — by moving data around in memory, and by coming back and correcting the bits, and techniques of that sort. Nobody has stood up and said they need more reliable flash, but everyone says they are concerned about the cost.
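The "moving data around" technique Greenberg alludes to is wear leveling: a flash controller remaps each logical rewrite to the least-worn physical block so no single block exhausts its erase budget. A toy sketch of the idea (the `WearLevelingFTL` class and its structure are illustrative, not any vendor's controller):

```python
import heapq

class WearLevelingFTL:
    """Toy flash translation layer: every rewrite of a logical block is
    steered to the least-worn free physical block, spreading erase
    cycles evenly instead of burning out frequently written blocks."""

    def __init__(self, num_blocks):
        self.mapping = {}             # logical block -> physical block
        self.wear = [0] * num_blocks  # erase count per physical block
        # Min-heap of (erase_count, physical_block) for free blocks.
        self.free = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.free)

    def write(self, logical):
        # Recycle the previously mapped block: count one erase,
        # then return it to the free pool.
        if logical in self.mapping:
            old = self.mapping[logical]
            self.wear[old] += 1
            heapq.heappush(self.free, (self.wear[old], old))
        # Allocate the least-worn free block for the new data.
        _, blk = heapq.heappop(self.free)
        self.mapping[logical] = blk
        return blk

ftl = WearLevelingFTL(num_blocks=4)
for _ in range(100):
    ftl.write(0)  # hammer a single logical block
print(ftl.wear)   # erase counts stay within 1 of each other
```

Even with all 100 writes hitting one logical address, the erase counts end up nearly uniform across the four physical blocks — which is why endurance can "come second" to cost in practice.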
Ferro: In order to adopt it you need the speed. That requires lower latency. And then there is the cost.
Minwell: It also takes time to prove the reliability. You have to study it over time, although there are certain models to accelerate that.
SE: Is one type of memory more secure than another?
Greenberg: On-chip memory is always more secure than off-chip. If you send the data off-chip, then it can be attacked in some way.
Minwell: When it comes to places where you’re storing data, like on ROM, if you’re concerned about security it’s programmed on the diffusion level rather than on metal. That’s another area to control security.
Greenberg: With DRAM, we’ve been concerned about the DRAM bus being a way for people to get into different parts of the system. When you talk about the world of IoT connected devices, and what we assume to be secure devices like credit card terminals, for example, if you can get into one of those and decide you want to own the CPU, the DRAM bus is one entry point. Trying to secure that bus is important.
SE: Years ago we heard concerns about quantum effects in memory. Is that a problem?
Minwell: With finFET technology right now we’re not having that problem. But we are seeing some variability within the die. We thought that would get better with finFET technology, but we’re still grappling with that.
SE: Is that 16/14 or 10nm?
Minwell: It’s at 10nm. We’re seeing higher variability than what we saw at 14nm. But in general, looking at the strength of the bit cell and the read/write current, we still have sufficient margin for the SRAM. So we haven’t had a problem there yet.
Greenberg: Memory manufacturers are seeing it as they go to larger multi-gigabit dies.
Minwell: That basically starts right below 20nm.
SE: Is it a consistent line where if you go to 14nm and 10nm, then it varies proportionately?
Minwell: We don’t know. But we do know that right below 20nm, you were expecting a 30% reduction in cost and area from one technology node to the next, and it has fallen off that curve.
SE: As we move beyond 16/14nm, leakage begins creeping up again. How does that affect the reliability of memory?
Minwell: We are having to qualify the memory at five and six sigma. At the older technologies, you could get by with three-sigma models. Then there are things like Monte Carlo analyses that have to be done, and then you have to correlate that to silicon at the back end.
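The jump from three-sigma to five- and six-sigma qualification is driven by array size: a failure rate that is tolerable per cell becomes intolerable when multiplied across tens of millions of bits, and brute-force Monte Carlo cannot reach those tail probabilities directly. A back-of-envelope sketch under a simplified assumption (treating the bit-cell margin as a single Gaussian variable; the 64Mb macro size is illustrative):

```python
import random
from math import erf, sqrt

def tail_prob(sigma):
    """Analytic one-sided Gaussian tail: P(x < -sigma)."""
    return 0.5 * (1 - erf(sigma / sqrt(2)))

# Brute-force Monte Carlo is fine at 3 sigma...
random.seed(0)
n = 1_000_000
hits = sum(random.gauss(0, 1) < -3 for _ in range(n))
print(f"MC estimate: {hits / n:.2e} vs analytic {tail_prob(3):.2e}")

# ...but a 6-sigma failure rate is ~1e-9, so naive sampling would need
# well over 1e10 trials per corner. Hence the accelerated (importance-
# sampling-style) models Minwell mentions, plus silicon correlation.
bits = 64 * 2**20  # a 64Mb SRAM macro, for scale
for s in (3, 5, 6):
    print(f"{s}-sigma cell -> ~{tail_prob(s) * bits:.2g} expected bad bits")
```

At three sigma the expected bad-bit count per macro is in the tens of thousands; at six sigma it drops below one, which is roughly the regime a large memory macro has to be qualified to.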
Greenberg: There isn’t more characterization for us than in the past, but we do have to do it.
Ferro: The nodes haven’t changed the characterization/qualification process.
Minwell: That’s from an incoming perspective. Outgoing is different.
SE: So this is a signal integrity problem?
Ferro: Yes, most of the interfaces we have are mixed signal. We don’t necessarily see a lot of damage. There are definitely signal integrity challenges, though.
SE: Where will the big shifts be in memory over the next three to five years?
Ferro: 2.5D is probably the biggest one now. HBM is moving forward pretty rapidly. It’s focused on the data center and high-performance computing type of applications at the moment.
Minwell: And graphics.
Ferro: That’s the biggest shift. DDR5 will start to take shape in that time frame, too.
Minwell: One thing we haven’t talked about is finding IP. For GDDR5, for example, the IP isn’t there. A lot of times with advanced technologies, when you’re looking for IP to put on an ASIC so you can interface with the external technology, we’re limited by what is available on the shelf from IP providers.
Ferro: Yes, and as the speeds are going up, it’s starting to thin the crowds a bit. At 3.2 GHz, reliable DDR is becoming more of a challenge. You can be a little sloppier when you’re below 2.1 GHz. That’s also creating challenges. And beyond that, you have to focus on the system and the signal integrity of the entire channel. It’s more of a system problem than just a PHY problem. You have to design for memory.
Greenberg: As an IP vendor, you have to make IP for a lot of different people. And there are costs of doing that. You have to characterize it. We have disaggregation happening. We have several different DDR technologies. At the same time, the foundries have got more than 10 different flavors of 28nm. And there are several at 16/14, and then 10 and 7. There is a lot to do.
Minwell: The number of foundries is dwindling at the advanced process nodes. But it also seems to be taking them a while to get their process flavors right.
Greenberg: And a lot of them will make competitive advances. So one will come along with a process, and then the other will come back and make changes to their process to make it better. As an IP vendor, it’s hard to keep up with.