Executive Insight: Charlie Cheng

Kilopass’ CEO talks about the next disruptions in memory and the big problems that will be solved.


Charlie Cheng, CEO of Kilopass, sat down with Semiconductor Engineering to talk about issues with current memory types and why the market is ready for disruptive approaches to reduce power and cost.

SE: What’s changing in the memory space?

Cheng: Memory is a very important building block. It’s a foundation and a commodity for a chip and for the system, but if you look at the big picture, the electronics industry has matured, so memory is more or less a treadmill that everyone is on. There are a lot of companies that make a lot of money along the way, but by and large it’s a big boys’ game. The intellectual property, the content and the delivery increasingly are in the hands of semiconductor foundries.

SE: What’s the next thing in this market?

Cheng: The treadmill continues—SRAM, DRAM and flash are all in very creative incarnations. If you look at flash, it’s gone 3D, and before that it was multi-level. If you look at DRAM, the capacitor now looks like a skyscraper in Manhattan. It’s a 100-to-1 ratio. The height is 100 times the width of the capacitor. If you look at SRAM, it’s gone three-dimensional along with the transistors in the form of finFETs, and it will evolve. This is a clear sign of a treadmill—trying to find any way possible to continue the scaling. But this also is where the opportunity comes for disruption. Because the technology is built at very large manufacturers, whether IDMs or foundries, the guys in charge of the largest budgets in those companies have the power. Their job is all about low risk, fast ramp and low cost. It’s not about innovation. The industry can easily slip into a dynamic where the concepts of SRAM, DRAM and flash continue beyond their practical limits because the manufacturing guys have so much invested in making them work.

SE: That’s following the Moore’s Law curve, right? It’s proven and therefore low risk.

Cheng: Yes. And couple that with the fact that venture capital is drying up on the east side of the Pacific Ocean and that manufacturing has left the Bay Area and most of Europe, and it definitely creates a void. The manufacturing guys are looking for the lowest risk, and the guys who typically have been the inventors don’t have the human resources, the youth or the venture capital investment. It’s becoming an interesting problem.

SE: Kilopass is creating IP that goes into memories. Where do you see the opportunity?

Cheng: We are in the Bay Area, so we still have lots of interesting people coming into our office. We have a lot of access to silicon manufacturing because of our foundry relationships. And we are financially independent thanks to return on early investments. So we’re in a unique position because now we can work on some things complementary to the ones on the treadmill.

SE: Memory has always been a speed and cost game. Where do you fit into this?

Cheng: With a technology disruption, cost is a necessary but insufficient factor. The guys on the treadmill are very good at reducing cost. The market window is too short for a new entrant to just reduce more cost. We have to be doing something different. We’re trying to reduce power by more than 1,000 times.

SE: How do you get there?

Cheng: I can’t talk in detail yet, but if you look at how thin the oxides are these days and the amount of leakage from transistors, finFETs are a good counterbalance, but they don’t go far enough. We have a good solution for that.

SE: How about performance?

Cheng: If you look at modern-day processors, you can’t find a Cortex-A processor, for example, with fewer than 15 pipeline stages. One of the reasons is that memory speed is not keeping up. The way SRAM works, the bit cell current has to overcome the capacitance and resistance for the sense circuit to resolve whether the data is a one or a zero. As metal gets thinner, your ability to transfer that current is limited. But more importantly, the SRAM itself isn’t able to generate enough current. If you think about a memory array of 2 kilobytes, that’s 16,000 bits. The speed of 16,000 bits depends on the slowest bit, not the average bit. And because of transistor variability these days, the slow tail of that distribution tends to be pretty slow. When you get to level 2 and level 3 cache, the size gets pretty big. It’s not uncommon these days for level 3 cache to get up to 16 megabytes, or 16 million of these little things. Speed is determined by the slowest bit.
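The slowest-bit effect Cheng describes can be sketched numerically. This is an illustrative Monte Carlo with made-up timing numbers (a 1.0 ns mean access time with 10% variation standing in for transistor variability), not real SRAM data:

```python
import random

# Illustrative numbers only: model each bit cell's access time as a normal
# distribution (mean 1.0 ns, sigma 0.1 ns) to stand in for variability.
random.seed(0)  # deterministic for the demo

def worst_case_access_ns(num_bits, mean_ns=1.0, sigma_ns=0.1):
    """Array speed is set by the slowest cell: the max over all bit cells."""
    return max(random.gauss(mean_ns, sigma_ns) for _ in range(num_bits))

# A 2 KB array (~16,000 bits) vs. a much larger cache slice: the worst-case
# access time drifts further from the 1.0 ns average as the array grows.
for bits in (16_000, 1_000_000):
    print(bits, round(worst_case_access_ns(bits), 3))
```

The average access time stays near 1.0 ns in both cases; only the tail worsens, which is why an array's speed degrades with size even when the typical cell is fine.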

SE: It’s the weakest link in the chain, right?

Cheng: Yes. But power is different. Power is based on the average, because all the bits dissipate independently. Speed, though, is set by the last signal to arrive, because you can’t afford to miss it.

SE: So if you had equal bits, you’d get significantly better performance out of cache?

Cheng: Yes, that’s right. We’re subject to the same limitations of thin metals and a lot of similar issues in metal architectures. But the key to performance is to solve the variability. That’s what we’re working on. SRAM, DRAM and embedded flash all have their own sets of problems. We’re also very closely watching the deal between Intel and Micron. That could be a game changer.

SE: Why?

Cheng: Memory companies going into the mobile space are stacking DRAM and flash together and creating a compelling solution for that. But Intel-Micron have developed a new memory bit cell, which they’ve been working on for a long time. It improves speed and endurance by 1,000 times. If you improve speed by 1,000 times on a non-volatile memory, you are running at about the same speed as DRAM. Coming out of the processor, why would you want a lot of DRAM dissipating a lot of power when you could go straight to non-volatile memory and get the same speed? The other part is that flash memory wears out, and you wouldn’t want your working memory built on something that’s already vulnerable to wear. Right now flash memory wears out after about 10^3, or 1,000, write cycles. If you improve that by 1,000 times, that’s 10^6, or about 1 million cycles. A real DRAM has an endurance of 10^20, so we’re still 10^14 off. That’s the one to watch. When your computer’s flash memory wears out—and it does—it can no longer find spare rows, columns or sectors to move the data to a safe place. It spends all its time trying to move data to wherever it is still protected.
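The endurance arithmetic in that answer can be checked directly, taking the interview's order-of-magnitude figures at face value:

```python
# Orders-of-magnitude endurance comparison from the interview's figures.
flash_endurance = 10**3                  # ~1,000 write cycles before wear-out
improved = flash_endurance * 1000        # a 1,000x improvement -> 10^6 cycles
dram_endurance = 10**20                  # effectively unlimited in practice

gap = dram_endurance // improved         # how far the improved cell remains from DRAM
print(improved)                          # 1000000 (10^6)
print(gap)                               # 100000000000000 (10^14)
```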

SE: This is SSD?

Cheng: Yes, and it’s now soldered to the motherboard. But when it wears out and the SSD is no longer any good, the processor and DRAM are still pristine. So if the life of that computer is three years, it means the processor and DRAM are overdesigned. Maybe the acceptable wear for a notebook computer is 10^13. If the new memory technology from Intel-Micron is at 10^6, it’s not that far off. So you can buy more flash and eliminate DRAM from the bill of materials. That changes the hierarchy of the computing system for the very first time.

SE: What does that mean for Kilopass?

Cheng: We don’t work on non-volatile memory for data storage the way flash memory does. We have to find some tangential business. There is a data hierarchy, whether it’s in a small sensor or in the cloud. How can the memory hierarchy be more efficient and lower power? These days, power is the number one problem. The smartphone is no longer the focus of battery-life concerns. You can get a phone that lasts three days between charges. The ones that are most challenged are the sensors and the cloud servers. If you need a power line, that’s a cost. If you need to change the battery, that’s a cost. If you need to service it, that’s a cost. On the other side, there’s a data center in Omaha, Nebraska, that is about the size of four football fields. It consumes an enormous amount of power. A lot of the Midwest is chewing up power because of that, but only about 20% to 25% is used for computing. Another 25% is used for cooling, and then you have 50% lost in power distribution from the source all the way to the cooling device. The delivery efficiency is about 50%. So anywhere you can save power, you essentially are getting a 4X return. It’s a massive problem. We’re trying to figure out how to make things more efficient and lower power.
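The 4X leverage follows from the fractions Cheng gives. A minimal sketch with illustrative numbers (an assumed 1 kW of source power; the fractions are the interview's, not measured data):

```python
# Data-center power breakdown per the interview: ~50% lost in distribution,
# ~25% of source power spent on cooling, leaving ~25% for actual computing.
source_power_w = 1000.0
distribution_efficiency = 0.5              # half the power survives delivery
delivered_w = source_power_w * distribution_efficiency
cooling_w = source_power_w * 0.25
computing_w = delivered_w - cooling_w      # ~25% of source power does real work

# Saving 1 W at the chip therefore saves ~4 W at the source.
leverage = source_power_w / computing_w
print(computing_w, leverage)               # 250.0 4.0
```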

SE: How much can memory affect this?

Cheng: It’s a huge play. If you think about 100-gigabit Ethernet, the pipe comes in at 4 x 28 gigabits and dumps a lot of data into the chip. For the SoC to know where to forward the packets, it has to format and then forward them. Assuming the transistors don’t run infinitely fast, it will have to buffer. The larger the memory, the slower your logic can be, and the lower the power consumption. But that math only works if the SRAM doesn’t consume more power than the transistors. Otherwise you have to use as many transistors as possible and process as fast as possible, because if the memory uses too much power, the memory is the culprit. With 100 gigabits coming in, that’s a lot of data per second, and you have to buffer it. SRAM right now is the biggest bottleneck for intelligent Ethernet. From the NIC (network interface card) to the switch to the router, 100-gigabit Ethernet carrying traffic like 4K TV drives a lot of memory needs.
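The buffer-versus-logic-speed trade-off can be sketched with back-of-the-envelope numbers. The latencies below are hypothetical; only the 100 Gb/s line rate comes from the interview:

```python
# At line rate, traffic keeps arriving while the forwarding logic decides where
# each packet goes, so the buffer must hold: line rate x processing latency.
LINE_RATE_BPS = 100 * 10**9  # 100-gigabit Ethernet

def buffer_bytes(processing_latency_us):
    """Bytes that arrive during the given processing latency (in microseconds)."""
    bits = LINE_RATE_BPS * processing_latency_us // 10**6
    return bits // 8

# Slower (lower-power) logic needs a proportionally bigger buffer to keep up.
for latency_us in (1, 10, 100):
    print(latency_us, buffer_bytes(latency_us))  # 12500, 125000, 1250000 bytes
```

This is the trade Cheng describes: buying latency tolerance with memory only pays off if that SRAM costs less power than running the logic faster would.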

SE: So what are you working on?

Cheng: I can tell you what we’re not working on. We’re not working on planar scaling because that’s what all the equipment companies are very good at. We’re not working on new materials. That’s what the foundries are very good at. New materials require a deep understanding of physics, and until you understand those physics you can’t design anything. But you don’t understand physics until you try a lot of things, and we don’t have that luxury because we don’t have a fab. We only work on known physics—silicon, silicon germanium—things that are very well documented. We do work on memory. It’s a pretty focused market. The interesting revelation I had when I arrived here eight years after the company started is that Kilopass is one of the few companies that actually develops memory outside of a foundry or IDM, and it developed something that’s relatively unique. Unlike a floating gate or a charge-trapping device that got started as an EPROM (erasable programmable read-only memory) and moved its way into flash, this is unique.

SE: But there’s big potential behind this, right?

Cheng: Yes. It’s a very exciting way of thinking about the market. We taped out a test chip in January. The silicon came back in May. To our surprise it worked the first time. It also has a little bit more applicability than we originally thought.