Executive Insight: Charlie Cheng

Kilopass’ CEO talks about how to cut the capacitor in DRAM and why that’s important in the data center.

Charlie Cheng, CEO of Kilopass Technology, sat down with Semiconductor Engineering to talk about the limitations of DRAM, how to get around them, and who’s likely to do that. What follows are excerpts of that discussion.

SE: What are the top market segments from a memory standpoint?

Cheng: The top one is still mobile, which is interesting because 15 years ago there wasn’t a mobile segment. Today it’s the largest one, but the market has peaked.

SE: How do you define a mobile device? Is it a phone, or can it be a car or a robot?

Cheng: We typically think about mobile as LPDDR. And because of form factor requirements in this market, you can only get a limited number of dies into a mobile device.

SE: So what comes after that?

Cheng: The data center. The problem there is that it’s compute-throughput intensive. DRAM is not very good with refresh. When you refresh, the bank is busy, which means the DRAM is not available for anything else. On top of that, DRAM is stuck at 20nm, and it doesn’t appear to be able to go down in size, unlike NAND and SoCs. At 20nm, the depth needed to isolate transistors from each other is about 300nm. The capacitor is 1,000nm. And the plate is another 300 to 400nm. So the entire capacitor structure between metal one and metal two is about 1,500nm.

Fig. 1: The enormous DRAM capacitor.
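
To put those dimensions in perspective, here is a quick back-of-the-envelope sketch in Python using only the figures Cheng cites; the plate midpoint and the comparison against the 20nm feature size are derived assumptions, not numbers stated in the interview.

```python
# Rough scale check of the DRAM capacitor stack described above,
# using only the dimensions Cheng cites.

isolation_nm = 300     # depth to isolate transistors from each other
capacitor_nm = 1000    # the capacitor itself
plate_nm = 350         # plate, quoted as 300 to 400nm (midpoint assumed)
node_nm = 20           # the node DRAM is stuck at

stack_nm = isolation_nm + capacitor_nm + plate_nm   # Cheng rounds this to ~1,500nm
print(f"Capacitor structure: ~{stack_nm}nm tall")
print(f"That is roughly {stack_nm / node_nm:.0f}x the 20nm feature size")
```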

SE: What else?

Cheng: China is a huge market, but that’s a whole different issue. China is trying to grow semiconductor production. Their goal is 20% compound annual growth. Right now, the most successful semiconductor manufacturer in China is SMIC, which is growing about 12%. The only way for China to get to 20% growth is to sell commodity products—x86 processors, DRAM, flash memory and passive components. You can sell a ton of those. Of those, DRAM is the most likely. But to get to that growth rate, they need a breakthrough.

SE: So where will the next breakthrough in DRAM come from? Internal development?

Cheng: Breakthroughs usually come from startups. If you look at mobile, wireless and flash memory, it was startups at that time that came up with the breakthroughs. Qualcomm, at the time of CDMA, was 10 years old and was building CB radios for truck drivers. It was the same protocol they used in mobile phones. But if you look at where DRAM is made, Korea does not have a startup culture. China does have a bit of a startup culture, but the market is still dominated by state-owned companies—oil, gas, power, water, chemicals, steel. They’re busy trying to sell those companies to private companies.

SE: So where does Kilopass fit into this?

Cheng: On a relatively generic CMOS logic wafer, we’re adding three more implants to create two interlocking bipolar transistors. The zeroes and ones are kept in the middle junction, which is where the bipolars are locked in. So it’s the feedback loop of the bipolars that keeps the data. In the OTP (one-time programmable) business there are Sidense, eMemory, Microchip, and an impressive list of OTP vendors from Taiwan. Almost every memory startup wants to do a new material—thin film, magnetic, carbon nanotube, resistive, phase change.

SE: Weren’t you looking at that in the past?

Cheng: Yes, in 2012 we tried to buy a fab in Japan. It didn’t work out. But in December of that year, we decided to go forward with a new memory. Our criteria were that it couldn’t use new materials, it had to be vertical, because if it was horizontal we would be competing with every other memory company, and it had to be complex and compounded in the vertical direction. The idea was that we would be able to squeeze as much as possible onto a vertical strip.

SE: What was the rationale for vertical versus horizontal?

Cheng: The footprint of SRAM is 6 transistors, but in reality it’s more like 2.5. The foundry will give you a set of rules, but they will use much tighter rules for their own SRAM. So for us to compete against SRAM, we’d have to squeeze everything we want into the size of one transistor. It’s not possible. It has to be vertical, and it has to use existing material.

SE: Is that to utilize existing manufacturing processes?

Cheng: Yes, and that’s why we came up with a thyristor. The thyristor used for surge protection and latch-up is the physical basis of this. Once it turns on, it has an exponential current curve. It’s really fast. But it doesn’t take very long to get down to low current, which is what we need for DRAM. It also requires very low energy to lock the bit because of the feedback loop. In addition, the whole transistor becomes more efficient at high temperature, so it doesn’t take as much energy to lock the memory. So we have a neutral temperature curve.

SE: Where do you see the need? Is it the classic DRAM market?

Cheng: Yes, the data center. At 85°C (185°F), the curve is flat for DRAM. And because the on-state has an exponential curve, the on-state and off-state have a 10^8 separation. It’s very easy to tell off from on. Magnetic RAM has a separation of less than 500.

Fig. 2: What can be cut using a vertical layered thyristor.

SE: How about power?

Cheng: In the mobile phone you’ll save power using this approach. But in the data center you save a lot more power.

SE: In a data center, power is a line item in the budget, right?

Cheng: Yes, it’s actually the largest line item. People scrap servers not based on how long they have been in use or how much faster the next CPU is, but on how much power they save.

SE: So how much of that energy goes to memory versus logic?

Cheng: DRAM is about a quarter of the total.

SE: If you don’t have to refresh the data as often, how much does that save you?

Cheng: About 8% to 10%. But on 400 watts, that’s 32 watts of power plus 32 watts of cooling, which is 64 watts, and 128 watts at the source, because the transmission efficiency from where the power comes into the data center to the server is about 50%. So if you save 100 watts per server, the payback period is less than 3 months. If systems companies were to introduce a new server with those kinds of savings, that’s huge. The memory also gets you another 20% in bandwidth. If the cache is busy, because DRAM’s latency is so long, the CPU does slow down. But it’s idle, not doing work. That power is all wasted.
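
A minimal sketch of the arithmetic Cheng is walking through. The 400-watt server, the 8% to 10% DRAM saving, the matching cooling overhead and the roughly 50% transmission efficiency are his figures; the function and variable names are purely illustrative.

```python
# Power-savings chain described above: savings at the DRAM are doubled by
# the matching cut in cooling, then doubled again because only about half
# of the power entering the data center reaches the server.

def savings_at_source(server_watts, dram_savings_fraction,
                      cooling_multiplier=2.0, transmission_efficiency=0.5):
    at_server = server_watts * dram_savings_fraction       # 400W * 8% = 32W
    with_cooling = at_server * cooling_multiplier          # + 32W of cooling = 64W
    at_source = with_cooling / transmission_efficiency     # 64W / 50% = 128W
    return with_cooling, at_source

with_cooling, at_source = savings_at_source(server_watts=400, dram_savings_fraction=0.08)
print(f"Saved at the server (incl. cooling): {with_cooling:.0f}W")   # 64W
print(f"Saved at the source:                 {at_source:.0f}W")      # 128W
```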

SE: How big is your cell?

Cheng: DRAM is 6F². We do 4.5F². On top of that, the reason the capacitor in DRAM is so big is that it doesn’t fit squarely on the transistor. That’s why it’s hard for them to scale below 20nm. The transistor can scale, but the capacitor on top doesn’t get smaller. You just introduce more leakage. DRAM has a roadmap problem because of that. The capacitor also uses charge sharing, so the amount of capacitance you can discharge limits how long the word-line and how deep the bit-line can be. You charge the word-line, and the current density has to be low enough not to blow up with 2,000 1-microamp bits. It has to be able to go down to register as a zero or a one within 5 nanoseconds. It’s a 200 MHz macro, so it’s not very fast. But the difference is that with 700 kilobits we can pass so much peripheral logic and still be more efficient.
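
A minimal sketch of the density and timing figures in that answer, assuming the numbers Cheng quotes (6F² versus 4.5F² cells, 2,000 bits at roughly 1 microamp each, and a 5-nanosecond sensing window); the variable names are illustrative only.

```python
# Cell area, word-line current and timing figures quoted above.

dram_cell_f2 = 6.0          # conventional DRAM cell size, in F^2
vlt_cell_f2 = 4.5           # vertical thyristor cell size, in F^2
print(f"Cell area vs. DRAM: {vlt_cell_f2 / dram_cell_f2:.0%}")                  # 75%

bits_per_word_line = 2000   # bits hanging off one word-line
amps_per_bit = 1e-6         # about 1 microamp per bit
print(f"Word-line current: {bits_per_word_line * amps_per_bit * 1e3:.0f} mA")   # 2 mA

sense_window_s = 5e-9       # a bit must register as a zero or one within 5ns
print(f"Implied macro rate: {1 / sense_window_s / 1e6:.0f} MHz")                # 200 MHz
```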

SE: So it’s how you pack in the data?

Cheng: Yes. And the number of wafer steps is lower. Another benefit is that you don’t have to license DRAM, because you can get that from IBM. But to get the DRAM capacitor you have to license it from one of the Big 3 DRAM vendors. Nobody else does it. A vertical layered thyristor allows China to have an open market in DRAM. But in practice China will probably start at 32nm. They won’t be able to catch up.

SE: Any impact on security?

Cheng: It’s a dynamic memory. As a scratchpad memory it will be secure because the data is stored a couple layers down rather than on the surface. DRAM isn’t where people worry about security, though. The real worry is in L2 cache. That’s the last point of coherency. The most current data in a CPU is actually on L2 cache.

SE: So now what do you do with this?

Cheng: The first market will be PCs and servers. The second will include mobile. We will have to partner to get there because none of the big guys use standard memory. Embedded will be hard. Robotics and industrial are the new embedded. But in the old embedded world, those will run for a long time.



