Cryogenic CMOS Becomes Cool

But that doesn’t mean it’s going to be mainstream anytime soon.

Cryogenic CMOS is a technology on the cusp, promising higher performance and lower power with no change in fabrication technology. The question now is whether it becomes viable and mainstream.

Technologies often appear to be just on the horizon, never quite making it, but never too far out of sight. That’s usually because some issue plagues them, and the incentive to solve it isn’t big enough. If a solution is found, possibly in another application domain, the technology suddenly becomes very viable. Add to that the increasing struggle to get viable scaling from Moore’s Law, and interest in cryogenic CMOS starts to rise, as seen in the rapid increase of research papers and funding for the technology.

Suman Datta, professor at the University of Notre Dame, moderated a panel session at DAC 2022 where he talked about the benefits of cooling CMOS. “When cooling transistors from 300 Kelvin to 77 Kelvin (-321°F/-196°C), the transistors improve in almost all respects (see figure 1). Leakage goes down, the subthreshold slope improves, mobility increases, drive current increases. So really, there are no negative things.”

Fig. 1. Improved transistor characteristics. Source: Suman Datta/University of Notre Dame


Datta also points out some less obvious gains, such as increased reliability, and similar benefits in the characteristics of wires, where resistivity decreases while capacitance barely changes. All of this has been known since the 1980s, when cooling a supercomputer in liquid nitrogen was shown to double its performance. The big question is whether the gains are worth the cost of cooling. “There are tremendous costs associated with cooling, and a lot of logistical issues,” says Datta.
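
As a rough illustration of the wire-side gains, the sketch below compares interconnect RC delay at 300 Kelvin and 77 Kelvin using textbook bulk-copper resistivity values. These are illustrative assumptions; scaled on-chip wires improve less, because surface and grain-boundary scattering do not freeze out the way phonon scattering does.

```python
# Rough sketch of interconnect RC delay vs. temperature.
# Bulk copper resistivity values are textbook approximations;
# real scaled interconnects see a smaller gain.

RHO_CU_300K = 1.72e-8  # ohm*m, bulk copper at 300 K
RHO_CU_77K = 0.21e-8   # ohm*m, bulk copper at 77 K (approximate)

def rc_delay_ratio(rho_cold, rho_hot):
    # With capacitance essentially unchanged, RC delay tracks resistivity.
    return rho_cold / rho_hot

ratio = rc_delay_ratio(RHO_CU_77K, RHO_CU_300K)
print(f"wire delay at 77 K vs. 300 K: {ratio:.2f}x")  # ~0.12x for bulk copper
```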

So why is it receiving increased attention? The simple answer is that there are an increasing number of application areas for it, including quantum computing.

The first panelist, Effendi Leobandung from IBM Research, started off by providing an explanation of Dennard’s theory (see figure 2). “You can scale your voltage by a certain factor, called ‘a,’ because you scaled the dimensions by the same factor. This also reduces the capacitance and the transistor diffusion, so you don’t have as much short-channel effect as before. Then you increase the doping so that, at the end, you maintain the same power density. Everything went well until about maybe 65nm.”


Fig. 2. Dennard’s theory on CMOS scaling. Source: Effendi Leobandung (IBM)
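
A minimal sketch of the bookkeeping behind that claim, under the ideal Dennard assumptions that dimensions, voltage, and capacitance all shrink by the same factor while frequency rises by it:

```python
# Ideal Dennard scaling: shrink dimensions and voltage by a factor "a"
# and power density stays constant. Illustrative bookkeeping only.

def relative_power_density(a):
    C = 1.0 / a             # capacitance scales with dimensions
    V = 1.0 / a             # supply voltage scales with dimensions
    f = a                   # delay ~ C*V/I improves, so frequency rises
    area = 1.0 / a ** 2     # device footprint shrinks quadratically
    power = C * V ** 2 * f  # dynamic power per device
    return power / area

for a in (1.0, 1.4, 2.0):
    print(f"scale factor {a}: relative power density = {relative_power_density(a):.2f}")
# Prints 1.00 for every generation; voltage scaling broke this near 65nm.
```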

At that point, the voltage could not be scaled anymore because the threshold voltage couldn’t be changed, and it became more difficult to increase performance. “Now we have a short channel problem,” says Leobandung. “The subthreshold slope is proportional to temperature (kT). When the temperature goes down, the subthreshold slope goes down. And when that goes down, you can reduce the voltage, which allows you to scale the gate oxide and then scale the gate length. This allows advanced technologies to continue to scale.”
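
The proportionality is easy to put numbers on. A minimal sketch, using the standard expression SS = n·ln(10)·kT/q with an assumed ideality factor n of 1:

```python
from math import log

KB = 1.380649e-23    # Boltzmann constant, J/K
Q = 1.602176634e-19  # elementary charge, C

def subthreshold_slope_mv_per_decade(temp_k, n=1.0):
    # SS = n * ln(10) * kT/q, the textbook lower bound for MOSFETs
    return n * log(10) * KB * temp_k / Q * 1000.0

for temp in (300, 77):
    print(f"{temp} K: {subthreshold_slope_mv_per_decade(temp):.1f} mV/decade")
# 300 K: ~59.5 mV/decade; 77 K: ~15.3 mV/decade. The ~4x steeper
# turn-off is what allows the supply voltage to scale down with temperature.
```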

This is where it becomes a lot more technical, because in order to get the required characteristics at low temperatures, the transistors have to be built differently. “The number one requirement is that the threshold voltage has to be negative at room temperature,” says Leobandung. “One way to do that is to add more dopant. However, in advanced devices like finFETs or gate-all-around, there basically is no dopant, and doping has less impact on threshold voltage. There are other ways to modify the threshold voltage that involve the dielectric interface, and this is not modulated by temperature. This is done through geometry-induced variation.”

Leobandung went on to talk about some of the developments being explored for finFETs and gate-all-around devices, including new materials.

Ravi Pillarisetty, senior device engineer for Intel, talked about some of the ongoing work with RF ICs, where the PDKs were modified for low-temperature operation. Once that was done, the chips could provide control and readout signals for the qubits in a quantum computer. “We are trying to battle the cooling costs,” he said. “In some sense, we don’t necessarily care so much about the cooling costs, because this is opening up a whole new paradigm of compute, attacking unsolved problems that standard HPC cannot solve. But at the same time, we are limited in terms of the raw cooling that these systems can provide.”

Pillarisetty provided a thermodynamic analysis of the cooling costs and said that while it is possible to build cooling systems with fairly high efficiency on a small scale, it becomes a lot more problematic when you think about doing it for a data center. To break even, Intel calculated that devices would need to operate at about 0.3V, but at that point new problems surface. “You really have to start worrying about things like variability, noise margins, and headroom. Is variation going to kill you? We see variation increase at low temperature. It’s not something where sigma is getting better. It’s always getting worse in basically every place we’ve seen.”
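
A back-of-the-envelope version of that thermodynamic argument is sketched below. The Carnot limit sets the minimum work needed to pump heat from 77 K to room temperature; the 30%-of-Carnot cooler efficiency is an assumption for illustration, not an Intel figure.

```python
# Back-of-the-envelope cooling overhead at 77 K. The cooler efficiency
# (fraction of the Carnot limit) is an illustrative assumption.

def cooling_overhead_w_per_w(t_cold, t_hot=300.0, fraction_of_carnot=0.30):
    # Carnot: minimum work to remove 1 W of heat at t_cold
    carnot_w_per_w = (t_hot - t_cold) / t_cold
    return carnot_w_per_w / fraction_of_carnot

overhead = cooling_overhead_w_per_w(77.0)
print(f"~{overhead:.1f} W at the wall per W dissipated at 77 K")
# ~9.7 W/W, so the cold chip must cut its power by more than ~10x just
# to break even. With dynamic power scaling as V^2, that implies a large
# supply-voltage reduction, consistent with the ~0.3V break-even figure.
```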

It has not been all smooth sailing. “As we start to scale up our test chips and just handle several hundred of these RF signals, it’s becoming a very complex EDA problem,” says Pillarisetty. “In fact, our ability to scale up these chips is actually EDA-limited. The EDA requirements for CPU, GPU, things that are standard digital design, are very different from what we want here. Place-and-route needs to work for circuits operating at 20 GHz. We also really need to update the physics-based models to understand local heating on the device elements.”

Next up was Jamil Kawa, fellow and group director for R&D at Synopsys, who for the past five years has been working on Josephson junction-based superconductor electronics. “I’ve been doing physical silicon measurements down to 77 Kelvin on a sub-7nm CMOS technology, and I have drawn some practical conclusions. At 77 Kelvin, liquid-nitrogen CMOS, as shown by our TCAD re-engineering of the device, has a 7X power advantage at iso-frequency, or a 1.4X performance advantage at iso-power, compared to room temperature. If you account for cooling costs, which you have to, that number drops to 4X. The optimal efficiency is somewhere between 100 K and 150 K.”
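
Those two numbers are consistent with a fairly efficient cooler. In the sketch below, the cooling overhead (about 0.75 W per watt dissipated) is back-solved from the quoted 7X-to-4X drop; it is not a figure Kawa provided.

```python
# How a raw iso-frequency power advantage gets discounted by cooling.
# The 0.75 W/W overhead is back-solved from Kawa's 7X -> 4X numbers,
# not a measured cooler specification.

def net_power_advantage(raw_advantage, cooling_w_per_w):
    # Cold chip burns 1/raw_advantage, plus cooling power on top of it.
    total_cold_power = (1.0 / raw_advantage) * (1.0 + cooling_w_per_w)
    return 1.0 / total_cold_power

print(f"net advantage: {net_power_advantage(7.0, 0.75):.1f}x")  # ~4.0x
```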

To gain all the benefits, Synopsys had to re-engineer the device so the same technology node that was operating at 0.8 or 0.9 volts can now operate at 0.35 to 0.4 volts. These re-engineered devices are not able to operate at room temperature because they are always ‘on’ and very leaky. Another issue to deal with is that below 200 Kelvin, the P devices become much stronger than the N devices, which means cell libraries need to be redesigned.

Kawa provided some diagrams that show the impact of variability as temperature decreases (see figure 3). He particularly drew attention to the variability changes in the off-state current. In the lower left, the blue shows leakage at 300 Kelvin, while the purple shows 77 Kelvin.

Fig. 3. Impact of temperature change on transistors. Source: Synopsys


Kawa talked about the results from a test chip that contained ultra-Vt, standard-Vt, and low-Vt devices, purposely engineered for the slow-slow and fast-fast corners. “Performance, for all practical purposes, saturates at around 200 Kelvin, even though the gains in leakage continue almost linearly.”

With the introductory remarks complete, Datta asked the panelists whether we should be looking at cryo-CMOS as a performance booster or as an enabler of a new computing paradigm.

Kawa admitted a strong bias toward CMOS over Josephson junction-based superconductor technology. “We did a lot of automation at Synopsys, to where we were able to certify our RTL-to-GDS flow for a 64-bit ARC processor with six blocks of memory. The area it consumed was prohibitive, making it non-viable. CMOS is leaps and bounds ahead of other technologies in terms of density, and there is still geometric scaling to go. Having said that, I do believe strongly that we should explore every possible avenue, especially for quantum computing applications.”

Leobandung felt that in the short term, it was unlikely that any new technologies would be considered. “Our livelihood depends on being able to scale technology,” he said. “In order to scale the markets you need to have a successful flow, and I don’t see any other devices viably operating at low temperature.”

Pillarisetty had a longer-term view. “I am bullish on the long term, basically looking at cryogenic beyond CMOS. We are going to need solutions in both spaces – one in the more traditional space, but at the same time, we have been working on beyond CMOS for a long time. Nothing beats silicon. We need to keep finding new ways to use silicon. But the important thing is to really understand the variation problem. We have to think about Pareto variation in the production technology. We almost have to do a full process run and measure variation on hundreds of wafers to really even look at all that data to see what’s happening. That has to be done in silicon technology, and fully understood. Then we can try to understand what’s augmenting variation at low temperature and work on that.”

Many new technologies happen because money drives them. Datta asked if the industry could exploit the technology to the point where it achieves a 100X improvement in the energy-delay product at the same performance level. That is the level at which he considered it to be a game changer.

Pillarisetty said that this would be a very tough goal. “In traditional CMOS, the variation is going to work against you. And then there are electromagnetic effects. There’s no T in any of Maxwell’s equations. Even with Josephson junction FETs, or other sorts of novel superconducting devices, they still have a Vt. There’s still some threshold, and you still have variation.”
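
A crude first-order model illustrates why 100X is such a stretch for cold CMOS alone. Treating energy as CV² and assuming the cryogenic drive-current gains keep delay roughly flat (both assumptions for illustration), voltage scaling from 0.8V to 0.35V buys only a single-digit improvement:

```python
# First-order energy-delay product estimate. The voltages and the
# flat-delay assumption are illustrative, not measured silicon.

def edp_gain(v_room=0.8, v_cold=0.35, delay_ratio=1.0):
    energy_gain = (v_room / v_cold) ** 2  # E ~ C*V^2, same capacitance
    return energy_gain / delay_ratio      # delay_ratio=1: delay unchanged

print(f"~{edp_gain():.1f}x EDP gain")  # ~5.2x, far short of the 100X target
```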

“Am I getting the picture now?” asked Datta. “You’re saying there’s no way I’m going to get 0.3-volt cryo-CMOS with a conventional transistor device? The only way to make this 100X gain in energy-delay product is not going to come from cold CMOS, but from some exotic, unknown technology.”

Leobandung said that we may have to look at the technology differently to get there. “One possible reason for the variation is that the technology is just not optimized for variation. It is a relatively high-voltage technology. I’ve been working in the fab for a long time, and I’m amazed sometimes at what we can do about controlling what happened in the past. Once a problem is known, it is possible to control your fab. And while we may not get all the way there, there may be a point where it is perhaps good enough. There’s still room to optimize.”

An audience member noted that while the panelists had mentioned some of the issues at the transistor level, none of them had commented on higher levels in a development flow. He asked what was happening with place-and-route and things like synthesis.

“When we tested our RTL-to-GDS flow, we did not have to make major changes to the overall infrastructure,” Kawa responded. “Timing closure and timing-driven place-and-route were challenging. Having said that, I’ll be very blunt. If you want to take such technology in the future and make it a mainstream technology, you need to invest in redesigning things from scratch. You need to do synthesis with superconductors in mind, not start with existing tools.”

Kawa explained that while tools may need to be developed, the industry can still build upon 35 to 40 years of experience with CMOS, which is not the case if the industry were to switch to some exotic technology. “If you have a technology that is based on majority logic, you cannot take an engine that’s optimized for CMOS logic (NAND, NOR) and expect it to give you optimal mapping for majority gates,” he said. “It is along those lines that engines need to be rewritten, dedicated to new technologies.”

Another audience member questioned whether the direction of quantum computing would make an impact.

“If you think about different qubit types, whether it’s a solid-state qubit, whether it’s going to be spin qubits, or quantum-dot based, or even maybe photonic integrated circuits, they would use the same kind of CMOS control chip, and they would have to have the same kind of RF engineering and power delivery on the chip,” Pillarisetty said. “If you look at a photonic IC, that can be at higher temperature, but there are other issues there. With ion trap systems, everything’s at room temperature.”

A final question went back to the variation issue and its impact on memory. “While the transistors may look okay at 25mV, when you use enough of them to make a memory, it is not going to provide the necessary sigma,” said Pillarisetty. “To make an SRAM, I would need to crank it up to 250mV to 300mV to meet noise margins. If I have to crank the voltage that much, then there’s no point doing it. Is there some smarter way to do this, where you can maintain phase coherence in the circuit? That may allow you to get around the noise margins. You can ask, is that quantum computing?”
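
The sigma arithmetic behind that estimate is straightforward. In the sketch below, both the assumed Vt mismatch and the required number of sigmas are illustrative values, not figures Pillarisetty gave:

```python
# Illustrative SRAM margin arithmetic. Both numbers are assumptions.

SIGMA_VT_MV = 40.0  # assumed local Vt mismatch, which grows at cryo temps
N_SIGMA = 6.0       # roughly what a megabit-scale array needs to yield

required_margin_mv = N_SIGMA * SIGMA_VT_MV
print(f"required margin: ~{required_margin_mv:.0f} mV")
# ~240 mV: a 25mV-class supply cannot cover the tail bits, which is
# consistent with the 250mV-300mV estimate quoted above.
```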

Kawa added that cooling DRAM below 40K has some benefits, because the refresh rate is lower thanks to the lower leakage.

Conclusion
Cryogenic CMOS appears to remain on the cusp. It is clear that it could provide significant benefits, but capturing all of the potential gains will take further investment, and as of today nobody is willing to be the front runner. Quantum computing may result in cheaper cooling, and it may provide a new and unique application for cryo-CMOS that leads to some of the issues becoming better understood and then solved. But none of this will happen soon.



1 comment

Mike Cormack says:

Excellent article. Back in the early ’90s, I was at Sequent Computer Systems, where we characterized all the devices in our machines at temperatures as low as -40°C. Suppliers enjoyed the data when it was in their favor. Pentium CPUs were very solid. We found a lot of memory cell issues, mostly single-bit errors in DRAMs, which at the time were very sensitive to low temperatures. We presented our data to the DRAM suppliers. In time, the biggest DRAM companies cleaned up their fab defects and their DRAM products benefited from the effort. Some of our custom ASICs had some issues at low temperatures, and once those were resolved, we had a very solid product.
