Quantum Research Bits: Sept. 12

Is silicon the ideal substrate for qubits? It depends who you ask.


Making Qubits Last Longer

One of the big challenges in quantum computing is extending the usable lifespan of qubits, known as the coherence time, long enough to do something useful with them. Research is now focused on how to increase that lifetime and on which factors limit it.
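In practice, "coherence time" usually refers to two decay constants: T1, the timescale for a qubit to relax out of its excited state, and T2, the timescale over which its phase information is lost. As a textbook-level picture, not tied to any specific device in this story, both follow a simple exponential decay:

P_{|1\rangle}(t) = P_{|1\rangle}(0)\, e^{-t/T_1}, \qquad |\rho_{01}(t)| = |\rho_{01}(0)|\, e^{-t/T_2}

Any computation has to finish well inside those windows, which is why so much materials work is aimed at stretching them.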

This has led to very different conclusions about whether silicon is a good substrate choice for quantum chips. Researchers at the Superconducting Quantum Materials and Systems Center (one of five quantum research centers run by the U.S. Dept. of Energy) contend that silicon shortens the lifespan of qubits by contributing to a process known as quantum decoherence.

The challenge has been isolating the cause of this decoherence, because qubits need to operate under nearly perfect conditions. Measurements are disruptive to the quantum process, and any tests need to be done without destroying vital data. To make matters more difficult, all of this needs to be done at extremely low temperatures.

“We are disentangling the system to see how individual sub-components contribute to the decoherence of the qubits,” said Alexander Romanenko, CTO at the DOE’s Fermi National Accelerator Laboratory, according to a research note. “A few years ago, we realized that our [superconducting radio frequency] cavities could be tools to assess microwave losses of these materials with a preciseness of parts-per-billion and above.”

By cooling a niobium superconducting RF cavity to hundredths of a degree above absolute zero, the researchers were able to isolate the electromagnetic waves inside it and track how quickly they dissipated. With a silicon substrate present, the waves dissipated more than 100 times faster than without silicon. They noted that sapphire, or some other less lossy material, would be a better choice for future quantum chips.
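A rough way to read that "100 times faster" result: a cavity's photon lifetime scales with its quality factor Q, and independent loss channels (the niobium surface, the silicon substrate, and so on) add up reciprocally. These are generic relations, not Fermilab's measured values:

\tau = \frac{Q}{\omega} = \frac{Q}{2\pi f}, \qquad \frac{1}{Q_{\text{total}}} = \frac{1}{Q_{\text{Nb}}} + \frac{1}{Q_{\text{Si}}} + \cdots

If one channel, here the substrate, is far lossier than the rest, it dominates the total loss, so removing or replacing it can buy back orders of magnitude in lifetime.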


Fig. 1: Superconducting quantum processor, composed of thin films deposited on a silicon substrate. Source: Fermilab

Imec, meanwhile, has developed a CMOS-compatible fabrication technique that uses overlap Josephson junctions, which consist of two electrodes separated by a thin insulating layer. Researchers there observe that one of the key problems is atomic-level defects at the various interfaces that make up the junctions, which cause the qubit to lose energy. Traditionally, low defect density was achieved using double-angle evaporation and lift-off. But because those techniques are not compatible with large-scale manufacturing, imec's researchers had to develop a new method.
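A common way to reason about such interface defects, which are often modeled as two-level systems living in oxides and interfaces, is a participation-ratio estimate. This is a generic rule of thumb from the superconducting-qubit literature, not imec's specific analysis:

\frac{1}{T_1} \;\approx\; \omega \sum_i p_i \tan\delta_i

Here p_i is the fraction of the qubit's electric-field energy stored in lossy region i, and tan δ_i is that region's loss tangent. Shrinking either factor, through cleaner interfaces or better geometry, directly lengthens T1.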

The next challenge, though, is to build qubits in sufficient volume, and this is where 300mm CMOS processes can help. “We have demonstrated in our lab superconducting qubits with coherence times exceeding 100 µs and an average single-qubit gate fidelity of 99.94%,” said Tsvetan Ivanov, an imec researcher, in a release. “These results are comparable with state-of-the-art devices, but for the first time have been obtained using CMOS-compatible fabrication techniques, such as state-of-the-art sputtering deposition and subtractive etch, as such avoiding the use of double-angle evaporation and lift-off.”
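To put those figures in perspective, here is a back-of-envelope sketch in Python, with the caveat that the gate duration below is an assumed typical value, not one reported by imec:

# Rough gate budget implied by the numbers quoted above.
# ASSUMPTION (not from the article): a single-qubit gate takes ~25 ns.
coherence_time = 100e-6    # seconds, from the imec release
gate_fidelity = 0.9994     # average single-qubit gate fidelity, from the release
gate_time = 25e-9          # seconds, assumed typical gate duration

gates_per_window = coherence_time / gate_time   # gates that fit in one coherence time
error_per_gate = 1 - gate_fidelity              # average error per gate
gates_to_first_error = 1 / error_per_gate       # gates before ~1 error is expected

print(f"Gates per coherence window: {gates_per_window:,.0f}")          # ~4,000
print(f"Error per gate: {error_per_gate:.2%}")                         # 0.06%
print(f"Gates before ~1 expected error: {gates_to_first_error:,.0f}")  # ~1,667

Under those assumptions, gate errors rather than raw coherence become the tighter budget, which is why fidelity and coherence time are usually reported together.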

Whether that can be commercially applied remains to be seen. Another challenge is whether these qubits can be shrunk from millimeters to nanometers.

Quantum ML

How fast can you train an AI system? The answer depends on the amount of training data and the number of processing elements available to fine-tune algorithms in order to increase the accuracy of results. So that got researchers at a number of institutions — Technical University of Munich, Caltech, Los Alamos National Laboratory, and the University of Maryland — thinking about just how fast this could be done using quantum computers.

Their challenge was to figure out how to improve generalization (making accurate predictions on data the model has not seen) while using very little training data. The results appear positive: the approach improved time to results with sufficient accuracy, although much work remains. Still, it addresses a critical problem in machine learning, namely how much data is needed for a model to be useful, and how much time is needed to produce and process that data.
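As general statistical-learning intuition, and not the specific bound proved in this work, the gap between training and test performance typically shrinks with the number of training examples N roughly as:

\text{generalization gap} \;\lesssim\; O\!\left(\sqrt{\frac{C}{N}}\right)

where C is some measure of model complexity, such as the number of trainable parameters or gates. The practical question the researchers tackled is how small N can be made for quantum models while keeping that gap acceptable.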



1 comment

Santosh Kurinec says:

I heard this quote at a conference: “Anyone who has ever bet against silicon has lost”
