System Bits: Oct. 25

Quantum 3D wiring; special-purpose computer; inexact computing.

Scalable quantum computers
In what they say is a significant step toward the realization of a scalable quantum computer, researchers from the Institute for Quantum Computing (IQC) at the University of Waterloo led the development of a new extensible wiring technique capable of controlling superconducting quantum bits.

The quantum socket is a wiring method that uses three-dimensional wires based on spring-loaded pins to address individual qubits, they said. It connects classical electronics with quantum circuits and is extendable far beyond current limits, from one to possibly a few thousand qubits.

Researchers from the Institute for Quantum Computing (IQC) at the University of Waterloo led the development of a quantum socket, representing a significant step toward the realization of a scalable quantum computer. (Source: University of Waterloo)

One promising implementation of a scalable quantum computing architecture uses superconducting qubits, which, like the electronic circuits found in a classical computer, are characterized by two states, 0 and 1. Quantum mechanics makes it possible to prepare a qubit in superposition states, meaning the qubit can be in states 0 and 1 at the same time. To initialize a qubit in the 0 state, superconducting qubits are cooled to temperatures close to -273 degrees Celsius inside a cryostat, or dilution refrigerator.
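
The mathematics behind a superposition state is compact enough to show directly. As a rough illustration (not the IQC team's tooling; the experiments manipulate physical superconducting circuits), the short Python sketch below represents a qubit as a normalized two-component complex vector with amplitude on both 0 and 1:

```python
# Minimal sketch of a qubit state as a 2-component complex vector.
# Illustrative only; the real qubits are superconducting circuits.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)  # the 0 state the cryostat initializes
ket1 = np.array([0, 1], dtype=complex)  # the 1 state

# Equal superposition (|0> + |1>) / sqrt(2): amplitude on both basis states.
psi = (ket0 + ket1) / np.sqrt(2)

# Squared amplitude magnitudes give the measurement probabilities.
print(np.abs(psi) ** 2)  # [0.5 0.5] -- equal chance of reading 0 or 1
```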

The team uses microwave pulses, typically sent from dedicated sources and pulse generators, to control and measure the superconducting qubits, with a network of cables connecting the qubits in the cryostat’s cold environment to the room-temperature electronics. This cable network is a complex infrastructure and, until recently, has presented a barrier to scaling the quantum computing architecture, they explained.

All wire components in the quantum socket are specifically designed to operate at very low temperatures and to perform well in the microwave range required to manipulate the qubits. The researchers have already used the socket to control superconducting devices, one of the many critical steps necessary for the development of extensible quantum computing technologies.

Special-purpose computer
By combining optical and electronic technologies, Stanford University researchers have built a new type of computer that can solve problems that challenge traditional computers.

They noted that the processing power of standard computers is likely to reach its maximum in the next 10 to 25 years, and that even at this maximum, traditional computers won’t be able to handle a particular class of problem: one that involves combining variables to come up with many possible answers and then searching for the best solution.

Post-doctoral scholar Peter McMahon, left, and visiting researcher Alireza Marandi examine a prototype of a new type of light-based computer. (Source: Stanford University)

The team suggests this approach could get around the impending processing constraint and solve such problems. If it can be scaled up, this non-traditional computer could save costs by finding better solutions to problems that have an incredibly large number of possible answers.

Specifically, there is a special type of problem – called a combinatorial optimization problem – that traditional computers find difficult to solve, even approximately. An example is the well-known ‘traveling salesman’ problem, wherein a salesman must visit a specific set of cities, each exactly once, and return to the starting city by the most efficient route possible. The problem may seem simple, but the number of possible routes grows extremely rapidly as cities are added, which is what makes it so hard to solve.
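
To get a feel for why that growth defeats brute force, the following Python sketch (an illustration of the combinatorial explosion, not the researchers' code) counts the (n-1)!/2 distinct round trips and includes a tiny exhaustive solver that is only viable for a handful of cities:

```python
# Illustrative sketch of the traveling salesman blow-up: with n cities
# there are (n-1)!/2 distinct round trips, so brute force fails fast.
import math
from itertools import permutations

def route_count(n_cities):
    """Number of distinct round trips visiting every city exactly once."""
    return math.factorial(n_cities - 1) // 2

for n in (5, 10, 15, 20):
    print(n, route_count(n))  # 12, 181440, ~4.4e10, ~6.1e16

def brute_force_tsp(dist):
    """Exhaustive search for the shortest tour; only viable for tiny n."""
    n = len(dist)
    tours = permutations(range(1, n))  # fix city 0 to avoid counting rotations
    best = min(tours, key=lambda p: sum(dist[a][b]
                                        for a, b in zip((0,) + p, p + (0,))))
    return (0,) + best + (0,)
```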

The Stanford team has built what’s called an Ising machine, named for a mathematical model of magnetism. The machine acts like a reprogrammable network of artificial magnets where each magnet only points up or down and, like a real magnetic system, it is expected to tend toward operating at low energy.

The theory, they noted, is that if the connections among a network of magnets can be programmed to represent the problem at hand, then once the magnets settle into their optimal, low-energy orientations, the solution can be derived from their final state. In the case of the traveling salesman, each artificial magnet in the Ising machine represents the position of a city in a particular path.
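
A rough way to picture this is as minimizing an Ising energy function over spin assignments. The toy Python sketch below uses brute-force enumeration and a made-up coupling matrix; it illustrates the energy landscape the machine searches, not the optical hardware itself:

```python
# Toy Ising model: spins s_i in {-1, +1} coupled by weights J[i][j];
# the machine seeks the configuration minimizing the energy
#     E = -sum over i<j of J[i][j] * s_i * s_j.
# Brute force here for clarity; the optical machine explores this
# landscape physically rather than by enumeration.
from itertools import product

def ising_energy(spins, J):
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def ground_state(J):
    """Lowest-energy spin configuration by exhaustive search (tiny n only)."""
    n = len(J)
    return min(product((-1, 1), repeat=n), key=lambda s: ising_energy(s, J))

# Hypothetical 4-spin coupling matrix (upper triangle), for illustration only.
J = [[0,  1, -1,  0],
     [0,  0,  1,  1],
     [0,  0,  0, -1],
     [0,  0,  0,  0]]
print(ground_state(J))  # (-1, -1, 1, -1); its global flip ties it at E = -3
```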

Rather than using magnets on a grid, the Stanford team used a special kind of laser system, known as a degenerate optical parametric oscillator, that, when turned on, will represent an upward- or downward-pointing ‘spin.’ Pulses of the laser represent a city’s position in a path the salesman could take, they noted.

The latest Stanford Ising machine shows that a drastically more affordable and practical version could be made by replacing the controllable optical delays with a digital electronic circuit. The circuit emulates the optical connections among the pulses in order to program the problem and the laser system still solves it.

Interestingly, the researchers pointed out that nearly all of the materials used to make this machine are off-the-shelf elements that are already used for telecommunications. That, in combination with the simplicity of the programming, makes it easy to scale up. Stanford’s machine is currently able to solve 100-variable problems with any arbitrary set of connections between variables, and it has been tested on thousands of scenarios.

Inexact computing may improve answers
Researchers from Rice University, Argonne National Laboratory, and the University of Illinois at Urbana-Champaign have used one of Isaac Newton’s numerical methods to demonstrate how “inexact computing” can dramatically improve the quality of simulations run on supercomputers.

This work is part of an ongoing effort by scientists at Rice University’s Center for Computing at the Margins (RUCCAM) to dramatically improve the resolution of weather and climate models with new ultra-efficient approaches to supercomputing.

The research stems from an idea put forward in 2003 by RUCCAM Director Krishna Palem: Accuracy and energy are exchangeable in computation, and sacrificing minimal accuracy can yield tremendous energy savings.

“In many situations, having an answer that is accurate to seven or eight decimal places is of no greater value than having an answer that is accurate to three or four decimal places, and it is important to realize that there are very real costs, in terms of energy expended, to arrive at the more accurate answer,” Palem said. “The discipline of inexact computing centers on saving energy wherever possible by paying only for the accuracy that is required in a given situation.”
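
As a back-of-the-envelope illustration of that tradeoff (not the RUCCAM methodology itself), the Python sketch below computes the same reduction in 64-bit and 32-bit floating point; the narrower arithmetic, which is cheaper in hardware energy terms, still agrees to several digits, often all that a given application needs:

```python
# Illustrative precision/accuracy tradeoff: the same reduction computed
# in float64 (~15-16 significant digits) and float32 (~7 digits).
# Narrower arithmetic costs less energy in hardware; only the accuracy
# loss is modeled here, not the energy savings.
import numpy as np

x = np.linspace(0.0, 1.0, 1_000_000)

exact = np.sin(x).sum()                                       # float64 path
inexact = np.sin(x.astype(np.float32)).sum(dtype=np.float32)  # "inexact" path

print(f"float64: {exact:.10f}")
print(f"float32: {float(inexact):.10f}")
print(f"relative error: {abs(exact - float(inexact)) / exact:.2e}")
```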

Palem, who won a Guggenheim Fellowship in 2015 to adapt these approaches to climate and weather modeling, collaborated with Oxford University physicist and climate scientist Tim Palmer to show that inexact computing could potentially reduce by a factor of three the amount of energy needed to run weather models without compromising the quality of the forecast.

In the new research, Palem, working with colleagues at Rice, with a team at Argonne National Laboratory headed by Sven Leyffer and Stefan Wild, and with Marc Snir of the University of Illinois at Urbana-Champaign (UIUC), showed it is possible to leapfrog from one part of a computation to the next and to reinvest the energy saved from inexact computations at each new leap, increasing the quality of the final answer while staying within the same energy budget.

Palem likened the new approach to calculating answers in a relay of sprints rather than in a marathon: By cutting precision and handing off the saved energy, they achieve significant quality improvements. This model allowed the team to change the way computational energy resources are utilized in supercomputers to dramatically improve solutions within a fixed energy budget.

The research team took advantage of one of the most commonly used tools of numerical analysis, a method known as Newton-Raphson that was created in the 1600s by Isaac Newton and Joseph Raphson. In supercomputing, the method is used to allow high-performance computers to find successively better approximations to complex mathematical functions.
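
Newton-Raphson refines a guess x by repeatedly applying x → x - f(x)/f'(x), roughly doubling the number of correct digits at each step near a root. A minimal, self-contained Python version, shown only to illustrate the method rather than the team's supercomputing code:

```python
# Newton-Raphson iteration: x_{n+1} = x_n - f(x_n) / f'(x_n).
# Each step roughly doubles the number of correct digits near a root.

def newton_raphson(f, fprime, x0, tol=1e-12, max_iter=50):
    """Return an approximate root of f, starting from the guess x0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:  # successive corrections have converged
            break
    return x

# Example: approximate sqrt(2) as the positive root of x^2 - 2 = 0.
print(newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0))
# 1.4142135623730951
```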

They demonstrated that the solution’s quality could be improved by more than three orders of magnitude for a fixed energy cost when an inexact approach to calculation was used rather than a traditional high-precision approach.
