System Bits: Oct. 3

Magic dust supercomputer; ML bug repair; brain-like photonic chips.


Polariton graphs
In a development that a team of researchers from the UK and Russia say could eventually surpass the capabilities of even the most powerful supercomputers, a type of ‘magic dust’ — which combines light and matter — can be used to solve complex problems.

Hailing from the University of Cambridge, University of Southampton and Cardiff University in the UK and the Skolkovo Institute of Science and Technology in Russia, researchers have used quantum particles known as polaritons – which are half light and half matter – to act as a type of ‘beacon’ showing the way to the simplest solution to complex problems.

Creating polariton condensates in the vertices of an arbitrary graph and reading out the quantum phases that represent the absolute minimum of an XY Model. (Source: University of Cambridge)

They expect this could form the basis of a new type of computer that can solve problems that are currently unsolvable, in diverse fields such as biology, finance or space travel.

The team said that technological progress, from modeling protein folding and behavior of financial markets to devising new materials and sending fully automated missions into deep space, depends on the ability to find the optimal solution of a mathematical formulation of a problem: the absolute minimum number of steps that it takes to solve that problem.

They explained that the search for an optimal solution is analogous to looking for the lowest point in a mountainous terrain with many valleys, trenches, and drops. A hiker may go downhill and think that they have reached the lowest point of the entire landscape, but there may be a deeper drop just behind the next mountain. Such a search may seem daunting in natural terrain, but imagine its complexity in high-dimensional space.
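The hiker analogy corresponds to the local-minimum trap in numerical optimization. As a purely illustrative sketch (not from the article), gradient descent on a tilted double-well function stops in whichever valley it starts near, which need not be the deepest one:

```python
# Illustrative sketch: gradient descent on the tilted double-well
# f(x) = (x^2 - 1)^2 + 0.3x, which has a deep valley near x = -1
# and a shallower one near x = +1. The function and values are invented
# for illustration only.

def f(x):
    return (x * x - 1) ** 2 + 0.3 * x

def descend(x, lr=0.01, steps=10_000):
    for _ in range(steps):
        grad = 4 * x * (x * x - 1) + 0.3  # f'(x)
        x -= lr * grad
    return x

left = descend(-0.5)   # starts near the deep valley
right = descend(+0.5)  # starts near the shallow valley

# Same descent rule, different starting points, different minima:
print(f(left) < f(right))  # True
```

Like the hiker, the method only ever sees the slope beneath its feet; it has no way of knowing that a deeper valley lies elsewhere.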

Modern supercomputers can only deal with a small subset of such problems when the dimension of the function to be minimized is small or when the underlying structure of the problem allows it to find the optimal solution quickly even for a function of large dimensionality.

Even a hypothetical quantum computer, if realized, offers at best a quadratic speed-up for a “brute-force” search for the global minimum.

The team approached the problem from an unexpected angle: What if instead of moving along the mountainous terrain in search of the lowest point, one fills the landscape with a magical dust that only shines at the deepest level, becoming an easily detectable marker of the solution?

These ‘magic dust’ polaritons are created by shining a laser at stacked layers of selected atoms such as gallium, arsenic, indium, and aluminum. The electrons in these layers absorb and emit light of a specific color. Polaritons are ten thousand times lighter than electrons and may achieve sufficient densities to form a new state of matter known as a Bose-Einstein condensate, where the quantum phases of polaritons synchronize and create a single macroscopic quantum object that can be detected through photoluminescence measurements.

The next question the researchers had to address was how to create a potential landscape that corresponds to the function to be minimized, and how to force polaritons to condense at its lowest point. To do this, the group focused on a particular type of optimization problem, one general enough that any other hard problem can be related to it: minimization of the XY model, which is one of the most fundamental models of statistical mechanics. The authors showed that they can create polaritons at the vertices of an arbitrary graph: as the polaritons condense, their quantum phases arrange themselves into a configuration that corresponds to the absolute minimum of the objective function.
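For readers unfamiliar with it, the XY model assigns a phase angle to each graph vertex and an energy H(θ) = -Σ J_ij cos(θ_i - θ_j) summed over edges; finding the phase configuration of minimum energy is the hard problem the polaritons solve physically. A minimal classical sketch (couplings and graph invented for illustration) shows what the minimum looks like on the smallest frustrated graph, a triangle with antiferromagnetic couplings:

```python
import math

# Sketch: classical gradient descent on the XY-model energy
#   H(theta) = -sum over edges (i, j) of J_ij * cos(theta_i - theta_j)
# for a 3-node ring with antiferromagnetic couplings J = -1 (the simplest
# frustrated graph). Graph and couplings are illustrative, not from the paper.

edges = [(0, 1, -1.0), (1, 2, -1.0), (2, 0, -1.0)]  # (i, j, J_ij)

def energy(theta):
    return -sum(J * math.cos(theta[i] - theta[j]) for i, j, J in edges)

def gradient(theta):
    g = [0.0] * len(theta)
    for i, j, J in edges:
        # d/dtheta_i of -J*cos(theta_i - theta_j) = J*sin(theta_i - theta_j)
        s = J * math.sin(theta[i] - theta[j])
        g[i] += s
        g[j] -= s
    return g

theta = [0.0, 1.0, 2.5]  # generic (non-symmetric) starting phases
for _ in range(5000):
    g = gradient(theta)
    theta = [t - 0.05 * gi for t, gi in zip(theta, g)]

# Frustration forces neighboring phases 120 degrees apart, giving the
# global minimum energy 3 * cos(120 deg) = -1.5 for this graph.
print(round(energy(theta), 3))  # -1.5
```

In the experiment, of course, no such iteration is needed: the condensing polaritons relax into the minimum-energy phase configuration on their own, and the phases are simply read out.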

The team added that they are just at the beginning of exploring the potential of polariton graphs for solving complex problems, and are currently scaling up their device to hundreds of nodes, while testing its fundamental computational power. The ultimate goal is a microchip quantum simulator operating at ambient conditions.

Machine learning for bug-repair
Most commercial software has bugs and security holes that require regular patching, but as MIT researchers point out, most of those bugs are relatively simple.

It is this simplicity that has encouraged computer scientists to explore the possibility of automatic patch generation. Several research groups, including that of Martin Rinard, an MIT professor of electrical engineering and computer science, have developed templates that indicate the general forms that patches tend to take. Algorithms can then use the templates to generate and evaluate a host of candidate patches.

A new machine-learning system analyzes successful repairs to buggy software and learns how to repair new bugs. (Source: MIT)

Recently, at the Association for Computing Machinery’s Symposium on the Foundations of Software Engineering, Rinard, his student Fan Long, and Peter Amidon of the University of California at San Diego presented a new system that learns its own templates by analyzing successful patches to real software.

Where a hand-coded patch-generation system might feature five or 10 templates, the new system created 85, which makes it more diverse but also more precise. Its templates are more narrowly tailored to specific types of real-world patches, so it doesn’t generate as many useless candidates. In tests, the new system, dubbed Genesis, repaired nearly twice as many bugs as the best-performing hand-coded template system.

Long explained, “You are navigating a tradeoff. On one hand, you want to generate enough candidates that the set you’re looking through actually contains useful patches. On the other hand, you don’t want the set to include so many candidates that you can’t search through it.”

Every item in the data set on which Genesis was trained includes two blocks of code: the original, buggy code and the patch that repaired it. Genesis begins by constructing pairs of training examples, such that every item in the data set is paired off with every other item. Genesis then analyzes each pair and creates a generic representation — a draft template — that will enable it to synthesize both patches from both originals. It may synthesize other, useless candidates, too. But the representation has to be general enough that among the candidates are the successful patches.
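The pairing step is simple combinatorics: every example is paired with every other, so n training examples yield n(n-1)/2 draft templates. A brief sketch (example names invented):

```python
from itertools import combinations

# Sketch of Genesis's pairing step: each training example is a
# (buggy code, patch) pair, and every example is paired with every
# other one to seed a draft template. Names are illustrative only.
examples = ["ex1", "ex2", "ex3", "ex4"]
pairs = list(combinations(examples, 2))

print(len(pairs))  # 6, i.e. 4 * 3 / 2 draft templates
```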

Next, Genesis tests each of its draft templates on all the examples in the training set. Each of the templates is based on only two examples, but it might work for several others. Each template is scored on two criteria: the number of errors that it can correct and the number of useless candidates it generates. For instance, a template that generates 10 candidates, four of which patch errors in the training data, might score higher than one that generates 1,000 candidates and five correct patches.
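The article does not give Genesis's exact scoring formula, but the tradeoff it describes can be sketched as ranking templates by correct patches per candidate generated (numbers taken from the example above, template names invented):

```python
# Hypothetical scoring sketch: rank templates by the fraction of their
# generated candidates that actually patch errors in the training data.
# The real Genesis score is not specified in the article.
templates = [
    {"name": "narrow", "candidates": 10,   "fixed_bugs": {"b1", "b2", "b3", "b4"}},
    {"name": "broad",  "candidates": 1000, "fixed_bugs": {"b1", "b2", "b3", "b4", "b5"}},
]

def score(t):
    # 4 fixes from 10 candidates beats 5 fixes buried in 1,000
    return len(t["fixed_bugs"]) / t["candidates"]

ranked = sorted(templates, key=score, reverse=True)
print([t["name"] for t in ranked])  # ['narrow', 'broad']
```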

On the basis of those scores, Genesis selects the 500 most promising templates. For each of them, it augments the initial two-example training set with each of the other examples in turn, creating a huge set of three-example training sets. For each of those, it then varies the draft template, to produce a still more general template. Then it performs the same evaluation procedure, extracting the 500 most promising templates.

After four rounds of this process, each of the 500 top-ranking templates has been trained on five examples. The final winnowing uses slightly different evaluation criteria, ensuring that every error in the training set that can be corrected will be. That is, there may be a template among the final 500 that patches only one bug, earning a comparatively low score in the preceding round of evaluation. But if it’s the only template that patches that bug, it will make the final cut.
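The final winnowing behaves like a coverage guarantee layered on top of the score ranking: a low-scoring template survives if it is the only one that patches some bug. A hedged sketch of that selection logic (template names, scores, and the selection rule are invented for illustration):

```python
# Hypothetical sketch of the final winnowing: take the top-scoring
# templates, then pull in any template that is the only fix for an
# otherwise-uncovered bug. Data and rule are illustrative only.
templates = {
    "t1": {"score": 9, "bugs": {"b1", "b2"}},
    "t2": {"score": 8, "bugs": {"b2", "b3"}},
    "t3": {"score": 1, "bugs": {"b4"}},  # low score, but the only fix for b4
}
budget = 2

chosen = sorted(templates, key=lambda t: templates[t]["score"], reverse=True)[:budget]
coverable = set().union(*(t["bugs"] for t in templates.values()))
covered = set().union(*(templates[t]["bugs"] for t in chosen))

for bug in coverable - covered:
    # keep the best template that patches a still-uncovered bug
    best = max((t for t in templates if bug in templates[t]["bugs"]),
               key=lambda t: templates[t]["score"])
    chosen.append(best)

print(sorted(chosen))  # ['t1', 't2', 't3']: t3 survives despite its score
```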

In the researchers’ experiments, the final winnowing reduced the number of templates from 500 to 85. Genesis works with programs written in the Java programming language, and the MIT researchers compared its performance with that of the best-performing hand-coded Java patch generator. Genesis correctly patched defects in 21 of 49 test cases drawn from 41 open-source programming projects, while the previous system patched 11. It’s possible that more training data and more computational power — to evaluate more candidate templates — could yield still better results. But a system that allows programmers to spend only half as much time trying to repair bugs in their code would be useful nonetheless.

Brain-like photonic chips
A research team has made a pioneering breakthrough: photonic computer chips that imitate the way the brain’s synapses operate.

The work, conducted by researchers from Oxford, Münster and Exeter universities, combined phase-change materials – commonly found in household items such as re-writable optical discs – with specially designed integrated photonic circuits to deliver a biological-like synaptic response.

These photonic synapses can operate at speeds a thousand times faster than those of the human brain, the team said. Further, they believe that the research could pave the way for a new age of computing, where machines work and think in a similar way to the human brain, while at the same time exploiting the speed and power efficiency of photonic systems.

(Source: Oxford University)

Professor Harish Bhaskaran from the Department of Materials at Oxford University, who led the team, said: “The development of computers that work more like the human brain has been a holy grail of scientists for decades. Via a network of neurons and synapses the brain can process and store vast amounts of information simultaneously, using only a few tens of watts of power. Conventional computers can’t come close to this sort of performance.”

Professor C David Wright, co-author from the University of Exeter, said, “Electronic computers are relatively slow, and the faster we make them the more power they consume. Conventional computers are also pretty ‘dumb,’ with none of the in-built learning and parallel processing capabilities of the human brain. We tackle both of these issues here – by developing not only new brain-like computer architectures, but also by working in the optical domain to leverage the huge speed and power advantages of the upcoming silicon photonics revolution.”

Professor Wolfram Pernice, a co-author of the paper from the University of Münster, added, “Since synapses outnumber neurons in the brain by around 10,000 to one, any brain-like computer needs to be able to replicate some form of synaptic mimic. That is what we have done here.”
