System Bits: May 8

AI and the brain; light-manipulated electrons; memristor cybersecurity device.


Unlocking the brain
Stanford University researchers note that for years, the people developing artificial intelligence drew inspiration from what was known about the human brain, and that AI is now starting to return the favor: while not explicitly designed to do so, certain AI systems seem to mimic our brains’ inner workings more closely than previously thought.

Daniel Yamins and the Stanford NeuroAI Lab are using artificial intelligence to better understand the brain.
Source: Stanford University

The team suggests that this means both AI and our minds have converged on the same approach to solving problems. If this is the case, simply watching AI at work could help researchers unlock some of the deepest mysteries of the brain.

Daniel Yamins, assistant professor of psychology at Stanford, faculty scholar of the Stanford Neurosciences Institute, and a member of Stanford Bio-X, said, “There’s a real connection there.” To this point, Yamins and his lab are building on that connection to produce better theories of the brain – how it perceives the world, how it shifts efficiently from one task to the next and, perhaps one day, how it thinks.

It is well known that AI has been borrowing from the brain since its early days, when computer scientists and psychologists developed algorithms called neural networks that loosely mimicked the brain. Those algorithms were frequently criticized for being biologically implausible in that the “neurons” in neural networks were gross simplifications of the real neurons that make up the brain.
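That simplification can be made concrete in a few lines. The sketch below is a generic textbook illustration, not code from the Stanford work, and the inputs, weights and bias are arbitrary: a neural network "neuron" is just a weighted sum of its inputs passed through a nonlinearity.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """The "neuron" of a neural network: a weighted sum of inputs
    passed through a simple nonlinearity (here, a sigmoid).
    Real biological neurons are vastly more complex than this."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Arbitrary example: two inputs whose weighted contributions cancel,
# leaving the neuron at its midpoint output of 0.5.
print(artificial_neuron([0.5, -1.0], [2.0, 1.0], bias=0.0))
```

Deep learning stacks many layers of such units and tunes the weights automatically; the individual unit remains this simple.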

Still, computer scientists didn’t care about biological plausibility: they just wanted systems that worked. As a result, neural network models were extended in whatever way made the algorithm best able to carry out certain tasks, culminating in what is now called deep learning.

Interestingly, in 2012, AI researchers showed that a deep learning neural network could learn to identify objects in pictures as well as a human being. This made neuroscientists wonder how deep learning did this. Yamins and his team showed in 2014 that a deep learning neural network does this the same way the brain does, and that the computations the deep learning system performed matched activity in the brain’s vision-processing circuits substantially better than any other model of those circuits. Other researchers made similar observations about parts of the brain’s vision- and movement-processing circuits, suggesting that, given the same kind of problem, deep learning and the brain had evolved similar ways of coming up with a solution. More recently, Yamins and colleagues have demonstrated similar observations in the brain’s auditory system.

Since 2014, Yamins and collaborators have been refining their original goal-directed model of the brain’s vision circuits and extending the work in new directions, including understanding the neural circuits that process inputs from rodents’ whiskers.

In what they describe as their most ambitious project, Yamins and postdoctoral fellow Nick Haber are investigating how infants learn about the world around them through play.

Light could make semiconductor computers go quantum
University of Marburg, University of Regensburg, and University of Michigan researchers have demonstrated a semiconductor technique that manipulates electrons with light, which could bring quantum computing up to room temperature. They showed how infrared laser pulses can shift electrons between two different states, the classic 1 and 0, in a thin semiconductor sheet.

Mackillo Kira, University of Michigan professor of electrical engineering and computer science, said, “Ordinary electronics are in the range of gigahertz, one billion operations per second. This method is a million times faster.”

An illustration showing the “up” and “down” pseudospin states, a light pulse and the hilly energy landscape experienced by the electrons.
Source: University of Regensburg

Kira noted that quantum computing could solve problems that take too long on conventional computers, advancing areas such as artificial intelligence, weather forecasting and drug design. Quantum computers get their power from the way that their quantum-mechanical bits, or qubits, aren’t merely 1s or 0s, but can be mixtures, known as superpositions, of these states.

“In a classical computer, each bit configuration must be stored and processed one by one while a set of qubits can ideally store and process all configurations with one run. This means that when you want to look at a bunch of possible solutions to a problem and find the best fit, quantum computing can get you there a lot faster,” Kira said.
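The superposition idea can be made concrete with a toy state-vector simulation. This is a generic illustration, not the researchers' method: two qubits are each put into an equal superposition with a Hadamard gate, so a single state vector carries amplitudes for all four bit configurations at once.

```python
import math

# A classical 2-bit register holds exactly one of four configurations.
# A 2-qubit state assigns an amplitude to ALL four configurations
# (00, 01, 10, 11) at once -- a superposition.

def hadamard(qubit):
    """Apply a Hadamard gate to a single-qubit state [a, b]."""
    a, b = qubit
    s = 1.0 / math.sqrt(2.0)
    return [s * (a + b), s * (a - b)]

def kron(q1, q2):
    """Combine two single-qubit states into one two-qubit state."""
    return [x * y for x in q1 for y in q2]

zero = [1.0, 0.0]                                # qubit in state |0>
state = kron(hadamard(zero), hadamard(zero))     # equal superposition
probs = [amp ** 2 for amp in state]              # measurement probabilities

print(state)   # four equal amplitudes of ~0.5
print(probs)   # each configuration seen with probability ~0.25
```

One operation on this state vector acts on all four configurations simultaneously, which is the sense in which qubits "process all configurations with one run."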

However, qubits are hard to make because quantum states are extremely fragile. The main commercial route, pursued by companies such as Intel, IBM, Microsoft and D-Wave, uses superconducting circuits—loops of wire cooled to extremely cold temperatures (-321°F or less), at which the electrons stop colliding with each other and instead form shared quantum states through a phenomenon known as coherence.

But rather than finding a way to hang onto a quantum state for a long time, the new study demonstrates a way to do the processing before the states fall apart.

Rupert Huber, professor of physics at the University of Regensburg, who led the experiment, said, “In the long run, we see a realistic chance of introducing quantum information devices that perform operations faster than a single oscillation of a lightwave. The material is relatively easy to make, it works in room temperature air, and at just a few atoms thick, it is maximally compact.”

An artist’s rendering of a pulse of circularly polarized light hitting a 2-D semiconductor, putting the electrons into a pseudospin state that could store information as part of a new, faster computing technology.
Source: University of Michigan

Read the details of their work here.

Memristor as black box
While the Internet of Things may already be making our lives more streamlined and convenient, the cybersecurity risk posed by millions of wirelessly connected gadgets, devices and appliances remains a huge concern. To combat this, UC Santa Barbara electrical and computer engineering professor Dmitri Strukov and his team are working to put an extra layer of security on the growing number of internet- and Bluetooth-enabled devices. Their technology aims to prevent cloning, the practice by which nodes in a network are replicated and then used to launch attacks from within the network. The chip, which deploys ionic memristor technology, is an analog memory hardware solution to a digital problem.

An illustration of a memristor as a cybersecurity device.
Source: UCSB

This is essentially a black box: the chip is physically unclonable and can thus render the device invulnerable to hijacking, counterfeiting or replication by cyber criminals.

Key to this technology is the memristor, or memory resistor, an electrical resistance switch that can “remember” its state of resistance based on its history of applied voltage and current.
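A toy model can make that history dependence concrete. The parameters and linear drift rule below are hypothetical, chosen only to illustrate the behavior, not to model UCSB's devices:

```python
class ToyMemristor:
    """Toy memristor: resistance drifts between a low (r_on) and a
    high (r_off) limit depending on the history of applied voltage.
    This is a cartoon of real device physics, for illustration only."""

    def __init__(self, r_on=100.0, r_off=16000.0, state=0.5):
        self.r_on, self.r_off = r_on, r_off
        self.state = state  # 0.0 = fully high-resistance, 1.0 = fully low

    def resistance(self):
        # Interpolate between the two limiting resistances.
        return self.r_off + self.state * (self.r_on - self.r_off)

    def apply_voltage(self, volts, seconds, rate=0.1):
        # Positive voltage drives the device toward low resistance,
        # negative toward high; the result persists afterward.
        self.state += rate * volts * seconds
        self.state = min(1.0, max(0.0, self.state))

m = ToyMemristor()
print(m.resistance())              # resistance before programming
m.apply_voltage(+1.0, seconds=2.0) # program toward low resistance
print(m.resistance())              # lower resistance after programming
m.apply_voltage(0.0, seconds=10.0) # no voltage applied...
print(m.resistance())              # ...state is retained: it "remembers"
```

The key point is the last step: with no voltage applied, the resistance stays where its voltage history left it.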

Not only can memristors change their outputs in response to their histories, but each memristor, due to the physical structure of its material, is also unique in its response to applied voltage and current, the team explained. This means that a circuit made of memristors results in a black box of sorts, with outputs extremely difficult to predict based on the inputs.

“The idea is that it’s hard to predict, and because it’s hard to predict, it’s hard to reproduce,” Strukov said. The multitude of possible inputs can result in at least as many outputs — the more memristors, the more possibilities. Running each would take more time than an attacker may reasonably have to clone one device, let alone a network of them.
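The black-box idea can be sketched as a toy challenge-response function. Everything below is hypothetical and not UCSB's circuit: seeded random resistances stand in for uncontrollable manufacturing variation, so two "identical" chips answer the same challenge differently, while each chip's own answers stay reproducible.

```python
import random

def make_device(n_cells, seed):
    """Each fabricated chip gets slightly different memristor
    characteristics; the seed stands in for uncontrollable
    manufacturing variation (illustrative only)."""
    rng = random.Random(seed)
    return [rng.uniform(100.0, 16000.0) for _ in range(n_cells)]

def respond(device, challenge_bits):
    """Map a challenge (list of bits) to a response bit string derived
    from the device's unique resistances via pairwise comparisons."""
    bits = []
    for i, b in enumerate(challenge_bits):
        r1 = device[i]
        r2 = device[(i + 1) % len(device)]
        if b:
            r1, r2 = r2, r1  # the challenge reshuffles the comparison
        bits.append(1 if r1 > r2 else 0)
    return bits

chip_a = make_device(8, seed=1)  # two "identical" chips off the line,
chip_b = make_device(8, seed=2)  # differing in uncontrollable ways
challenge = [1, 0, 1, 1, 0, 0, 1, 0]
print(respond(chip_a, challenge))  # each chip gives its own response
print(respond(chip_b, challenge))
```

To clone such a device, an attacker would have to either copy the physical variation exactly or tabulate the response to every possible challenge, and the number of challenges grows exponentially with the number of cells.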

The use of memristors in today’s cybersecurity is especially significant in light of machine learning-enabled hacking, in which artificial intelligence technology is trained to “learn” and model inputs and outputs, then predict the next sequence based on its model. With machine learning, an attacker doesn’t even need to know what exactly is occurring inside the system; the computer is simply trained on a series of the system’s inputs and outputs, the team added.

Read additional details here.
