Decoding The Brain

How will decoding the brain of a fruit fly help us build better computers? Lou Scheffer says we have a long way to go before electronics catches up with biology at image recognition.

At the Design Automation Conference this year, Lou Scheffer, principal scientist for the Howard Hughes Medical Institute, gave a visionary talk entitled "Learning from Life: Biologically Inspired Electronic Design."

Scheffer is an IC design guy who came through Stanford and Caltech and worked for HP and Cadence before switching to the medical field eight years ago. Today, his goal is to map the brain of the fruit fly using electron microscopy. He wants to characterize every neuron and synapse and how everything is connected together. In parallel, engineers are trying to put similar concepts into practice in chips. And even though these chips may be far behind biology in terms of sophistication, they are already leading to systems with much better performance per watt for some applications.

Scheffer’s visionary talk
Scheffer said that if you ask someone to name a sophisticated piece of electronics, they will probably mention a smartphone. But he argued that this is only the second most sophisticated technology in use every day. "Life is much more sophisticated." He pointed out that to build the chip in your cell phone, you need a factory that costs billions of dollars, is full of complicated equipment, and has to be kept enormously clean because a single speck of dust will destroy the entire product. In contrast, biology works fine in dirty environments.

Furthermore, life learns and adapts in a way that we only wish our gadgets did. "Consider a worm – 302 neurons in the whole animal. If you take a dish containing some worms and tap it repeatedly, then the first time you tap the dish, they all jump. By the 10th time they have figured out that it is harmless and they no longer jump. In terms of machine learning, this is spectacular performance." The worms learned a useful behavior in 10 trials, unsupervised. "It did that with 300 neurons. Furthermore, life does an amazing job of recognizing objects from incomplete data. It does that because this is a life or death problem."

Scheffer showed a picture of a snake and said this was a very hard vision problem. “You can only see part of the snake, much of it is obscured and the remainder is out of focus. It has stuff in the foreground. It is a 2-D representation. And you don’t know about the lighting. There is probably no vision program in the world that could do that. It is something that machines do poorly that humans do so easily.”

He contrasted the different ways in which humans and machines approach the same problems. "Vision is a problem that we have been working on for 50 years and we still don't have anything that works nearly as well as biology. Instead, it is time to study the brain and figure out how it does it, and to copy those methods. It is easier to copy than invent."

Scheffer said there is another reason to study the brain, which is to understand what happens when it malfunctions. "There are all sorts of brain diseases – depression, addiction, etc., and billions of people around the world suffer from these. From an engineering point of view, how can you fix something that you don't understand? Our current methods are based on trial and error, and are often more like voodoo."

Nervous systems and electronic systems have a lot of similarities. They collect inputs, apply a non-linearity and generate an output. That means that electronic techniques may be good for studying the brain and, conversely, things that we learn from the brain could be used to improve electronics. To this end, Scheffer is attempting to reverse-engineer the brain.
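
As a rough sketch of that shared abstraction (the sizes, weights, and sigmoid non-linearity below are illustrative choices, not anything from Scheffer's talk), a single "neuron" can be written in a few lines of Python:

```python
import numpy as np

def neuron(inputs, weights, bias):
    """The abstraction shared by nerve cells and logic gates:
    collect weighted inputs, apply a non-linearity, emit an output."""
    activation = np.dot(weights, inputs) + bias   # sum the weighted inputs
    return 1.0 / (1.0 + np.exp(-activation))      # sigmoid non-linearity -> output

# Purely illustrative values: three inputs and arbitrary weights.
print(neuron(np.array([0.2, 0.9, 0.1]), np.array([1.5, -0.7, 0.3]), bias=0.1))
```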

It turns out that a fly will learn to avoid an otherwise neutral odor if it is paired with a shock. Once you do this, the fly treats the odor as unpleasant and will turn around and go the other way. The opposite happens for a pleasant experience: the fly is attracted to the odor. "Through genetic techniques, and an electron microscope, we have a system-level diagram of how this works."

Scheffer explained that olfactory signals come in and signals of like type get summed, so as to increase the signal-to-noise ratio. The circuit then generates a sparse representation over thousands of neurons, and that goes into decision boxes. Whenever a pleasant or unpleasant thing happens, these boxes remember the current sparse representation, and the next time that representation occurs, the fly will either avoid the odor or be attracted to it. "Examining this we came up with a detailed circuit that represents 1,100 neurons and 200,000 synapses. This is the first time we have an understanding at the mechanistic level of how memory and learning occur. There are similar projects of this size for retinas, and in five years we will have the whole fly—100k neurons, 100M synapses."
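
A toy model of that flow (the layer sizes, random expansion, and single valence vector below are simplifying assumptions for illustration, not the actual fly wiring) might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
N_INPUT, N_SPARSE, K_ACTIVE = 50, 2000, 100             # sizes chosen for illustration only

projection = rng.standard_normal((N_SPARSE, N_INPUT))   # expansion into many neurons
valence = np.zeros(N_SPARSE)                             # the "decision box" memory

def sparse_code(odor):
    """Expand the odor signal over many units and keep only the strongest few."""
    activity = projection @ odor
    code = np.zeros(N_SPARSE)
    code[np.argsort(activity)[-K_ACTIVE:]] = 1.0
    return code

def learn(odor, reward):
    """Pair the current sparse representation with a pleasant (+) or unpleasant (-) outcome."""
    global valence
    valence += reward * sparse_code(odor)

def react(odor):
    """Positive score means approach; negative means avoid."""
    return valence @ sparse_code(odor)

odor_a = rng.random(N_INPUT)
learn(odor_a, reward=-1.0)    # a shock is paired with odor A
print(react(odor_a))          # strongly negative: the model now avoids odor A
```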

Similarly, the zebra fish larva is often studied because it is transparent. Using genetic techniques, researchers can see when neurons fire. This is done by placing a dye in the cells that fluoresces when there are high levels of calcium, which happens when the neuron fires. That means you can see it think.

In the past, you had to stick electrodes into the brain, and this had lots of problems: hundreds of wires, limited bandwidth, bad connectors, and recording sites spaced far apart. Four institutions got together and worked with IMEC to build a chip that does this better. It is a square chip with a spike sticking out of it. On the spike are a number of pads, and the spike gets embedded into the brain. The pads pick up the signals from the neurons they are near, and the rest of the circuit records those signals using standard analog electronics. This enables researchers to look at many more sites with much higher signal quality.

Another problem is that you want to study the brain of a moving animal, but that is hard because probes and microscopes don't work very well when it is scurrying around. Instead, you hold the animal in place, measure it as it tries to move, and project scenery in front of it to make it think it is moving. This is The Matrix, implemented for animals. As far as researchers can tell, a mouse does not know the difference between going around a real maze and an imaginary one. It also works for the fly and the zebra fish. The zebra fish is held in a tube and can't even move its tail, but researchers watch the nerves, figure out that it is trying to move its tail, and move the scenery accordingly as if it were swimming. "Just like in The Matrix, we can adjust those strengths so the fish has superhuman strength or no strength at all," joked Scheffer.

Once they decipher all of this stuff, how does it translate into electronics? “There are two paths — software and hardware,” says Scheffer. “It is possible that the hardware is already capable enough and we don’t know the software. It is not clear if, to get biological performance, you need neurons or if it can be done with conventional electronics.” Scheffer says there are existence proofs for just software, such as AlphaGo, which recently beat the world expert in Go. “This could have been done 30 years ago if only we had known how. It is possible that our cellphones could do so much more, but we just don’t know how to program them. There is a reason for that. It is because people can only think about a fraction of the possible topologies. If you look at a neural net you see something like that. Deep learning is based on the same idea.”

Artificial neural nets are extremely different from biological networks. Scheffer showed the network of a cell found in the first layer of the visual system. "There are feedforward connections, and this is the only kind of connection that AlphaGo or neural networks use," explains Scheffer. "When you look at biology, it is also full of bi-directional connections, same-layer connections, and even reverse connections. We really don't understand how networks that contain these connections work, or how they are trained, but it is very likely that this is what accounts for the difference in performance on tasks such as image recognition."
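
The difference is easy to see in code. In this hypothetical sketch, tiny random weight matrices stand in for real networks: a feedforward layer is a single pass, while adding a reverse (feedback) connection forces the layers to settle iteratively, and how to train such loops is the open question Scheffer refers to.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.random(8)                              # input activity (illustrative size)
W_ff = 0.2 * rng.standard_normal((8, 8))       # feedforward weights: layer 1 -> layer 2
W_fb = 0.2 * rng.standard_normal((8, 8))       # feedback weights: layer 2 -> layer 1

def feedforward(x):
    # One-shot pass: the only connection type typical deep nets use.
    return np.tanh(W_ff @ x)

def with_feedback(x, steps=20):
    # Reverse connections mean the two layers must settle together.
    h1, h2 = x.copy(), np.zeros(8)
    for _ in range(steps):
        h2 = np.tanh(W_ff @ h1)
        h1 = np.tanh(x + W_fb @ h2)
    return h2

print(feedforward(x))
print(with_feedback(x))
```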

If new hardware is needed, there are a lot of people working on that. Stanford has a project called Brains in Silicon, where they build big analog electronics. IBM has a project called TrueNorth, where they build big arrays of digital neurons that talk over an event-driven framework. "There is also a technique using memristor learning, which may be a possibility for chips in the future. It is based on spike-timing-dependent plasticity, and they can learn while operating."
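
Spike-timing-dependent plasticity itself is a simple rule: a synapse strengthens when the input neuron fires just before the output neuron, and weakens when the order is reversed. A minimal sketch with generic textbook-style constants (not taken from any particular memristor device) looks like this:

```python
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair (times in milliseconds).
    Constants are generic textbook values, not tied to a specific chip."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * np.exp(-dt / tau)     # pre before post: potentiation
    return -a_minus * np.exp(dt / tau)        # post before pre: depression

print(stdp_dw(t_pre=10.0, t_post=15.0))   # positive: synapse strengthens
print(stdp_dw(t_pre=15.0, t_post=10.0))   # negative: synapse weakens
```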

Scheffer concluded his talk by saying: "This is the golden age of neuroscience. We finally have a chance to understand the brain. The tools are finally equal to the task and the field is ripe for breakthroughs. In order to improve electronics, don't study electronics, study the brain."

Current state of neural networks
When looking at the current state of neural network development, Drew Wingard, chief technology officer at Sonics, explains that “the basic aspect of the computation is incredibly regular and equally frustrating. The ideal hypothetical structure is that everyone is connected to everyone else so that every neuron can affect every other through the coupling. This could be one gigantic matrix multiply, but in reality most things are not actually connected and so you really need a sparse matrix. That is where the frustrating aspects come in.”
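
In code, the contrast Wingard describes is the difference between a dense and a sparse weight matrix. A small sketch (the neuron count and 0.1% connectivity are arbitrary illustrative numbers):

```python
import numpy as np
from scipy import sparse

N = 10_000                         # number of "neurons" (illustrative)
x = np.random.rand(N)              # activity of every neuron

# The ideal all-to-all coupling would be one gigantic matrix multiply:
#   W_dense = np.random.rand(N, N); y = W_dense @ x   # 100M weights, mostly meaningless

# In reality most neurons are not connected, so a sparse matrix is what is needed.
W_sparse = sparse.random(N, N, density=0.001, format="csr")   # ~0.1% of entries non-zero
y = W_sparse @ x                   # only the real connections are computed
print(y.shape, W_sparse.nnz, "stored connections instead of", N * N)
```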

The networks in use today have independent training and inferencing platforms. "Convolutional neural networks have emerged as a leading form of pattern recognizers because they allow for a simple, systematic [but computationally intensive] method for training," says Chris Rowen, fellow at Cadence. "Training requires extremely heavy DSP-like computation (back propagation of errors with steepest-descent optimization). The recognition function itself ("forward inference") is still computationally demanding, with some applications requiring tens of tera-ops per image frame, for example, but many neural network applications are comfortably within the capabilities of current high-end DSPs, and new CNN-optimized DSPs are coming. Moreover, a network may be trained once and used billions or trillions of times (over time and across a world of installed devices), so that the high expense of training is amortized across many uses."
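
The asymmetry Rowen describes shows up even in a toy model: training means looping over data and back-propagating gradients many times, while inference is a single forward pass that can then be reused indefinitely. A minimal sketch, using a one-layer logistic classifier on synthetic data rather than a real CNN:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((256, 16))                    # synthetic training set (sizes illustrative)
y = (X.sum(axis=1) > 8.0).astype(float)      # synthetic labels
w, lr = np.zeros(16), 0.1

# Training: many passes of forward computation plus back-propagated gradients
# (steepest descent). This is the expensive, one-time part.
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-X @ w))         # forward pass over the whole set
    w -= lr * X.T @ (p - y) / len(y)         # gradient step

# Inference ("forward inference"): one cheap pass, reusable billions of times
# once the training cost has been paid.
def predict(sample):
    return 1.0 / (1.0 + np.exp(-sample @ w))

print(predict(X[0]))
```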

Why is training so compute-intensive? Jeff Bier, founder of BDTI and the Embedded Vision Alliance, says that "many applications have huge amounts of ambiguity. People come in many shapes, sizes, colors, clothing, orientations, postures, ways of moving, etc. It is hard to spell out formulaic definitions (this size, this shape, this color) and get a reliable result. Learning algorithms have shown that, in some applications, they are equal to humans in discriminating between objects."

While animals do their training interactively, neural nets have to be trained on stored images. "We do off-line training using a GPU farm—and potentially look at 100,000 images of faces—and then create a CNN graph that is programmed into the object detection engine," explains Mike Thompson, senior manager of product marketing for embedded vision at Synopsys. "That is fixed. The object detection engine is programmable and has a specialized hardware implementation so that we get high efficiency out of it."

As Scheffer pointed out, electronics has a lot to learn from the brain. Wingard adds that “putting together the right processors with the right communications network between them is the challenge. From a design perspective there is a difference between what ideal hardware would look like for training the network versus running the network. There are people proposing architectures where they do have optimized solutions for training.”

Rowen counters that "we should not be trying to mimic biological systems everywhere, but anything that pushes electronics thinking outside its traditional box has the potential to inspire new kinds of innovation." He cites early anatomical and behavioral studies of brain function that helped to motivate the first work on brain-inspired vision, such as the Perceptron by Frank Rosenblatt of the Cornell Aeronautical Laboratory. "This set the stage for more than a half century of exploration with increasingly powerful neural networks that evolved increasing complexity and more aggressive training methods, but these methods largely remained curiosities, subject to more philosophical debate on the nature of consciousness than practical industry deployment, even in simple recognition systems."

Rowen goes on to say that "in just the past five years, two forces have combined to create a renaissance in deep learning – the emergence of highly parallel processor architectures and the availability of truly large datasets suitable for training complex networks. This combination has triggered rapidly expanding interest and huge progress in building practical vision, speech, and language recognition systems. Significantly, success in deep learning systems is not closely tied to authenticity in reproducing biological models. Instead, teams are leveraging the unique advantages of digital semiconductors, especially the opportunity for clock rates many orders of magnitude faster than the human brain's, and large exact storage, to build systems with recognition performance comparable to, and sometimes better than, humans. These systems are neither as highly parallel as the brain nor as energy efficient. We must also typically make a sharp distinction between training phases and inference (or simple recognition) phases. Moreover, these systems are largely limited to learning from explicitly labeled data. The area of unsupervised learning remains primarily limited to advanced research. That said, progress in fast, highly parallel systems is rapid. We already see a clear two to three order of magnitude improvement in energy efficiency and throughput based on innovations in neural-network-specific processor architectures, new optimization techniques to reduce network structure complexity, and higher degrees of parallelism that may soon allow quadrillions of 'neural input' evaluations (e.g., multiply-adds in convolutional neural networks) per second within attainable power and cost budgets."
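
The scale of those multiply-add counts is easy to reconstruct with back-of-envelope arithmetic. The layer dimensions below are hypothetical, chosen only to show how the per-frame and per-second figures climb toward the tera- and quadrillion-scale ranges Rowen mentions:

```python
# Multiply-adds (MACs) for one convolutional layer: every output pixel of every
# output channel sums over all input channels and the kernel window.
h_out, w_out = 112, 112        # output feature-map size (hypothetical)
c_in, c_out = 64, 128          # input / output channels (hypothetical)
k = 3                          # 3x3 kernel

macs_per_layer = h_out * w_out * c_out * c_in * k * k
print(f"{macs_per_layer / 1e9:.2f} G-MACs for this single layer")

# A full network stacks tens of such layers; running it on a video stream
# multiplies that again by the frame rate.
macs_per_frame = 50 * macs_per_layer          # assume ~50 comparable layers
print(f"{macs_per_frame * 30 / 1e12:.1f} T-MACs per second at 30 frames/s")
```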

Related Stories
Inside AI And Deep Learning
What’s happening in AI and can today’s hardware keep up?
Convolutional Neural Networks Power Ahead
Adoption of this machine learning approach grows for image recognition; other applications require power and performance improvements.
Inside Neuromorphic Computing
General Vision’s chief executive talks about why there is such renewed interest in this technology and how it will be used in the future.



1 comment

Bhasker Raj says:

Lou Scheffer's presentation on biologically inspired design has many possible applications in various fields.
You might ask, "Which is the fastest supercomputer?" I would say the human brain.
The working of the human brain is very complex and we are still trying to learn how its different modes work.
Biologically inspired design has many advantages over conventional design and has a bright future.
A. S. Bhasker Raj
Journalist, Freelance Writer and Author
Bangalore
India
