Brain-Inspired Power

Neurosynaptic cores are a big step toward making computers more efficient, but they’re still kilowatts from the goal.


“Let’s be clear: we have not built the brain, or any brain. We have built a computer that is inspired by the brain. The inputs to and outputs of this computer are spikes. Functionally, it transforms a spatio-temporal stream of input spikes into a spatio-temporal stream of output spikes.”
Dharmendra Modha, IBM Fellow

It’s a generally well-accepted principle that the biggest savings from an energy-efficiency standpoint are to be had at the architectural level. For years, scientists have been working on creating computational models of the brain. Clearly, there are some major architectural differences between the brain and the silicon-based computers we use on a daily basis, and the paradigm here is to build a computing architecture that more closely matches the brain model.

I somewhat liken the statement that the new model takes inputs and produces outputs that are spikes to saying that the computers we’re most used to using take inputs and produce outputs that are 1s and 0s. There’s still a necessary translation to and from the 1s and 0s to make those computers (more) useful, and the same will be true for the neurosynaptic chips. To that end, the IBM research team is hard at work producing an ecosystem that will make it easier to interface with this new technology.
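To make that translation step concrete, here is a minimal sketch of one common spike-encoding scheme (simple rate coding), written in Python. This is only an illustration under my own assumptions; it is not IBM’s interface, and the function names are hypothetical.

```python
import numpy as np

def encode_rate(value, n_steps=1000, rng=None):
    """Encode a value in [0, 1] as a Bernoulli spike train of length n_steps."""
    rng = np.random.default_rng() if rng is None else rng
    p = float(np.clip(value, 0.0, 1.0))          # spike probability per time step
    return (rng.random(n_steps) < p).astype(np.uint8)

def decode_rate(spikes):
    """Estimate the original value from the mean firing rate."""
    return float(spikes.mean())

train = encode_rate(0.3)
print(f"encoded 0.30 -> decoded ~{decode_rate(train):.2f}")
```

The point of the sketch is simply that conventional data has to be turned into a spatio-temporal spike stream on the way in and interpreted again on the way out, just as conventional computers need translation to and from bits.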

The new TrueNorth chip contains a 64 x 64 array of “neurosynaptic” cores. The 4,096-core chip is capable of modeling 1 million neurons and 256 million synapses and runs at 70 milliwatts. There’s a short animated simulation video on YouTube. To give a rough idea of how the neuron count compares to the number of neurons in animals, a honeybee has about 1 million neurons and about 1 billion synapses, all packed within its bee-sized head. The TrueNorth chip is claimed to have a density of 14.7µm² per neuron, so multiplying that out by 1 million neurons gives roughly 0.147cm² (14.7mm²) for that portion alone. The chip is said to contain 5.4 billion transistors, the largest transistor count of any chip IBM has built to date. It is manufactured in a Samsung 28nm CMOS technology and is shown in the picture below.

Figure 1. TrueNorth Chip Layout (Courtesy of IBM)
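To make the area arithmetic above explicit, here is a quick back-of-the-envelope check in Python, using only the density and neuron count quoted in the article.

```python
# Back-of-the-envelope check of the per-neuron area figure quoted above.
UM2_PER_NEURON = 14.7        # claimed density: square micrometers per neuron
NEURONS = 1_000_000          # neurons modeled per TrueNorth chip

total_um2 = UM2_PER_NEURON * NEURONS
print(f"{total_um2 / 1e6:.1f} mm^2 ({total_um2 / 1e8:.3f} cm^2) for the neuron circuitry alone")
# -> 14.7 mm^2 (0.147 cm^2)
```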

One goal of this chip is to use it to build a system containing 4,096 chips in a single rack to simulate 1 trillion synapses (gorilla territory) using only 4kW of power. It’s interesting to note that if each chip uses only 70mW, then less than 300W of the 4kW total (about 7%) would come from the chips, and the rest would be additional system overhead.
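The rack-level numbers work out as follows; this is just the arithmetic from the paragraph above spelled out in Python.

```python
CHIPS_PER_RACK = 4096
CHIP_POWER_W = 0.070         # ~70 mW per TrueNorth chip
RACK_POWER_W = 4000.0        # projected 4 kW per-rack budget

chip_total_w = CHIPS_PER_RACK * CHIP_POWER_W
print(f"chips alone: {chip_total_w:.0f} W "
      f"({chip_total_w / RACK_POWER_W:.0%} of the rack budget)")
print(f"everything else: {RACK_POWER_W - chip_total_w:.0f} W")
# -> chips alone: 287 W (7% of the rack budget); everything else: 3713 W
```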

To put this all into context with a “human-scale” brain, the simulation would need to handle ~100 trillion synapses. If (and it’s a really big if) the power scaled linearly up to 100 trillion synapses, we’d be looking at ~400kW of power (most likely more, much more). In comparison, the human brain uses on the order of 20W, so we’re still looking at a difference of 4+ orders of magnitude in power. That’s not to sell short the jump forward in efficiency that has been made here, though. A 100 trillion synapse simulation required 96 Blue Gene/Q racks of the Lawrence Livermore National Lab Sequoia supercomputer, and it ran 1,500 times slower than real time. Sequoia is listed as using 7.8MW of power, so there appear to be some clear energy-efficiency benefits due to the architecture of the new TrueNorth chip. Nature still has not only an architectural advantage but a technology advantage too, using “wetware” rather than silicon. The new TrueNorth chips should open up some interesting and exciting new opportunities for research in cognitive computing.
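For completeness, here is the same “really big if” scaling estimate written out in Python; the linear-scaling assumption is mine (and, as noted above, almost certainly optimistic).

```python
import math

RACK_SYNAPSES = 1e12         # one 4,096-chip rack
RACK_POWER_W = 4000.0        # ~4 kW per rack
HUMAN_SYNAPSES = 100e12      # ~100 trillion synapses
BRAIN_POWER_W = 20.0         # rough human brain power budget

racks = HUMAN_SYNAPSES / RACK_SYNAPSES        # 100 racks
scaled_kw = racks * RACK_POWER_W / 1e3        # ~400 kW if scaling were linear
gap = scaled_kw * 1e3 / BRAIN_POWER_W         # ratio versus the brain's ~20 W
print(f"{racks:.0f} racks -> ~{scaled_kw:.0f} kW, "
      f"{gap:,.0f}x the brain ({math.log10(gap):.1f} orders of magnitude)")
# -> 100 racks -> ~400 kW, 20,000x the brain (4.3 orders of magnitude)
```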


