Pessimism, Optimism And Neuromorphic Computing

Whether machines solve problems like humans isn’t important.


As I’ve been researching this series on neuromorphic computing, I’ve learned that there are two views of the field. One, which I’ll call the “optimist” view, often held by computer scientists and electrical engineers, focuses on the possibilities: self-driving cars. Homes that can learn their owners’ needs. Automated medical assistants. The other, the “pessimist” view, often held by neuroscientists, points to the vast capabilities of biological brains and scoffs at the idea that an assembly of millions or even billions of memory elements can come close to “thinking” as humans understand it.

Both views are correct. In an interview, Praveen Raghavan, a distinguished member of IMEC’s technical staff in charge of neuromorphic computing activities, observed that IMEC’s focus is on research that leads to technology products. From that vantage point, even solutions the pessimists dismiss have value: many tasks that are trivial for humans can still be the basis for commercial products.

One of the best-known applications of neuromorphic computing, image recognition, illustrates both the evolution and the limits of neuromorphic algorithms. Semiconductor industry veterans may remember the skepticism surrounding the introduction of automated inspection and defect classification tools. In early trials, humans were better at recognizing damaged wire bonds or identifying particle types, but humans get bored or distracted over the course of a shift. Now, improved automated tools can compare “good” and “bad” structures, and even offer increasingly accurate defect classifications.
Similarly, facial recognition is an easy job for humans. We’ve been doing it all our lives, depending on mechanisms that our ancestors evolved over millennia. We can decide whether a person testing door knobs in an office building is a security guard or a burglar, whether a shopper is putting cosmetics into a purse or a shopping basket, whether the traveler picking up a suitcase is the same one who put it down five minutes ago.

In fact, humans are so good at these tasks that we find them boring. We’d rather watch the child and puppy on the playground, or the couple arguing on Concourse B. We also, unfortunately, have a well-documented tendency to draw different conclusions about members of marginalized groups. These limitations, the optimists argue, represent important jobs for computers that can learn, even — perhaps especially — in tasks that humans find trivial.

On the other hand, as pessimists are sure to point out, even these “simple” tasks remain difficult for computers. Monitoring a security camera requires both image recognition and the ability to track the movement of objects through time and space. Which movement patterns are “normal,” and which are “suspicious”? Can the system make that determination on its own, without a large library of human-labeled — and therefore human-biased — training data? Raghavan’s group recently demonstrated a chip that can compose simple, one-voice melodies, but cross-correlations between a piano player’s left and right hands, much less between instruments, still exceed its abilities.

Because such tasks are so trivial, pessimists also question how relevant these models are to the study of biological brains. Neuromorphic systems as presently constituted share only crude similarities with biological systems. Indeed, some prominent psychologists and neuroscientists argue that the computational model of intelligence is itself fundamentally flawed, leading to misleading analogies that bring more confusion than insight.

This series, written for a semiconductor industry publication, is inherently optimistic. Interesting process technologies and new system architectures may help solve important problems. Whether they do so “like humans would” is not necessarily a concern. It’s important to understand the pessimistic viewpoint, though, both to temper expectations for neuromorphic systems and to remind ourselves how much we still do not understand.
