Manufacturing Bits: May 10

Synaptic transistors; hyperdimensional computing; new AI theories.


Synaptic transistors
The University of Hong Kong and Northwestern University have developed an organic electrochemical synaptic transistor, a technology that could one day process and store information like the human brain.

Researchers have demonstrated that the transistor can mimic the synapses in the human brain, building on memories to learn over time.

This represents one of a multitude of efforts worldwide to realize true artificial intelligence (AI). Today, machine learning is the most common form of AI. A subset of AI, machine learning uses a neural network in a system, which crunches vast amounts of data and identifies patterns. Then, the network matches certain patterns and learns which of those attributes are important. Many of today’s machine learning systems run on traditional chip architectures such as GPUs, along with SRAM and other memory types.

Neural networks consist of multiple neurons and synapses. These are not biological structures. A neuron could consist of a memory cell with logic gates. The neurons are daisy-chained and connected by a link called a synapse. Basically, an artificial neural network (ANN) has three layers: input, hidden, and output. Each consists of neurons, which are connected by synapses.

Neural networks function by calculating matrix products and sums. “An ANN is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain,” according to Wikipedia. “Each connection, like the synapses in a biological brain, can transmit a signal to other neurons. An artificial neuron that receives a signal then processes it and can signal neurons connected to it. The ‘signal’ at a connection is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs. The connections are called edges. Neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Neurons may have a threshold such that a signal is sent only if the aggregate signal crosses that threshold. Typically, neurons are aggregated into layers. Different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer), to the last layer (the output layer), possibly after traversing the layers multiple times.”
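As a rough sketch of the “matrix products and sums” idea described above, the following minimal example shows a forward pass through a small input-hidden-output network. The layer sizes, weights and activation function are arbitrary illustrations, not parameters from any of the work described in this article.

```python
import numpy as np

def relu(x):
    # Non-linear activation applied to each neuron's weighted sum
    return np.maximum(0, x)

rng = np.random.default_rng(0)

# Arbitrary example sizes: 4 inputs, 8 hidden neurons, 3 outputs
W1 = rng.normal(size=(8, 4))   # input -> hidden weights ("edges")
b1 = np.zeros(8)
W2 = rng.normal(size=(3, 8))   # hidden -> output weights
b2 = np.zeros(3)

def forward(x):
    # Each layer computes a matrix product plus a bias, then a non-linearity
    hidden = relu(W1 @ x + b1)
    return W2 @ hidden + b2

x = rng.normal(size=4)   # a single input vector
print(forward(x))        # three output values
```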

Meanwhile, the industry also has been working on a non-traditional approach called neuromorphic computing, which is still several years away from being realized. Neuromorphic computing may also use a neural network. The difference is the industry is attempting to replicate the brain in silicon. The goal is to mimic the way that information is processed using precisely-timed pulses.
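To illustrate what computing with precisely-timed pulses can look like, here is a minimal leaky integrate-and-fire neuron, a common building block in neuromorphic models. The leak rate, weight and threshold below are made-up example values, not figures from any specific chip.

```python
# Minimal leaky integrate-and-fire neuron: input spikes charge a membrane
# potential that leaks over time; an output spike fires when the potential
# crosses a threshold. All constants are arbitrary illustration values.

LEAK = 0.9          # fraction of membrane potential retained each time step
THRESHOLD = 1.0     # firing threshold
WEIGHT = 0.4        # synaptic weight applied to each incoming spike

def run(input_spikes):
    potential = 0.0
    output = []
    for spike in input_spikes:
        potential = LEAK * potential + WEIGHT * spike
        if potential >= THRESHOLD:
            output.append(1)      # emit a pulse
            potential = 0.0       # reset after firing
        else:
            output.append(0)
    return output

# Three closely spaced input pulses trigger an output spike; the same number
# of pulses spread far apart does not, so timing itself carries information.
print(run([1, 1, 1, 0, 0, 0]))
print(run([1, 0, 0, 1, 0, 0, 1, 0]))
```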

Companies, government agencies, R&D organizations and universities are all working on neuromorphic computing. In one effort, the University of Hong Kong and Northwestern University have demonstrated the ability to mimic associative learning using an ion-trapping non-volatile synaptic organic electrochemical transistor (OECT).

An OECT resembles a traditional transistor with a source, gate and drain. But instead of a silicon-based channel, an OECT uses a plastic-like composite as the active channel. All told, the OECT has a write bias of less than 0.8 V and a retention time longer than 200 minutes without decoupling the write and read operations, according to the researchers’ paper in Nature Communications, a peer-reviewed scientific journal.

In operation, the OECT can trap ions. “In the brain, a synapse is a structure through which a neuron can transmit signals to another neuron, using small molecules called neurotransmitters,” according to Northwestern. “In the synaptic transistor, ions behave similarly to neurotransmitters, sending signals between terminals to form an artificial synapse. By retaining stored data from trapped ions, the transistor remembers previous activities, developing long-term plasticity.”

Researchers integrated pressure and light sensors into the circuit. They trained the circuit to associate the two unrelated physical inputs (pressure and light) with one another.
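The following is a loose software analogy of that associative-learning experiment, assuming a simple Hebbian-style rule in which a retained weight grows whenever the two inputs occur together. The threshold, learning rate and input scaling are invented for illustration and are not device parameters from the paper.

```python
# Hypothetical, simplified model of associative learning with a retained weight.
# Pressure acts as the "unconditioned" input that always drives the output;
# light starts out unable to drive the output on its own.

light_weight = 0.0      # analogous to a retained, non-volatile channel conductance
THRESHOLD = 1.0         # output fires when total drive crosses this (made up)
LEARNING_RATE = 0.25    # made-up potentiation step per paired presentation

def respond(pressure, light):
    drive = 1.2 * pressure + light_weight * light   # fixed drive for pressure
    return drive >= THRESHOLD

def train_pairing(steps=5):
    # Present pressure and light together; co-activation potentiates the weight,
    # and the weight is retained between trials (long-term plasticity).
    global light_weight
    for _ in range(steps):
        if respond(pressure=1, light=1):
            light_weight += LEARNING_RATE

print(respond(pressure=0, light=1))   # False: light alone does nothing at first
train_pairing()
print(respond(pressure=0, light=1))   # True: after pairing, light alone suffices
```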

“Although the modern computer is outstanding, the human brain can easily outperform it in some complex and unstructured tasks, such as pattern recognition, motor control and multisensory integration,” said Jonathan Rivnay, an assistant professor of biomedical engineering at Northwestern’s McCormick School of Engineering. “This is thanks to the plasticity of the synapse, which is the basic building block of the brain’s computational power. These synapses enable the brain to work in a highly parallel, fault tolerant and energy-efficient manner. In our work, we demonstrate an organic, plastic transistor that mimics key functions of a biological synapse.

“While our application is a proof of concept, our proposed circuit can be further extended to include more sensory inputs and integrated with other electronics to enable on-site, low-power computation,” Rivnay said. “Because it is compatible with biological environments, the device can directly interface with living tissue, which is critical for next-generation bioelectronics.”

Hyperdimensional computing
A group of researchers is taking a different approach to AI.

The University of California at San Diego, the University of California at Irvine, San Diego State University and DGIST recently presented a paper on a new hardware algorithm based on hyperdimensional (HD) computing, which is a brain-inspired computing model. The new algorithm, called HyperRec, uses data that is modeled with binary vectors in a high dimension.

HD computing addresses several problems. “In today’s world, technological advances are continually creating more data than what we can cope with. Much of data processing will need to run at least partly on devices at the edge of the internet, such as sensors and smartphones. However, running existing machine learning on such systems would drain their batteries and be too slow,” said Tajana Simunic Rosing, a professor in the Department of Computer Science and Engineering at the University of California at San Diego, in an e-mail exchange.

“Hyperdimensional (HD) computing is a class of learning algorithms that is motivated by the observation that the human brain operates on a lot of simple data in parallel. In contrast to today’s deep neural networks and other similar algorithms, systems that use HD computing to learn will be able to run at least 1,000x more efficiently, can be implemented directly in non-volatile memory, and are natively more secure as they use a large number of bits (~10,000) to encode and process data in parallel. Most importantly, such systems can explain how they made decisions, resulting in sensors and phones that can learn directly from the data they obtain without the need for the cloud at minimum impact to their battery lifetime,” Rosing said.

Researchers are developing hyperdimensional computing software and hardware infrastructure. “Hyperdimensional computing employs much larger data sizes. Instead of 32- or 64-bit computing, an HD approach would have data containing 10,000 bits or more. But to fully realize its potential, our researchers must continue to develop new coding and decoding strategies, fast algorithms, and make a push into meaningful hardware demonstrations,” said Todd Younkin, president and chief executive of the Semiconductor Research Corp. (SRC), in a recent interview.

The goal is to develop novel algorithms supporting key cognitive computations in high-dimensional space, including classification, clustering and regression. Another goal is to devise efficient HD computing on sensors and mobile devices, an effort that spans hardware accelerators such as GPUs, FPGAs and other chips.

Prototypes will be built and tested in smart homes and other applications. “Our current results indicate that HD classification and clustering is as accurate as the state-of-the-art algorithms, but is much faster, and is up to six orders of magnitude more energy efficient when accelerated in memory. HD computing is very robust to errors, providing equal accuracy in very high bit error rate regimes as when there are no errors,” Rosing said.

In one of its latest HD computing efforts, researchers devised a new hardware-friendly recommendation algorithm called HyperRec. “Unlike existing solutions, which leverages floating-point numbers for the data representation, in HyperRec, users and items are modeled with binary vectors in a high dimension,” said Yunhui Guo, a researcher from University of California at San Diego, in the paper. “In HyperRec, we encode users, items and ratings using hyperdimensional binary vectors, called hypervectors. We represent the relation between users and items via the binding and bundling operation in HD computing. The recommendation phase is based on the ‘nearest neighbor’ principle.”
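To make the binding, bundling and nearest-neighbor vocabulary concrete, here is a generic sketch of those operations on 10,000-bit binary hypervectors. It is a simplified illustration of HD computing in general, not the actual HyperRec algorithm, and the item and rating names are hypothetical.

```python
import numpy as np

D = 10_000   # hypervector dimension, per the description above
rng = np.random.default_rng(1)

def random_hv():
    # Random binary hypervector representing a user, item or rating
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bind(a, b):
    # Binding (XOR) associates two hypervectors, e.g. an item with a rating
    return np.bitwise_xor(a, b)

def bundle(hvs):
    # Bundling (bitwise majority) superimposes several bound pairs into one vector
    return (np.sum(hvs, axis=0) > len(hvs) / 2).astype(np.uint8)

def hamming(a, b):
    # Nearest-neighbor search compares hypervectors by Hamming distance
    return int(np.count_nonzero(a != b))

# Hypothetical items and ratings, each encoded as a random hypervector
items = {name: random_hv() for name in ["item_a", "item_b", "item_c"]}
ratings = {r: random_hv() for r in range(1, 6)}

# A user profile bundled from three (item, rating) bindings
user = bundle([bind(items["item_a"], ratings[5]),
               bind(items["item_b"], ratings[4]),
               bind(items["item_c"], ratings[2])])

# The profile is much closer to a binding it contains than to one it does not
print(hamming(user, bind(items["item_a"], ratings[5])))   # roughly 0.25 * D
print(hamming(user, bind(items["item_a"], ratings[1])))   # roughly 0.5 * D
```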

Researchers also demonstrated how to accelerate HyperRec on parallel computing platforms. “The results show that our FPGA implementation is on average 67.0X faster and 6.9X higher energy efficiency as compared to CPU. Our GPU implementation further achieves on average 3.1X speedup compared to FPGA,” according to the paper.

New AI theories
Researchers are developing other AI theories as well. For example, Geoffrey Hinton recently presented a new AI theory called GLOM.

Meanwhile, Numenta has devised what it calls the Thousand Brains Theory. Numenta’s site offers a discussion comparing the two efforts, GLOM versus the Thousand Brains Theory. In another blog post, Numenta also explains why artificial neural networks struggle to learn continually and consequently suffer from catastrophic forgetting.


