
Power/Performance Bits: Jan. 26


Neural networks on MCUs
Researchers at MIT are working to bring neural networks to Internet of Things devices. The team's MCUNet is a system that designs compact neural networks for deep learning on microcontrollers with limited memory and processing power. MCUNet is made up of two components. One is TinyEngine, an inference engine that directs resource management. TinyEngine is optimized...
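
Memory is the binding constraint on microcontrollers: the weights have to fit in flash and the peak activation tensor in SRAM. Below is a minimal sketch of the kind of memory-budget check a search system like MCUNet must apply to candidate networks; the function names, layer shapes, and budget values are illustrative assumptions, not MCUNet's actual code.

```python
# Hypothetical sketch of an MCU memory-budget check for candidate networks.
# All shapes and budgets are illustrative, not MCUNet's real values.

def conv_footprint(h, w, c_in, c_out, k):
    """Weight bytes (int8) and output activation bytes for one conv layer."""
    weights = k * k * c_in * c_out      # int8: 1 byte per weight
    activations = h * w * c_out        # int8 output feature map
    return weights, activations

def fits_mcu(layers, flash_budget=1_000_000, sram_budget=320_000):
    """Reject a candidate whose weights exceed flash or whose peak
    activation memory exceeds SRAM -- the tight constraint on MCUs."""
    total_weights = 0
    peak_activations = 0
    for (h, w, c_in, c_out, k) in layers:
        wbytes, abytes = conv_footprint(h, w, c_in, c_out, k)
        total_weights += wbytes
        peak_activations = max(peak_activations, abytes)
    return total_weights <= flash_budget and peak_activations <= sram_budget

# Example: a three-layer candidate on a 96x96 RGB input.
candidate = [(96, 96, 3, 16, 3), (48, 48, 16, 32, 3), (24, 24, 32, 64, 3)]
print(fits_mcu(candidate))  # True -> keep this candidate in the search
```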

Power/Performance Bits: Dec. 7


Logic-in-memory with MoS2
Engineers at École Polytechnique Fédérale de Lausanne (EPFL) built a logic-in-memory device using molybdenum disulfide (MoS2) as the channel material. MoS2 is a three-atom-thick 2D material and an excellent semiconductor. The new chip is based on floating-gate field-effect transistors (FGFETs) that can hold electric charges for long periods. MoS2 is particularly...
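
As a rough intuition for how one device can both store a bit and compute, here is a toy model, not EPFL's design, of a floating-gate FET whose trapped charge shifts its threshold voltage. With a read voltage chosen between the two thresholds, the device's conduction is the AND of the stored bit and the applied input.

```python
# Illustrative toy model of a floating-gate FET: stored charge shifts the
# threshold voltage, so one device both holds a bit and gates a signal.
# Voltage values are made up for illustration.

class ToyFGFET:
    def __init__(self, vth_erased=0.5, vth_programmed=1.5):
        self.vth_erased = vth_erased          # threshold with no stored charge
        self.vth_programmed = vth_programmed  # threshold with trapped charge
        self.programmed = False               # erased cell = stored '1'

    def program(self):   # trap charge on the floating gate (store '0')
        self.programmed = True

    def erase(self):     # remove the trapped charge (store '1')
        self.programmed = False

    def conducts(self, v_gate):
        """The device turns on only if the gate voltage exceeds the
        charge-dependent threshold -- read and compute in one step."""
        vth = self.vth_programmed if self.programmed else self.vth_erased
        return v_gate > vth

# With a 1.0 V read pulse (between the two thresholds), conduction equals
# (stored bit AND input pulse): in-memory logic on the stored value.
cell = ToyFGFET()
print(cell.conducts(1.0))   # True: erased cell, stored bit is 1
cell.program()
print(cell.conducts(1.0))   # False: programmed cell, stored bit is 0
```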

Power/Performance Bits: Oct. 27


Room-temp superconductivity
Researchers at the University of Rochester, University of Nevada Las Vegas, and Intel created a material with superconducting properties at room temperature, the first time this has been observed. The researchers combined hydrogen with carbon and sulfur to photochemically synthesize simple organic-derived carbonaceous sulfur hydride in a diamond anvil cell, which...

Neural Networks Without Matrix Math


The challenge of speeding up AI systems typically means adding more processing elements and pruning the algorithms, but those approaches aren't the only path forward. Almost all commercial machine learning applications depend on artificial neural networks, which are trained using large datasets with a back-propagation algorithm. The network first analyzes a training example, typically assign...
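
For readers who want the training loop made concrete, here is a minimal back-propagation sketch in NumPy on a toy XOR dataset; the network size, learning rate, and iteration count are illustrative choices, not drawn from the article.

```python
import numpy as np

# Minimal back-propagation sketch: a one-hidden-layer network trained on
# the XOR problem. All hyperparameters are illustrative.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: analyze the training examples.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: propagate the output error toward the input,
    # computing each layer's gradient along the way.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates.
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
```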

Power/Performance Bits: June 2


Neuromorphic memristor
Researchers at the University of Massachusetts Amherst used protein nanowires to create neuromorphic memristors capable of running at extremely low voltage. A challenge in neuromorphic computing is mimicking the low voltage at which the brain operates: it sends signals between neurons at around 80 millivolts. Jun Yao, an electrical and computer engineering researcher at...
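
As a rough sketch of what sub-100-millivolt operation means for a synaptic device, here is a toy memristor model, not the UMass Amherst device physics, whose conductance updates only when a pulse crosses a switching threshold set below the ~80 millivolt scale of neuronal signals.

```python
# Toy memristor model (illustrative only): conductance changes when the
# applied voltage magnitude crosses a switching threshold. Setting that
# threshold below ~80 mV captures the point that the device can respond
# to brain-scale signal levels.

class ToyMemristor:
    def __init__(self, g=1e-6, v_threshold=0.06, dg=2e-7):
        self.g = g                      # conductance (siemens)
        self.v_threshold = v_threshold  # switching threshold (volts)
        self.dg = dg                    # conductance step per pulse

    def pulse(self, v):
        """Potentiate on positive pulses above threshold, depress on
        negative ones -- a crude stand-in for synaptic weight updates."""
        if v > self.v_threshold:
            self.g += self.dg
        elif v < -self.v_threshold:
            self.g = max(self.g - self.dg, 0.0)
        return self.g

m = ToyMemristor()
m.pulse(0.08)    # an 80 mV pulse, comparable to a neuronal signal
print(m.g)       # conductance increased: the 'synapse' strengthened
```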

The Good And Bad Of 2D Materials


Despite years of warnings about reaching the limits of silicon, particularly at leading-edge process nodes where electron mobility is limited, there still is no obvious replacement. Silicon’s decades-long dominance of the integrated circuit industry is only partly due to the material’s electronic properties. Germanium, gallium arsenide, and many other semiconductors offer superior mobility...

Power/Performance Bits: Dec. 26


2nm memristors
Researchers at the University of Massachusetts Amherst and Brookhaven National Laboratory built memristor crossbar arrays with a 2nm feature size and a single-layer density up to 4.5 terabits per square inch. The team says the arrays were built with foundry-compatible fabrication technologies. "This work will lead to high-density memristor arrays with low power consumption for...
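
A quick back-of-the-envelope check, assuming one bit per crosspoint and a square cell, shows what the reported density implies for the cell pitch:

```python
import math

# Sanity-check arithmetic on the reported figures: what crosspoint pitch
# does 4.5 Tbit/in^2 imply if each crossbar junction stores one bit?
# (Assumes a square cell; purely illustrative.)

bits_per_in2 = 4.5e12
inch_nm = 2.54e7                      # 1 inch in nanometers
area_nm2 = inch_nm ** 2               # one square inch in nm^2

cell_area = area_nm2 / bits_per_in2   # nm^2 per stored bit
pitch = math.sqrt(cell_area)          # side of a square cell

print(f"{cell_area:.0f} nm^2 per bit, ~{pitch:.0f} nm pitch")
# ~143 nm^2 per bit, ~12 nm pitch -- consistent with 2 nm wires
# separated by spacing several times the wire width.
```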

AI Architectures Must Change


Using existing architectures for solving machine learning and artificial intelligence problems is becoming impractical. The total energy consumed by AI is rising significantly, and CPUs and GPUs increasingly look like the wrong tools for the job. Several roundtables have concluded that the best opportunity for significant change comes when there is no legacy IP. Most designs have evolved...

Integrating Memristors For Neuromorphic Computing


Much of the current research on neuromorphic computing focuses on the use of non-volatile memory arrays as a compute-in-memory component for artificial neural networks (ANNs). By using Ohm’s Law to apply stored weights to incoming signals, and Kirchhoff’s Laws to sum up the results, memristor arrays can accelerate the many multiply-accumulate steps in ANN algorithms. ANNs are being deployed...
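
In matrix terms, the crossbar computes a vector-matrix product in a single analog step. A minimal NumPy sketch of that multiply-accumulate, with illustrative conductance and voltage values, follows:

```python
import numpy as np

# Sketch of the analog multiply-accumulate described above, in matrix
# form: Ohm's Law multiplies each input voltage by a stored conductance
# (weight), and Kirchhoff's current law sums the products along each
# column of the crossbar. Values are illustrative.

G = np.array([[1.0, 0.2],      # conductances (weights), one row per input line
              [0.5, 0.8],
              [0.1, 0.9]])     # units: arbitrary (e.g., microsiemens)
V = np.array([0.3, 0.1, 0.2])  # input voltages applied to the rows

# Each column current: I_j = sum_i G[i, j] * V[i]  (Ohm + Kirchhoff)
I = V @ G
print(I)  # [0.37, 0.32]: the vector-matrix product, computed in one step
```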

What’s Next In Neuromorphic Computing


To integrate devices into functioning systems, it's necessary to consider what those systems are actually supposed to do. Regardless of the application, machine learning tasks involve a training phase and an inference phase. In the training phase, the system is presented with a large dataset and learns how to "correctly" analyze it. In supervised learning, the data...
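
A minimal sketch of the two phases, using a perceptron on a toy labeled dataset, may help fix the distinction; the data, learning rule, and hyperparameters are illustrative, not from the article.

```python
import numpy as np

# Supervised learning in two phases: train on labeled data, then run
# inference with frozen weights. All details here are illustrative.

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
labels = (X[:, 0] + X[:, 1] > 0).astype(float)   # the "correct" answers

w = np.zeros(2)
b = 0.0

# Training phase: present the labeled dataset, adjust weights on mistakes.
for _ in range(20):
    for x, t in zip(X, labels):
        pred = float(w @ x + b > 0)
        w += 0.1 * (t - pred) * x
        b += 0.1 * (t - pred)

# Inference phase: the trained weights are frozen and applied to new data.
x_new = np.array([0.4, -0.1])
print(int(w @ x_new + b > 0))  # predicted class for the unseen input
```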
