Power/Performance Bits: Dec. 14

Improved digital sensing; speedy reservoir computing; closer is cooler.

Improved digital sensing
Researchers from Imperial College London and Technical University of Munich propose a technique to improve the capability of many different types of sensors. The method addresses voltage limits in analog-to-digital converters and the saturation that results in poor quality when an incoming signal exceeds those limits.

“Our new technique lets us capture a fuller range of stimuli in countless examples of digital technology, with applications ranging from everyday photography and medical scanners to extra-terrestrial exploration, bioengineering, and monitoring natural disasters. The hardware-software co-design approach opens up new scientific frontiers for further research,” said Ayush Bhandari from Imperial’s Department of Electrical and Electronic Engineering.

The researchers built ADCs that use ‘modulo’ sampling to test whether recording a different quantity, the modulo voltage, could help sensors process a greater range of information. The modulo refers to the remainder produced when the voltage of a signal is divided by the ADC’s maximum voltage.

The prototype includes an algorithm that triggers the ADC to switch to modulo voltage once the stimulus limit is reached and ‘folds’ these out-of-range signals into smaller ones. Using this, the researchers were able to convert the modulo measurements back into conventional digital signals that can be read by existing sensors.
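
For illustration only, here is a minimal Python sketch of the folding idea under simplifying assumptions: a made-up ADC limit, and a signal sampled densely enough that consecutive samples differ by less than that limit. The simple difference-based unwrapping step is a stand-in, not the authors’ recovery algorithm.

```python
import numpy as np

# Toy sketch of modulo ("folded") sampling. Assumes an ADC range of
# [-lam, lam) and a signal oversampled enough that consecutive samples
# differ by less than lam. Names and the unwrapping strategy are
# illustrative, not the authors' exact method.

lam = 1.0                                  # hypothetical ADC voltage limit
t = np.linspace(0, 1, 2000)
x = 3.5 * np.sin(2 * np.pi * 3 * t)        # true signal far exceeds the limit

# Conventional ADC: saturates (clips) at the limit
clipped = np.clip(x, -lam, lam)

# Modulo ADC: folds the signal back into range instead of clipping
folded = np.mod(x + lam, 2 * lam) - lam

# Simple reconstruction: re-wrap the sample-to-sample differences of the
# folded signal, then integrate them back up to recover the full waveform
d = np.diff(folded)
d_unwrapped = np.mod(d + lam, 2 * lam) - lam
recovered = np.concatenate(([folded[0]], folded[0] + np.cumsum(d_unwrapped)))

print("max clipping error :", np.max(np.abs(clipped - x)))
print("max recovery error :", np.max(np.abs(recovered - x)))
```

In this toy setup, the conventional ADC clips the out-of-range sine wave, while the folded samples can be unwrapped back into the full signal even though each individual sample stays within the voltage limit.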

“Today, we are surrounded by digital sensors which form a crucial part of the digital revolution. All digital sensors have maximum and minimum limits to what they can detect, but we’ve found a way to breach the upper limit with no theoretical maximum: the sky is the limit,” said Thomas Poskitt, an undergraduate at Imperial.

“By taking a modulo of the signal, we keep the voltage within the limit and reconstruct the full signal, even without knowing how many times the voltage has exceeded the limit. This can unlock a high dynamic range for any sensor which could, for example, allow cameras to see what humans cannot.”

“By combining new algorithms and new hardware we have fixed a common problem – one that could mean our digital sensors perceive what humans can, and beyond,” added Bhandari.

Professor Felix Krahmer from TUM said, “The key feature of our approach is that if a signal takes the voltage past the threshold, the hardware switches the signal from voltage to modulo, essentially resetting itself to let in a wider range of signals. What’s new about the current paper is that it presents the first unified approach with both a hardware prototype adapted to computational features of the reconstruction method and a recovery scheme successfully addressing the challenges of the circuit implementation.”

Speedy reservoir computing
Researchers from The Ohio State University and Clarkson University found a quicker and more efficient way to predict the behavior of complex systems by improving an artificial neural network model called reservoir computing.

Originally developed in the early 2000s, reservoir computing can be used to forecast dynamical systems, such as weather, where one small change can have large effects over time. The “butterfly effect,” in which a butterfly flapping its wings causes a storm later, is one well-known example.

In reservoir computing, data on a dynamical system is fed into a “reservoir” of randomly connected artificial neurons. The network produces useful output that the scientists can interpret and feed back into the network, building a more and more accurate forecast of how the system will evolve in the future.
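
To make that loop concrete, the following is a minimal echo-state-network style sketch in Python (NumPy) with made-up sizes and parameters: a fixed, randomly connected reservoir is driven by the input, only the linear readout is trained, and forecasts come from feeding the network’s own predictions back in as input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes/parameters for a toy echo state network
N_res, N_in = 300, 1                  # reservoir neurons, input dimension
leak, spectral_radius = 0.3, 0.9

W_in = rng.uniform(-0.5, 0.5, (N_res, N_in))
W = rng.normal(size=(N_res, N_res))
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # scale for stability

def run_reservoir(u_seq, r0=None):
    """Drive the fixed random reservoir with an input sequence."""
    r = np.zeros(N_res) if r0 is None else r0
    states = []
    for u in u_seq:
        r = (1 - leak) * r + leak * np.tanh(W @ r + W_in @ np.atleast_1d(u))
        states.append(r.copy())
    return np.array(states)

# Train only the linear readout (ridge regression) to predict the next value
u = np.sin(0.1 * np.arange(3000))     # stand-in for a dynamical signal
R = run_reservoir(u[:-1])
warmup = 100                          # discard transient "warm-up" states
X, y = R[warmup:], u[1 + warmup:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N_res), X.T @ y)

# Forecast by feeding predictions back in as input
r, x, pred = R[-1], u[-1], []
for _ in range(200):
    r = (1 - leak) * r + leak * np.tanh(W @ r + W_in @ np.atleast_1d(x))
    x = r @ W_out
    pred.append(x)
print("first few forecast values:", np.round(pred[:5], 3))
```

Note the warm-up step: a stretch of early reservoir states is discarded before training. That overhead is what the next-generation approach described below largely avoids.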

For more complex situations and more accurate forecasts, more artificial neurons and more computing time and resources are needed. The team aimed to simplify the system. “We had mathematicians look at these networks and ask, ‘To what extent are all these pieces in the machinery really needed?’” said Daniel Gauthier, professor of physics at The Ohio State University.

They tested their concept on a forecasting task involving a model weather system developed by Edward Lorenz. In one relatively simple simulation done on a desktop computer, the new system was 33 to 163 times faster than the current model.

But when the aim was high accuracy in the forecast, the next-generation reservoir computing was about 1 million times faster. And the next-generation computing achieved the same accuracy with the equivalent of just 28 neurons, compared to the 4,000 needed by the current-generation model, Gauthier said.

“We can perform very complex information processing tasks in a fraction of the time using much less computer resources compared to what reservoir computing can currently do,” said Gauthier. “And reservoir computing was already a significant improvement on what was previously possible.”

A big factor in the improved speed is the need for less training data and time.

“For our next-generation reservoir computing, there is almost no warming time needed,” Gauthier said. “Currently, scientists have to put in 1,000 or 10,000 data points or more to warm it up. And that’s all data that is lost, that is not needed for the actual work. We only have to put in one or two or three data points.”

On the forecasting side, less data is needed as well. In the Lorenz forecasting task, the researchers could get the same results using 400 data points as the current generation produced using 5,000 data points or more, depending on the accuracy desired.
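
As a rough illustration of the next-generation idea (a short time-delay embedding plus polynomial features feeding a trained linear readout, in place of a large random reservoir), the Python sketch below learns one-step predictions for a Lorenz-style system; the parameters are illustrative and not taken from the paper. With two delayed three-variable states, the feature vector has 1 + 6 + 21 = 28 entries, in line with the “28 neurons” figure quoted above.

```python
import numpy as np
from itertools import combinations_with_replacement

def lorenz(n, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """Integrate the Lorenz '63 system with simple Euler steps."""
    x = np.array([1.0, 1.0, 1.0])
    out = np.empty((n, 3))
    for i in range(n):
        dx = np.array([s * (x[1] - x[0]),
                       x[0] * (r - x[2]) - x[1],
                       x[0] * x[1] - b * x[2]])
        x = x + dt * dx
        out[i] = x
    return out

def features(data, k=2):
    """Concatenate k delayed states, then append constant and quadratic terms."""
    lin = np.hstack([data[i:len(data) - k + 1 + i] for i in range(k)])
    quad = np.array([[a * b for a, b in combinations_with_replacement(row, 2)]
                     for row in lin])
    return np.hstack([np.ones((len(lin), 1)), lin, quad])

data = lorenz(4000)
k = 2
Phi = features(data, k)                  # 28 features per time step
target = data[k:] - data[k - 1:-1]       # learn the one-step increment
Phi = Phi[:-1]

# Ridge-regression readout: the only trained part of the model
W_out = np.linalg.solve(Phi.T @ Phi + 1e-4 * np.eye(Phi.shape[1]),
                        Phi.T @ target)

# One-step-ahead check on the training data
pred = data[k - 1:-1] + Phi @ W_out
print("mean one-step error:", np.mean(np.abs(pred - data[k:])))
```

Because the only trained component is a small ridge regression over a handful of delayed states, there is essentially no warm-up period and far less training data is needed, which is the behavior Gauthier describes.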

The team plans to extend this work to tackle other difficult computing problems, such as forecasting fluid dynamics.

Closer is cooler
Researchers from the University of Colorado Boulder investigated why some ultra-small heat sources cool down faster when they are packed closer together.

“Often, heat is a challenging consideration in designing electronics. You build a device then discover that it’s heating up faster than desired,” said Joshua Knobloch, postdoctoral research associate at JILA, a joint research institute between CU Boulder and the National Institute of Standards and Technology (NIST). “Our goal is to understand the fundamental physics involved so we can engineer future devices to efficiently manage the flow of heat.”

In previous experiments, physicists found that nano-scale bars of metal on a silicon base cooled down more quickly after being heated when they were placed close together than when they were spaced further apart. “They behaved very counterintuitively,” Knobloch said. “These nano-scale heat sources do not usually dissipate heat efficiently. But if you pack them close together, they cool down much more quickly.”

To find out why, the team used computer simulations to model a series of silicon bars, laid side by side like the slats in a train track, and heated them up. The level of detail in the simulations allowed them to observe the behavior of each atom in the model. “We were really pushing the limits of memory of the Summit supercomputer at CU Boulder,” Knobloch added.

The researchers found that when the silicon bars were spaced apart, heat escaped in a predictable way, leaking from the bars to the material below and dissipating in every direction.

But when the bars were placed close together, strange behavior emerged. As the heat scattered between them, it effectively forced that energy to flow more intensely away from the sources. The team dubbed this phenomenon “directional thermal channeling.”

“This phenomenon increases the transport of heat down into the substrate and away from the heat sources,” Knobloch said.

“Heat flow involves very complex processes, making it hard to control,” Knobloch added. “But if we can understand how phonons behave on the small scale, then we can tailor their transport, allowing us to build more efficient devices.”


