Research Bits: April 19

Processor power prediction; processing-in-memory; fast-charging challenges.


Processor power prediction
Researchers from Duke University, Arm Research, and Texas A&M University developed an AI method for predicting the power consumption of a processor, returning results more than a trillion times per second while consuming very little power itself.

“This is an intensively studied problem that has traditionally relied on extra circuitry to address,” said Zhiyao Xie, a PhD candidate at Duke. “But our approach runs directly on the microprocessor in the background, which opens many new opportunities. I think that’s why people are excited about it.”

The approach, called APOLLO, uses an AI algorithm to identify and select just 100 of a processor’s millions of signals — those that correlate most closely with its power consumption. It then builds a power model from those 100 signals and monitors them to estimate the entire chip’s power consumption in real time.
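The general idea — pick a small set of signals whose weighted sum tracks power, then evaluate only that cheap weighted sum at runtime — can be illustrated with a toy greedy-selection sketch. This is not APOLLO’s actual training algorithm (which the authors build around a pruning-based technique); the data, signal count, and selection loop below are all illustrative stand-ins.

```python
import numpy as np

# Hypothetical toy data: rows are clock cycles, columns are per-signal
# toggle activity (0/1); y is measured per-cycle power. Real designs
# have millions of signals; APOLLO keeps about 100 of them.
rng = np.random.default_rng(0)
n_cycles, n_signals, k = 2000, 500, 20
X = rng.integers(0, 2, size=(n_cycles, n_signals)).astype(float)
true_w = np.zeros(n_signals)
true_idx = rng.choice(n_signals, size=k, replace=False)
true_w[true_idx] = rng.uniform(0.5, 2.0, size=k)
y = X @ true_w + rng.normal(0, 0.1, size=n_cycles)

# Greedy proxy-signal selection: repeatedly pick the signal most
# correlated with the current residual, then refit a linear power
# model on the signals chosen so far (a stand-in for APOLLO's pruning).
selected = []
residual = y - y.mean()
for _ in range(k):
    corr = np.abs((X - X.mean(0)).T @ residual)
    corr[selected] = -np.inf          # don't pick a signal twice
    selected.append(int(np.argmax(corr)))
    w, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
    residual = y - X[:, selected] @ w

# The on-chip monitor then only computes this small weighted sum each
# cycle, which is why the estimate is cheap enough to run continuously.
power_estimate = X[:, selected] @ w
```

Because the runtime model is just a dot product over a few dozen signals, it costs almost nothing in silicon — which is the property that lets the estimator run in the background on the processor itself.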

“APOLLO approaches an ideal power estimation algorithm that is both accurate and fast and can easily be built into a processing core at a low power cost,” Xie said. “And because it can be used in any type of processing unit, it could become a common component in future chip design.”

In addition to monitoring power consumption, the researchers said it could be used as a tool to optimize processor designs.

“After the AI selects its 100 signals, you can look at the algorithm and see what they are,” Xie said. “A lot of the selections make intuitive sense, but even if they don’t, they can provide feedback to designers by informing them which processes are most strongly correlated with power consumption and performance.”

APOLLO has been prototyped on Arm Neoverse N1 and Cortex-A77 microprocessors.

Less analog-digital conversion for processing-in-memory
Researchers from Washington University in St. Louis, Shanghai Jiao Tong University, Chinese Academy of Sciences, and Chinese University of Hong Kong designed a new processing-in-memory (PIM) circuit that uses neural approximators to reduce the amount of analog information that needs to be converted to digital.

“Computing challenges today are data-intensive,” said Xuan “Silvia” Zhang, associate professor in the Department of Electrical & Systems Engineering at Washington University in St. Louis. “We need to crunch tons of data, which creates a performance bottleneck at the interface of the processor and the memory.”

The team created a resistive random-access memory PIM, or RRAM-PIM. “In resistive memory, you do not have to translate to digital, or binary. You can remain in the analog domain. If you need to add, you connect two currents,” Zhang said. “If you need to multiply, you can tweak the value of the resistor.”
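Ohm’s law and Kirchhoff’s current law are what make this work: a programmed conductance G with an applied voltage V yields a current I = G·V (a multiplication), and currents meeting on a shared wire add. A toy numerical sketch of a crossbar dot product — with made-up conductance and voltage values, not the paper’s circuit — looks like this:

```python
import numpy as np

# A crossbar stores a weight matrix as conductances G (in siemens);
# inputs are applied as voltages V on the row wires. Values here are
# hypothetical and chosen only for illustration.
G = np.array([[1e-6, 2e-6],
              [3e-6, 4e-6]])   # one conductance per crosspoint
V = np.array([0.5, 1.0])       # input voltages on the two rows

# Each column wire collects the currents I = G * V from its
# crosspoints (Kirchhoff's current law), so every column outputs
# an analog dot product of the inputs with that column's weights.
I_col = V @ G                  # column currents, in amperes
```

Everything up to this point stays analog — the expensive step, as the researchers note, is digitizing those column currents.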

However, RRAM-PIM hits a bottleneck when the information does have to be converted to digital. To reduce this, the team added a neural approximator. “A neural approximator is built upon a neural network that can approximate arbitrary functions,” Zhang said.

In the RRAM-PIM architecture, once the resistors in a crossbar array have done their calculations, the answers must be translated into a digital format. In practice, that means adding up the results from each column of resistors in the circuit: each column produces a partial result, and each of those must be converted to digital, an energy-intensive operation.

The neural approximator makes the process more efficient by performing multiple calculations down columns, across columns, or in whichever way is most efficient. This requires fewer analog-to-digital converters (ADCs) and increases computing efficiency, the researchers said.

“No matter how many analog partial sums are generated by the RRAM crossbar array columns — 18 or 64 or 128 — we need just one analog-to-digital conversion,” said Weidong Cao, a postdoctoral research associate at Washington University in St. Louis. “We used a hardware implementation to achieve the theoretical lower bound.”
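The saving can be sketched numerically: instead of quantizing every column’s partial sum separately, combine the partial sums in the analog domain and quantize once. In the sketch below a plain analog sum stands in for the paper’s learned neural combiner, and the idealized ADC model is an assumption, not the team’s circuit.

```python
import numpy as np

# Hypothetical example: a 64-column crossbar produces 64 analog
# partial sums (arbitrary values in [0, 1] volts here).
rng = np.random.default_rng(1)
partial_sums = rng.uniform(0.0, 1.0, size=64)

def adc(x, bits=8, full_scale=64.0):
    """Idealized ADC: quantize an analog value to an integer code."""
    levels = 2 ** bits - 1
    return int(round(x / full_scale * levels))

# Conventional readout: one conversion per column (64 ADC operations),
# then the codes are summed digitally.
digital = sum(adc(p, full_scale=1.0) for p in partial_sums)

# Approximator-style readout: combine in the analog domain first
# (a plain sum here, standing in for the learned combiner), then
# perform a single conversion.
combined = partial_sums.sum()
digital_once = adc(combined)   # 1 conversion instead of 64
```

Since each conversion costs energy, collapsing 64 conversions into one is where the claimed efficiency gain comes from, at the price of some approximation error in the combined readout.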

The researchers say the approach could have big benefits for large-scale PIM computers.

Fast-charging challenges
Researchers from Argonne National Laboratory and the University of Illinois at Urbana-Champaign identified some of the problems that occur when batteries charge too quickly, which shorten battery lifetimes in applications such as fast-charging electric vehicles.

Lithium-ion batteries commonly use an anode made of graphite. The process of lithium ions inserting themselves into the anode is called intercalation. When a battery is charged too quickly, instead of intercalating, the lithium ions tend to aggregate on the anode surface, creating a plating effect.

“Plating is one of the main causes of impaired battery performance during fast charging,” said Daniel Abraham, an Argonne battery scientist. “As we charged the battery quickly, we found that in addition to the plating on the anode surface there was a buildup of reaction products inside the electrode pores.” As a result, the anode itself undergoes some degree of irreversible expansion, impairing battery performance.

The researchers used scanning electron nanodiffraction to observe the battery. They found that at the atomic level, the lattice of graphite atoms at the particle edges becomes distorted because of the repeated fast charging, hindering the intercalation process. “Basically, what we see is that the atomic network in the graphite becomes warped, and this prevents lithium ions from finding their ‘home’ inside the particles — instead, they plate on the particles,” Abraham said.

“The faster we charge our battery, the more atomically disordered the anode will become, which will ultimately prevent the lithium ions from being able to move back and forth,” Abraham said. “The key is to find ways to either prevent this loss of organization or to somehow modify the graphite particles so that the lithium ions can intercalate more efficiently.”
