Power/Performance Bits: April 16

Faster CNN training; efficient flash for data centers; static negative capacitor.

Faster CNN training
Researchers at North Carolina State University developed a technique that reduces training time for deep learning networks by more than 60% without sacrificing accuracy.

Convolutional neural networks (CNNs) divide images into blocks, which are then run through a series of computational filters. In training, this must be done for each of the thousands to millions of images in a data set, and the entire pass may itself be repeated hundreds of times to fine-tune the network.
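
To make the unit of work concrete, here is a minimal sketch of slicing an image into the kind of fixed-size blocks described above; the function name and chunk size are illustrative, not from the paper:

```python
import numpy as np

def image_to_chunks(image: np.ndarray, chunk: int = 8) -> np.ndarray:
    """Slice a (H, W) image into non-overlapping chunk x chunk blocks.

    Returns one flattened block per row -- the unit that a convolutional
    filter (or a reuse scheme) would operate on. Illustrative only.
    """
    h, w = image.shape
    blocks = [
        image[i:i + chunk, j:j + chunk].ravel()
        for i in range(0, h - chunk + 1, chunk)
        for j in range(0, w - chunk + 1, chunk)
    ]
    return np.stack(blocks)
```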

“One of the biggest challenges facing the development of new AI tools is the amount of time and computing power it takes to train deep learning networks to identify and respond to the data patterns that are relevant to their applications,” said Xipeng Shen, a professor of computer science at NC State. “We’ve come up with a way to expedite that process, which we call Adaptive Deep Reuse. We have demonstrated that it can reduce training times by up to 69% without accuracy loss.”

The team’s idea was to reduce the training time by identifying blocks that are similar to one another, such as a patch of blue sky, which might repeat in the same image or in other images in the data set. By recognizing these similar data chunks, a deep learning network could apply filters to one chunk of data and apply the results to all of the similar chunks of data in the same set, saving computing power.

“We were not only able to demonstrate that these similarities exist, but that we can find these similarities for intermediate results at every step of the process,” said Lin Ning, a Ph.D. student at NC State. “And we were able to maximize this efficiency by applying a method called locality sensitive hashing.”
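
The article doesn't give the hashing details, but a common way to realize locality sensitive hashing for this kind of similarity search is random hyperplane projection (SimHash); the sketch below, with invented parameter names, groups flattened chunks whose signatures collide into shared buckets:

```python
import numpy as np

def lsh_signatures(chunks: np.ndarray, n_bits: int = 16, seed: int = 0) -> np.ndarray:
    """Compute a SimHash-style signature for each flattened chunk.

    Each random hyperplane contributes one sign bit; chunks that land
    in the same bucket are treated as similar enough to share work.
    """
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((chunks.shape[1], n_bits))
    bits = (chunks @ planes) > 0                # (n_chunks, n_bits) sign bits
    return bits.astype(np.int64) @ (1 << np.arange(n_bits))  # pack into bucket ids

def group_similar_chunks(chunks: np.ndarray, n_bits: int = 16) -> dict:
    """Map each bucket id to the indices of the chunks it contains."""
    buckets: dict = {}
    for idx, sig in enumerate(lsh_signatures(chunks, n_bits)):
        buckets.setdefault(int(sig), []).append(idx)
    return buckets
```

A filter then only needs to be evaluated once per bucket, with the result reused for every other chunk in that bucket.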

To make the most of this method, the researchers began with relatively large chunks of data and a relatively low threshold for determining similarity. In repeated training cycles with the same data set, the blocks get smaller and the similarity threshold more stringent, improving the deep learning network’s accuracy. To automate this, the researchers designed an adaptive algorithm that implements these incremental changes during the training process.
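
The exact schedule isn't given in the article, but the coarse-to-fine idea could look something like this, with larger chunks and a looser similarity test (shorter hashes) early in training, tightening as accuracy matters more; every number here is invented for illustration:

```python
def adaptive_schedule(epoch: int) -> dict:
    """Illustrative coarse-to-fine schedule for Adaptive Deep Reuse.

    Early: big chunks, few hash bits (loose matching, maximum reuse).
    Late: small chunks, many hash bits (strict matching, full accuracy).
    """
    if epoch < 10:
        return {"chunk_size": 8, "n_bits": 8}
    if epoch < 30:
        return {"chunk_size": 4, "n_bits": 16}
    return {"chunk_size": 2, "n_bits": 24}
```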

To evaluate their new technique, the researchers tested it using three deep learning networks and data sets that are widely used as testbeds by deep learning researchers: CifarNet using Cifar10; AlexNet using ImageNet; and VGG-19 using ImageNet.

Adaptive Deep Reuse cut training time for AlexNet by 69%, for VGG-19 by 68%, and for CifarNet by 63%, all without accuracy loss. The team says the larger the network, the more Adaptive Deep Reuse is able to reduce training times. They hope to work with industry and research partners on further development.

More efficient flash for data centers
Researchers at MIT and Daegu Gyeongbuk Institute of Science and Technology (DGIST) designed a flash storage system intended to cut in half both the energy and the physical space required by storage servers in data centers. Called LightStore, the system modifies SSDs to connect directly to a data center’s network without the need for other components, and supports computationally simpler and more efficient data-storage operations.

In experiments, the researchers found a cluster of four LightStore units, called storage nodes, ran twice as efficiently as traditional storage servers, measured by the power consumption needed to field data requests. In particular, computationally intensive random write operations were nearly eight times more efficient. The cluster also required less than half the physical space occupied by existing servers.

The team argues that a major issue with current data centers is that the architecture hasn’t changed to accommodate the shift from hard disks to flash storage. “People just plugged flash into where the hard disks used to be, without changing anything else,” said Chanwoo Chung, a graduate student in the Department of Electrical Engineering and Computer Science at MIT. “If you can just connect flash drives directly to a network, you won’t need these expensive storage servers at all.”

“We are replacing this architecture with a simpler, cheaper storage solution … that’s going to take half as much space and half the power, yet provide the same throughput capacity performance,” said Arvind, professor of computer science and engineering and a researcher in the Computer Science and Artificial Intelligence Laboratory at MIT. “That will help you in operational expenditure, as it consumes less power, and capital expenditure, because energy savings in data centers translate directly to money savings.”

To achieve this, the researchers modified SSDs to be accessed using key-value pairs, which requires a flash translation layer to manage and move data around. The researchers used certain data-structuring techniques to run this flash management software with only a fraction of the usual computing power, then offloaded it entirely onto a tiny circuit in the flash drive.

Additional LightStore software uses data-structuring techniques to efficiently process key-value pair requests, converting a traditional flash drive into a key-value drive without changing the architecture.
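
The on-drive data structures aren't described in detail here, but the interface such a key-value drive exposes can be sketched; in this toy version, hypothetical in every name, a dictionary stands in for the log-structured flash management that would actually run on the drive's circuit:

```python
class KeyValueDrive:
    """Toy model of a flash drive exposed as a key-value store.

    A real key-value drive would implement put/get with flash-friendly,
    log-structured writes on a small embedded circuit; a dict stands in
    for that machinery here.
    """

    def __init__(self) -> None:
        self._store: dict[bytes, bytes] = {}

    def put(self, key: bytes, value: bytes) -> None:
        self._store[key] = value  # real drive: append to an on-flash log

    def get(self, key: bytes) -> bytes | None:
        return self._store.get(key)
```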

To ensure app servers could access data in LightStore nodes, the researchers designed computationally light adapter software, which translates all user requests from app services into key-value pairs. The adapters use mathematical functions to convert information about the requested data, such as commands from the specific protocols and identification numbers of the app server, into a key. The adapter then sends that key to the appropriate LightStore node, which finds and releases the paired data. Because this software is computationally simpler, it can be installed directly onto app servers.
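
As a rough sketch of that adapter layer, and assuming nothing beyond what's described above, a request's protocol fields could be hashed into a fixed-size key and routed to a node by simple modular hashing; the field names and routing scheme are assumptions:

```python
import hashlib

NUM_NODES = 4  # e.g., a four-node LightStore cluster

def request_to_key(protocol: str, object_id: str) -> bytes:
    """Derive a fixed-size key from protocol-specific request fields."""
    return hashlib.sha256(f"{protocol}:{object_id}".encode()).digest()

def route(key: bytes) -> int:
    """Pick the storage node responsible for a key (modular hashing)."""
    return int.from_bytes(key[:8], "big") % NUM_NODES

# An app server would send get(key)/put(key, value) to this node.
key = request_to_key("nfs", "volume7/object/3141")
node = route(key)
```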

“Whatever data you access, we do some translation that tells me the key and the value associated with it. In doing so, I’m also taking some complexity away from the storage servers,” Arvind said.

Finally, the team found that data throughput scales linearly as LightStore nodes are added to a cluster, with a four-node cluster surpassing the throughput of a comparable number of SSDs.

Static negative capacitor
Scientists at Argonne National Laboratory, the University of Picardie in France, and Southern Federal University in Russia created a permanent static negative capacitor, which could be used to redistribute electricity on a small scale.

“The objective is to get electricity where it is needed while using as little as possible in a controlled, static regime,” said Valerii Vinokur, an Argonne materials scientist.

Previous designs for negative capacitors worked on a temporary, transient basis, but the new device is steady-state and reversible.

By pairing a negative capacitor in series with a positive capacitor, the team found they could locally increase the voltage on the positive capacitor to a point higher than the total system voltage. In this way, they could distribute electricity to regions of a circuit requiring higher voltage while operating the entire circuit at lower voltage.

In a traditional capacitor, the voltage is proportional to the stored electrical charge; in a negative capacitor, increasing the amount of charge decreases the voltage. But because the negative capacitor is part of a larger circuit, this does not violate conservation of energy.
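
A worked example with made-up values shows how the series arrangement produces a local voltage boost. Two capacitors in series carry the same charge Q and their voltages add, so a negative capacitance contributes a negative voltage:

```latex
% Series pair sharing charge Q: voltages add.
\[
  V_{\mathrm{total}} = V_{+} + V_{-} = \frac{Q}{C_{+}} + \frac{Q}{C_{-}}
\]
% Illustrative values: C_+ = 1 uF, C_- = -2 uF, Q = 1 uC.
\[
  V_{+} = 1\,\mathrm{V}, \qquad V_{-} = -0.5\,\mathrm{V}, \qquad
  V_{\mathrm{total}} = 0.5\,\mathrm{V}
\]
% The positive capacitor sits at 1 V while the whole circuit
% supplies only 0.5 V -- a local voltage boost.
```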


This image shows the movement of the domain wall (a-c and b-d) in a capacitor when a charge is added to one side (c). The resulting redistribution of the domain wall causes a negative capacitive effect. (Source: Argonne National Laboratory.)

A key component of the negative capacitor design is a ferroelectric filling. “In a ferroelectric nanoparticle, on one surface you will have a positive charge, and at the other surface you will have negative charges,” Vinokur said. “This creates electric fields that try to depolarize the material.”

By splitting a nanoparticle into two equal ferroelectric domains of opposite polarization, separated by a boundary called a domain wall, the researchers were able to minimize the effect of the total depolarizing electric field. Then, by adding charge to one of the ferroelectric domains, they shifted the position of the domain wall between them.

Because of the cylindrical nature of the nanoparticle, the domain wall began to shrink, causing it to displace beyond the new electric equilibrium point, the researchers said. “Essentially, you can think of the domain wall like a fully extended spring,” said Igor Lukyanchuk, a scientist at the University of Picardie. “When the domain wall displaces to one side because of the charge imbalance, the spring relaxes, and the released elastic energy propels it further than expected. This effect creates the static negative capacitance.”


