Efficiency vs. Accuracy

Just because a design isn’t 100% accurate doesn’t mean it’s bad for all applications. In some cases it’s better.


By Barry Pangrle
If all you have is a hammer, everything looks like a nail.

I wrote an article, Power vs. Accuracy, last year that discussed the tradeoffs between power and accuracy for different applications. It turns out that for a number of processing applications, if every bit isn’t perfect, the impact on the final result might not be all that great. Anyone performing financial analytics is probably now running for the exit screaming, but for applications like image processing, a slight error now and then may be perfectly acceptable.
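To make that concrete, here is a quick sketch of my own (assuming 8-bit grayscale pixels, not an example from the earlier article): clearing a pixel’s two low-order bits, a stand-in for an occasional inexact result, shifts its value by at most 3 out of 255, or about 1.2%—well below what most viewers would ever notice.

```python
# Quick illustration (assumed 8-bit grayscale pixels): clearing the two
# low-order bits of a pixel value changes it by at most 3 out of 255.
def truncate_pixel(p, dropped_bits=2):
    mask = 0xFF & ~((1 << dropped_bits) - 1)  # e.g. 0xFC for dropped_bits=2
    return p & mask

for p in (0, 37, 128, 200, 255):
    q = truncate_pixel(p)
    print(f"exact={p:3d}  inexact={q:3d}  error={p - q}")
```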

Just because we’re used to building machines for computations where every bit needs to be accurate doesn’t necessarily mean that we need to do this for all applications. The human brain doesn’t seem capable of performing large numbers of double-precision floating-point operations per second, but it can still pull off some pretty incredible feats—and with a power budget so low that no electronic computing machine today attempting to replicate those feats could come anywhere near it.

At the 2012 Computing Frontiers Conference, held in Cagliari, Italy, the paper that received the highest peer-review evaluation of all this year’s submissions was “Algorithmic Methodologies for Ultra-efficient Inexact Architectures for Sustaining Technology Scaling,” by Avinash Lingamneni and an international team of researchers from Rice University (USA), CSEM SA (Switzerland), EPFL (Switzerland), the University of California at Berkeley (USA), and Nanyang Technological University (Singapore). Their inexact architectures achieve significant gains in energy efficiency by tolerating a certain level of error in the computations.
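The paper itself works at the level of pruned circuits, but a rough software analogy conveys the idea. The sketch below is my own simplification, not the authors’ implementation: an adder that discards the carry generated by its lowest k bits, loosely what happens when the least-significant portion of a carry chain is pruned away to save energy. The result is off by at most 2^k, and only when the dropped carry would actually have fired.

```python
import random

def inexact_add(a, b, k=4):
    """Add a and b, but drop the carry out of the lowest k bits --
    a software stand-in for pruning the least-significant carry chain."""
    mask = (1 << k) - 1
    high = (a & ~mask) + (b & ~mask)         # upper bits added exactly
    low = ((a & mask) + (b & mask)) & mask   # carry into bit k discarded
    return high + low

random.seed(0)
errs = [abs((a + b) - inexact_add(a, b))
        for a, b in [(random.randrange(1 << 16), random.randrange(1 << 16))
                     for _ in range(10_000)]]
print(f"max error: {max(errs)}  mean error: {sum(errs) / len(errs):.2f}")
```

In hardware, the pruned bits translate directly into fewer gates switching, which is where the energy savings come from; the software model only shows how modest the numerical damage is.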

At the 49th Design Automation Conference, held last week, Krishna Palem and Avinash Lingamneni presented a paper titled “What to Do About the End of Moore’s Law, Probably!” in the special session on Probabilistic Embedded Computing. The paper provides a nice historical background on probabilistic computing and its energy-vs.-accuracy tradeoffs, and a longer version is slated to appear later this year in ACM Transactions on Embedded Computing Systems.

The set of three images below shows how an acceptable result can be achieved even with some error in the system. As the DAC paper points out, allowing even a small amount of error can lead to substantial energy savings. Inexact computing has already found its way into a practical application: a new chip based on the technology is targeted at the I-slate, an educational tablet, and local government officials in India’s Mahabubnagar District plan to adopt 50,000 units. I expect to see more stories like this in the future as engineers strive to produce more energy-efficient designs.


This comparison shows frames produced with video-processing software on traditional processing elements (left), on inexact processing hardware with a relative error of 0.54 percent (middle), and on inexact hardware with a relative error of 7.58 percent (right). The inexact chips are smaller, faster and consume less energy. The chip that produced the frame with the most errors (right) is about 15 times more efficient in terms of speed, space and energy than the chip that produced the pristine image (left). Source: Rice University/CSEM/NTU

In terms of speed, energy consumption and size, inexact computer chips like this prototype are about 15 times more efficient than today’s microchips. Source: Avinash Lingamneni/Rice University/CSEM
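The captions quote relative errors of 0.54 and 7.58 percent, though they don’t spell out how the metric is computed. One plausible formulation—an assumption on my part, not necessarily the researchers’ definition—is the mean absolute pixel difference normalized by the 8-bit full-scale value:

```python
# Assumed metric (the source doesn't define it): mean absolute pixel
# difference between exact and inexact frames, as a fraction of 255.
def relative_error(exact, inexact, full_scale=255):
    diffs = [abs(e - i) for e, i in zip(exact, inexact)]
    return sum(diffs) / (len(diffs) * full_scale)

exact_frame   = [10, 200, 35, 90, 250]   # toy 5-pixel "frame"
inexact_frame = [11, 198, 35, 94, 247]
print(f"relative error: {relative_error(exact_frame, inexact_frame):.2%}")
```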


