How Secure Are Low-Power Techniques?

Side-channel attacks make security a challenge.


As a chip designer, you and your team have done the best job possible optimizing power in your SoC, likely using every low-power technique at your disposal. The chip tapes out, gets designed into systems, and it’s a success! Then the call comes that your chip has been hacked in the field, and you and your team are left shaking your heads in wonder.

I can imagine this scenario has played out more times than are being reported, perhaps simply because a solution is not clear. I learned that this can happen while reporting on low-power processing in the datacenter and speaking with Bernard Murphy, chief technical officer at Atrenta, on the subject.

He reminded me that you can never actually solve the problem of security – you can only make it harder for hackers to breach it. One group working on this is Inria, the French research institute, Murphy explained, which provides software encryption that hardens the system by balancing both timing and potential power consumption. “Instead of having this, whatever you’re doing, you try to flatten the profile.”

That’s great, but guess what? It increases power.
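To make the “flatten the profile” idea concrete, here is a minimal sketch in C (my own illustration, not code from anyone quoted here): a constant-time byte comparison that does the same amount of work whether or not the inputs match, so execution time no longer reveals where a mismatch occurred. The function names are hypothetical.

```c
#include <stddef.h>
#include <stdint.h>

/* Early-exit compare: runtime depends on where the first mismatch is,
 * leaking information through a timing side channel. */
int leaky_equal(const uint8_t *a, const uint8_t *b, size_t n) {
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i]) return 0;   /* exits early: time reveals i */
    return 1;
}

/* Constant-time compare: always touches all n bytes and accumulates
 * differences, so the timing profile is flat regardless of the data. */
int ct_equal(const uint8_t *a, const uint8_t *b, size_t n) {
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];          /* no data-dependent branch */
    return diff == 0;
}
```

Note the cost: `ct_equal` always walks all n bytes, even when the answer is known after the first one; that extra work is exactly the power penalty described here.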

The memory side is also interesting, he said. “How do you defend against timing or power attacks on memory? You’re accessing [memory] and the signal is leaking out whether you get a cache hit or a cache miss. One way to address that is to access the memory in a more unpredictable way, so you can make calls to the memory that cause cache misses to happen somewhat uniformly.”

Guess what? That drives up power, too. Interestingly, in this way, power and security act against each other. Who saw that coming?
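As a rough illustration of uniform memory access (again my own sketch, with hypothetical names), compare a direct table lookup, which touches a cache line determined by the secret index, with a scan that reads every entry and selects the wanted one using a branchless mask:

```c
#include <stddef.h>
#include <stdint.h>

#define TABLE_SIZE 256

/* Direct lookup: which cache line gets loaded depends on 'index',
 * so a cache-timing observer can recover it. */
uint8_t leaky_lookup(const uint8_t table[TABLE_SIZE], uint8_t index) {
    return table[index];
}

/* Uniform-access lookup: read every entry and keep the wanted one
 * with a constant-time mask. Every cache line is touched on every
 * call, so hits and misses no longer correlate with the secret index,
 * at the cost of roughly 256x the memory traffic (and power). */
uint8_t uniform_lookup(const uint8_t table[TABLE_SIZE], uint8_t index) {
    uint8_t result = 0;
    for (size_t i = 0; i < TABLE_SIZE; i++) {
        /* mask is 0xFF when i == index, 0x00 otherwise, without branching */
        uint8_t mask = (uint8_t)-(uint8_t)(i == (size_t)index);
        result |= table[i] & mask;
    }
    return result;
}

/* Small self-check helper: with an identity table, lookup(i) returns i. */
uint8_t demo_identity_lookup(uint8_t index) {
    uint8_t table[TABLE_SIZE];
    for (size_t i = 0; i < TABLE_SIZE; i++) table[i] = (uint8_t)i;
    return uniform_lookup(table, index);
}
```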

Thankfully, there is a lot of activity in the software realm primarily at the academic level, but whether it is translating into hardware today remains to be seen.

As noted above, software fixes amplify the power problem, so it makes sense to look at hardware-based solutions. In this area, Cryptography Research, a division of Rambus, advises on ways to either flatten the profile or add noise, Murphy mentioned. Adding noise to the signal, such as with analog noise generators, carries its own power cost, but it’s an option.

For cache memory, there is academic research out of Princeton, where Ruby Lee has done work on building cache memories that avoid some of these problems. One technique is to randomize access, hiding the access pattern behind randomization, which can help throw hackers off the trail.

Murphy added that there is another point to consider. “Another security hole especially with the cloud is that you have multiple processes running on virtual machines. When you are running on virtual machines you can have an attacker process and the victim process running on the same physical machine in different virtual machines. The problem with that is that, again, an attacker process can monitor through cache timing what is happening on the victim machine so even though it’s theoretically wonderfully isolated and you can’t share files or memory space or anything else, [a hacker] can still look at timing and the cache. While that seems like an incredibly thin thread to hang any kind of analysis on, it actually is sufficient. A hacker can deduce encryption keys from it.”
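A toy model can show why shared-cache observation really is “sufficient” for an attacker. The sketch below (hypothetical names, and a simulated direct-mapped cache rather than real hardware timing) implements the classic prime-and-probe pattern: the attacker fills every cache set, lets the victim make one secret-dependent access, then checks which set was evicted.

```c
#include <stddef.h>

/* Toy model of a direct-mapped cache with 8 sets, to show why cache
 * occupancy becomes a covert channel between two "VMs" sharing it. */
#define NUM_SETS 8

typedef struct { int owner[NUM_SETS]; } toy_cache_t;  /* -1 = empty */

void cache_reset(toy_cache_t *c) {
    for (size_t i = 0; i < NUM_SETS; i++) c->owner[i] = -1;
}

/* An access by 'who' to address 'addr' evicts whatever was in that set. */
void cache_access(toy_cache_t *c, int who, size_t addr) {
    c->owner[addr % NUM_SETS] = who;
}

/* Prime+probe: the attacker (process 0) fills every set, the victim
 * (process 1) makes one secret-dependent access, and the attacker then
 * observes which set no longer holds its data, recovering the victim's
 * set index without ever touching the victim's memory. */
size_t prime_probe(toy_cache_t *c, size_t secret_addr) {
    for (size_t s = 0; s < NUM_SETS; s++) cache_access(c, 0, s); /* prime */
    cache_access(c, 1, secret_addr);                             /* victim */
    for (size_t s = 0; s < NUM_SETS; s++)                        /* probe  */
        if (c->owner[s] != 0) return s;   /* this set was evicted: leak */
    return NUM_SETS;  /* nothing observed */
}
```

On real hardware the attacker infers eviction from access latency rather than reading ownership directly, but the information recovered is the same.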


Fortunately, some hardware fixes have been proposed. One is a partition-locked cache, in which Process A and Process B cannot evict or overwrite information in each other’s portion of the cache. Another is a random-permutation cache, where information is loaded into the cache in a randomly determined way, making it difficult to track what is changing in the cache.
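Here is a rough sketch of the partition-locked idea, under my own simplified policy (each cache set is statically owned by one process, and a foreign access bypasses the cache instead of evicting); real designs are considerably more nuanced:

```c
#include <stddef.h>

/* Sketch of a partition-locked cache policy: each set is statically
 * assigned to one process, and an access from any other process is not
 * allowed to evict the resident line (it bypasses the cache instead). */
#define PL_NUM_SETS 8

typedef struct {
    int partition_of[PL_NUM_SETS];  /* which process owns each set */
    int owner[PL_NUM_SETS];         /* whose data is cached (-1 = empty) */
} plcache_t;

/* Returns 1 if the access was cached, 0 if it bypassed the cache. */
int pl_access(plcache_t *c, int who, size_t addr) {
    size_t set = addr % PL_NUM_SETS;
    if (c->partition_of[set] != who)
        return 0;                /* foreign set: cannot evict, bypass */
    c->owner[set] = who;         /* own set: fill as usual */
    return 1;
}
```

Because a foreign process can never cause an eviction, the prime-and-probe observation channel between the two processes is closed, at the cost of a statically divided, effectively smaller cache for each one.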

I’d love to hear some real-world examples of these security issues. Please chime in below.

~Ann Steffora Mutschler