The Security Penalty

Building systems primarily for speed now comes with a well-publicized risk.


It’s not clear whether Meltdown, Spectre and Foreshadow caused actual security breaches, but they did prompt big processor vendors such as Intel, Arm, AMD and IBM to begin fixing these vulnerabilities before they were publicly disclosed by Google’s Project Zero.

While all of this may make data center managers and consumers feel better in one respect, it has created concern of a different sort. For decades, the primary job of chip architects was to build the fastest processors possible, and over the past 15 years that has included big improvements in performance per watt. But the power/performance benefits of Moore’s Law scaling are beginning to dwindle. The most recent estimates put the maximum power/performance improvement at each new node after 10/7nm at about 20%, compared with gains in the 30% to 35% range for each node shrink prior to 40nm, and the cost of adding security mitigations could cut into that number further.
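As a back-of-the-envelope sketch of why that matters, the short calculation below uses the article’s rough figures; the 10% mitigation cost is an assumed number chosen purely for illustration:

```
# Illustrative arithmetic only; the 20% and 30% figures are the article's
# rough estimates, and the 10% security penalty is an assumption.
pre_40nm_gain = 0.30      # per-node power/performance gain before 40nm
post_7nm_gain = 0.20      # estimated maximum gain per node after 10/7nm
security_penalty = 0.10   # assumed one-time hit from security mitigations

# Effective gain at a new node once the mitigation cost is absorbed
effective = (1 + post_7nm_gain) * (1 - security_penalty) - 1
print(f"Nominal gain per node:  {post_7nm_gain:.0%}")
print(f"Gain after mitigations: {effective:.0%}")   # roughly 8%
```

Under those assumptions, a single 10% penalty consumes more than half of a 20% node-to-node improvement.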

This may seem inconsequential for desktop or mobile apps. A slightly slower word-processing or spreadsheet application is an annoyance, but consumers are just as likely to blame that on a barrage of software patches. Inside data centers, however, performance has a direct economic impact. Any loss of performance in the cloud has to be made up with additional servers, which require power, cooling and floor space. A 10% loss in performance has a measurable effect on profitability.
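A rough sketch of that economics, again in Python: the fleet size and per-server cost below are hypothetical numbers chosen for illustration, and only the 10% performance loss comes from the discussion above:

```
# Hypothetical fleet numbers for illustration; only the 10% loss is from the text.
baseline_servers = 10_000        # servers needed at full performance
performance_loss = 0.10          # throughput lost to mitigations
cost_per_server_year = 3_000     # assumed power/cooling/space cost per server (USD)

# Extra servers required to recover the same aggregate throughput
extra = baseline_servers / (1 - performance_loss) - baseline_servers
print(f"Additional servers: {extra:.0f}")                      # about 1,111
print(f"Added yearly cost:  ${extra * cost_per_server_year:,.0f}")
```

Even with conservative assumptions, the penalty shows up as thousands of extra machines and millions of dollars in recurring cost at cloud scale.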

There are other ways to improve overall performance, of course. For AI and deep learning, new chips are being developed that promise 100X performance increases. Whether that actually happens remains to be seen. The problem with AI/DL/ML is that the algorithms are changing so quickly that building a chip to fully optimize training and inferencing is difficult. And while these chips may provide a big boost over GPUs for training on large data sets, the reality is that GPUs are already widely deployed, and the open-source tools now available for them will make displacing them much more difficult.

But there’s another piece of this equation. CPUs and GPUs are part of a larger compute infrastructure. A growing number of non-x86 architectures are being used to accelerate different data types, each connected to small, nearby memories so that data doesn’t have to move very far. No one is sure at this point whether these accelerators are a viable vector for cyber attacks. Answering that may require additional work by white-hat hackers such as Project Zero.

And putting this in perspective, this is just the first layer of what ultimately will be multi-layer, system-wide security. Within this space there are active and passive security options. Active security features are the most effective, but they require the most power. Passive security, such as storing keys in tamper-resistant portions of a chip, carries no perceptible performance penalty, but it won’t protect a device from a side-channel attack. And no single approach will solve all problems, which means we are likely to see multiple approaches in a variety of devices.
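A software analogy helps illustrate the side-channel trade-off the hardware world faces. The hypothetical Python snippet below is not one of the hardware measures described above; it simply shows how a timing side channel arises and how closing it costs a small, fixed amount of extra work:

```
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Returns early at the first mismatch, so execution time leaks how many
    # leading bytes matched -- a classic timing side channel.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of mismatches,
    # trading a small fixed cost for resistance to timing attacks.
    return hmac.compare_digest(a, b)
```

The same pattern holds in silicon: defenses that actively mask or equalize behavior cost power and cycles, while purely passive measures avoid that cost but leave observable behavior unchanged.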

Chip architectures are changing for a wide variety of devices, with new approaches to reading data in memories and processing more data per compute cycle. But complexity comes at a cost, and security features will need to be considered in all of these approaches. So after years of steady power/performance improvements, the next round of improvements may look less remarkable—even if they do help people sleep better at night.
