Hardware Security Threat Rising

Rambus’ CTO zeroes in on why hardware is now a target and what’s driving this shift.

Martin Scott, senior vice president and CTO of Rambus, sat down with Semiconductor Engineering to talk about an increasing problem with security, what’s driving it, and why hardware is now part of the growing attack surface. What follows are excerpts of that conversation.

SE: With Meltdown and Spectre, the stakes have changed because the focus is not on using hardware to get to software. It’s attacking the hardware itself as the target. How has that changed things?

Scott: The general-purpose compute world tends to assume that the hardware is safe and secure. The realization that this is no longer the case, even though experts in the field already knew it, is a big deal. It will drive interesting work in specialized processors that are particularly secure or fast. It’s a big vulnerability, with real issues, problems and challenges, but it’s also a wakeup call that can spur some really interesting innovation.

SE: Let’s dig into this. Typically, in the past, you would figure out how the hardware behaved, and then you would go in with a probe or some other attack mechanism and take over the software. With Meltdown and Spectre, we’re looking at direct, very complex hacking.

Scott: The world of attacks is very broad, and it’s getting broader. Attacking hardware requires a high level of sophistication, and a high level of value to be gained from the effort. We always like to talk about the fact that with enough time and money, nothing is secure if you have smart people involved. But equally important, a lot of high-profile attacks were the result of some really simple things not done well. To me, the challenging and interesting part of the ever-connected world is the fact that it’s really easy to attack. If you think about the Mirai attack, it was a botnet exploit of open networking ports.

SE: That was the first IoT attack on that scale, right?

Scott: It was one of the first very visible, high-profile attacks. If you think about the number of new connected devices coming online, you’re going to see an increasing number of attacks in all regions of the world. Some of these will be sophisticated, but many of them will exploit very simple IT protocols and inattention to detail, with people taking advantage of Internet connectivity to scrub big data. When more and more of the world is on the Internet, it’s easier to find something to exploit. It’s almost a bi-modal world, where you have some really sophisticated things, but you’ve also got an increasing attack surface where amateurs can go in and do bad things.

SE: There’s also more data coming in. Is that data getting more centralized so that you can mine it for more valuable data about one thing or one person?

Scott: One aspect of that question is whether those data streams are segmented privately. Increasingly, because there is high commercial and financial value in a mashup of that data, you’re going to see a trend toward wanting to combine those data streams. It’s valuable. There is more monetizable information, not just data for marketing, but to infer next steps in a person’s life. If someone knows what you’re buying, where you’re traveling, when you’re doing it, and when you come home, then a mashup of all of that may contain a lot of personal data you may not want to share. Europe, with the GDPR regulations, is going down a more advanced path in terms of regulatory responses to that, where a consumer has more expectations of permissioning that kind of data. It’s going to be a very interesting space to watch. And then there is a question of what is available by law, and what is available to hackers. One can address the majority of use cases, but mashups of data are a use case that we should all be thinking about in the long term.

SE: Is it now a question of who owns the rights to data?

Scott: Yes, absolutely. If you have access to a secure clock and a secure geo-location, and there’s an accelerometer in this device, there’s a lot of information in there. Who’s going to monetize it and who owns it isn’t clear.

SE: So what is the attack surface there? Is it hacking into the hardware, or just the data?

Scott: Attackers always will go to the easiest place. The more connected things are, and the more kinds of devices that are available, the more opportunities there are to get in. That’s true even statistically. If you look at the mobile platform, which is more mature and standardized and has been in production long enough, it may be difficult to get into those devices. Applications and memory are tied down better. But if that device connects into a fob for your car, or water filter replacements for your refrigerator, or your home gateway, or your wearables, somewhere along the line there isn’t the same level of standardization or protocol compliance or security testing. That is the interesting and worrisome attack surface. It’s all this stuff you don’t think is very valuable, like can you get in through your toaster to drive off in your car?

SE: How do you develop a device that’s going to be around for 10 to 15 years, at which time the hackers will be much more sophisticated than today?

Scott: That’s a big problem. You turn over your smartphone every three to four years. Security gets better. But that’s not the case for industrial. So what do you do about it? The prospect of having connected, extended-life devices means you have to be able to upgrade them in the field. And you have to be able to securely patch them and push new firmware or new encryption algorithms down through a secure pipe. It’s all of those kinds of things that are automatically handled in the smartphone world. It’s not easy to have a one-stop-shop, end-to-end updatable standard for a refrigerator or a car. There’s a massive opportunity to build in the capability at the beginning to be able to securely upgrade software, or do something as simple as revoke the permission of a device to jump on the Internet. If you authenticated and provisioned a device, you’re basically saying, ‘You are who I think you are, but I’m going to watch you.’ And it may be a case where you see strange behavior and you take action. It’s important for those extended-life devices to be able to revoke that authority to connect.
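
As a rough illustration of the field-update flow Scott describes, the Python sketch below verifies a pushed firmware image and checks a revocation list before acting. The key handling, device IDs, and MAC-based check are simplifying assumptions for illustration, not a description of any particular product.

```python
"""Hypothetical sketch of a secure field-update check: verify a pushed
firmware image and honor a revocation list before acting. Key handling,
device IDs, and the MAC-based check are illustrative simplifications."""

import hashlib
import hmac

# Secret provisioned into the device at manufacture (illustrative only;
# real designs typically use asymmetric signatures and a hardware root of trust).
PROVISIONING_KEY = b"per-device-secret-from-provisioning"

# Devices whose permission to connect has been revoked by the backend.
REVOKED_DEVICE_IDS = {"sensor-0042"}


def firmware_is_authentic(image: bytes, received_mac: bytes) -> bool:
    """Recompute the MAC over the image and compare in constant time."""
    expected = hmac.new(PROVISIONING_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, received_mac)


def may_connect(device_id: str) -> bool:
    """Revocation check: a provisioned device can still be told 'no'."""
    return device_id not in REVOKED_DEVICE_IDS


def apply_update(device_id: str, image: bytes, received_mac: bytes) -> bool:
    """Apply an update only if the device is still trusted and the image checks out."""
    if not may_connect(device_id):
        return False  # authority to connect has been revoked
    if not firmware_is_authentic(image, received_mac):
        return False  # image was tampered with or mis-signed
    # ...write the image to the inactive slot and reboot into it...
    return True
```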

SE: There seems to be a lot more awareness of hardware back doors than in the past. A few years ago, the military was worried about this but no one else was. Now a lot more people are viewing this as a real possibility.

Scott: Yes, it is.

SE: So what do we do about this? Can we effectively build total security into a device for a price people are willing to pay, or is this better as an add-on service?

Scott: The issue here is who has the most to lose, and who is willing to pay for the benefit of more security. It’s usually the case that the entity with the most liability needs to have the budget to mitigate those risks. If you approach a high-volume, margin-challenged chipmaker, security looks like an extra cost. In reality, their liability may not extend beyond manufacturing, package, test and ship to the OEM. They have to warranty functionality, but not loss of data or security—at least historically. If you talk to someone who has developed a chipset, whose value proposition depends on that data being used for a safety-critical or financial decision, then those ecosystems have to partition budgets and be willing to pay for security. But rather than unit costs going up a lot, if they can have a subscription model to increase the security of connectivity, that’s a reasonable alternative. We’re going to see more subscription-like business models for ongoing security because it’s not a question of something being secure or not secure. It’s a continuum that isn’t static. And for an extended-life device, there’s an ongoing relationship with the safety of that device. It’s not a check-box and it’s deployed. You’re talking to it for a long time. If you’re paying for it on an ongoing basis, it’s less painful.

SE: So just how big is this problem?

Scott: It’s daunting to comprehend the complexity of all of the simple stuff, as well as the really hard stuff. This involves amateurs with little funding all the way up to nation states. The levels of countermeasures and protocol protection span a very wide range. The concern about invasive insider attacks has gone up a lot, as well. If you look at hyperscale data centers, the scrutiny over privileged access and background checks and ongoing behavior analysis has changed. The severity of attacks can be great, in part because of the concentration of compute, networking, and storage in big data centers. But also, for non-financial gain, the potential for causing disruption is very high. We’re being asked to address secure data at rest in areas where ‘Networking Vendor X’ and ‘Server Vendor Y’ have access because they have badges and they’re going to swap out this line card or switch or stack of SSDs. The concern about making the environment fail-safe, assuming they’re not trusted, is different than it was 10 years ago.

SE: Ten years ago everyone thought data at rest was safe. It was only when it was in motion that people were concerned. Now it’s the underlying hardware that supports that data.

Scott: Yes, and you have to think about where a DRAM could dump into some NVM module and how mobile that physical device is.
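
One common way to make a pulled drive or memory module fail-safe is to keep data encrypted at rest, so the physical medium alone yields only ciphertext. Below is a minimal Python sketch using the third-party cryptography package’s AES-GCM primitive; the key handling and record names are illustrative assumptions, not a description of Rambus’ approach.

```python
"""Minimal sketch of authenticated encryption at rest, so a swapped-out SSD
or NVM module holds only ciphertext. Uses the third-party 'cryptography'
package; key management is deliberately simplified for illustration."""

import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# In practice the key would live in a hardware root of trust or a key
# management service, never alongside the data it protects.
key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)


def seal(plaintext: bytes, context: bytes) -> bytes:
    """Encrypt and authenticate a record before it is written to storage."""
    nonce = os.urandom(12)  # must be unique per record
    return nonce + aead.encrypt(nonce, plaintext, context)


def unseal(blob: bytes, context: bytes) -> bytes:
    """Decrypt a stored record; raises if it was tampered with."""
    nonce, ciphertext = blob[:12], blob[12:]
    return aead.decrypt(nonce, ciphertext, context)


record = seal(b"customer telemetry", context=b"device=sensor-0042")
assert unseal(record, context=b"device=sensor-0042") == b"customer telemetry"
```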

SE: What’s happening on the payment side?

Scott: With money, you care that it goes where it’s supposed to go. The bank is incented not to lose your money, and you’re certainly incented not to lose your own money. It requires scrutiny and attention to verifiable identities. A growing area of securing financial transactions has to do with an assumption that credentials in transit are not just possibly, but probably going to be intercepted. How might you mitigate the risk of having credentials stolen or borrowed for another financial transaction? One big area of technology we’ve worked on very hard is tokenization. It’s taking a personal credential that had value, because it could be tied to an individual or a credit card number, and tokenizing it into an alphanumeric string that is meaningless on its own because it isn’t the actual credential. You assume that even if that token is available to an adversary, they can’t do anything with it; the token by itself is worthless. A major part of our business is providing gateways, connectors, and tokenization to financial services. Mobile payments drove a lot of that, but we’re getting a lot of inquiries about other kinds of payments, like tokenization for ACH (automated clearing house) transfers. Rather than just mobile payments, financial institutions are looking at tokenizing everything that might have value. We’re trying to broaden the footprint of tokenization because it’s a very straightforward way to reduce the security risk of credentials in transit.
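
To make the tokenization idea concrete, here is a deliberately simplified Python sketch (not Rambus’ product): the real credential stays in a protected vault, and only a random token circulates.

```python
"""Deliberately simplified tokenization sketch: the credential never leaves
the vault, and the token that circulates in transit is meaningless on its own."""

import secrets
from typing import Dict, Optional


class TokenVault:
    """Maps random tokens to the credentials they stand in for."""

    def __init__(self) -> None:
        self._vault: Dict[str, str] = {}

    def tokenize(self, credential: str) -> str:
        token = secrets.token_urlsafe(16)  # random, carries no information
        self._vault[token] = credential
        return token

    def detokenize(self, token: str) -> Optional[str]:
        """Only the vault holder can recover the original credential."""
        return self._vault.get(token)


vault = TokenVault()
token = vault.tokenize("4111 1111 1111 1111")  # example card number
# An intercepted token is worthless to an adversary without the vault:
assert vault.detokenize(token) == "4111 1111 1111 1111"
assert vault.detokenize("stolen-but-random-looking-string") is None
```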

SE: In the past, tokens have been painful to use. Is this automated?

Scott: Yes. This isn’t about developing a local second factor to authenticate. It happens unbeknownst to you: your credentials are being swapped out as they go back and forth. It’s not an ID issue. It’s an obfuscation of your credentials, to help securely protect your data and information. It’s automatic, it’s super-fast because Rambus isn’t adding significant latency, and it’s easy to deploy.

SE: We have all of these capabilities and knowledge about what can go wrong. What impact does that have on power and performance?

Scott: It depends. Preventing the most severe side-channel attacks requires some additional power, some additional area and computation, in order to ensure that even with millions upon millions of cycles of attacks you don’t leak information. You can’t escape that. But security solutions are a continuum. The challenge is striking a balance between the risk and the reward for an adversary. What is it worth? If I have a single, bounded voice machine, I’m probably okay accepting the risk of reverse engineering that platform. That could be an example where it’s not worth spending money to make it more secure. There may be other situations where, depending upon the assets or what something is connected to, it’s worth more. A lot of the security business is about matching the risk with the reward. There’s no free lunch in engineering. It’s another optimization. Our customers expect us to make security recommendations and architecture direction decisions that are as important as the clock cycle, throughput and bandwidth. It’s another architectural optimization parameter, rather than in the past where you added security after the device was done. That could be software, firmware or packaging. There’s been a sea change in the past five years, where security has become part of the architectural design.
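
One small, concrete example of that trade-off is a constant-time comparison: it does a little extra work on every call, but it denies an attacker the timing signal that an early-exit comparison leaks over millions of measurements. The Python sketch below is illustrative only; real side-channel hardening against power and electromagnetic analysis goes much further.

```python
"""Illustrative only: an early-exit comparison leaks, through its timing, how
many leading bytes matched; the constant-time version always does the same
amount of work. Real side-channel hardening (against power analysis and the
like) goes much further, but the cost/benefit shape is the same."""

import hmac


def leaky_compare(a: bytes, b: bytes) -> bool:
    """Returns at the first mismatch, so timing reveals the match length."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True


def constant_time_compare(a: bytes, b: bytes) -> bool:
    """Same result, but runtime does not depend on where the bytes differ."""
    return hmac.compare_digest(a, b)
```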

SE: We used to think about security risks being primarily at the seams or interfaces. Is that still true, or is it now everywhere?

Scott: It’s still true, but it’s also everywhere. We’ll have big front doors with strong locks, but you have to assume that the bad guys will still get in. The in-situ monitoring, whether it’s networking or at the chip level, will be very interesting. If you combine that with some learning algorithms, devices will be more secure. You’ll never get to 100% security. The best you can do is always make the attack surface as small as possible, use the best countermeasures against whatever the most likely threats are, and over time make sure that you’re constantly aware of what’s going on so you can take action.
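
A toy version of that kind of in-situ monitoring, in Python, pairs a learned baseline of normal behavior with an action when readings drift outside it. The feature, threshold, and ‘quarantine’ response here are assumptions chosen for illustration.

```python
"""Toy in-situ monitor: learn a baseline of normal device behavior, then flag
and act on readings that drift far outside it. The feature (packets/second),
threshold, and 'quarantine' action are assumptions chosen for illustration."""

import statistics


class BehaviorMonitor:
    def __init__(self, baseline, sigma=4.0):
        # "Learning" here is just the mean and spread of known-good traffic rates.
        self._mean = statistics.fmean(baseline)
        self._stdev = statistics.pstdev(baseline) or 1.0
        self._sigma = sigma

    def is_anomalous(self, reading: float) -> bool:
        return abs(reading - self._mean) > self._sigma * self._stdev

    def observe(self, device_id: str, reading: float) -> None:
        if self.is_anomalous(reading):
            # Take action, e.g. revoke the device's permission to connect.
            print(f"quarantining {device_id}: rate {reading} is outside the baseline")


monitor = BehaviorMonitor(baseline=[10.2, 9.8, 11.0, 10.5, 9.9])  # packets/sec
monitor.observe("gateway-7", 10.4)   # normal, no action
monitor.observe("gateway-7", 480.0)  # botnet-like burst, gets flagged
```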


