Security: Losses Outpace Gains

Complexity, new and highly connected technology, and more valuable data are making it harder to keep out hackers.

Paul Kocher, chief scientist in Rambus’ Cryptography Research Division, sat down with Semiconductor Engineering to discuss the new threats to security, artificial intelligence and machine learning, and how to engineer a secure system. What follows are excerpts of that conversation.

SE: Where are we with security? It seems that rather than getting better, things have actually gotten worse over the past year. Where are the problems, and how do we close up some of these holes?

Kocher: At a high level, if you want to run some complex set of applications, run huge amounts of software and keep it from being compromised by adversaries, this is an area where we're losing ground. And it's one where we've been losing ground for a long time. It's hard to get a sense of where things are going because the press is missing both the positive and the negative information. Nobody writes an article announcing which system is not hacked this week. On the other side, most attacks don't get detected, and avoiding detection is the number one objective if you're an adversary. If you've recognized you've been breached, the attacker has already messed up. The ones that get detected are the ones that either have business models that necessitate detection, like financial fraud, or they're amateurish and unlucky, or they're working on such a scale that they can't hide. If you look at what gets caught, there's clearly an awful lot that is not being reported.

SE: But this is more than just a reporting issue, right?

Kocher: Yes, but there aren't good metrics. You can measure the speed of a clock by sticking an oscilloscope on it and seeing how many gigahertz it's running at. People love those kinds of problems, where you have a clear, measurable, easily observable sense of whether you're making progress or not. In security, you don't have that. It's fundamentally an opaque thing, where you ask 10 different practitioners what you should do and you get 11 different answers. It's kind of a dark art, in the way that medicine was before it was viewed as a science. That fundamentally makes progress really difficult, because you can't know whether the choice you made was even moving you in the right direction.

SE: So where are you seeing progress?

Kocher: People are finally realizing that, measured in transistors, security solutions are not cost-prohibitive. If you look at what a small microprocessor costs on a die, you're on the order of a penny, depending on what logic you add around it. Often it costs you nothing, if you don't actually change the number of chips that you get on the reticle itself. If you have some corner of the die that is going to be otherwise unused and you stick some logic there, especially logic that doesn't have tight timing constraints, you can often just squeeze it in somewhere on the chip and it ultimately won't cost anything. So we are seeing a lot of that sort of thing happening.
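
To make the cost argument concrete, here is a back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption, not a figure from the interview; the point is only the rough order of magnitude.

```python
# Back-of-the-envelope cost of adding a small security core to a die.
# All numbers below are illustrative assumptions, not figures from the interview.

WAFER_COST_USD = 5000.0            # assumed cost of a processed 300mm wafer
WAFER_AREA_MM2 = 3.14159 * 150**2  # usable area of a 300mm wafer, ~70,686 mm^2
CORE_AREA_MM2 = 0.1                # assumed area of a tiny security microcontroller

cost_per_mm2 = WAFER_COST_USD / WAFER_AREA_MM2
core_cost = cost_per_mm2 * CORE_AREA_MM2
print(f"silicon cost of the security core: ~${core_cost:.4f}")  # well under a penny

# And if the logic fits into otherwise-unused die area, so the number of
# chips per reticle/wafer is unchanged, the marginal silicon cost is ~$0.
```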

SE: Where?

Kocher: If you look at Andes' processors, they've got some security processors that complement the big CPUs. If you look at Intel, they have a number of these as well. There are a series of things that have started to happen in these spaces, but there need to be way, way more. There's also a problem with people who have been trained to optimize for performance and efficiency. To go and add something that improves neither performance nor efficiency is mentally hard to do. If you look at other areas like structural engineering, where you put in more steel than is needed to keep the building standing, that is entirely natural for a structural engineer. If you told structural engineers that you wanted to make something with tens of thousands of components, any one of which would cause the whole building to collapse if it didn't work perfectly, they would say you're nuts. We've been doing this for so long, in both hardware and software engineering, that it's hard to change the mindset to say, 'I want to spend $10,000 making my product slower and more expensive to build because it is safer.' Yet we've been doing exactly that in lots of other industries for many years.

SE: A lot of that has been regulated from the outside, whereas the chip industry for the most part has never had to deal with this.

Kocher: It may take regulation or other market forces to change behavior. It certainly is true that if you look at industries like aviation, pharmaceuticals, and to a certain degree medicine, change has occurred and regulators have played a role in that. It's still a little early in the security space to apply a lot of regulation, because we don't exactly know what the best solutions to the problems are. When you reach the point where everybody should put a seatbelt in a car and you know what seatbelts look like, it's fairly easy for a regulator to say, 'Okay, all cars will now be equipped with seatbelts.' There's still a lot of experimentation right now. If you regulate a specific approach, it may not be the best one. That is part of what makes things messy.

SE: What could regulators do?

Kocher: They could regulate what your expectations should be for a product if it claims to be secure, because right now most products sold have security bugs in them that make them not secure. Trying to figure out what information should be delivered to customers about what was done probably could help. It would certainly make security easier if and when the space becomes highly regulated.

SE: One of your areas of expertise is cryptography. What’s changing there?

Kocher: Cryptography is the one piece of security that people still expect to work really reliably. For the most part, it's been able to deliver on that promise. People typically know quite a long time in advance if there are little cracks forming in the defenses of an algorithm. Right now, one of the areas of research is building public-key systems that are resistant to quantum computers, which are themselves a decade or more off in terms of actually being able to scale to the point where they are a threat to our current cryptographic constructions. RSA, elliptic curve cryptography, and Diffie-Hellman, the most widely used of the public-key algorithms, all could be broken if a quantum computer of sufficient size and reliability came along. It's not an immediate threat, and if you look at a medical analogy, what's causing problems today are implementation bugs. Those pose a dire and immediate threat to security. From a resource perspective, building quantum resistance into products before you get the bugs out is really not a very high priority. But from a research perspective, it's a really neat set of new mathematical and engineering problems to come up with efficient algorithms that are resistant to these hypothesized quantum computers.
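
To see what is at stake, here is a minimal sketch of textbook Diffie-Hellman in Python with toy parameters (my own illustration, not from the interview). Its safety rests entirely on the hardness of the discrete-logarithm problem, which is precisely what Shor's algorithm on a sufficiently large quantum computer would solve efficiently.

```python
# Textbook Diffie-Hellman key exchange with toy parameters (NOT secure).
# Recovering a secret exponent from a public value is the discrete-log
# problem, which a large enough quantum computer running Shor's algorithm
# would break; the same fate applies to RSA and elliptic curve crypto.
import secrets

p = 4294967291   # toy 32-bit prime; real deployments use 2048+ bit groups
g = 5            # illustrative base

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)   # Alice publishes A; deriving a from A is a discrete log
B = pow(g, b, p)   # Bob publishes B

assert pow(B, a, p) == pow(A, b, p)   # both sides derive the same shared secret
```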

SE: So what’s next?

Kocher: There are some pretty good proposals currently on the table that are being studied, and there will be a standardization process for those. In a lot of ways, cryptography is comparable to bricks. You need good bricks if you're going to build a building, but there's a lot more to architecture than just the bricks. You're trying to figure out how you take algorithms and put them into protocols, how you solve a user's actual security problem, and how you implement those protocols in a way that's correct. And then you have to put that correct implementation into a system in a way that bugs in other parts of the system don't compromise the security of the protocol. It's an onion with many layers, and the crypto is often at the center of all of that. The algorithms themselves are in many cases a relatively trivial part of what you need in order to solve the ultimate business or privacy problem that you are focused on.

SE: Machine learning and AI have emerged very quickly, probably due to Tesla’s push into autonomous driving and the competitive race that followed. Do those raise any red flags in terms of security that were not there before, or is it just a continuation of something that was already in play?

Kocher: Can AI, for example, be developed in ways that could recognize vulnerabilities in widely deployed software, perhaps even construct an exploit, and turn what is currently a time-consuming, manual human process into something much more accessible that spreads everywhere? I don't think that is going to happen. However, the manual processes now involved in attacking things probably will get assisted by AI-based tools that can do some level of software analysis. On the defense side, there's an open question about whether AI can be taught to understand properties of software and hardware designs and tell us useful things about them. For example, whether or not a design is one that might have certain categories of bugs in it. That's one of the things I'm actually going to be spending some time looking at. There's an open question about how far AI can go there. The current AI applications tend to be ones where you're optimizing some kind of a search space, or where you have a relatively straightforward set of problems with very large amounts of training data. Understanding complex logic doesn't fit very well into that mold. There are clearly some advances in AI that are needed for that to happen. The third category of questions around AI is how we deal with systems that are making judgments critical for safety, or critical for us as humans, when we don't actually understand what the decision-making process of the AI is. Driving is an example. If you don't really understand how the AI knows what a stop sign looks like, there's a question of whether we can trust it with our lives.

SE: What is the right answer?

Kocher: It depends on whether there are massive global failure modes or whether the failure modes are relatively isolated. If your AI occasionally misses a stop sign and perhaps even kills the occupants of a car, it may still be better to have an AI-based driver than a human being who stares at their cell phone, has limited vision, or is tired, and might be much more likely to crash. We already have fairly good data about the safety profile of automobiles, for example. If we can make them orders of magnitude safer, even if they're not perfect, it's still probably an improvement. The question of whether there are massive global failure modes, where all the cars crash at the same time, ends up being a very different question. Some of those come back to conventional software security problems, like how you get updates delivered securely and how you ensure the training sets for your AI haven't been tampered with by someone injecting malicious data into them. The problems we're going to see in the automotive space may involve more challenges around things like that than around the vision analysis and interpretation systems that are the first applications of AI there.

SE: AI is not even close to the evil HAL in “2001: A Space Odyssey,” but there is some level of machine learning and inferencing involved in these systems. How do you see that evolving?

Kocher: There are things in the gray area. In a game like Go or poker, the rules are set. You can run simulations that don't involve the real world to collect arbitrary amounts of data for your AI to learn from. With a problem like image recognition or handwriting recognition, it doesn't make a lot of sense to have the system generate the images it then tries to analyze, because it's going to go off into some world that deviates from our real world. Underneath, the software techniques are pretty similar. It's just a question of whether you can use your own system to generate the raw data that helps you get better, or whether you have to get that data from real-world testing.
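
As a minimal sketch of that self-generation idea, here is a Python toy (my illustration, with tic-tac-toe standing in for Go or poker): because the rules are fixed, the system can manufacture unlimited labeled training data by playing against itself, with no real-world input at all.

```python
# Self-play data generation: for a game with fixed rules, labeled training
# examples can be manufactured without touching the real world.
# Tic-tac-toe here is the simplest stand-in for Go or poker.
import random

WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def random_selfplay_game():
    """Play one random game; return (positions seen, winner or None)."""
    board, states, player = [" "] * 9, [], "X"
    while True:
        move = random.choice([i for i, c in enumerate(board) if c == " "])
        board[move] = player
        states.append("".join(board))
        if any(all(board[i] == player for i in line) for line in WIN_LINES):
            return states, player      # labeled examples, generated for free
        if " " not in board:
            return states, None        # draw
        player = "O" if player == "X" else "X"

dataset = [random_selfplay_game() for _ in range(1000)]
# Image recognition has no analogous generator, so it needs real-world data.
```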

SE: Which brings up a question on the security side—is there a difference if a system learns by itself?

Kocher: There’s a classic attack that has been done a bunch of times against systems that have been trained in machine learning. If you have a system designed to detect cars and you give it a picture of a cat, it’s going detect that it’s not a car. But if you make a small change to the picture—imperceptible things like changing a few pixels, and then ask, ‘Is it closer to being a car or further away from being a car,’ and if it’s closer you keep the change and if it’s farther away, you don’t. Then you iterate it, making little tiny imperceptible changes, keeping only the ones that make it look more like a car, you would end up with a picture of a cat that the system thinks is a car. A human looking at it would say it is obviously a cat, but the AI doesn’t really understand cat necessarily in the way that we do. It recognizes something that is correlated to being cat-like, and it’s then good enough for non-maliciously generated input. But if you start doing things like intentionally producing things to trick it, the system can be susceptible. It will be a long time before AI can be useful in an environment, for example, where someone can manufacture input file that must be correctly characterized or some kind of consequence occurs. There will certainly be some problems there, although they’ll be small compared to a conventional computer security crisis that we’re struggling with around non-AI based systems, which will also affect AI-based systems. If you’re running on some cloud compute machine that’s compromised, it doesn’t matter whether your algorithm is AI-based or not. You’re still compromised.

SE: The security picture comprises lots of smaller pieces. AI is one component. The Mirai botnet was another piece, where little things you don’t expect to be important add up to something big, like the first massive IoT attack. You also have the standard stuff that has been going on for a while, hacking into a server to get financial data. Can they be addressed on a macro level, considering everything is now connected, or does everything have to be addressed separately?

Kocher: There’s a point where you’ve got a specific vulnerability in a specific product, and those things end up being treated like a specific gunshot wound that might be treated in an ER. A patient comes in and people do the best to deal with whatever you got there, minimizing the consequences. There are a lot of things that are common root causes to different problems that are often many steps before the actual product was shipped to a customer. Those need a lot of foundational work so that we don’t keep having these problems, and they don’t keep getting exponentially more common and serious. That includes a lot of the hardware. Right now, the standard assumptions that hardware people make is that they will faithfully execute instructions, but it’s the job of the software people to make sure there are no bugs in the software. That’s a very convenient assumption to make if you are a hardware engineer because you can push the problem off to somebody else. It’s also an incorrect assumption. The software that we have today is buggy, so if you still build something that depends on huge amounts of bug-free software, you’re already ensuring that security is going to fail. Instead, you should be asking questions like, ‘How do I make it so that there can be pieces of software that can run securely even if other pieces of software are compromised?’ Critical parts of the software, like the network security piece, need to be done in a way that can function by itself. For example, IoT devices can be turned into DDoS blasters. But if they were configured in a way where they could only send image data out to somebody who connected into them first, and that data needed to be encrypted correctly with a well-verified network security stack coupled with redundancy, you wouldn’t be seeing this kind of problem. Starting at the hardware architecture level, we need to develop systems survive having at parts of them being buggy without that resulting in serious or catastrophic consequences.

SE: This is a fundamental rethinking, getting rid of some basic assumptions about how to secure a piece of hardware or a network. The usual approach is to make sure there is only one way in.

Kocher: We're in a world where all of our popular devices carry software complexity that we can't fully trust, with a whole bunch of different types of functionality with different security properties co-residing on the same device. If you think of your mobile phone, TV, car, or any kind of multi-function electronic device, they're largely built so that security is one bug, or maybe two or three, away from getting compromised. And each of the subsystems that might have bugs in it is riddled with bugs. The adversary's process is to find a vulnerability, develop an exploit, and take over everything running on the device. We need much more durable and reliable mechanisms so that, for example, the video game you're playing doesn't have the ability to steal your banking credentials. It may well involve using separate hardware. If you think about the design constraints for a graphics processor, it's performance-driven, but not so much security-driven. In contrast, the thing that is going to manage the credentials that authorize wire transfers really can be slow. You could build that using 1980s technology and it would be just fine. The design constraints there are different. Going back to the hardware question, figuring out how to get high-assurance pieces of chips to exist alongside high-performance silicon is something where chip companies are ultimately going to have to do a much better job if the people making products with their silicon are going to be successful on the security front.

SE: So stepping back up to 60,000 feet, are we worse off than a couple of years ago?

Kocher: Defenses have improved and things have gotten better, but they haven't gotten better as fast as the attackers have gotten more powerful. There are really three separate exponential curves fueling the attacks. First, there is an exponential increase in the complexity of devices, which corresponds directly to an increase in the number of bugs in those devices. If I make something 10X more complex, I'm going to get at least 10X as many bugs if my software and design quality stay the same. Quality has improved if you measure it on a per-line-of-code basis, but the improvement hasn't kept up with the increasing complexity. If that growth slows down greatly, it actually may start letting security catch up. But over the last decade, Moore's Law has moved faster than our ability to debug and improve quality. So that's one exponential area where we are losing ground, relative to complexity.
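
That race between the two curves can be shown as a tiny calculation. The defect densities and growth rates below are illustrative assumptions of mine, not Kocher's numbers; the point is the shape of the curves, not the exact values.

```python
# The complexity-vs-quality argument as arithmetic, with assumed rates:
# code size doubling every ~2 years, per-line quality improving 10%/year.
loc = 1_000_000            # assumed lines of code in a device today
defect_rate = 1.0 / 1000   # assumed latent defects per line of code

for year in (0, 5, 10):
    loc_y = loc * 2 ** (year / 2)        # complexity grows exponentially
    rate_y = defect_rate * 0.9 ** year   # per-line quality improves steadily
    print(f"year {year:2d}: ~{int(loc_y * rate_y):,} latent bugs")

# Total bug count still climbs: per-line quality gains are swamped by
# exponential complexity growth, which is the "losing ground" effect.
```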

SE: What are the other two?

Kocher: There's a very rapid increase in the number of devices. If you go back 15 years, your corporate network was basically a bunch of PCs. If you look at a more modern network, you have a huge heterogeneous mix of different kinds of devices connected to it. The ability to dedicate resources to understanding and maintaining the security of each of those is really limited, both at the end-user layer and on the manufacturer side. There's no good solution to that. The third area where we've lost ground is that the value of the data on systems has gone up a lot, which makes the attackers' job more rewarding. Trends that are good for creating functionality are simply swamping our ability to defend systems and keep them secure. I don't think that's going to change dramatically over the next five years. I don't see any silver bullet coming along that will improve security in a dramatic and rapid enough way even to keep things where they are now. I make the prediction every year that the following 12 months will have more spectacular and worse security breaches than the previous 12 months. That's been a fairly safe bet for the past decade-plus.
