IoT Device Security Makes Slow Progress

Experts at the Table: While attention is being paid to security in IoT devices, still more must be done.

Semiconductor Engineering sat down with Chris Jones, vice president of marketing at Codasip; Martin Croome, vice president of business development at GreenWaves Technologies; Kevin McDermott, vice president of marketing at Imperas; Scot Morrison, general manager, embedded platform technology at Mentor, a Siemens Business; Lauri Koskinen, CTO at Minima; and Mike Borza, principal security technologist at Synopsys. What follows are excerpts of that discussion.

SE: Security is an issue that is paramount in society today. This translates to pressure on the entire semiconductor industry to come up with solutions for privacy and security. With more IoT devices coming to market all the time, what is being done today to address security?

Top: Chris Jones, Scot Morrison, Lauri Koskinen. Bottom: Martin Croome, Mike Borza, Kevin McDermott.

Borza: We’re better off this year than we were last year, so that’s a good start. A lot of the new devices being delivered are starting to pay at least some attention to security. I’d argue that many of them could be doing more than they are, but at least the software is now being designed with the idea that the data these devices produce and consume needs to be secured, and that there are privacy and confidentiality issues associated with it. That’s not universal, but more and more manufacturers are starting to build devices with that notion.

Croome: In the past I worked at a company in the IoT gateway space, which is probably the area where security has hit hardest in terms of public stories. Those stories have been about Linux-based devices and edge devices like security cameras, as well as gateway devices. Security on a sensor has been less of an issue. That doesn’t mean it won’t be an issue in the future, but it has been less of one. There, the problem was just a complete lack of attention. If you look at those devices, they were shipped with default passwords that never got changed, and they were shipped to people who hung them onto the public internet so the devices were visible. What’s worse, they were visibly identifiable. So once some kind of flaw was found in Linux, you could just go to a website, find everyone who had one of those devices, and crack them. Now that there’s a recognition of that, it’s not really rocket science to fix, as long as you don’t take it too far.

In a previous job, I definitely had security officers who were all asking me about quantum computers and insisting on AES-256 [256-bit Advanced Encryption Standard] because they were going to get hacked imminently. While that may well be an issue in the future, it’s really not an issue now. The issues now are very basic: don’t have a default password; don’t let a device respond when it’s directly connected to the internet; make sure you can upgrade with security patches when critical flaws are found.

From an edge perspective, we see a few different things on the security side. There’s the security of the code, both in terms of what’s running and in terms of the owner’s IP, because AI applications are trained networks, and those networks have been trained with data that takes a huge amount of money to collect. If you don’t protect the code, there’s a big problem there. In terms of best practice at the moment, secure boot, encrypted firmware, fuses, memory protection, we’ve got all of those basics covered. The discussion we have internally is really about how paranoid to get, because people can get illogically paranoid about security.
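
As a rough sketch of the kind of boot-time check Croome lists, the fragment below shows the shape of a firmware authenticity test. It is illustrative only: the reference digest and the firmware bytes are made up, and a production secure boot would verify a public-key signature against a root of trust in fuses or ROM rather than a bare hash.

```python
import hashlib
import hmac

# Illustrative reference value. On a real device this would be provisioned
# into OTP fuses or boot ROM so an attacker cannot rewrite it.
EXPECTED_DIGEST = hashlib.sha256(b"known-good firmware image").digest()

def firmware_is_trusted(image: bytes) -> bool:
    """Return True only if the image hashes to the provisioned digest."""
    digest = hashlib.sha256(image).digest()
    # compare_digest keeps the comparison constant-time, so the check itself
    # does not leak how many bytes matched.
    return hmac.compare_digest(digest, EXPECTED_DIGEST)

if __name__ == "__main__":
    print(firmware_is_trusted(b"known-good firmware image"))  # True
    print(firmware_is_trusted(b"tampered image"))             # False
```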

SE: How about IoT security with the addition of AI features?

Morrison: It’s an interesting question. AI does not really affect the basic or “classical” security of devices (things like secure boot, secure onboarding to the cloud, secure communications, etc.), but it raises a new set of security issues due to the fact that AI applications are in some cases replacing humans in the control of machinery.  Autonomous driving is an extreme example, but the same will be true in factories, airplanes, and many other settings. Therefore, the integrity of the AI application itself, and the detection of malware that could affect the behavior of an AI controlled device, becomes even more important than it used to be when any detrimental effects were more locally contained to the device and therefore less severe.

SE: What are some of the things that we should be doing security-wise that we’re not doing yet?

Borza: There’s still too much exposure to software. The secure boot systems and the update systems are weaker than they should be, and there are still a lot of devices, especially things that are moving around a fair bit of data, that are not moving it securely. You still find video cameras that are easy to hack into, so it’s easy to attach yourself to the video stream whether you’re authorized to or not. Those kinds of things are issues. Another thing that goes hand in hand with these small devices is that if they’re small and relatively inexpensive, it’s easy for people who want to hack into them or exploit them to buy one or two, take them apart, and find out what’s inside and how they work. So the physical security is generally lower than it should be if you’re trying to build a product that can withstand even fairly unsophisticated attacks. One of the big reasons people will buy a couple of instances of the hardware to find out what’s in there is to find other weaknesses they can exploit over a network, or that are just generic weaknesses of the product. We are seeing a lot of that going on. If people want to secure themselves against that, they need to take their physical security a little more seriously than they do now.

SE: Based on that, what can be done from a security perspective in order to secure some of the devices that are out there already? What should be done with the devices that are absolutely insecure, totally proprietary so that they can’t be updated? What are we supposed to do with all this stuff? Throw it in a barrel and burn it?

Borza: There are some people who would like you to do that. That’s what planned obsolescence is all about. There isn’t really a silver bullet here. In some cases you can put those devices behind a firewall, or behind an IoT hub that is able to make up for some of the inherent vulnerabilities in the device, but essentially the device is going to have whatever vulnerabilities it has. If it can’t be updated, those vulnerabilities will continue to exist. So if you can’t isolate the device, or put it somewhere that’s not easy to reach from a widely accessible network like the internet, you don’t really have a way to win. For those things we just have to depend on the fact that at some point they’ll be obsolete and unsupported, and people will move away from them, in much the same way that people have replaced their old insecure home routers and WiFi access points with things that have more reasonable security properties.

SE: With so many devices and networks vulnerable, is it possible to be too paranoid?

Croome: In a lot of cases, this equipment has not been designed to be upgraded. It’s just sitting there, and sometimes the company has even gone out of business, so you’re not going to update it. The cat is already long out of the bag.

Jones: Security, at the end of the day, is a system issue. We spend a lot of time talking about things to do at the processor or architectural level, but you can do all of the protections in the world, secure boot and all of those things, and if you screw it up at the system level, your product is not secure. For example, within the RISC-V world there’s a lot of debate within the RISC-V Foundation, where different working groups are looking at security and at things that can be done architecturally to RISC-V. But at the end of the day it still has to be handled at the system level. Further, security means many different things to many different people, whether it’s intrusion detection or cryptography or something else, so it’s hard to address everybody’s paranoia at the architecture level. Tools can let users do novel things like build custom interfaces or custom pins that only one user knows or understands, but there are still things we can do in the Foundation, within the specifications, to better address security at the architecture level. We need to get on the ball and get those things ratified. Also, there are software companies coming into the market with some pretty interesting security solutions to help customers. But even then, unless you address it all the way up to the product, our efforts in the Foundation go unused.

SE: How do you address the fact that the company making the device may not be around to support it?

Borza: That’s a really tough one. Essentially, the code would need to go into some kind of repository of obsolete device code, and then somebody would have to have an interest in maintaining it. You could conceive of building a service like that, but it’s hard to know who’s going to pay for it. The cheapest devices are the old obsolete ones, so the people who bought them in the first place didn’t spend much money on them; they probably aren’t going to pay to make them secure or to reinforce their security. As such, the industry as a whole doesn’t have a big interest in that. We have seen a few things done using the MAC addresses of devices as identifiers for products that are problematic, and filtering the network traffic from those devices out of the network streams. That has limited applications and limited success, but at least it’s something, so we are seeing some efforts like that on the part of device hub manufacturers to fix up some of these things. But there isn’t really a widespread single, or even multipronged, solution that’s going to work for all cases.
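
The MAC-based filtering Borza mentions can be pictured with a small, hypothetical sketch: a hub keeps a blocklist of vendor prefixes (OUIs) belonging to known-vulnerable, unsupported products and drops their traffic. The OUI values below are invented, and real deployments also have to cope with MAC randomization and spoofing, which is part of why the approach has only limited success.

```python
# Hypothetical blocklist of vendor prefixes (OUIs). A real hub would pull
# this from a maintained feed of known-problematic products.
BLOCKED_OUIS = {"00:11:22", "aa:bb:cc"}

def should_drop(src_mac: str) -> bool:
    """Drop frames whose source MAC carries a blocklisted vendor prefix."""
    oui = src_mac.lower()[:8]  # first three octets, e.g. "00:11:22"
    return oui in BLOCKED_OUIS

if __name__ == "__main__":
    print(should_drop("00:11:22:33:44:55"))  # True  -> filter this device out
    print(should_drop("DE:AD:BE:EF:00:01"))  # False -> pass it through
```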

McDermott: In a previous life I was involved with the Thread organization, which is one of these network protocols for managing diverse IoT devices. It’s great for the consumer space. You’ve got a connected lightbulb and you want to make it smart: you scan it with your phone, you onboard it to your private network, you screw it in. But how does that scale to the industrial side of things? Take a hotel, for example. How many light bulbs are in a hotel? This is not something that is managed in-house; it’s leased. There’s a service provider, there’s someone who installs it, there’s management, there’s a food chain here. If we talk about a secure password, who gets the password, and for which part? How do you work with these management structures that are already in place? Somebody could build a smart light bulb today, hand their information to the next level of the food chain, and then go out of business, but that next company has to be responsible for managing a five-year contract of supply and service. So I think there’s going to be a recognition of how to fit into these existing methodologies and existing food chains, and it’s not going to be a guy on a ladder scanning a code on a single light bulb to confirm that one’s in the bedroom and that one’s in the hallway. It’s got to be completely automated, transparent, and very, very simple and secure.

Croome: If the device is relatively simple, then you have less of the problem you’re talking about. But if the device is running a complex OS, then I must admit I think it’s a very difficult question to answer. If that company goes out of business, who is financially motivated to fix the problem? The only way I can see that happening is if the platform the product runs on is some kind of standardized open-source platform that gets patched and fixed, or if there’s some company you mandate to do it. Even that seems very complicated, because coming back to a product that might have been built five years earlier and suddenly having to re-engineer the code to fix a problem could be a massive engineering project.

I think it’s a very difficult question, and the only answer to it is probably simplification. Keep the product as simple as you possibly can for it to perform its function, rather than overcomplicating it. That maybe lends itself toward a more modular software strategy for those products, where you take the bits you need rather than the whole caboodle.

Koskinen: There’s maybe one more threat on the horizon. It’s easier to do exactly what the gentleman said and go through the software layer, but at some point you’re going to see somebody crack an IoT device with just an EM-type side-channel attack. In my academic career I did some hacking-type research and found it is possible to crack a secure embedded device in just seconds when you know what you’re doing. And with EM, you don’t even have to get that close. I don’t think we’ll see that for a while, but at some point, when all this software stuff has been dealt with and those doors have been closed, you will probably see one. And there are things that you can do.

SE: From a consumer point of view, what can be done by the end user of an IoT device?

Borza: There are things the consumer can do right off the bat. If anybody’s still operating their network, especially a WiFi network, without encryption and WiFi security enabled, that’s a big problem for them. They should at least have that enabled, and that at least gets you started at protecting yourself. Right now there’s still a lot of variability in the quality of IoT devices and the quality of the security solutions in those devices, so buying from a reputable manufacturer is a good start. It’s not a perfect solution, but it’s a good start. If the company is new and doing very cheap devices, you should be a little suspicious that maybe it’s not as secure as it could be or should be, and you have to wonder how long that company is going to be around. At the same time, new companies are the ones that produce the most interesting innovations. We’re going to see a lot of them, and some of them are taking security seriously and trying to address this in a reasonable way. Because it’s so early in the market, there isn’t really a consistent security posture across the network or across manufacturers, and even within a single manufacturer’s product line the devices aren’t necessarily consistent. Unfortunately, people have to take more responsibility than they should. We’d like to see these products updating themselves and patching their own firmware with security patches as new vulnerabilities are discovered and fixed. For too many of the devices, that’s still not the case; it’s still something manual the user has to do, and that’s not really a good place to be. You look at the warnings that go along with doing a firmware update on a device, and I’m squeamish about doing it for some of the devices I have, and I’m in the business. What hope does somebody have who really doesn’t think about this stuff, who just buys it, hooks it up, and says, ‘Okay, I’m good to go’? They don’t really have a hope, so industry has to be more responsible about this.

Croome: There was a famous crack against credit cards in Europe in their early days, which was purely a side-channel attack. When you entered the PIN code, the test was done on each digit as you entered it.
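
For readers unfamiliar with that class of flaw, here is a simplified, hypothetical contrast between a per-digit check and a constant-time one. This is not the actual card firmware, and in the real attack the leak was observable per keypress via a side channel rather than via total runtime, but the design lesson is the same: the check’s behavior should not depend on which digit is wrong.

```python
import hmac

def leaky_pin_check(entered: str, stored: str) -> bool:
    """Per-digit check that stops at the first mismatch.

    This is the shape of the flaw being described: the device's behavior
    depends on which digit is wrong, so timing or power measurements can
    reveal the PIN one digit at a time.
    """
    if len(entered) != len(stored):
        return False
    for a, b in zip(entered, stored):
        if a != b:
            return False
    return True

def constant_time_pin_check(entered: str, stored: str) -> bool:
    """Compare the whole PIN at once, in constant time."""
    return hmac.compare_digest(entered.encode(), stored.encode())

if __name__ == "__main__":
    print(leaky_pin_check("1234", "1239"))          # False, but leaks information
    print(constant_time_pin_check("1234", "1234"))  # True
```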

Koskinen: Even there you have to have physical contact, but with a good EM attack and a good antenna you can do it from a few meters away.

Croome: But I think there are ways around this. There are all sorts of approaches, even on the chip design side, but there’s a limit, because there is the question of how much money the hacker gets by doing whatever they’re doing. If it’s something really expensive, they get lots of money. If they get access to your credit card, they’re going to make a huge amount of effort. If it isn’t something that gives them a great benefit, like, ‘Oh, I just found out the temperature in their living room,’ what’s the economic value of that? Probably not high. The security needs to be balanced against what the product is actually doing.

Koskinen: All of that is more or less a pain, but the simplest thing you could do is just run it at very low power and add a mask: something uncorrelated that does the masking. It will cost you some energy, but that’s the simplest thing. It’s not foolproof, but it extends the attack time.
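
As a toy illustration of that masking idea, first-order Boolean masking splits a secret into two shares using a fresh random mask, so no single intermediate value correlates with the secret. The sketch below covers only the data-representation step; a real masked implementation also carries the shares through every operation (for example, masked S-boxes), and first-order masking raises the attacker’s cost rather than eliminating the leak, which matches the point that it’s not foolproof.

```python
import secrets

def mask_byte(secret: int) -> tuple[int, int]:
    """Split a secret byte into two shares: (mask, secret XOR mask).

    Each share on its own is uniformly random, so the power or EM signature
    of handling either share is uncorrelated with the secret.
    """
    mask = secrets.randbelow(256)  # fresh randomness for every use
    return mask, secret ^ mask

def unmask_byte(mask: int, masked: int) -> int:
    """Recombine the two shares to recover the original value."""
    return mask ^ masked

if __name__ == "__main__":
    m, s = mask_byte(0x3C)
    assert unmask_byte(m, s) == 0x3C
    print(hex(m), hex(s))
```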

Croome: You can also add tamper-proofing, but it gets very, very complicated. It gets very expensive, very complex.

McDermott: There is also the economic value, and then there’s the unintended misuse. In another previous life I was involved in a business making very low-cost microphones. Think of the ones in a mobile phone, with three or four in there and quite complicated DSPs behind them. You could put these in a motion detector, and because it’s sound, they can be put in many different things. It’s great for that particular purpose. If you want to put one in a washing machine to detect the motor revs, it’s probably not a complicated sensor for that job. But to the hacker it’s still a microphone: it could record this conversation, so you need to be careful.

Croome: That points toward doing the processing as close to the source as possible. If you can decide on the sensor itself that the washing machine has started, or that that’s the sound of breaking glass, and the backhaul from that sensor is so low-bandwidth that it’s impossible to get the sound off it, then that becomes a really good way of inherently protecting the security of the device. So if you’ve got a low-speed network connected to the device, where it’s impossible to actually get the raw data off, and the device is processing the information so it’s not available on the network interface, then I think you’re much more protected from both the privacy and the security perspective.
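
A hedged sketch of that architecture follows. The classifier is a stand-in (a real sensor would run the trained network Croome mentions, on-device), and the function and event names are invented; the point is simply that only a small event record ever reaches the network interface, never the raw audio.

```python
import json
import time

def classify_audio(frame: bytes) -> str:
    """Stand-in for an on-device trained network; purely illustrative."""
    return "glass_break" if b"\xff" in frame else "background"

def report_events(frames):
    """Process raw audio locally and emit only tiny event records.

    The raw frames never leave this function, so a low-bandwidth uplink is
    physically incapable of carrying the audio itself.
    """
    for frame in frames:
        label = classify_audio(frame)
        if label != "background":
            # A few dozen bytes per event instead of a continuous audio stream.
            yield json.dumps({"event": label, "ts": time.time()})

if __name__ == "__main__":
    for msg in report_events([b"\x00\x01", b"\xff\x10"]):
        print(msg)
```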

 

Related Stories

Tech Talk: HW Security

IoT Merging Into Data-Driven Design

Open-Source RISC-V Hardware And Security


