First of two parts. Does your design contain a Trojan? Most people would never know, and few have the means to find out.
Until recently, very few companies had to worry about security. Over the past couple of years, though, we have seen increasing evidence that our connected systems are vulnerable. The recent distributed denial-of-service (DDoS) attack that made many Internet sites unavailable has focused attention on Internet of Things (IoT) devices, such as digital video recorders and cameras, that have Internet access.
One chip manufacturer, Hangzhou XiongMai Technologies, was singled out as the source of last year’s cyber attack that blocked access to some of the biggest web sites in the world. The company recalled 10,000 webcams in the wake of that attack.
XiongMai is hardly the only company making vulnerable products. Many chipmakers and device makers never update their software to fix even known problems. As such, they represent a huge ongoing risk, and there is mounting pressure to make the companies producing these products liable for the damages.
Exerting that kind of pressure is difficult, though. Users who buy vulnerable devices have difficulty proving harm, and Internet companies affected by those devices don’t own them. And it’s unlikely that Internet companies would attempt to sue users of those devices. The legal system has yet to catch up.
Thankfully, most of the semiconductor industry is taking security more seriously these days. But how did the industry find itself in such a vulnerable position?
“Historically, the embedded market was technology that was limited in its connectivity to the outside world or in its ability to download and update code,” says Majid Bemanian, director of networking and storage, segment marketing for Imagination Technologies. “If your TV malfunctioned, you just had to power-cycle to remove the problem. In severe cases, you returned the TV to the shop for replacement or repair. Today, if the TV is a smart TV running a rich OS rather than simple dedicated hardware functions, any malfunction provides an opportunity for the platform to be compromised.”
Software is indeed part of the problem. “In the past, a company worried about the quality of their product in terms of RTL and verification, but it was in a constrained environment,” says Simon Davidmann, CEO of Imperas. “In the software world you build a device and it will typically sit connected to other devices and the world can attack it. Most people who build embedded products have no clue about the software stacks they have in their product. This is part of the problem. People download them, put them in their product and while it enables functionality, they don’t realize how open some of these things are and how people could break in.”
But software isn’t the whole problem. “When you design hardware, you need to make sure that other things are not put in there,” says Jeff Hutton, senior director for automotive business solutions at Synopsys. “The problem is that if you are not looking for them, typically you won’t find them. We write verification tasks for what we designed, but if someone added a snippet of design into the chip, then you never test for it.”
So how likely is it that your hardware contains a Trojan? “It is almost impossible to find malicious hardware that adds functionality in the design if it includes third-party IP blocks,” says Serge Leef, vice president of new ventures at Mentor Graphics. “Visual examination of the IP is not practical, and trying to find corner-case behaviors through simulation is also unlikely because you do not know what to look for, which is the whole point of the malicious code. So if something sneaks into the design, it is not findable.”
At the same time that systems are becoming connected, the SoCs powering the products are also changing. “When we put more functions together onto the same die, and considering the cost to design one, people tend to know the primary functions they wish to support,” says Drew Wingard, CTO at Sonics. “But there may be other behaviors that might be interesting to add later. That results in an architecture where things are pretty connected even though the fundamental use cases may not value those. It is cheap to make things more connected, and you are betting that you may find something useful for it later. But the goal to make the systems open and extensible really goes opposite the direction of making them limited to only their intended purposes.”
Today we are also seeing another type of flexibility being added to some SoCs. “FPGAs are used a lot in aeronautical designs,” says David Kelf, vice president of marketing for OneSpin Solutions. “They are the ultimate in flexible devices, and it is not hard to plant something into one.”
Synopsys’ Hutton agrees. “Mil/aero is getting increasingly concerned about FPGAs, where there are many gates that can be programmed, and a snippet of a program can be added without anyone knowing it is being burned into the FPGA. One thing they do is put every unused gate into a known state, which can then be tested to verify that no gate has been used for another purpose.”
Kelf says that other companies are looking to go further. “One company wants to make sure that no Trojan could be loaded into a RAM that gets loaded into an FPGA. They wanted a check put in place to ensure the contents of the memory have not been changed. You could put a checksum in, but it is too easy to defeat that by taking out pieces of code and putting in others that create the same checksum. They can get around all of the classic methods to check them. So they are looking for a formal algorithm that checks the design structure itself.”
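The checksum weakness Kelf describes is easy to demonstrate. The sketch below (an illustrative toy, with invented contents) uses a sum-of-bytes checksum and shows an attacker folding the difference created by tampered code into an unused padding byte, so the modified image passes the check:

```python
def checksum8(data: bytes) -> int:
    """Sum-of-bytes checksum modulo 256 (a deliberately weak scheme)."""
    return sum(data) % 256

# Hypothetical firmware image ending in an unused padding byte.
original = bytearray(b"LOAD R1, KEY; JMP MAIN;\x00")
good = checksum8(original)

# Attacker swaps in malicious code...
tampered = bytearray(b"LOAD R1, KEY; JMP EVIL;\x00")
# ...then folds the checksum difference into the padding byte.
tampered[-1] = (good - checksum8(tampered)) % 256

assert checksum8(tampered) == good   # checksum matches
assert tampered != original          # yet the contents changed
```

Cryptographic hashes resist this kind of adjustment, but as the quote notes, the company in question wanted structural checks on the design itself rather than any content digest.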
Finding unintended behavior
You might imagine that there would be several tools available for such a universal problem, but that is not the case. “Doing a threat analysis as part of the Verification Plan is a must,” says Wingard. “It is the experts who have the best idea of where the vulnerabilities might be and the properties that a system should have to make it secure. An RTL design team is not likely to know that.”
Much of the responsibility for finding unintended functionality is directed toward the verification team. “There are some things that the verification team can do from a safety standpoint to make sure that it won’t do some things that you didn’t intend,” says Hutton. “This is not a security breach, but in many cases it is a matter of, ‘Did it do something that I didn’t know about and didn’t intend to happen, but I never tested the behavior from this state?’ If that state occurred, would the system handle it?”
Many immediately look to formal to solve this problem. “Formal is ideal for this problem because you can ask things such as, ‘Will X ever happen?'” says Kelf. “In simulation you can’t do that. It is hard enough for simulation to show that something does happen, but to prove that something can never happen is impossible.”
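The distinction Kelf draws can be sketched with a toy reachability check (an illustrative Python sketch, not a formal tool; the FSM and its “unlock debug” state are invented). Exhaustive exploration of the state space can prove a state never occurs, whereas simulation can only show states that do occur:

```python
# Toy FSM: states 0..7, inputs 'a'/'b'. State 6 is a hypothetical
# "unlock debug" state. Exhaustive reachability answers "can state 6
# ever happen?" definitively; random simulation cannot.
TRANSITIONS = {
    (0, 'a'): 1, (0, 'b'): 2,
    (1, 'a'): 3, (1, 'b'): 0,
    (2, 'a'): 2, (2, 'b'): 4,
    (3, 'a'): 5, (3, 'b'): 1,
    (4, 'a'): 0, (4, 'b'): 5,
    (5, 'a'): 5, (5, 'b'): 5,
}

def reachable(start=0):
    """Breadth-first exploration of every state reachable from start."""
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for inp in ('a', 'b'):
            nxt = TRANSITIONS.get((s, inp), s)  # undefined input: stay put
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

print(6 in reachable())  # False: state 6 is provably unreachable
```

Real formal tools work on RTL with symbolic methods rather than explicit enumeration, but the question they answer is the same: “Will X ever happen?”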
But not everyone buys this answer. “Formal is a flawed approach because you are looking for something such as a particular kind of exploit,” says Leef. “It is good housekeeping practice to do this kind of check, but consider a third-party IP block with a case or switch statement that branches on an 8-bit register value. If only a few cases are actually used, that leaves 250-plus additional states in the state machine where an attacker could insert logic that may be reachable through paths not included in the verification strategy.”
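Leef’s 8-bit register example can be made concrete with a hypothetical mode decoder (all names and the trigger value below are invented for illustration). Only a handful of values are documented and verified; everything else falls into a default branch where inserted logic can hide:

```python
# Hypothetical 8-bit mode register. Only four values appear in the spec
# and therefore in the verification plan.
DOCUMENTED = {0x00: "idle", 0x01: "run", 0x02: "sleep", 0x10: "test"}

def decode(mode: int) -> str:
    if mode in DOCUMENTED:
        return DOCUMENTED[mode]
    if mode == 0xA5:                  # hypothetical Trojan trigger value
        return "unlock_debug_port"    # appears nowhere in spec or tests
    return "reserved"

# How many register values does the test plan never exercise?
unverified = [m for m in range(256) if m not in DOCUMENTED]
print(len(unverified))  # 252 values, any of which could hide logic
```

Verification driven only from the specification will never stimulate 0xA5, which is exactly Leef’s point: you cannot search for behavior you do not know to look for.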
Frank Schirrmeister, senior group director for product management and marketing at Cadence, believes that a tiered approach is necessary. “There are layers. Portable Stimulus sits at the highest layer and enables you to verify pure functionality. As you go down you get to formal, where you can act on assertions. They can be executed dynamically in simulation, where the tests are developed, and then you verify the assertions formally. At a lower level you can also check for the security vulnerabilities. With safety and security you can go further down and you get to fault simulation. Can I potentially get the design, in the presence of faults, into a state in which it is no longer secure?”
Portable Stimulus (PS) is a new verification methodology based on a verification intent model that may see a lot of development in 2017. “PS is a graph of all behaviors,” explains Adnan Hamid, chief executive officer of Breker Verification Systems. “So it is an executable specification. The graph could map into a transaction-level style of assertion that could be extracted, and these assertions could then be formally proven and drive coverage from the formal tool back onto the graph. We can also ask about the unintended functions. You could define scenarios you don’t want to see appear in the chip and generate the tests for that. Then you could see if all of the security mechanisms trigger correctly.”
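The idea of a behavior graph as an executable specification can be sketched in miniature (a hedged toy, not the PS standard; the node names and forbidden behavior are invented). Walking the graph enumerates test scenarios, and any path that reaches a forbidden behavior becomes a security test to generate:

```python
# Toy "verification intent" graph: nodes are behaviors, edges are legal
# sequences. Enumerating root-to-leaf paths yields test scenarios.
GRAPH = {
    "boot":       ["load_fw", "debug_mode"],
    "load_fw":    ["run_app"],
    "debug_mode": ["dump_keys"],   # hypothetical forbidden behavior
    "run_app":    [],
    "dump_keys":  [],
}
FORBIDDEN = {"dump_keys"}

def scenarios(node="boot", path=()):
    """Yield every complete behavior sequence through the graph."""
    path = path + (node,)
    if not GRAPH[node]:
        yield path
    for nxt in GRAPH[node]:
        yield from scenarios(nxt, path)

for s in scenarios():
    tag = "FORBIDDEN" if FORBIDDEN & set(s) else "ok"
    print(tag, "->".join(s))
```

A real PS tool solves constraints over a far richer graph, but the principle Hamid describes is visible here: the same model generates both the intended-behavior tests and the “scenarios you don’t want to see.”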
But we cannot forget about the software. “The state space in software is much larger than for hardware, and unintended behavior is a much larger challenge,” says Davidmann. “And if you are picking up software that you have not written yourself, you have to provide a much better verification environment. There are techniques such as fuzzing, which is nothing more than injecting inputs into the code. What we learned in hardware can be used for software, but it is more difficult. Software has a much bigger problem with unintended behaviors and vulnerabilities.”
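Fuzzing, as Davidmann describes it, really is just injecting inputs. The sketch below (an illustrative toy; the frame format and parser bug are invented) throws random bytes at a parser that trusts a length field, and random inputs quickly expose the unintended crash a hand-written test plan would miss:

```python
import random

def parse_header(buf: bytes) -> int:
    """Toy frame parser with a latent bug: it trusts the length byte."""
    if len(buf) < 2 or buf[0] != 0x7E:
        raise ValueError("bad frame")      # expected rejection path
    length = buf[1]
    payload = buf[2:2 + length]
    return payload[length - 1]             # IndexError if buf is shorter
                                           # than the length byte claims
random.seed(0)                             # deterministic for illustration
crashes = 0
for _ in range(10_000):
    data = bytes(random.randrange(256) for _ in range(random.randrange(1, 8)))
    try:
        parse_header(data)
    except ValueError:
        pass                               # graceful, intended rejection
    except IndexError:
        crashes += 1                       # unintended behavior, found by fuzz
print(crashes > 0)
```

Production fuzzers such as coverage-guided tools are far smarter about choosing inputs, but the principle is the same: hammer the interface with data no test plan anticipated and watch for behavior no one intended.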
The second part of this article will focus on some of the steps people are taking to reduce the potential for unintended behaviors and the changes necessary in industry best practices to combat them.