The semiconductor ecosystem is beginning to identify security holes, which tools can be used to plug them, and what else is needed.
Intellectual property (IP) security is coming under increasing scrutiny as concerns about system and hardware security escalate.
For IP, this is particularly critical because commercially available IP touches many players in the semiconductor and software ecosystem. IP users want to ensure they are using the IP as the provider intended and that they are protected against malicious code. IP vendors, meanwhile, need to know their IP isn’t being tampered with or propagated by unlicensed users. But how to achieve both of these ends isn’t always clear.
To some extent, at least, this is a verification problem. Whether existing verification tools are up to the extra task is a matter of debate, but the general consensus is that at least this doesn’t all have to start from scratch.
“Verification is the perfect solution for this, and more accurately formal verification and assertion-based verification,” said Joe Hupcey, Questa product marketing manager at Mentor Graphics. “In assertion-based verification you can write an assertion or property — you can very concisely express the relationship between signals — and then the simulator or the formal verification tool will take that. In the case of a simulator, it will always keep an eye out for whether the relationship between the signals is what the user specified in the assertion code. If that defined relationship is violated, the assertion fires to say it found a violation. For example, the request should always come before the grant that allows a transaction to happen, but not vice versa. If the reverse happens, there is a bug.”
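In practice such assertions are written in a language like SystemVerilog Assertions, but the request/grant rule Hupcey describes can be sketched in plain Python. This is a hypothetical illustration, not a real tool's API: it replays a recorded trace of (req, gnt) signal samples and flags the first cycle where a grant appears without an outstanding request.

```python
# Hypothetical sketch of the request/grant assertion described above.
# A trace is a list of (req, gnt) samples, one pair per clock cycle.

def check_req_before_gnt(trace):
    """Return the cycle where the assertion fires, or None if the trace is clean."""
    pending = False
    for cycle, (req, gnt) in enumerate(trace):
        if req:
            pending = True          # a request is now outstanding
        if gnt:
            if not pending:
                return cycle        # assertion fires: grant with no prior request
            pending = False         # the grant consumes the outstanding request
    return None                     # no violation found

# Legal trace: every grant follows a request.
ok = [(1, 0), (0, 1), (1, 0), (0, 1)]
# Buggy trace: a grant appears before any request.
bad = [(0, 1), (1, 0), (0, 1)]

assert check_req_before_gnt(ok) is None
assert check_req_before_gnt(bad) == 0
```

This mirrors what a simulator does with an assertion: it watches one trace at a time. A formal tool, by contrast, proves the same property for all possible traces.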
The challenge is in creating the assertion in the first place to identify a problem. Done right, it can determine if a particular behavior does or doesn’t occur. If the circuit is behaving in a strange way due to the addition of malicious code, assertion-based verification can detect that. But the assertion has to be created to look for that.
“It’s not exhaustive, because you have to write assertions for a whole variety of scenarios, just as in constrained-random test you have to write a whole series of constraints and random tests to vary the parameters and see that this is behaving the way it was specified,” said Hupcey. “There can still be holes in that. The malicious code could still be hiding in there. That’s where using assertions with formal is the real key to success, because a formal tool will take those assertions and say, ‘I need to prove that the behavior between the signals, as this assertion is written, holds for all inputs and for all time,’ and it does this exhaustively.”
Assertions can also be used for quality assurance when delivering IP to licensees, almost like verification IP. The user accepts IP from the supplier, and when the IP is added into a system the assertions ensure the IP is being used properly. “For example, they plugged in the signals the wrong way, or the bus they tie it to is a 16-bit bus and not a 32-bit bus,” Hupcey explained. “The assertion will fire and say, ‘Hey wait, what are you doing? How come you have a 32-bit bus?’ In a similar manner, if something is out of kilter, an assertion can fire in a simulation. And then when you apply formal, it exhaustively searches the state space and says, ‘Hey, what’s going on here? This is different from what you told me was supposed to happen, and here’s exactly what’s going on.’”
This approach also can work for security path verification, where the user specifies the proper paths that sensitive data can take. “It can only go from A to B to C, whether that’s your encryption key, your digital signature that will be used to sign the packets, or even something like a patient’s therapeutic parameters in their pacemaker,” he said. “Those are all the settings for when the pacemaker should fire. They are different for everybody, and you just want to make sure that only the right components can read that information and it can’t be corrupted in another manner. It doesn’t mean someone is trying to steal it. It’s that nobody can override it accidentally. The big difference here is that certain paths can be formally verified, meaning the tool exhaustively does a mathematical proof that only the paths you specify are legit. Presumably, if some attacker tries to insert something, that will be detected. It’s like, ‘Wait, there’s another path here all of a sudden that does not match the specification.’”
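The core idea behind security path verification can be reduced to a spec-versus-design comparison. The sketch below is a deliberately simplified, hypothetical Python model (real formal tools prove this exhaustively over the RTL): the specification lists the only legal hops for sensitive data, and the check flags any connection in the design that falls outside it, such as an inserted tap.

```python
# Hypothetical model of security path verification. The spec says sensitive
# data may only flow A -> B -> C; anything else in the design is illegal.

ALLOWED = {("A", "B"), ("B", "C")}   # specified legal data paths

def find_illegal_paths(design_edges):
    """Return connections present in the design but absent from the spec."""
    return sorted(set(design_edges) - ALLOWED)

clean_design = [("A", "B"), ("B", "C")]
tampered     = [("A", "B"), ("B", "C"), ("B", "DEBUG_PORT")]  # inserted tap

assert find_illegal_paths(clean_design) == []
assert find_illegal_paths(tampered) == [("B", "DEBUG_PORT")]
```

The names and the `DEBUG_PORT` tap are invented for illustration; the point is that an attacker-inserted path shows up as a difference between the design and the specification.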
Regular testbenches can help, as well. They’re relatively sophisticated, so a hacker would really have to know how to get through them. At the same time, because the circuitry is so complex, it’s easier to hide malicious code without it being detected.
IP has always been a buyer-beware market, but historically the concern was primarily about the IP working and being fully characterized. Security is a whole new slice of risk for IP customers, and the more IP and the more complex the chip into which it is being integrated, the greater the risk.
“If I’ve put malicious code in, I’m going to hide that malicious code from the verification suite that I provide, so there’s not a whole lot that the receiver can do with the normal deliverables that would come from the IP creator,” said Drew Wingard, CTO of Sonics. “But a lot of blocks don’t have the ability to do very malicious things because of the way they live in the systems. If you’re a UART, you’re kind of a dumb peripheral. I can put as much malicious code as I want to in there, but the only thing I really do is raise my interrupt line more often than I should. I could be a nuisance, or I could muck with the data I’m supposed to be sending or that I’m supposed to be receiving. There’s little you could do there unless it is, of course, a very, very secure path and the data you are transiting is really valuable. But in the case of most of these UARTs, you don’t have a port through which you could try to store the information you were stealing.”
The risk changes significantly with an embedded processor, which can be used for a wide variety of purposes. “With those, the risks of malicious code are real,” Wingard said. “And there are two kinds—malicious hardware and malicious software. There’s not a whole lot I can do with verification IP to test against malicious software. That takes another approach, so the things you look at very closely are, first of all, do you believe that the provider you are working with is reliable and that the code is not going to infect you? Second, how do you make sure that the delivery of this happens in a reliable fashion so that nothing gets injected along the way?”
There are some techniques that can be applied to the actual memory image of the software code that will be compiled for the processor. One involves a checksum, a long string of numbers and letters that acts as a fingerprint for a particular file, to ensure the integrity of that file after it has been transmitted from one storage device to another. Another involves some other difficult-to-fake mark to certify the code is authentic.
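The checksum technique can be sketched in a few lines using SHA-256 from Python's standard library. The file contents here are invented placeholders; the mechanism is what matters: the IP provider publishes the digest over a separate channel, the recipient recomputes it, and any mismatch indicates the file was modified in transit.

```python
# Minimal sketch of file-integrity checking with a cryptographic hash.
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the hex SHA-256 digest that acts as the file's fingerprint."""
    return hashlib.sha256(data).hexdigest()

# Placeholder contents standing in for a delivered IP file.
original = b"module uart(...); endmodule\n"
tampered = b"module uart(...); /* backdoor */ endmodule\n"

assert fingerprint(original) == fingerprint(original)   # integrity preserved
assert fingerprint(original) != fingerprint(tampered)   # modification detected
```

Note that a bare hash only detects accidental or in-transit modification; a malicious IP provider could simply publish the hash of the tampered file, which is why the article also mentions signatures and other difficult-to-fake marks.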
Wingard noted that IP providers need to make sure they are delivering that IP through a separate path to avoid interceptions and modifications. “There are similar techniques that have been deployed for RTL code on the hardware side, and there are classes of watermarking technologies that have been described. Sometimes they are used for IP identification, but you can also use them for essentially producing a unique signature that you can then match.”
And, in fact, solutions are being developed to ensure that IP doesn’t get modified. IPextreme last month introduced a fingerprinting application for IP aimed at making sure companies understand which IP version is being used in their products. But the technology also can identify what’s been changed. “This has almost become a big data problem,” said Warren Savage, the company’s president and CEO. “But we’ve also been getting a lot of interest from automotive and mil/aero companies because it can be used to detect if the IP has been modified or a back door has been added. We scan the IP, but you cannot reverse engineer it from the scan.”
This attention to IP protection is new for many IP makers. While makers of processor cores have been working on securing their IP for some time, other IP vendors are just now beginning to consider it seriously.
“We’re seeing a lot of interest in [IP protection], and we’ve started to contemplate using some things like digital watermarking to watermark IP products—and to provide those IP products with certificates wrapping them, in order to be able to assure the users that what they’re getting is the exact product the IP provider intended to deliver to them,” said Mike Borza, former CTO of Elliptic Technologies and now a member of the technical staff at Synopsys.
Synopsys clearly understood these emerging issues, as evidenced by its acquisitions of Coverity, Elliptic and other security-related companies and technologies.
From a big-picture standpoint, all of this points to greater IP integrity, noted Manish Pandey, chief architect, new technologies at Synopsys. “This is very important because the IP customer needs to know that there are no hidden backdoors or other such things in there. Another aspect is, once you have this IP, how do you ensure that it’s doing the right thing? Take, for example, a piece of IP that will be used as a building block for an application in the automotive space or IoT. There, security is a very important aspect — you don’t want anybody to be tampering with these devices that will be storing private keys, encryption keys — and ensuring that things like key leakage do not happen requires a comprehensive simulation-based verification solution. Virtual prototyping also plays a role here in order to verify the design in the context of its operating system.”
In addition, he observed that one of the problems with developing solutions is that security knowledge involves different corner cases, is fairly specialized, and requires the hardware and software to work in concert. “You can’t just think of the security on the chip without the software. With traditional verification approaches, this requires that we have the right transactors, the right error cases injected, the right buffer overflows injected. It can be done, but it requires a great deal of knowledge to build it the right way, making sure the right use cases are built and then verifying them.”
There is increasing recognition that security, reliability, and safety are different aspects of the same kinds of system properties. Coupled with that is an increasing demand for more reliable operation of so many aspects of IP and the system, and for tools that can help identify potential and real problems.
While these aspects traditionally were thought of in a vacuum, those days are gone. Security and reliability are now system-wide challenges, and increasingly will involve devices that connect to those systems. The semiconductor industry as a whole is on a steep learning curve, but much work is already underway to build systems where security is part of the baseline checklist.