Tackling Safety And Security

Experts at the Table: Who is responsible for safety and security and what can we do as an industry to make it better?


Semiconductor Engineering sat down to discuss industry attitudes towards safety and security with Dave Kelf, chief marketing officer for Breker Verification; Jacob Wiltgen, solutions architect for functional safety at Mentor, a Siemens Business; David Landoll, solutions architect for OneSpin Solutions; Dennis Ciplickas, vice president of characterization solutions at PDF Solutions; Andrew Dauman, vice president of engineering for Tortuga Logic; and Mike Bartley, chief executive officer for TV&S. This is the third part of this discussion. Part one can be found here, and part two is here.

SE: Are we actually doing everything we could for security and safety? The standards call for somewhat antiquated methodologies and techniques, and we know better. Is this state-of-the-art, and are we really doing the best we can? Or is it just what the industry believes it can tolerate?

Landoll: I hear that in DO-254 all the time. There is a push and pull. A lot of people doing implementations know that the energy being spent is not energy to make the design safer; it is energy to get the compliance signed off. If I could put the energy into doing something else, I could actually make it safer. But I am not allowed to do that, because it won’t be approved.

Wiltgen: If you look at the Mentor/Wilson Research survey, and the number of bugs that escape into production, even in safety-critical applications compared to non-critical applications there is not a huge difference. This makes you wonder what these standards are really doing. Why is the process not working?

Landoll: Because it is not focused on bugs. It is focused on meeting requirements and demonstrating that you comply to those. If there is an error in the requirements, you are saying that this accurately reflects that error. If there is a bug – well, you don’t care about that. Nobody counts them.

Bartley: Do we need a different definition of what a bug is?

Kelf: With a plane, only the pilots get to see some of the new technology. As passengers we don’t really care, we just care about the quality of the video. In a car, we are looking for the next fancy thing such as better radar. Car manufacturers are driven by features, and that is how they advertise them. So there is that pressure on one side, and making it safe on the other.

Landoll: One of the primary issues is that in automotive, they are trying to squeeze every penny. When a supplier says that they have a system and they can make it 10X safer, but it will double the price…

Kelf: Yes, you could decide to be the Volvo.

Landoll: But how much more is a consumer willing to pay for additional safety? How do you measure that?

Dauman: A lot of original compliance, especially in automotive and aerospace, came from regulation. For people who are not under those umbrellas, will we collaborate to make all of the rest of the system and network – IoT devices – safer, or will everything remain independent? Today, that is not happening because it is not regulated. If there is a community that is industry-driven, one that is trying to push a combination of security and functional safety, then perhaps that moves faster, because nobody wants to wait for regulation.

Landoll: I am not sure you can solve it with regulation. It feels as if regulation is a heavy hand and will always be late. Regulators tend not to be on top of the latest technology breakthroughs. They can’t be. So we have to find a way to motivate the industry itself to do the right thing – to do the best thing, because if they don’t, they will go out of business.

Ciplickas: Self-certification.

Landoll: Right. I don’t know how to enforce that or make it play out. I hope it is not by cars being released and crashing, and because of that they go out of business.

SE: With an automobile, you have something that is big enough and expensive enough that there are plenty of parameters you can play with. When you buy your next smart lightbulb, will you check the security levels? And who will even tell you about it?

Dauman: As a consumer, you will buy by reputation.

Landoll: Not always.

Dauman: Okay, but often. When there are a plethora of options and one of them has a reputation, you might gravitate towards it.

SE: How do you measure reputation?

Landoll: A lot of lightbulbs will be installed by builders that put them into new construction. They will likely just pick the cheapest vendor. They want to sell an Internet-connected house, but you could now be buying a house with 200 spam servers.

Dauman: This holds for all consumer products. There are review agencies and independent auditors. There is more peer review for every product today, and while not perfect, it tends to work. People communicate what they like and don’t like, and capitalism takes it from there.

SE: Regulation always comes too late. For a lightbulb you have to pass standards that relate to it blowing up.

Bartley: The standards tend to be more on the physical side of things and safety – not security. Nobody certifies for security today because the certification agencies have a brand reputation that they have to protect. If they certify for security and it turns out to be insecure, then their reputation is lost. They have built that over 150 years certifying crash helmets, panes of glass, lightbulbs, and they don’t want to touch security. There are people today selling lightbulbs with Arm processors in them.

Dauman: Security is measured not by being secure, but by having best practices. You have done everything that you could do. Did I do everything reasonable before I shipped the product? It doesn’t guarantee security. You can’t guarantee security. It is about a measure of comfort.

Bartley: If someone buys a cheap one, from a company that is new to the market, they don’t care about best practices.

Kelf: So you will end up with the media story again. My bank account was hacked by my lightbulb. Great story!

Ciplickas: We are two or three levels removed from the consumer. So what do we offer to the people who are designing the stuff that goes into products that can become the best practice?

Landoll: Do we even have a coherent message that we can tell people: ‘This is what you ought to do to adhere to best practices’?

Ciplickas: Take the problem of whether my home controller listens to me and broadcasts it to someone – I can’t solve that. But we can figure out what we can offer that increases the level of scrutiny they apply and the verification they have in there.

Kelf: And there is good thinking going on within the industry. There are people focusing on this problem and trying to come up with solutions.

SE: There will always be the unknown unknowns. The only way to address those is to make every product more complex in that it can now be updated, which creates more vulnerabilities. If we don’t allow things to evolve, then we arrive at where we are today with an insecure framework.

Bartley: You have to build in the ability to upgrade. And you have to find ways to do it securely. There is no way to get it right today, but it has to be fixable in the future.

Landoll: We have the necessary technology today to enable secure upgrades, but it has to be done by knowledgeable teams using some of the latest tools and techniques. I don’t think it’s even widely known within our industry.

Dauman: Right now, it is pretty disjointed. There is the architecture of the chip, the design and implementation of it, and the verification of it. There is the system it goes into, which has its own mechanisms for doing that upgrade. And then there is the route from the source of the upgrade to the device. They are all done independently today. What we can provide is to start building those pieces, specifying the design and methodologies to build each of them correctly, and then make them less disjointed.

Landoll: And making sure there are no gaps between them.

Ciplickas: We do have the technology. I come from the manufacturing side of things, where it used to be a race for processor speed. Then we found out about the speed-versus-security tradeoffs. AI is growing because we have all of this bandwidth and all of this compute power. So now we can do things that we never thought were possible.

SE: AI is inherently insecure, though.

Ciplickas: It was an example. If you look at the AI algorithm and how it works, it is very brute-force and requires a ton of processing power. That was not feasible 20 years ago, but now processors are that powerful, or you can build a custom processor that implements the algorithm in a straightforward way. The flows are there, security and safety aside. The technology is there to build it. So if that becomes the priority – back to consumers pushing the companies, who are then introspecting about how to do it – then they have what it takes on the manufacturing side to make a run at it.

SE: What is the biggest problem the industry has to deal with today, and how do we solve that?

Dauman: Look at it on a timeline, in historical perspective. If you go back to about 1990, functional verification was still in a nascent phase compared to where it is today. The ratio of designers to verification engineers has flipped because of the importance of functional verification. It took 15 or 20 years to get there. It took investment by the chip companies saying, ‘I have to tape out and have fewer re-spins.’ The technology to make that possible – better simulators, constrained random, formal verification – all had to occur over time. Where are we with safety and security? Functional safety is way ahead in places like avionics and automotive, but it has to be applied everywhere. Companies will start making that investment because they are hitting inflection points, but the community providing all of the pieces of the tools has to work together. If there are gaps, then there are holes. We have to have complete solutions that span end-to-end design, implementation, field deployment, and upgrade, and that plug all of those gaps. We all have pieces to contribute, but it has to overlap more.

Kelf: We need to see a shift from verification to integrity. What is integrity? You bring in functional verification, plus infrastructure verification that we don’t talk about much, but which is a key thing. Does the chip work? Does cache coherence work? Then we have to address safety, and then security. All of them have a bearing on each other. They build on each other. We are at a point where safety is relatively well understood, although there is still a way to go. Security is a mash of so many issues that we are still trying to get our heads around, but we will. The real question is – what is the business driver? We understand safety is driving the automotive industry, but who else is driving security? We just started hearing about it from a base-station provider. Are we seeing that from enough customers to make it a priority and spend a lot of money on it? It will take money and effort. When the business attitude changes, it will drive us to do more across the applications and segments, not just the ones with obvious safety issues.

Ciplickas: What I have been seeing is that manufacturing, design, system build, and deployment used to be separate. The line is moving and bubbling up. Systems companies are building things themselves. They are building their own chips. The datacenter provider makes the racks that contain the boards that contain the chips they designed. The problem with security is an opportunity. How can you use this new capability that they have, as system providers, to co-design all of this? What is best done in software, and what in hardware? What is best done between chips, as opposed to on a chip? In a wired network, or a wireless network? How do you get integrity? How do you service it? The opportunity is how to use this new capability that is present today and wasn’t before.

Bartley: I go back to cost of failure. That drives business. If you go to a board and say this is how much it will cost us if we are not safe or secure – this is our risk exposure – then money will be put into it. The reason we moved so quickly in functional verification was the cost of failure. Companies had to invest in it, and as safety and security become more prevalent, people will start to put money into them. That will build competence, companies that can deliver on it, and products that are aware of it. The ability to upgrade in the field will become more important.

Landoll: What can our industry do? It gets down to ease of use and adoption. We talk about techniques that can be applied, and best practices, but if we do not package them in a way that can be disseminated to design and verification engineers – so that they become experts by virtue of using the right tools and methods, without having to be experts in that domain – I don’t think we can propagate these things. I am still talking to customers who are using directed test. They think UVM is too complicated. I see people with bad design practices. Even though we know today how to create a solid design and verification flow, there are people who don’t know how to do that. Now the flows are getting more complicated, and we are talking about adding security and safety, when some teams can hardly complete a design. Part of it is a training problem, so we need to do a better job of articulating to our customers why they should pay to bring their engineers up to speed on the latest ideas and methods.

Wiltgen: I disagree with the statement about safety being well understood. There is a vast difference between the customer base developing ICs and the people who are developing safety workflows, and then there are the guys who have been in the business for a while. One of the challenges is educating customers that safety is a holistic approach. There is a supply chain, and there is a traceability need across that supply chain. You need the toolchain in place, and you need to know how all of the pieces are connected. That is the challenge today. It cannot be point solutions that each do a specific task. There needs to be a workflow in place to tackle the problems.


