All-in-One Vs. Point Tools For Security

Security is a complex problem, and nothing lasts forever.


Security remains an urgent concern for builders of any system that might tempt attackers, but designers find themselves faced with a bewildering array of security options.

Some of those are point solutions for specific pieces of the security puzzle. Others bill themselves as all-in-one offerings that fill in the whole puzzle. Which approach is best depends on the resources you have available and your familiarity with security, as well as the sophistication of the attackers and the complexity of the attack surface.

“We’re still in the dark ages, trying to catch up to an adversary that seemingly is always coming up with a new and better approach to break into a system long before we’ve even thought about being able to check on it,” said John Hallman, product manager for trust and security at OneSpin Solutions. “We need to understand what are the characteristics that would jump this race back closer into the realm where we might be able to better attack the attacker.”

Point tool providers claim they do a better job at their specialties than is possible for a company that’s doing the whole thing. Meanwhile, all-in-one providers offer to solve the complete security problem in one fell swoop. There are even all-in-one solutions that license and incorporate point tools that are available separately. Some solutions are tied to specific hardware platforms, others are generic. It can truly be overwhelming to contemplate all of the possibilities, but at least there are some basic building blocks in place.

“Security is always a system question,” said Helena Handschuh, a fellow at Rambus Security Technologies. “You have to consider how your device, or your chip, or even lower down your IP, fits into the rest of the system. So, of course, you have to ask yourself more questions. What are the new threat models around the new vertical you’re trying to go into? That will change a number of things. But fortunately you can have some basic building blocks that are always kind of the same to solve security aspects. And those ones can be built with the same type of architecture. Then it’s a question of performance and throughput. But regardless of whether that’s going to work or not, the basics are always the same. You need some crypto, you need cryptography algorithms, and you need acceleration if performance or bandwidth is going to be an issue. And you need to have some notion of a trusted execution environment.”

Security everywhere
There are innumerable lists of requirements for declaring a system or process to be secure. Figuring it all out from scratch can take time for companies with little experience in security.

“[Designers need to ask,] ‘What security levers do I want to pull?’ That could take a month,” said Erik Wood, director of the IoT secure microcontroller product line at Infineon. “Then they do some algorithm work, importing drivers. ‘Do I want to use ECC? RSA?’ Then they figure out how to manage their keys. So they’re about two months in, and maybe at the end of that they spend $50,000 talking to a consultant to say, ‘Should I do it this way?’ It takes about eight months, even if it’s based on open source.”

For many engineers, this is a whole new challenge. “I’ve done IoT designs my entire career, and we knew how to make great smart meters,” said Mark Thompson, senior vice president of product management at Keyfactor. “We knew how to make great med devices or sensors — whatever we were doing. We didn’t know anything about security.”

Scott Jones, managing director for the Micros, Security & Software Business Unit at Maxim Integrated, summarized the ongoing challenge: “The attack capabilities are never-ending, and we are on a constant treadmill to stay ahead.”

All in all, the security measures put in place must satisfy a number of high-level goals, as illustrated in the following image.

Fig. 1: Seven important aspects of security. Source: Microsoft/Rambus

Security has become critical at every stage of the lifecycle of a system, but it can be broken down across seven different categories:

  1. Planning and architecture. This is where risk assessment is done and threats are identified. One must consider the big picture here, since an asset of interest may not lie within the device being designed. If the device is, for example, a thermostat, then yes, you need to consider whether someone might maliciously try to change the settings. But someone may also try to hack in only to get onto the network, making it easier to access other assets that may have nothing to do with the thermostat.
  2. Design supply chain. This establishes whether all of the pieces of a design — IP, libraries, etc. — are known to be “clean.” The final system will be only as clean as its components.
  3. Hardware security. This includes integrated hardware roots of trust (HRoT) and external secure elements (SEs). Perhaps a trusted execution environment (TEE) is needed, or cryptography accelerators. Protected zones in memory are also established here.
  4. Software security. This involves anything that wasn’t done in hardware as well as all of the associated stacks. Of particular importance here is secure boot, which is tied to hardware.
  5. Manufacturing supply chain. This is where trusted (or well-vetted) suppliers are identified. They could be silicon fabs, packaging houses, testing facilities, and board-stuffing houses. This is of particular concern for any suppliers involved in the security-provisioning process.
  6. Manufacturing and provisioning. This is where individual devices acquire their unique identifiers and keys. Those keys are typically registered in a database, unless they’re generated internally by a mechanism like a PUF.
  7. Ongoing monitoring and updates after deployment. This includes on-boarding (where the device phones home for initial registration and activation), monitoring for attacks, and over-the-air (OTA) updates to improve security.

At various stages, it’s also helpful to have bashing sessions where the security is tested by exposing it to experts that know how to exploit any weaknesses. It’s also worth noting that our focus here is on processor-centric flows. “Security in an FPGA is very different from security for processors,” said Geoff Tate, CEO of Flex Logix. “For many reasons it is much easier to ‘hack’ into processors.”

Of these, all but the first and the fifth have security products that can help with management. In the planning stage, there would appear to be no shortcuts or automation available as an alternative to thorough planning. There are known issues to look for and checklists to check, but each system and its intended deployment must be considered unique, so the difficult up-front work cannot be glossed over.

Likewise, vetting a manufacturer is a matter of qualification, and there are various proofs and pieces of evidence that prospective customers are going to look for. This potentially could be daunting for a company newly involved in building a secure system, while experienced companies will likely have internal best practices that drive their vetting process.

But even they mess up sometimes. “The model of security is that someone is trying to do something bad,” said Alric Althoff, senior hardware security engineer at Tortuga Logic. “The attack vectors are often misconfigurations and simple mistakes. Diversity in a design can cause problems, and what we’ve seen is that no one sacrifices speed for anything. So if you can get the same customer satisfaction for less cost, are you willing to deal with potential incompatibility? And what will you pay for something intangible, like security? In order to make that secure, you need to get into an audit chain with software, standards and compliance.”

Designing in security
The remaining segments involve either development or purchase of hardware, software, or services, and three of those reflect design-time considerations. The first establishes the provenance of externally sourced hardware and software. Part of this involves vetting in the same way that manufacturing contractors must be qualified. Knowing that code or IP has been faithfully delivered is of no value if you don’t have inherent trust in the provider.

Given trusted suppliers, it’s important to ensure deliveries haven’t been tampered with in transit. Software (or design information in soft form — even GDSII format) must be signed to assure its integrity. That signature can be verified on receipt to confirm that nothing changed en route.

Signing software involves keys, and those keys must be managed in a way that allows their use without letting them out into the wild. While that might seem obvious, software teams intently focused on getting new code out may struggle to invest in the infrastructure needed for securely signing code.
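To make the handshake concrete, here is a minimal sketch of sign-and-verify, assuming the recipient already holds the supplier’s public key obtained out of band. It uses Ed25519 signatures from the pyca “cryptography” package; the in-memory key and the byte-string deliverable are simplified stand-ins, not a production signing flow.

```python
# Minimal sketch: a supplier signs a deliverable (code, IP, even GDSII) and the
# recipient verifies the signature before trusting it. Real signing keys live
# in an HSM or signing service and never appear in application code.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Supplier side: sign the exact bytes being shipped.
supplier_key = Ed25519PrivateKey.generate()
deliverable = b"...contents of the IP block or firmware package..."
signature = supplier_key.sign(deliverable)

# Recipient side: verify with the supplier's public key before accepting.
supplier_pub = supplier_key.public_key()
try:
    supplier_pub.verify(signature, deliverable)
    print("Signature verified; nothing changed en route")
except InvalidSignature:
    print("Reject the delivery: contents do not match the signature")
```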

Hardware, however, isn’t so straightforward. There is no signing mechanism. “For the actual silicon, the usual threat model is that of evil foundry or infiltration of the supply chain,” said Matjaz Breskvar, CEO of Beyond Semi. “Let’s assume I send GDSII to the foundry and get silicon back some months later. Being able to run scan [testing] will allow me to detect many, but definitely not all, backdoors. If I am really concerned about the silicon, I can use imaging and reverse-engineering techniques and compare a few randomly sampled ASICs with the GDSII.”

The next two items — hardware IP and software stacks — work together to build a system that is inherently secure by design. Often referred to as a secure execution environment (SEE) or a secure enclave, such an architecture provides isolation and limited access for critical functions.


Fig. 2: An example of a secure execution environment. Source: Rambus

An important first goal is to provide a multi-phase secure boot operation that makes it difficult for an attacker to replace good code with bad. The first phase pulls code from ROM and sets up the basic infrastructure for verifying the remaining code. The ROM code, which attests all other code, has to be immutable and must be attested by the hardware — for instance, by a signature burned into e-fuses. Above that, code signatures can be recalculated and compared with known signatures to confirm that all code is legitimate before loading the OS and any operational applications.
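A minimal sketch of that chained check follows, assuming the expected digest of the first mutable stage is anchored in hardware (for instance, in e-fuses) and each verified stage carries the anchor for the next. The stage names and contents are hypothetical, and a real implementation would verify signatures rather than bare hashes.

```python
# Minimal sketch of a multi-phase boot chain: before handing off, each stage
# measures the next stage and compares it against an anchor held by the
# current, already-trusted stage.
import hashlib

def measure(image: bytes) -> str:
    return hashlib.sha256(image).hexdigest()

bootloader = b"first-stage bootloader"
os_image   = b"operating system image"
app_image  = b"operational application"

# In silicon these anchors are fixed at build/provisioning time; they are
# computed here only so the example runs end-to-end.
rom_anchor        = measure(bootloader)   # stand-in for a value burned into e-fuses
bootloader_anchor = measure(os_image)     # embedded in the verified bootloader
os_anchor         = measure(app_image)    # embedded in the verified OS image

for name, image, anchor in [("bootloader", bootloader, rom_anchor),
                            ("os", os_image, bootloader_anchor),
                            ("app", app_image, os_anchor)]:
    if measure(image) != anchor:
        raise SystemExit(f"Boot halted: {name} failed verification")
    print(f"{name} verified; handing off")
```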

The hardware element has several components. The most important is the partitioning of the system into secure and non-secure portions. This is a harder problem for very small, inexpensive systems that can’t afford the higher-end microcontrollers with built-in features like Arm’s TrustZone.

At the very least, memory must be partitioned in a way that provides some isolation for sensitive code and data. “You can use the submarine example, where you have all these doors that close off hatches and individual locations,” said Wood. “If you happen to spring a leak here, you don’t want it to flood the whole place.” Keys must be stored somewhere, unless they’re natively generated using a PUF, and it must be impossible for outsiders to view those keys.

This is particularly important for small systems where the cryptography is performed in software. Because such sensitive computations must be done away from prying eyes, they need to be executed only within the secure enclave, where the main system OS has no access to them.

“When crypto is performed in software, the software runs only in the TrustZone and is completely isolated from Linux,” said Infineon’s Wood. “The hardware isolates the memory used for TrustZone, so it’s not accessible from Linux. In addition, some chips provide on-the-fly encryption when using external RAM so that hardware snooping is also not possible.”

Any other mission-critical applications that shouldn’t be snooped can also be executed here. “Many of our customers are adding very valuable code to the devices,” said Larry O’Connell, vice president of marketing at Sequitur Labs. “That code simply cannot be compromised, can’t be stolen, can’t be corrupted.”

Secure elements can help with this. They’re specifically designed to be robust against attacks, and they never release the keys. All operations involving keys are performed internal to the SE, so the only I/O that can be snooped would be the inputs to a cryptographic operation and the result. “Bus snooping is a valid threat for external secure elements. Tamper detection can help to mitigate it,” said O’Connell.
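The property being described, with keys generated and used entirely inside the element, can be illustrated with a small software stand-in. This is only a sketch of the interface shape, not a tamper-resistant implementation; the class and method names are hypothetical.

```python
# Sketch of the secure-element interface idea: the key is created inside the
# element and never crosses its boundary. Callers see only the inputs to a
# cryptographic operation and its result, which is what a bus snooper on a
# real external SE would see.
import hmac, hashlib, os

class SecureElementSketch:
    def __init__(self):
        self._key = os.urandom(32)           # generated internally, never exported

    def mac(self, message: bytes) -> bytes:
        # Keyed operation performed "inside" the element.
        return hmac.new(self._key, message, hashlib.sha256).digest()

se = SecureElementSketch()
print(se.mac(b"boot image chunk").hex())     # input and result are all that is visible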

Security during manufacturing
Once the design is complete, a whole new security regime is engaged. The certificates used by IP and software suppliers will give way to a certificate for the entire system, signed by the system builder. That becomes the root of security as the systems are manufactured and used.

Each system unit must have a unique identity, and that identity is established during manufacturing. During the process, the unit is provided with one or more keys. If the system leverages a PUF for that key, then instead of installing keys, the PUF enrollment process establishes the key (and any helper data needed to reliably read that key throughout the life of the system).

For systems receiving a key, the process must take place in a highly secure and trusted facility, typically using trusted platform modules (TPMs) or a secure cloud connection. The system identity and key information must then be securely stored for later use during system operation. The existence of a centralized location for all of the key information creates a tempting target, so the method of that storage must be extremely well protected.
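A minimal sketch of that provisioning step is shown below: each unit receives a unique identifier and key, and the pair is recorded so the device can be authenticated later. The in-memory registry and names are hypothetical; a real flow runs in a trusted facility with HSM-protected storage, and a PUF-based design would enroll helper data instead of injecting a key.

```python
# Minimal sketch of per-unit provisioning during manufacturing: give each unit
# a unique identity and key, and register the pair for later authentication.
import os, uuid

registry = {}   # stand-in for the builder's protected key database

def provision_unit():
    device_id = str(uuid.uuid4())            # unique identity for this unit
    device_key = os.urandom(32)              # injected device key (PUF designs enroll instead)
    registry[device_id] = device_key         # registered for use during on-boarding
    return device_id, device_key             # written into the unit's secure storage

dev_id, _ = provision_unit()
print(f"Provisioned unit {dev_id}; registry holds {len(registry)} record(s)")
```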

This provisioning capability is a service sold by various providers, and it can continue through the life of the product. Alternatively, some services can be transferred so that the system builder can take control using its own data center.

Security during operation
Finally, the system is shipped and operated for the remainder of its life. On its first start-up, the device typically needs to establish a connection with the cloud for so-called on-boarding. That process confirms the identity and key content of the system and enables its ongoing use. This is where the provisioning service carries over into the operational life of the product.
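A minimal sketch of such an on-boarding check, assuming the symmetric-key provisioning sketched earlier: the service issues a fresh challenge and compares the device’s response against the key registered at manufacturing time. All names are hypothetical, and real services typically rely on certificate-based mutual authentication rather than a bare HMAC exchange.

```python
# Minimal sketch of on-boarding as a challenge/response against the key
# registered during provisioning. Both "sides" run in one process purely
# for illustration.
import hmac, hashlib, os

device_id = "unit-0001"
device_key = os.urandom(32)                   # provisioned into the device's secure storage
registry = {device_id: device_key}            # copy held by the on-boarding service

def device_respond(challenge: bytes) -> bytes:
    # On the device, ideally executed inside the secure enclave or an SE.
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

# Service side: fresh challenge, compare against the registered key.
challenge = os.urandom(16)
expected = hmac.new(registry[device_id], challenge, hashlib.sha256).digest()
ok = hmac.compare_digest(device_respond(challenge), expected)
print("On-boarding", "succeeded" if ok else "failed")
```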

Once in operation, monitoring — also a service — can identify attempted attacks. “We’d like to get not only health and diagnostic metrics, but also any evidence of a threat against the device,” said O’Connell. Exactly what is monitored — and how that data is distributed — raises privacy considerations. In theory, there’s no technical limitation on what data can be sent to the monitoring agent in the cloud. The question becomes one of policy rather than technology.

“From a business policy perspective, we do not want to see that [non-security-related data],” said Abhijeet Rane, vice president of business development at Sequitur Labs, citing liability concerns.

An important capability during the operational life of the system is over-the-air updating. While this sounds primarily like a communications challenge, security is critical in ensuring that the updated code is legitimate. “If you want to be able to update the software, you need to have some way of checking the authenticity of the new software that replaces the currently running firmware,” said Nicole Fern, senior hardware security engineer at Tortuga Logic.

Code signatures must be attested in the system before the update is accepted. “Nothing should be changed until there is a certificate associated with a new deliverable,” said Tom Katsioulas, head of trustchain business at Mentor, a Siemens Business.

If the update fails in any way while being installed, it’s best to have a failover mechanism so that the device can revert to the prior revision. Some systems may even keep older versions resident so that any issues encountered during operation can cause a failover without the need for re-downloading and installing the prior image. Such a fail-over strategy will require more storage, but it can be a savior for mission-critical functions.

Fig. 3: An example of fail-over in the event of a boot failure with an updated version of the system. Source: Sequitur Labs
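The update-verification and failover ideas described above can be sketched as follows: accept an update only if its signature verifies against a key the device already trusts, and keep the previous image in a standby slot so a failed boot can fall back without re-downloading. The slot handling is heavily simplified and hypothetical; signatures use the pyca “cryptography” package.

```python
# Minimal sketch of signed OTA updates with A/B failover. The vendor key is
# generated inline only so the example runs; on a real device only the public
# key is present, anchored by secure boot.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

vendor_key = Ed25519PrivateKey.generate()
trusted_pub = vendor_key.public_key()          # baked into the device image

slots = {"A": b"firmware v1.0", "B": None}     # slot A is currently active
active = "A"

def apply_update(image: bytes, signature: bytes) -> str:
    global active
    try:
        trusted_pub.verify(signature, image)   # reject anything the vendor didn't sign
    except InvalidSignature:
        return "update rejected: bad signature"
    standby = "B" if active == "A" else "A"
    slots[standby] = image                     # write only to the standby slot
    active = standby                           # switch; the old image remains for failover
    return f"running slot {active}: {slots[active].decode()}"

def revert_after_boot_failure() -> None:
    # Watchdog or boot-counter path: fall back to the previous slot.
    global active
    active = "B" if active == "A" else "A"

new_image = b"firmware v1.1"
print(apply_update(new_image, vendor_key.sign(new_image)))
```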

The cloud integration aspects — whether during manufacturing or operational life — themselves can be confusing, because no two clouds operate identically. “There’s one piece that the cloud vendors leave out, and that is how to get credentials into the device, and their advice to the customer is talk to your certificate provider,” said Sequitur’s Rane.

Others cite similar problems. “Cloud vendors are telling customers to talk to their silicon vendors and vice versa,” said O’Connell. “It’s sort of a mystery for our customers to solve.”

Each cloud has its own protocols and APIs, and a given system may end up with different stock-keeping units (SKUs) for devices targeting AWS or Azure or any other cloud service. “I have the AWS IoT Core library and the Azure core library as well,” said Wood. “The customer would have to write some customized code in there that says, ‘With this header, you’re being asked to deliver through this cloud, therefore, channel your pipe that way.’ Now I have a SKU for AWS, and I have a SKU for Azure. Each one of them wants an attestation done with their own little secret sauce.”
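A small sketch of why that happens: the connection and attestation path is selected per target cloud, and each branch pulls in a different SDK and credential flow, which is what pushes builds into separate SKUs. The endpoints and attestation labels below are hypothetical placeholders, not the real AWS or Azure procedures.

```python
# Hypothetical per-cloud profiles showing how the attestation/connection step
# diverges by target.
CLOUD_PROFILES = {
    "aws":   {"endpoint": "mqtts://things.example-aws.invalid:8883",
              "attest": "x509-mutual-tls"},
    "azure": {"endpoint": "mqtts://hub.example-azure.invalid:8883",
              "attest": "dps-enrollment"},
}

def connect(target: str) -> None:
    profile = CLOUD_PROFILES[target]
    # Each branch would load a different SDK and credential flow here.
    print(f"{target}: connect to {profile['endpoint']} using {profile['attest']}")

connect("aws")
connect("azure")
```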

Finally, at the end of the system’s useful life, comes a step that is often neglected — revoking the certificates so that they can’t be re-used by an attacker trying to feign legitimacy with active credentials.

What does all-in-one mean?
It would seem to be pretty obvious what an all-in-one offering means: all of the pieces of the security puzzle, for the entire lifecycle of a system, acquired from a single company. And yet it’s not quite that simple. Various companies claim to be all-in-one, and yet elements are missing for at least some types of systems.

The benefit of such an all-in-one offering is that it removes much of the security burden from the design team. Small, nimble organizations will want to focus all of their energy on creating applications that perform the main missions of the system. Security is a “peripheral” consideration — like an operating system. You need it, but you don’t want to be spending your time on it.

So, for small organizations, having one company handle everything can mean both that more energy goes into system functionality and that less time is required to study up on security, helping speed systems to market.

“There aren’t that many security engineers in the world,” said OneSpin’s Hallman. “There’s a lot of benefit to being able to use one tool throughout the flow. But you’re also going to lose out on the innovation of the different point solutions.”

That gets to the strength of point solutions: Because the provider focuses exclusively on a piece of the security picture, they may have a “better” offering. “This is our bread and butter,” said Admir Abdurahmanovic, vice president of strategy and partners and co-founder of PrimeKey. “We are very, very keen on never screwing up and on following the standards implemented in [lesser-known places like] Kuwait thoroughly.”

The challenge lies in making sure that all of the pieces are in place and then interconnecting them into a smoothly running security system. Hardware IP will have its interconnects well documented, and software will make its APIs available, but, as there are no standards at a low level, there’s no getting around the need to study every piece so that it can be put together properly. “[A system integrator has to] integrate another company’s best-in-class tool,” said Hallman. “And that integrator will take on that role of making the common interfaces between each of these disparate functions.”

But the distinction between all-in-one and point tools is also not clean. Keyfactor considers itself an all-in-one provider, but it has chosen PrimeKey for its cryptography technology. In other words, some all-in-one offerings may feature technology that’s also available as a point tool.

The other caveat is that some all-in-one offerings are tied to a specific platform. Sequitur Labs’ solution is focused, at least for the moment, on Arm processors with TrustZone, which are higher-end cores. Infineon, via its acquisition of Cypress, has a full security suite tied to its PSoC 64 devices. The intent of the offering is the same as that of any all-in-one suite — making it easy for designers. But, in these cases, the capabilities cannot be implemented on just any hardware or SoC.

Fig. 4: A secure execution environment provided for a specific family of devices. Source: Arm/Infineon

Verifying security
Regardless of whether one assembles point solutions or engages an all-in-one solution, the resulting design must still be verified to ensure that the security is working as expected.

Formal verification is finding traction here because it can trace a potential weakness across a system, and even a system of systems. This is particularly valuable in safety-critical applications such as automotive.

“We’re getting a lot of requests as we move toward autonomous cars,” said OneSpin’s Hallman. “There are a lot of concerns about what can go into ICs, what can be verified in the ICs, and how well it can be verified. There are a lot of checks that need to be in place.”

Verification has long played an important role in mil/aero, as well, but typically that was pre-deployment. That is changing to deal with regular updates to algorithms and software.

“There’s a real opportunity for continuous verification after the IC has been fielded and is in the system,” said Hallman. “They’re starting to use the digital twin idea of having a virtualization of your system. After you find some vulnerability, you test it in that realm and make on-the-fly updates. Could you reprogram it? Could you change something virtually in the silicon and see very quickly and verify what that change will do? That concept is catching on. We’re going to need some type of continuous verification plan throughout a system’s lifecycle.”

This is essential because not every vulnerability can be identified in the architectural phase. Fern gave an example of an error in connecting up a TrustZone peripheral. “If you unintentionally ground the secure bit, which means that all the transactions are coming from a secure source, then you’ve essentially elevated the privileges of all of the bus transactions,” she said. Pre-silicon verification has access to all of the internal nodes of a chip design, so both formal and simulation/emulation solutions can be used.

Openness may make this verification easier. “We’re seeing a lot of emphasis on more transparency,” said Fern. “There are several initiatives, one spearheaded by Google called OpenTitan, that attempt to make a completely transparent, open-source hardware root of trust. The idea is that the more eyes on this design there are, the better verified it will be and the more secure we’ll be.”

Designing for security also bears some resemblance to designing for safety in vehicles and other systems where loss of life is a possibility. “Security and safety trace problems that come from different sources, but they can both have the same impact on a vehicle,” said Kurt Shuler, vice president of marketing at Arteris IP.

All in all, the security environment is not mature enough to have easy, off-the-shelf answers. “Security can mean a lot of things — and different things to different people,” said Flex Logix’s Tate.

Security for each device being designed must be planned out from scratch. “I can’t see one solution being able to cover all of the possible threat models, devices, use cases, and cost points,” said Fern. “And I think that might be why the ecosystem is so fragmented.”

There will be best-in-class point tools for some of the needs. Comprehensive solutions may also address many or all of the needs. And some of those all-in-one solutions may leverage other well-known point tools for key aspects of the suite. “They both have their own place,” said Neeraj Paliwal, vice president and general manager of security at Rambus. “And if one goes the point-tool route, then it’s good to seek the services of a consultancy, because many times when we are designing chips we don’t necessarily have an army of security experts on the payroll.”

With no one right answer, and with offerings changing regularly, designers must revisit the security landscape for each new design.

Related
Fundamental Changes In Economics Of Chip Security
More and higher value data, thinner chips and a shifting customer base are forcing long-overdue changes in semiconductor security.
What Makes A Chip Tamper-Proof?
Identifying attacks and protecting against them is still difficult, but there has been progress.
Battling Persistent Hacks At The Flash Level
Protecting the code involves more than just the processor.
Hardware Attack Surface Widening
Cable Haunt follows Spectre, Meltdown and Foreshadow as potential threat spreads beyond a single device; AI adds new uncertainty.


