Chips Can Boost Malware Immunity

Building security into hardware, and utilizing best practices, can make it much harder for hackers.


Security is becoming an increasingly important design element, fueled by more sophisticated attacks, the growing use of technology in safety-critical applications, and the rising value of data nearly everywhere.

Hackers can unlock automobiles, phones, and smart locks by exploiting soft spots in system design. They can even hack some mobile phones through always-on circuits while the phones are turned off. Earlier this year, Okta, a security firm that provides authentication services to many companies, also was hacked.

One critical vulnerability, CVE-2022-1654 (mitre.org), rated 9.9 out of 10 in severity, put 90,000 websites at risk of being completely controlled by hackers. Even more alarming, much of this activity can slip by security software completely undetected. Enterprise security information and event management (SIEM) tools, an always-on class of cybersecurity analytics, could not detect 80% of cyberattack techniques, according to CardinalOps.

On the positive side, built-in hardware security will help developers strengthen system defense.

Trust no one
Threat actors routinely pretend to be somebody else, particularly legitimate users with credentials to access networks. No part of a network or system should grant access easily until the request is fully authenticated. Even after authentication, access should be granted only once, for a particular asset, at the time it is requested. In addition, authentication must be backed by credible information such as account passwords, account credentials, and keys (API, SSH, encryption).

A relatively new concept called “zero trust” is being deployed to increase cybersecurity at all levels. The National Institute of Standards and Technology (NIST), part of the U.S. Department of Commerce, defines zero trust (ZT), in part, as “an evolving set of cybersecurity paradigms that moves defenses from static, network-based perimeters to focus on users, assets, and resources. A zero-trust architecture (ZTA) uses zero trust principles to plan industrial and enterprise infrastructure and workflows.”

Zero trust assumes there is no implicit trust granted to assets or user accounts based solely on their physical or network location or based on asset ownership. So local area networks would not be treated any differently than the internet, and no distinction would be made between enterprise and personally owned networks. Authentication and authorization (both subject and device) are discrete functions performed before establishing a session to an enterprise resource.

Zero trust focuses on protecting resources – including assets, services, workflows, and network accounts – not network segments. The network location is no longer seen as the prime component of the resource’s security posture.

NIST published its Zero Trust Architecture guidelines, NIST SP 800-207 (nist.gov), in August 2020 to help enterprises defend against cyberattacks. NIST also has been instrumental in advancing international cybersecurity with its Cybersecurity Framework, which is widely used as a common cybersecurity language.


Fig. 1: The National Institute of Standards and Technology (NIST) defines the core zero trust logical components in its Zero Trust Architecture guidelines. Source: NIST

The zero trust policies define many user and application attributes to make it extremely difficult for imposters to fake their identities. The policies address users’ identities and credentials, whether human or machine, including device privilege, hardware type, location, OS and firmware versions, and installed applications.

Compared with a traditional static method, this dynamic process has increased cybersecurity significantly. Even if one asset is breached, the damage is confined.
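
To make the idea concrete, here is a minimal Python sketch of a zero-trust policy decision. The attribute names, checks, and allow/deny rules are illustrative assumptions, not part of the NIST specification; a real policy engine evaluates far more signals and re-evaluates them continuously.

```python
# Minimal sketch of a zero-trust policy decision point (illustrative only).
# Attribute names and rules are hypothetical; real deployments use many more
# signals and re-check them for every request.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool      # e.g., MFA completed for this session
    device_registered: bool       # device identity known to the enterprise
    firmware_up_to_date: bool     # OS/firmware at an approved version
    requested_resource: str       # the single asset being requested

def authorize(request: AccessRequest) -> bool:
    """Grant access only for this request, on this resource, right now."""
    checks = (
        request.user_authenticated,
        request.device_registered,
        request.firmware_up_to_date,
    )
    if not all(checks):
        return False  # any failed signal denies access; nothing is implicit
    print(f"Granting one-time access to {request.requested_resource}")
    return True

if __name__ == "__main__":
    req = AccessRequest(True, True, False, "payroll-db")
    print("Access granted" if authorize(req) else "Access denied")
```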

How chips boost immunity to malware
Many of these cybersecurity defense mechanisms (zero trust, secure boot, authentication, secure key management, and side-channel attack protection) that used to be performed by enterprise software are now being done automatically at the chip or firmware level inside the device. That not only increases the overall performance of the system, but also makes it more secure. It’s important to keep in mind, however, that there is more than one way to implement chip security.

“Everybody needs basic levels of security with secure boot, secured storage, and secured communications,” said Steve Hanna, distinguished engineer at Infineon. “All users and devices should be required to authenticate before they connect to sensitive servers or networks. For advanced protection, features like secure key management, NFC protection, Trusted Platform Modules (TPMs), and especially side-channel attack protection can now be implemented and integrated in separate silicon or a chiplet. For developers who are not security experts, the best approach is to hire a security professional to do a security evaluation. They will examine the security of the system to find weaknesses and assess the overall level of security relative to the needs. Then they will issue a security certificate under a scheme like Common Criteria so that customers can know how secure the product is.”

This is true for all types of hardware. “There are several different ways to handle zero trust with eFPGAs,” said Ralph Grundler, senior director of marketing and architecture solutions at Flex Logix. “One method employs obfuscation, where the confidential parts of the design are not programmed until manufacturing is complete. Other techniques use an eFPGA for encryption or as an electric watermark. Using an eFPGA makes dynamically modifying the algorithm or watermark key possible.”

The shift from a software-only solution to a combination of both is a big shift. “Cybersecurity begins with silicon security,” said Scott Best, director of anti-tamper security technology at Rambus. “It is impossible for a high-level application (e.g., a mobile payment app) to verify the authenticity of the hypervisor if the hypervisor is malicious. All of the layers implicitly depend on trust, and the only way to ensure trust throughout the system is to start with security anchored in hardware at the chip level. If you trust the silicon, then every ‘layer’ of software can be cryptographically signed and verified using a trusted public-key infrastructure to ensure authenticity of every layer in the stack, from CPU firmware all the way up through the hypervisor and high-level application. Without secure silicon, provisioned and tracked by a secure supply-chain system, and without a trusted public-key infrastructure, it is impossible to prove that something essential in the system isn’t a malicious substitute.”

Why is secure boot important?
Even before zero trust is implemented, “secure boot” needs to be in place. Malware attempts to tamper with operating systems, bootloaders, and boot ROMs by altering the operating system or bootloader code. If a computer or an embedded system is infected, malware will take over during the reboot process, and the whole system, including the network, will be compromised. Therefore, it is critical to secure the boot process, aka secure boot. Secure Boot is a feature of the Unified Extensible Firmware Interface (UEFI) 2.3.1 specification, which defines the interface between the operating system and the firmware/BIOS, and it is supported by Windows and many other platforms.

With secure boot, the hardware is preconfigured with trusted credentials. The owner of an embedded system, for example, first generates a key pair using asymmetric cryptography: a private key, known only to the owner, and a public key, which can be shared openly and is provisioned into the device. The two keys are linked mathematically, so a signature created with the private key can be verified with the public key, but the private key cannot practically be derived from the public one. The bootloader code is signed with the private key, and before that code executes, the device checks the signature using the public key stored in hardware.

Even if a threat actor attempts to sabotage the bootloader with malware, the attacker does not possess the private key and therefore cannot produce a valid signature. When the device detects a missing or invalid signature, the boot process stops.
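
Below is a minimal Python sketch of this sign-and-verify flow, using the open-source cryptography package. The bootloader image and key handling are simplified placeholders; in a real device the verification runs in boot ROM or firmware against a public key fused into the hardware, and the private key never leaves the signing facility.

```python
# Minimal sketch of secure boot signature verification (illustrative only).
# The vendor signs the bootloader with the private key; the device stores only
# the matching public key and refuses to run anything that fails verification.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
from cryptography.exceptions import InvalidSignature

# --- At the factory / signing server (private key never ships with devices) ---
private_key = Ed25519PrivateKey.generate()
public_key_bytes = private_key.public_key().public_bytes(
    Encoding.Raw, PublicFormat.Raw
)  # this value is provisioned (e.g., fused) into the device

bootloader_image = b"bootloader bytes (placeholder)"
signature = private_key.sign(bootloader_image)

# --- On the device at power-up -----------------------------------------------
def secure_boot(image: bytes, sig: bytes) -> None:
    device_public_key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
    try:
        device_public_key.verify(sig, image)  # raises if image or sig was altered
    except InvalidSignature:
        raise SystemExit("Secure boot: invalid signature, halting")
    print("Secure boot: signature valid, jumping to bootloader")

secure_boot(bootloader_image, signature)
```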

Secure key management, which underpins authentication, is essential. Good practices include splitting keys into multiple pieces so the complete key is not known to any one individual, setting keys to expire automatically after a defined use period, and avoiding hard-coding keys.
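
As a toy illustration of key splitting, the Python sketch below divides a key into random shares that are individually useless and recombine (via XOR) into the original key. Production systems would typically rely on an HSM or a formal secret-sharing scheme such as Shamir’s; this only demonstrates the principle.

```python
# Toy illustration of splitting a key into shares (not production code).
# Each share alone reveals nothing; XOR-ing all shares reconstructs the key.
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, num_shares: int) -> list[bytes]:
    """Split 'key' into shares; all shares are needed to reconstruct it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(num_shares - 1)]
    final_share = reduce(xor_bytes, shares, key)
    return shares + [final_share]

def combine_key(shares: list[bytes]) -> bytes:
    return reduce(xor_bytes, shares)

if __name__ == "__main__":
    key = secrets.token_bytes(32)       # e.g., a 256-bit AES key
    shares = split_key(key, 3)          # give each custodian one share
    assert combine_key(shares) == key   # only all three together recover it
    print("Key successfully split and reconstructed")
```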

A common cyberattack tactic is the side-channel attack. Hackers can deploy different means to eavesdrop on a target device electronically, such as monitoring its power consumption or electromagnetic emissions. By detecting the signal patterns the device generates, they can decode passwords and identify authentication methods.

“Side-channel and fault attacks are inexpensive for hackers to deploy, and they can potentially extract keys or gain full control over a device,” said Jasper van Woudenberg, CTO of Riscure in North America. “Both attack classes require layered protection, ideally all the way from protocols through software and down to silicon. Countermeasures at the silicon level exist. For the actual resistance offered by the countermeasures, the devil is in the details. Naïve implementations may not actually be effective, or may come at a great overhead — or both. Like functional verification, it’s imperative to determine countermeasure effectiveness during design, as well as to determine the minimum number of places to apply them with the most effect. Simulation techniques exist that help a designer pinpoint exactly where potential weaknesses are, in order to make issues detectable and fixes actionable.”

Once hardware primitives with known side-channel or fault resistance are available, they can be used by systems programmers to secure firmware or applications. “For instance, a cryptographic core may support wrapped keys,” van Woudenberg said. “Software can now securely handle encrypted and signed keys without access to the actual keys. When key unwrapping is implemented in silicon using side-channel protection, such an architecture can support key management from distrusted software and from a distrusted environment where hardware attacks are possible. Similarly, a debugging block may support unlock through a passphrase, while it protects against fault attacks through redundant verification of that passphrase. In short, the security, and the total cost of getting to that security, depends not only on the security of the individual hardware and software components, but also on how they are combined.”
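
The wrapped-key idea can be illustrated with the standard AES key wrap (RFC 3394) available in the Python cryptography package. This is only a software model of the concept; in the architecture van Woudenberg describes, the unwrapping happens inside side-channel-protected silicon, and software never sees the plaintext key.

```python
# Sketch of the wrapped-key idea (illustrative only): software handles only the
# wrapped (encrypted) key blob; unwrapping would occur inside protected silicon.
import secrets
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

kek = secrets.token_bytes(32)      # key-encryption key, held by the hardware
app_key = secrets.token_bytes(32)  # application key to be protected

# The hardware (modeled here in software) wraps the key; software stores and
# transmits only the wrapped blob and never handles 'app_key' in the clear.
wrapped = aes_key_wrap(kek, app_key)

# Later, the protected hardware unwraps it internally for a crypto operation.
assert aes_key_unwrap(kek, wrapped) == app_key
print("Wrapped key round-trip OK; plaintext key never exposed to software")
```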

Security requires extensive testing
Licensing security IP has an advantage over developing an SoC’s security from scratch, because the IP already has been proven and should contain minimal security flaws.

“Users from the commercial and aerospace and defense communities want to secure their proprietary circuits using eFPGA IP,” said Andy Jaros, vice president of IP sales and marketing at Flex Logix. “Not only is the integrated FPGA fabric difficult to probe and reverse engineer, its contents are lost when the chip is powered off. Programming the FPGA in a secure environment also limits access to proprietary circuits just to those who have a need to know.”

During the IP and/or SoC implementation process, it is important to have a comprehensive verification procedure in place that checks the design against the most up-to-date Common Weakness Enumeration (CWE) vulnerability information from MITRE. Additionally, the verification procedure should comply with the latest security standards.

“There are many basic requirements for security,” said Mitchell Mlinar, vice president of engineering at Tortuga Logic. “It begins with the architecture of the system. In constructing the architecture – before you even build hardware or software – you must think of all of the use cases where security or secure assets are involved. This is difficult all by itself. Then as you start constructing your system, those use cases expand and evolve into specific tests in hardware and software. There are many blocks, such as hardware secure boot and key management, as well as software authentication and access control that should be available in the construction kit. But more important than the blocks themselves is how they interact with the rest of the system.”

This can get tricky. “One has to consider possible exploit scenarios and ensure these potential vulnerabilities cannot occur,” Mlinar said. “Security is a continuum, and all components are inter-dependent to some degree. For example, you can build a completely secure chip from a hardware perspective — and you should. This means not only the hardware design that ensures critical design assets are inaccessible by hackers, but also constructing the silicon layout to prevent side-channel attacks that could extract hardware (or even software) encryption keys. But you need to perform legitimate operations on that chip, such as altering the boot code or accessing a secure asset, and that means the layers above it need to be secure, as well. If some application is allowed hardware access due to poor authentication or weak access control, that can just as easily compromise the entire chain of trust.”

That is why it is important to go through a comprehensive security verification process to ensure the implementation of IP and SoCs is secure from the start.


Fig. 2: A comprehensive security verification process is important to ensure the implementation of security IP and/or an SoC achieves maximum security. Source: Tortuga Logic

And even if everything is done correctly, security is rarely permanent. “A product manufactured in 2012 is unlikely to be secure in 2022 without maintenance, and a product manufactured and considered secure in 2022 may not be secure in 2032,” said Mark Knight, director of architecture product management at Arm. “A vital goal of a secure development lifecycle (SDL) is to determine the appropriate response to foreseeable security threats so that a product will be protected throughout the intended lifecycle. This involves understanding the likelihood and potential impact of a threat so that products can be positioned on the right part of a risk curve. Mitigations to security risks can take many forms – technical, compensating controls or commercial measures. Penetration testing and evaluation by an experienced third-party or independent test lab are two of the best ways to be assured that a product is secure against the latest attack techniques, and can therefore increase the product’s durability.”

Certification also potentially can be a competitive edge in the market. “Certification can enable vendors to showcase this ‘invisible’ work to their customers,” said Knight. “Cyber insurance also may be worth considering, as this can provide business-level mitigation alongside technical best practices. In addition, insurers are increasingly looking for evidence that deployed products meet best practice security standards.”

Secure software and firmware updates
How to securely update firmware is an important consideration in designing secure systems. To support this effort, the Trusted Computing Group (TCG) recently published the TCG Guidance for Secure Update of Software and Firmware on Embedded Systems. Building on NIST’s earlier Basic Input/Output System (BIOS) Protection Guidelines, the TCG Guidance provides detailed recommendations for securely updating software and firmware.
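
A simplified Python sketch of two checks commonly recommended for secure updates, signature verification and anti-rollback version enforcement, is shown below. The field layout and version scheme are assumptions for illustration, not taken from the TCG or NIST documents.

```python
# Sketch of a secure firmware update check (illustrative; the field layout and
# version scheme are assumptions, not taken from the TCG guidance).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

vendor_key = Ed25519PrivateKey.generate()    # vendor signing key (factory side)
device_public_key = vendor_key.public_key()  # provisioned on the device

installed_version = 4                        # version currently running

def apply_update(image: bytes, version: int, signature: bytes) -> bool:
    """Accept an update only if it is authentic and not a rollback."""
    try:
        # The version number is bound into the signed data to prevent tampering.
        device_public_key.verify(signature, version.to_bytes(4, "big") + image)
    except InvalidSignature:
        print("Update rejected: bad signature")
        return False
    if version <= installed_version:
        print("Update rejected: rollback to an older version")
        return False
    print(f"Update to version {version} verified and applied")
    return True

new_image = b"firmware v5 bytes (placeholder)"
sig = vendor_key.sign((5).to_bytes(4, "big") + new_image)
apply_update(new_image, 5, sig)
```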

TCG is a nonprofit organization supported by leading members including AMD, Cisco, Dell, Google, Hewlett-Packard, Huawei, IBM, Infineon, Intel, Juniper, Lenovo, Microsoft, and Toyota. Standards-based Trusted Platform Module (TPM) technologies have been deployed by its members to secure enterprise systems, storage systems, networks, embedded systems, and mobile devices against cyberattacks. The TPM 2.0 Library Specification is now an International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) standard.

Nothing lasts forever when it comes to security. But implementing zero trust, secure software updates, and semiconductor-level security gives developers a much better chance of defending against hackers throughout a device’s lifetime.

Further Reading:
Technical papers on Security
Chip Substitutions Raising Security Concerns
Lots of unknowns will persist for decades across multiple market segments.
Building Security Into ICs From The Ground Up
No-click and blockchain attacks point to increasing hacker sophistication, requiring much earlier focus on potential security risks and solutions.
Hiding Security Keys Using ReRAM PUFs
How two different technologies are being combined to create a unique and inexpensive security solution.
Verifying Side-Channel Security Pre-Silicon
Complexity and new applications are pushing security much further to the left in the design flow.

 


