DDR security strategies should enable ongoing adaptation to an evolving threat ecosystem.
By Dana Neustadter and Brett Murdock
In 2017, the credit bureau Equifax announced that hackers had breached its systems, exposing the personal information of 147 million people. The company ultimately settled a class-action suit for $425 million to help those affected by identity theft, fraud, financial losses, and the expense of cleaning up the damage. Whether the threat is identity theft, fraud, national security, safety, or some combination, data is at the center of it all and must be protected. While the Equifax hack exploited a security flaw in a software tool, it is a cautionary tale about why you need to protect the data in your electronic systems. Vulnerabilities exist not only in software but in hardware, too.
As systems become more complex, hardware can be a point of entry for those with nefarious intent. Multi-chip designs are gaining traction, and the attack surface is growing with them.
Security is taking center stage in the semiconductor industry. Securing system-on-chip (SoC) interfaces and the data that moves across them can prevent data from being accessed, deleted, or otherwise manipulated by bad actors. Whether you are protecting data in high-performance computing (HPC), mobile, IoT, or automotive SoCs, security implementations need to be optimized to preserve the performance of the interfaces while reducing the impact on latency and area.
High-bandwidth interfaces such as DDR are proliferating, and their speeds continue to grow from generation to generation. If you want to protect your data, one of the key areas to secure is your off-chip dynamic random-access memory (DRAM).
For the better part of a decade, academic researchers warned of Rowhammer attacks, Google’s Project Zero later demonstrated that they are practically exploitable, and researchers subsequently disclosed a related vulnerability called RAMBleed; both are specific to DRAM. These vulnerabilities can be exploited in real-world scenarios, and it is crucial to protect against attacks such as Rowhammer, RAMBleed, and cold boot attacks to keep bad actors from reading or corrupting data or retrieving cryptographic keys, which are fundamental to security. Since information is moving faster, systems are getting more complex, and the stakes are getting higher, securing data should not be an afterthought but an integral part of hardware design.
Here are a few examples of DRAM-specific vulnerabilities:
- Rowhammer: repeatedly activating DRAM rows induces bit flips in physically adjacent rows, letting attackers corrupt data or escalate privileges.
- RAMBleed: a Rowhammer-based side channel that lets attackers read the contents of memory they should not be able to access.
- Cold boot attacks: because DRAM retains its contents briefly after power is removed, an attacker with physical access can recover secrets such as cryptographic keys.
Memory and storage security protects storage resources and the data they hold, both on premises and in external cloud data centers. As the need for higher capacity, faster access, and accelerated processing grows, designers are turning to high-performance, low-latency memory encryption solutions that preserve performance while protecting data over the latest generations of DDR, LPDDR, GDDR, and HBM memory interfaces.
Error-correcting code (ECC) has been a popular mitigation strategy, but it provides only a limited level of resilience. ECC is a reliability feature, not a security feature: it still leaves data vulnerable to undetected corruption, making it a naive approach to integrity protection for memories. Designers have often used ECC as a stopgap before adopting proper cryptographic algorithms.
The best approach to safeguarding memory interfaces is to address the confidentiality and integrity of the data by design, with standards-based cryptography. For example, using AES-XTS encryption for data confidentiality thwarts Rowhammer attacks: while parity or ECC can catch one- or two-bit flips, encryption covers all the bits, and encrypted data written to memory looks essentially random, making it nearly impossible to craft Rowhammer patterns. Memory encryption, together with proper key refresh, also protects against RAMBleed and cold boot attacks. In addition to data confidentiality, security can be augmented with data authenticity, using strategies such as cryptographic hashing algorithms to ensure malicious actors have not modified the data.
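To make the point about random-looking, address-dependent ciphertext concrete, here is a minimal Python sketch using the pyca/cryptography library. The 64-byte granule, the key source, and the use of the physical address as the XTS tweak are illustrative assumptions, not a description of any specific controller.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = os.urandom(64)  # AES-256-XTS uses two 256-bit keys concatenated (assumed key source)

def encrypt_line(plaintext: bytes, phys_addr: int) -> bytes:
    """Encrypt one 64-byte memory line, tweaked by its physical address."""
    tweak = phys_addr.to_bytes(16, "little")  # address acts as the 128-bit XTS tweak
    encryptor = Cipher(algorithms.AES(KEY), modes.XTS(tweak)).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

# The same all-ones line stored at two different addresses produces unrelated
# ciphertexts, so an attacker cannot lay out chosen Rowhammer bit patterns in DRAM.
line = b"\xff" * 64
print(encrypt_line(line, 0x1000).hex())
print(encrypt_line(line, 0x2000).hex())
```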
Making security part of your DDR interface design from the get-go is challenging, and it must be done right, because one weak link can compromise the entire system and its data. For example, it is critical that keys are generated and managed in a trusted/secure area of the SoC and distributed to the encryption module over dedicated channels. Readback protection for keys and control configuration also needs to be part of the overall security architecture.
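As a purely conceptual illustration of readback protection (the class and method names below are invented and do not correspond to any vendor's register map), a key-delivery block can be modeled as accepting key writes from the root of trust while never returning key material on reads:

```python
class HypotheticalKeyBlock:
    """Toy model of an encryption module's key interface (names are invented)."""

    def __init__(self) -> None:
        self._keys: dict[int, bytes] = {}  # slot -> key, visible only inside the module

    def deliver_key(self, slot: int, key: bytes) -> None:
        # In hardware, this write would arrive over a dedicated key channel from
        # the SoC's trusted/secure area, not over the general-purpose bus.
        self._keys[slot] = key

    def read_key_register(self, slot: int) -> bytes:
        # Readback protection: software reads of key registers return zeros, so a
        # compromised host cannot recover keys after they have been delivered.
        return b"\x00" * 32
```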
Another challenge is that memory encryption comes at a cost: overhead that affects power, performance, and area (PPA) as well as latency. Your challenge is to make your DDR interface design secure, standards-compliant, and highly optimized.
We’ve witnessed rapid adoption of integrity and data encryption (IDE) security for PCI Express (PCIe) and Compute Express Link (CXL) interfaces, and we are now seeing a similar trajectory for memory interfaces such as DDR and LPDDR. Since technology is ever-changing, and criminals grow smarter in their approaches just as engineers design smarter solutions, whatever security strategy you choose should enable ongoing adaptation to an evolving threat ecosystem.
Here are some strategies to help you get started securing the DDR interfaces in your SoC:
One solution you might employ to secure DRAM data is to encrypt it before sending it to the DDR controller, but this is not ideal, because the encryption block must manage many operations to ensure packets are correctly sized. For example, if an application writes a single byte of memory, the encryption block must read that memory location, decrypt it, merge in the newly written byte, re-encrypt it, and write it back to memory. The farther the encryption sits from memory, the more you have to manage. This eats into your performance budget, a costly proposition for your memory bandwidth, and you must watch for degradation as data moves across the SoC.
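A rough, self-contained Python sketch of that read-modify-write penalty, assuming a 64-byte encryption granule, AES-XTS keyed per address as above, and a dictionary standing in for DRAM (all of these are illustrative assumptions):

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

GRANULE = 64                 # assumed encryption granule: one cache line
KEY = os.urandom(64)         # AES-256-XTS key (assumed key source)
dram: dict[int, bytes] = {}  # fake DRAM: granule-aligned address -> ciphertext

def _xts(addr: int) -> Cipher:
    return Cipher(algorithms.AES(KEY), modes.XTS(addr.to_bytes(16, "little")))

def _encrypt(data: bytes, addr: int) -> bytes:
    enc = _xts(addr).encryptor()
    return enc.update(data) + enc.finalize()

def _decrypt(data: bytes, addr: int) -> bytes:
    dec = _xts(addr).decryptor()
    return dec.update(data) + dec.finalize()

def write_byte(addr: int, value: int) -> None:
    """The work an encryption block far from memory must do for a one-byte write."""
    base = addr - (addr % GRANULE)
    stored = dram.get(base, _encrypt(b"\x00" * GRANULE, base))
    line = bytearray(_decrypt(stored, base))   # 1. read and decrypt the whole granule
    line[addr - base] = value                  # 2. merge in the newly written byte
    dram[base] = _encrypt(bytes(line), base)   # 3. re-encrypt and write it back

write_byte(0x1234, 0xAB)  # one application byte costs a full 64-byte read-modify-write
```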
The optimal solution is to tightly couple encryption and decryption inside your DDR or LPDDR controller, allowing for maximum memory efficiency and the lowest overall latency. The controller is as close to the memory as you can get.
The Synopsys Inline Memory Encryption (IME) Security Module for DDR/LPDDR helps ensure the confidentiality of data in use across memory interfaces or stored in off-chip memory. It is a standards-compliant, certification-ready, out-of-the-box solution based on the AES-XTS algorithm, enabling highly efficient throughput for memory controllers, including the Synopsys DDR5 and LPDDR5 Controllers. It supports all AES-XTS key sizes, including 128-bit and 256-bit keys, along with scalable 128-, 256-, and 512-bit data paths. The IME Security Module provides per-region memory protection through per-address or sideband key selection, with very low latency, and can be tuned to particular applications for optimal PPA. Memory encryption is enabled inside our Synopsys DDR5 and LPDDR5 Controllers, saving your performance budget for better use and delivering the lowest latency in the industry.
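To illustrate what per-address (region-based) key selection means, here is a hypothetical region table in Python; the address ranges, slot numbers, and function names are invented for illustration and do not reflect the actual Synopsys IME programming model:

```python
# Hypothetical mapping of physical address ranges to key slots (illustrative only).
REGIONS = [
    (0x0000_0000, 0x3FFF_FFFF, 0),  # e.g. hypervisor data -> key slot 0
    (0x4000_0000, 0x7FFF_FFFF, 1),  # e.g. VM A's memory   -> key slot 1
    (0x8000_0000, 0xFFFF_FFFF, 2),  # e.g. VM B's memory   -> key slot 2
]

def key_slot_for(phys_addr: int) -> int:
    """Select the encryption key slot for the region containing this address,
    so each region (or tenant) is protected under its own key."""
    for start, end, slot in REGIONS:
        if start <= phys_addr <= end:
            return slot
    raise ValueError(f"address {phys_addr:#x} is not in a protected region")
```

In a real controller the equivalent lookup happens in hardware on every memory transaction, or the key slot arrives as sideband metadata alongside the request.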
As our world operates ever more frequently in the cloud, there is greater demand for virtualization, and your memory protection must reflect that. The Synopsys IME Security Module manages data protection across regions initiated from different virtual environments and is well suited to a variety of cloud computing virtualization environments.
Synopsys offers complete, standards-compliant Secure Interface Solutions for the most widely used protocols, including PCIe IDE, CXL IDE, DDR/LPDDR IME, HDMI/DisplayPort High-bandwidth Digital Content Protection (HDCP) 2.3, Ethernet MACsec security modules, and more. These solutions address the most challenging demands and enable designers to quickly implement the required security in their SoCs with low risk and fast time to market.
Brett Murdock is director and product line manager for Memory Interface IP in the Synopsys Solutions Group.