How To Safeguard Memory Interfaces By Design

DDR security strategies should enable ongoing adaptation to an evolving threat ecosystem.


By Dana Neustadter and Brett Murdock

In 2017, the credit bureau Equifax announced that hackers had breached its systems, exposing the personal information of 147 million people. The company ultimately settled a class action suit for $425 million to aid those impacted by identity theft, fraud, financial losses, and the expense of cleaning up the damage. Whether the threat is identity theft, fraud, national security, safety, or some combination, data is at the center of it all and must be protected. While the Equifax hack was executed through a security flaw in a software tool, it is a cautionary tale about why you need to protect the data in your electronic systems. Vulnerabilities exist not only in software but in hardware, too.

As systems become more complex, hardware can be a point of entry for those with nefarious intent. Multi-chip designs are gaining traction, and the attack surface is increasing.

Security is taking center stage in the semiconductor industry. Securing system-on-chip (SoC) interfaces and the data that moves across them can prevent data from being accessed, deleted, or otherwise manipulated by bad actors. Whether you are protecting data in high-performance computing (HPC), mobile, IoT, or automotive SoCs, security implementations need to be optimized to preserve the performance of the interfaces while reducing the impact on latency and area.

High-bandwidth interfaces such as DDR are proliferating, and their speeds continue to grow from generation to generation. If you want to protect your data, one of the key areas to secure is your off-chip dynamic random-access memory (DRAM).

For the better part of a decade, academic researchers have warned about Rowhammer attacks, which Google's Project Zero demonstrated can be exploited in practice, and later research uncovered a related vulnerability called RAMBleed; both are particular to DRAM. These vulnerabilities can be exploited in real-world scenarios, so it is crucial to protect against attacks such as Rowhammer, RAMBleed, and cold-boot attacks to keep bad actors from reading or corrupting data or retrieving cryptographic keys, which are fundamental to security. Since information is moving faster, systems are getting more complex, and the stakes are getting higher, securing data should not be an afterthought but an integral part of hardware design.

How memory attacks work

Here are a few examples of DRAM-specific vulnerabilities:

  1. Rowhammer: Attackers who employ a Rowhammer strategy intend to modify or corrupt data. A Rowhammer attack reads a memory row repeatedly at high speed, causing bits to flip (from 1 to 0 or 0 to 1) in adjacent rows; flips in page table entries can give attackers read-write access to the entire physical memory, according to the Project Zero team at Google. As DRAM chips shrink, they become even more vulnerable to this type of attack because memory cells are packed more densely together, increasing the electrical disturbance between neighboring rows.
  2. RAMBleed: RAMBleed is used to steal data rather than modify it. It relies on the same row-hammering principle as Rowhammer, but instead of flipping a victim's bits, it infers their values, threatening the confidentiality of the data stored in memory. Using RAMBleed, attackers can extract information directly from the DRAM.
  3. Cold-boot attacks: In this case, attackers have physical access to the system. They can perform a hard reset, then read residual pre-boot data from physical memory to retrieve encryption keys and wreak havoc.
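The disturbance mechanism behind Rowhammer can be illustrated with a toy software model. Everything below is invented for illustration (the row count, activation threshold, and flip rule are not real DRAM physics); the point is only to show how repeatedly reading one row can change a neighboring row that is never written:

```python
# Toy model of the Rowhammer mechanism. Real bit flips depend on DRAM
# physics and cell layout; these numbers are invented for the sketch.
HAMMER_THRESHOLD = 100_000  # activations before a neighbor is disturbed

class ToyDram:
    def __init__(self, rows=8, bits_per_row=16):
        self.rows = [[1] * bits_per_row for _ in range(rows)]
        self.activations = [0] * rows

    def read(self, row):
        # Every read activates the row; heavy activation disturbs neighbors.
        self.activations[row] += 1
        if self.activations[row] == HAMMER_THRESHOLD:
            for adj in (row - 1, row + 1):
                if 0 <= adj < len(self.rows):
                    self.rows[adj][0] ^= 1  # flip a bit in the adjacent row
        return list(self.rows[row])

dram = ToyDram()
victim_before = list(dram.rows[3])
for _ in range(HAMMER_THRESHOLD):  # "hammer" row 2 at high speed
    dram.read(2)
victim_after = list(dram.rows[3])
# Row 3 changed even though the attacker only ever read row 2.
```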

Safeguard your memory interfaces by design

Memory and storage security protects storage resources and their stored data, both in on-premises data centers and in the cloud. As the need for higher capacity, faster access, and accelerated processing increases, designers are turning to high-performance, low-latency memory encryption solutions that preserve performance while protecting data over the latest generations of DDR, LPDDR, GDDR, and HBM memory interfaces.

Error-correcting code (ECC) has been a popular mitigation strategy, but it provides only a limited level of resilience. ECC is a reliability feature, not a security feature: it can detect and correct only a small number of flipped bits, leaving data vulnerable to undetected corruption, which makes it a naive approach to integrity protection for memories. Designers have often used ECC as a stopgap before adopting proper cryptographic algorithms.

The best approach to safeguarding memory interfaces is to address the confidentiality and integrity of the data by design, with standards-based cryptography. For example, AES-XTS encryption for data confidentiality mitigates Rowhammer attacks: while parity/ECC can catch 1- or 2-bit flips, encryption covers all the bits. With encryption, the data written to memory looks like random data, making it nearly impossible to craft Rowhammer bit patterns. Memory encryption with proper key refresh also protects against RAMBleed and cold-boot attacks. In addition to data confidentiality, security can be augmented with data authenticity, using strategies such as cryptographic hashing algorithms to ensure malicious actors have not modified the data.
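The AES-XTS idea can be sketched in a few lines of software. This is only an illustration, assuming the third-party Python `cryptography` package; in an inline memory encryption design this work happens in hardware inside the controller. The tweak is derived from the line's address, so identical plaintext at different addresses produces unrelated ciphertext, which is exactly what frustrates an attacker trying to place a chosen bit pattern in DRAM:

```python
# Sketch of AES-XTS memory-line encryption (assumes the "cryptography"
# package; a hardware IME block implements this in the controller).
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)  # AES-128-XTS: two 128-bit keys concatenated

def encrypt_line(line: bytes, address: int) -> bytes:
    tweak = address.to_bytes(16, "little")  # tweak derived from the address
    enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return enc.update(line) + enc.finalize()

def decrypt_line(line: bytes, address: int) -> bytes:
    tweak = address.to_bytes(16, "little")
    dec = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
    return dec.update(line) + dec.finalize()

cache_line = bytes(64)                 # 64 bytes of zeros
ct0 = encrypt_line(cache_line, 0x0)
ct1 = encrypt_line(cache_line, 0x40)   # same data, next line address
# ct0 != ct1: the attacker cannot control what lands in the DRAM cells.
```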

Making security part of your DDR interface design from the get-go is challenging. Security must be done right because one weak link can compromise the system and data. For example, it is critical for keys to be generated and managed in a trusted/secure area of the SoC and distributed via dedicated channels to the encryption module. Readback protection of keys and control configuration also need to be part of the overall security architecture.

Another challenge is that memory encryption comes at a cost: overhead that affects power, performance, area (PPA), and latency. Your challenge is to make your DDR interface design secure, standards-compliant, and highly optimized.

We’ve witnessed rapid adoption of integrity and data encryption (IDE) security for PCI Express (PCIe) and Compute Express Link (CXL) interfaces. Now we are seeing a similar trajectory in memory interfaces, such as DDR and LPDDR. Since technology is ever-changing—criminals get smarter in their approaches as the engineers design smarter solutions—whatever security strategy you choose should enable ongoing adaptation to an evolving threat ecosystem.

Strategies for securing DDR interfaces in your SoC

Here are some strategies to help you get started securing the DDR interfaces in your SoC:

  • Design a secure infrastructure foundation, including the control plane for authentication and key management and the data plane for data encryption and integrity.
  • Comply with standards. For memory data confidentiality, leverage standards-based cryptographic algorithms such as AES-XTS, with all key sizes, as defined in NIST SP 800-38E.
  • Implement highly optimized solutions that can scale efficiently to support the latest bandwidths required by memory interfaces. Leverage pipelined architectures with efficient tweak calculation, key refresh, and low latency. Consider optimization options such as running multiple AES rounds per cycle and choosing AES S-box implementations that favor either smaller area or maximum frequency.
  • Support per-region encryption/decryption to provide flexibility for various use cases.
  • Employ key generation and management in a secure environment. Memory encryption solutions require a control plane for authentication and key management, typically addressed by a secure enclave with a root of trust. It needs to be adaptable via firmware updates to future-proof your key management strategy, including potential algorithm changes.

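The per-region and key-management points above can be sketched together. All names here are hypothetical: the idea is that a secure enclave provisions one key per address region over a dedicated channel, and the data path selects a key by address before applying AES-XTS, with no way for software to read keys back:

```python
# Sketch of per-region key selection (names and sizes are illustrative).
import os

REGION_SIZE = 1 << 20  # assume 1 MiB protection regions for the sketch

class KeyTable:
    """Stand-in for the controller's key table, fed by the secure enclave."""

    def __init__(self):
        self._keys = {}  # region index -> key; never exposed for readback

    def provision(self, region: int) -> None:
        # In hardware, keys arrive over a dedicated channel from the
        # root of trust; software cannot read them back.
        self._keys[region] = os.urandom(32)

    def key_for(self, address: int) -> bytes:
        # The data path selects the key from the address alone.
        return self._keys[address // REGION_SIZE]

table = KeyTable()
table.provision(0)
table.provision(1)
# Addresses in the same region share a key; different regions do not.
```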
One solution you might employ to secure DRAM data is to encrypt the data before sending it to the DDR controller, but this is not ideal: the encryption block must manage many actions to ensure that packets are correctly sized. For example, if an application writes one byte of memory, the encryption block must read that memory location, merge in the newly written byte, and finally write it back to the memory. The farther the encryption sits from the memory, the more you must manage. This eats into your performance budget, a costly proposition for your memory bandwidth, and you must watch out for degradation as the data moves across the SoC.
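That read-modify-write penalty can be made concrete with a toy sketch. The names and sizes below are illustrative, and a simple XOR stands in for AES-XTS; the point is that a one-byte write costs a full read-decrypt-merge-encrypt-write sequence whenever the write is smaller than the encryption granule:

```python
# Toy sketch of the read-modify-write cost of sub-granule writes.
# XOR with a constant stands in for real AES-XTS encryption.
GRANULE = 16  # assume a 16-byte encryption granule

def xor_cipher(block: bytes) -> bytes:
    return bytes(b ^ 0x5A for b in block)

memory = {0: xor_cipher(bytes(GRANULE))}  # one encrypted granule of zeros

def write_byte(addr: int, value: int) -> int:
    """Write one byte; returns the number of memory transactions used."""
    base = addr - (addr % GRANULE)
    plain = bytearray(xor_cipher(memory[base]))  # 1. read and decrypt granule
    plain[addr - base] = value                   # 2. merge the new byte
    memory[base] = xor_cipher(bytes(plain))      # 3. re-encrypt and write back
    return 2  # one read plus one write, versus one write unencrypted

cost = write_byte(3, 0xFF)
```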

The optimal solution is to tightly couple the encryption/decryption inside your DDR or LPDDR controller, allowing for maximum efficiency of the memory and the lowest overall latency. The controller is as close to the memory as you can get.

Protecting data with the complete Synopsys IME Security Module for DDR/LPDDR

The Synopsys Inline Memory Encryption (IME) Security Module for DDR/LPDDR helps ensure confidentiality of data moving through memory interfaces or stored in off-chip memory. It is a standards-compliant, certification-ready, out-of-the-box solution based on the AES-XTS algorithm, enabling highly efficient throughput for memory controllers, including the Synopsys DDR5 and LPDDR5 controllers. It supports all AES-XTS key sizes (128-bit and 256-bit) along with scalable 128-, 256-, and 512-bit data paths. The IME Security Module gives you per-region memory protection through per-address or sideband key selection, with very low latency, and can be tuned for particular applications with optimal PPA. Memory encryption is enabled inside the Synopsys DDR5 and LPDDR5 controllers, saving your performance budget for better use and delivering the lowest latency in the industry.

As more of the world's workloads move to the cloud, demand for virtualization grows, and your memory protection must reflect that. The Synopsys IME Security Module manages data protection for regions initiated from different virtual environments and is well suited to a variety of cloud computing virtualization environments.

Synopsys offers complete standards-compliant Secure Interface Solutions for the most widely used protocols, which include PCIe integrity and data encryption (IDE), CXL IDE, DDR/LPDDR IME, HDMI/DisplayPort High-Definition Content Protection (HDCP) 2.3, Ethernet MACsec security modules, and more. The solutions address the most challenging demands and enable designers to quickly implement the required security in their SoCs with low risk and fast time to market.

Brett Murdock is director and product line manager for Memory Interface IP in the Synopsys Solutions Group.
