Memory IP: From Cobblestone To Cornerstone

Embedded memory started as a foundational building block of chip design and has become a substantial differentiator.


Embedded, on-chip SRAM has been a fundamental building block for custom and standard chips for quite a while. In the early days, designs typically paired small blocks of on-chip SRAM with off-chip DRAM devices. Those off-chip devices became more sophisticated, gaining higher-performance interfaces (e.g., GDDR6) and new form factors (e.g., HBM2 3D memory stacks). The on-chip memory portion continued to grow as well.

Today, over 60 percent of the silicon real estate of an advanced FinFET-class design is typically occupied by on-chip memory. Single-port, two-port, pseudo two-port, fast cache and multiple flavors of register files are just some of the memory types that occupy all that silicon area. This memory acts as a supporting fabric for the chip, underpinning computation throughout the design. With all those memories occupying all that area, increasing speed or reducing power/area, even by a small amount, can have a significant impact. More on that in a moment.
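
To make that concrete, here is a back-of-the-envelope sketch. The 60 percent figure is from above; the 5 percent memory-level improvement is a hypothetical number for illustration:

```python
# Back-of-the-envelope: a small memory-level improvement scales
# to the whole die when memory dominates the silicon area.

memory_fraction = 0.60   # share of die area occupied by on-chip memory (from above)
memory_shrink = 0.05     # hypothetical 5% area reduction in the memory instances

die_area_saved = memory_fraction * memory_shrink
print(f"Die-level area saved: {die_area_saved:.1%}")  # -> 3.0%
```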

Ternary content-addressable memory, or TCAM, is another type of memory that has increased in popularity. Normal memory structures return contents given an address. TCAMs, on the other hand, return the addresses whose contents match a given search key. The "ternary" refers to a third "don't care" state that entries can store alongside 0 and 1, which lets a single entry match many keys. These types of memories are very useful for networking/switching applications with respect to packet forwarding.
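
As a rough software analogy (a behavioral model, not how the hardware is built, and with an invented forwarding table), a TCAM entry can be modeled as a pattern of 0, 1 and don't-care bits, with a lookup returning the addresses of all matching entries:

```python
# Software model of a TCAM lookup. Each entry is a pattern of
# '0', '1', and 'x' (don't care); the search key is a plain bit string.
# A real TCAM performs this comparison across all entries in parallel.

def tcam_lookup(entries, key):
    """Return the addresses of all entries matching the key."""
    matches = []
    for address, pattern in enumerate(entries):
        if all(p in ('x', k) for p, k in zip(pattern, key)):
            matches.append(address)
    return matches

# Hypothetical forwarding table: prefixes padded with don't-care bits.
table = ["1100xxxx",   # route for prefix 1100/4
         "110010xx",   # more specific prefix 110010/6
         "0xxxxxxx"]   # catch-all for keys starting with 0

print(tcam_lookup(table, "11001011"))  # -> [0, 1]: both prefixes match
```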

In AI applications, memory is playing a very different role. While standard memory structures are the “supporting cast” for algorithms in typical processor chips, highly specialized memory structures are the primary building blocks for some new AI algorithms. “Near-memory” or “in-memory” techniques aim to bring data and data processing closer together, cutting data movement to boost system performance.
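
A conceptual sketch of the near-memory idea, as a software analogy rather than a hardware design (the banks and data here are invented): instead of shipping every word to a central processor, each memory bank performs a partial reduction locally, so only small results cross the bus.

```python
# Software analogy of near-memory processing: compare words moved
# when summing data held in three memory banks. Data is hypothetical.

banks = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]

# Far-memory style: move all 12 words to the processor, then sum.
words_moved_far = sum(len(b) for b in banks)    # 12 transfers

# Near-memory style: each bank sums locally; only partial sums move.
partials = [sum(b) for b in banks]              # computed "at" the banks
total = sum(partials)
words_moved_near = len(partials)                # 3 transfers

print(total, words_moved_far, words_moved_near)  # -> 78 12 3
```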

By now, you can see where the title of this post fits. Embedded memory started as a foundational building block of chip design and has become a substantial differentiator.

Embedded memory design and support for off-chip HBM memory arrays play an important role in eSilicon’s ASIC strategy. More than half of our employees are involved in memory or high-performance interface design. This kind of bench depth allows us to undertake some useful and relevant work. In the AI area, our neuASIC IP platform contains many memory-centric structures, such as pitch match memory and transpose memory functions. Another challenge for AI designs is optimal storage of the weights that result from training. In many cases, large numbers of these weights are zero or close to zero. Intelligently handling this sparse-matrix scenario can save a tremendous amount of power. Enter eSilicon’s word all-zero power saving memory, or WAZPS for short.
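
The concept behind a word-all-zero scheme can be sketched in software. This is a behavioral model of the general idea, not eSilicon’s actual circuit: a flag per word records whether the word is all zeros, and reads of flagged words skip the array access that burns most of the dynamic power.

```python
# Behavioral model of a word-all-zero power-saving read
# (an illustration of the concept, not eSilicon's implementation).

class ZeroSkipMemory:
    def __init__(self, size):
        self.array = [0] * size
        self.all_zero = [True] * size   # one flag per word
        self.array_accesses = 0         # proxy for dynamic read energy

    def write(self, addr, word):
        self.array[addr] = word
        self.all_zero[addr] = (word == 0)

    def read(self, addr):
        if self.all_zero[addr]:
            return 0                    # no array access: energy saved
        self.array_accesses += 1
        return self.array[addr]

# Sparse AI weights: most words are zero, so most reads are skipped.
mem = ZeroSkipMemory(1024)
mem.write(7, 42)
reads = [mem.read(a) for a in range(1024)]
print(sum(reads), mem.array_accesses)   # -> 42 1: one of 1024 reads hit the array
```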

[Figure: neuASIC Platform Architecture]

The ability to customize and optimize memory has many interesting stories behind it. Recall the chip with over 60 percent memory real estate. Designs like this can have thousands of memory instances. We have analyzed many such designs for impact on performance, power or area. When we rank the results, often only a handful of memory instances drive the critical path for the parameter being improved. Optimizing those instances with custom versions can be headline news.
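
A sketch of that triage, with invented report data: rank the instances by their contribution to the target metric and focus custom effort on the worst offenders.

```python
# Sketch of the triage described above: given per-instance results
# from a timing report, rank the memory instances and see how few
# dominate the parameter being improved. Names and slacks are hypothetical.

# (instance name, worst negative slack in ps)
instances = [("cache_tag_ram", -85), ("fifo_ram_3", -5),
             ("reg_file_0", -72), ("buf_ram_12", 0),
             ("fifo_ram_1", -2), ("reg_file_4", -64)]

critical = sorted((i for i in instances if i[1] < 0), key=lambda i: i[1])
print("Candidates for custom optimization:")
for name, slack in critical[:3]:        # the handful driving the critical path
    print(f"  {name}: {slack} ps")
```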

Another aspect of this customization is that we use our memories in our own ASICs as well as license them. This experience allows us to collaborate with customer architects to develop cutting-edge ideas that would not be possible otherwise. New memory designs built this way are not available off-the-shelf from any IP vendor; rather, they are custom-built by eSilicon to meet the needs defined by the architects. Our chip design experience also allows us to offer BIST implementation for these memories, as well as provide robust, validated silicon data.

eSilicon is developing a webinar that recounts some of these stories from the field. We’ll deliver it in two segments: how and why we optimize or customize memories, and what the ultimate impact of that work turned out to be on the final design. Watch your inbox for more details. In the meantime, you can check out our white paper, “Realizing the benefits of 14/16nm technologies – custom memory IP optimization strategies.” Find it under the Choosing IP section of our white paper page.


