Balancing tradeoffs between bandwidth, capacity and power-efficiency.
Memory bandwidth is an increasingly critical bottleneck for a wide range of use cases and applications. These include artificial intelligence (AI), machine learning (ML), advanced driver-assistance systems (ADAS), as well as 5G wireless and wireline infrastructure. Beyond memory bottlenecks, the above-mentioned use cases and applications are rapidly hitting the real-world limits of traditional hardware architectures. In recent years, these limitations have acted as a catalyst for the rapid development of a new generation of accelerators and specialized silicon.
When choosing memory for accelerators and specialized silicon, system designers take a number of key metrics into consideration such as cost, power, capacity and performance.
Ultimately, memory selection is based on a complex set of tradeoffs driven by the requirements of the specific application.
Driven by advances in Moore’s Law, system bottlenecks have shifted away from the processor to memory. Consequently, memory bandwidth has become a critical bottleneck for an automotive industry that is increasingly focused on designing vehicles with varying levels of autonomous capabilities.
This is because the complex data processing required for advanced ADAS (level 3 and higher) demands memory running at bandwidths of more than 200 GB/s.
Perhaps not surprisingly, memory bandwidth is also a critical bottleneck for artificial intelligence and machine learning applications. This is because compute capability is steadily increasing alongside rapidly expanding training sets. Indeed, AI training sets are growing by roughly 10X per year.
In fact, AI training capability grew 300,000X from 2012 to 2019, doubling approximately every 3.43 months. That represents roughly a 25,000X greater improvement than Moore's Law would have delivered over the same period. At its apex, Moore's Law 'only' saw transistor counts doubling every 18 months.
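The exponential arithmetic behind these figures can be sketched in a few lines. This is an illustrative sketch, not a reproduction of the original analysis: the Moore's Law comparison assumes the 18-month doubling period and the 2012-2019 (84-month) window mentioned above, and the exact ratio to the quoted 25,000X depends on which doubling period is assumed.

```python
import math

# Doublings implied by a 300,000X growth in AI training compute
doublings = math.log2(300_000)        # ~18.2 doublings

# Moore's Law at one doubling per 18 months over the same
# 2012-2019 window (84 months) yields far less growth
moore_doublings = 84 / 18             # ~4.7 doublings
moore_growth = 2 ** moore_doublings   # ~25X

print(f"AI compute: {doublings:.1f} doublings")
print(f"Moore's Law over the same window: ~{moore_growth:.0f}X growth")
```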
From our perspective, GDDR6 offers multiple benefits for AI/ML applications and advanced automotive systems. These include high bandwidth and high capacity; easier system integration (more traditional PCB and package design); and balanced tradeoffs between bandwidth, capacity and power efficiency. However, GDDR6 also presents a number of system design challenges, such as ensuring signal integrity (SI). Signal integrity becomes more difficult to maintain as speeds increase, and top-tier GDDR6 implementations are currently hitting data rates of 18 Gbps.
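As a rough sketch of what these data rates mean at the device level: the 18 Gbps per-pin rate and the 200 GB/s ADAS target come from the figures above, while the 32-bit device interface is a typical GDDR6 configuration assumed here for illustration.

```python
import math

# Assumed values: 18 Gbps per pin (top-tier GDDR6 per the text),
# 32-bit interface (a typical GDDR6 device width)
data_rate_gbps = 18
interface_width_bits = 32

# Per-device bandwidth in GB/s: pins x bits/s per pin / 8 bits per byte
bandwidth_gbytes = data_rate_gbps * interface_width_bits / 8   # 72.0 GB/s

# Devices needed to reach the 200 GB/s level-3+ ADAS target
adas_target_gbytes = 200
devices = math.ceil(adas_target_gbytes / bandwidth_gbytes)     # 3

print(f"{bandwidth_gbytes} GB/s per device; "
      f"{devices} devices for {adas_target_gbytes} GB/s")
```

Under these assumptions, a small number of GDDR6 devices on a conventional PCB can reach the bandwidth levels cited for advanced ADAS, which is one reason the tradeoff discussion above favors GDDR6 for these designs.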
In conclusion, high-performance AI and ADAS systems are expected to support massive data sets that continue to grow at exponential rates. Decades of Moore's Law-driven gains in computing power have shifted the system performance bottleneck from the processor to memory.
It is important to understand that no single memory solution will ever be the 'best fit' for all applications. Nevertheless, for AI and advanced ADAS, GDDR6 strikes a good balance between bandwidth, capacity and power efficiency, as well as between cost and design complexity.