Shifting Performance Bottlenecks Driving Change In Chip And System Architectures

Not all technology advances at the same pace.


The rise of personal computing in the 1980s, along with graphical user interfaces (GUIs) and applications ranging from office suites to databases, drove demand for faster chips capable of removing processing bottlenecks and delivering a more responsive end-user experience. The semiconductor industry has come a long way since IBM launched its PC in 1981: powered by an Intel 8088 CPU running at 4.77 MHz, the original PC packed 16 kB of RAM and ran PC DOS 1.0.

For several decades, Moore’s Law has consistently enabled significant improvements in processor performance and functionality. The relentless delivery of increased transistor densities has allowed chip companies to design generation after generation of processors that handily surpass their predecessors. But because processors have improved at a pace unmatched by other subsystems, the bottleneck has shifted to the subsystems that haven’t kept pace, and new bottlenecks have emerged.

By the 1990s, memory systems were taking center stage, as CPUs were increasingly limited by memory bandwidth and latency, constraining their ability to move data in and out of the processor. At the dawn of the 21st century, mobile systems connected to large data centers were catalyzing a shift in computing paradigms that emphasized a fresh set of metrics related to cost, power and performance. More specifically, moving data across large distances (both within data centers and between data centers and connected mobile systems) to the compute engines has become a key bottleneck.

Driven by a burgeoning Internet of Things (IoT), the world’s data continues to grow exponentially, and Big Data is being analyzed in increasingly demanding and complex ways. Data movement will only intensify, directly influencing the power, performance and operating cost of future systems. System designers and software engineers are responding by moving compute closer to the data, an approach known as Near Data Processing, to relieve the data movement bottleneck that affects both performance and power.
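To make the idea concrete, here is a minimal, purely illustrative Python sketch (not drawn from any particular product): the StorageNode class and its query_sum method are hypothetical stand-ins for a storage device or remote node that can run a simple filter-and-aggregate next to its own data, so that only a small result, rather than the raw data set, crosses the interconnect.

# Hypothetical sketch of near-data processing vs. host-side processing.

class StorageNode:
    """Stand-in for a storage device or remote node holding a data shard."""

    def __init__(self, records):
        self.records = records  # data resident on the node

    def read_all(self):
        # Conventional path: every record crosses the interconnect to the host.
        return list(self.records)

    def query_sum(self, predicate):
        # Near-data path: the filter and sum run beside the data,
        # and only a single scalar crosses the interconnect.
        return sum(r for r in self.records if predicate(r))


nodes = [StorageNode(range(i, i + 100_000)) for i in range(0, 400_000, 100_000)]

# Host-side processing: data moved is proportional to the total data set.
host_total = sum(r for n in nodes for r in n.read_all() if r % 2 == 0)

# Near-data processing: data moved is proportional to the number of nodes.
ndp_total = sum(n.query_sum(lambda r: r % 2 == 0) for n in nodes)

assert host_total == ndp_total  # same answer, far less data movement

The result is identical either way; the difference is how many bytes have to travel, which is exactly the cost that near-data processing targets.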

The breadth of computing applications, coupled with rapidly changing application behaviors and the need for flexible reconfiguration, has also prompted the industry to explore FPGAs for offloading and accelerating processing. In turn, these new paradigms are driving a serious re-examination of system architectures and data hierarchies, opening up fresh opportunities for technologies capable of filling new and widening gaps in the memory and storage hierarchies.

Software and applications have been steadily evolving as well and, in turn, have driven changes in system architecture. Examples include virtualization, containers and frameworks such as Hadoop and Spark. These platforms manipulate extremely large data sets and exercise the underlying hardware in ways that differ from the dominant applications and frameworks of the ’80s and ’90s.
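As a rough illustration of the pattern these frameworks rely on, the following PySpark sketch is a generic example rather than anything from the original article; the file name "events.csv" and the column names are placeholders. The point is simply that the filter and partial aggregation run on the partitions where the data resides, so only small intermediate results move across the network.

# Minimal PySpark sketch; assumes a local Spark installation and a
# placeholder input file with "status" and "device_id" columns.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bottleneck-demo").getOrCreate()

# Spark splits the input into partitions and schedules tasks near the data,
# rather than funneling the full data set through a single compute node.
events = spark.read.csv("events.csv", header=True, inferSchema=True)

# Each partition is filtered and pre-aggregated locally; only the small
# per-group partial counts are shuffled across the network.
summary = (events
           .filter(F.col("status") == "error")
           .groupBy("device_id")
           .count())

summary.show()
spark.stop()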

Software and application requirements are changing, and hardware bottlenecks are shifting because system components advance at different rates. Consequently, new emphasis has been placed on low power, higher data transport rates and technologies that move processing closer to the data. At Rambus, we’re excited to be researching solutions to these and other growing industry challenges, and we look forward to collaborating with our customers and partners on cutting-edge technologies and solutions.


