Exploring System Architectures For Data-Intensive Applications

Compute, memory, network and storage performance have all evolved at different rates, causing data bottlenecks.


The exponential growth of digital data is being driven by a number of factors, including the burgeoning Internet of Things (IoT) and an increased reliance on complex analytics extracted from extremely large data sets. Perhaps not surprisingly, IDC analysts see digital data doubling roughly every two years. This dramatic growth continues to challenge, and in some cases outpace, industry capabilities. Indeed, engineers are constantly attempting to stay one step ahead of bottlenecks when designing systems tasked with processing, storing and moving massive amounts of data.

Computer system architectures have evolved at a relatively slow cadence over the past two decades. As such, the growing gulf between unprecedented data growth and the pace of system improvement is placing considerable pressure on traditional system paradigms. In addition, an increased emphasis on newer metrics and features such as power efficiency, total cost of ownership (TCO), compute density and elasticity is prompting system architects to evolve and adapt to the needs of emerging data-intensive workloads.


The memory hierarchy is a critical component of all computer systems, storing data and providing the CPU with access at varying latencies and bandwidths depending on where in the hierarchy the data resides. The levels closest to the CPU offer the lowest latency and highest bandwidth, but also the smallest capacities. Although latency and bandwidth are both important for rapid data processing, capacity becomes an increasingly important factor in application performance as the amount of data grows.
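
One way to see this tradeoff directly is a pointer-chasing microbenchmark that times dependent loads as the working set grows past each level of the hierarchy. The C sketch below is a minimal illustration only; the working-set sizes, iteration count and timing approach are assumptions chosen for readability, not a tuned measurement tool.

```c
/* Minimal pointer-chasing sketch: average access time rises as the
 * working set outgrows each cache level and eventually sits in DRAM.
 * Sizes and iteration counts are illustrative, not machine-tuned. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double chase(size_t n_elems, size_t iters) {
    size_t *next = malloc(n_elems * sizeof(size_t));
    if (!next) return 0.0;

    /* Sattolo's algorithm builds a single random cycle over all
     * elements, so the hardware prefetcher cannot hide the latency. */
    for (size_t i = 0; i < n_elems; i++) next[i] = i;
    for (size_t i = n_elems - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t p = 0;
    for (size_t i = 0; i < iters; i++) p = next[p];   /* dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    if (p == (size_t)-1) printf("unreachable\n");     /* keep 'p' live */
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    free(next);
    return ns / (double)iters;    /* average nanoseconds per access */
}

int main(void) {
    /* Working sets from ~32 KB (cache-sized) up to ~128 MB (DRAM-sized). */
    for (size_t kb = 32; kb <= 256 * 1024; kb *= 4) {
        size_t n = kb * 1024 / sizeof(size_t);
        printf("%8zu KB: %6.1f ns/access\n", kb, chase(n, 10 * 1000 * 1000));
    }
    return 0;
}
```

On most machines the reported latency steps up noticeably each time the working set spills out of a cache level, which is exactly the capacity effect described above.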

Traditional hierarchies typically exhibit a significant latency and bandwidth gap between main memory and disks (both SSDs and HDDs). As data sets exceed the size of main memory, performance can drop dramatically due to the need for frequent disk accesses. In recent years, the industry has focused on this precise gap in the memory hierarchy, proposing 3D XPoint and a slew of storage-class memories to improve application performance.
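
A back-of-envelope model makes the cost of that gap concrete. Assuming round-number latencies of roughly 100 ns for DRAM and 100 µs for an NVMe SSD (illustrative figures, not measurements), even a small fraction of accesses spilling to storage dominates the average access time:

```c
/* Back-of-envelope model of why performance falls off a cliff once a
 * working set spills out of DRAM. Latency figures are rough,
 * order-of-magnitude assumptions, not measurements. */
#include <stdio.h>

int main(void) {
    const double dram_ns = 100.0;      /* ~100 ns DRAM access        */
    const double ssd_ns  = 100000.0;   /* ~100 us NVMe SSD access    */
    const double miss_rates[] = {0.0, 0.001, 0.01, 0.05, 0.10};

    for (int i = 0; i < 5; i++) {
        double m = miss_rates[i];
        /* Average access time = (1 - m) * DRAM + m * SSD */
        double avg = (1.0 - m) * dram_ns + m * ssd_ns;
        printf("miss rate %5.1f%% -> %8.1f ns/access (%.0fx DRAM)\n",
               m * 100.0, avg, avg / dram_ns);
    }
    return 0;
}
```

With these assumed numbers, letting just 1% of accesses go to the SSD makes the average access roughly an order of magnitude slower than DRAM, which is the gap storage-class memories are intended to narrow.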

Nevertheless, simply filling this gap in the memory hierarchy may no longer be sufficient to achieve large gains in performance and power efficiency. Compute, memory, network and storage performance have all evolved at disparate rates over the past two decades, and modern data-intensive applications are often bottlenecked by data traveling across network links. As the world’s digital data continues to grow, data transport must be minimized in order to improve performance and power efficiency. Flexible hardware acceleration is also likely to play a major role in achieving this goal.
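
A simple calculation illustrates why minimizing data transport matters. The sketch below compares shipping a hypothetical 1 TB data set across a 10 Gb/s link against filtering it where it resides and shipping only the 2% of records a query actually needs; all of the numbers are assumptions chosen purely for illustration.

```c
/* Illustrative comparison of moving raw data across the network versus
 * filtering it where it resides and shipping only the results.
 * Dataset size, link speed and query selectivity are assumed values. */
#include <stdio.h>

int main(void) {
    const double dataset_gb  = 1000.0;  /* 1 TB of raw data             */
    const double link_gb_s   = 1.25;    /* ~10 Gb/s Ethernet link       */
    const double selectivity = 0.02;    /* query keeps 2% of the data   */

    double ship_all   = dataset_gb / link_gb_s;
    double ship_match = (dataset_gb * selectivity) / link_gb_s;

    printf("ship everything, filter at the host: %6.0f s on the wire\n", ship_all);
    printf("filter at the data, ship matches:    %6.0f s on the wire\n", ship_match);
    printf("network traffic reduced by %.0fx\n", ship_all / ship_match);
    return 0;
}
```

The absolute numbers are made up, but the ratio is the point: moving the computation to the data reduces traffic on the link by the query's selectivity, regardless of how fast the link is.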

There is also growing interest in utilizing FPGAs to improve power efficiency and to provide hardware acceleration that scales to specific application requirements. By coupling FPGAs to large amounts of data, processing can be customized and moved directly to the data, minimizing data transport while accelerating computation in a power-efficient manner.

At Rambus, we recently revealed details of our Smart Data Acceleration research program, which explores how system architectures could evolve to meet the needs of modern data-intensive applications. As part of this program, we’ve developed a platform that allows us to better understand application performance and system architecture tradeoffs across software, firmware, FPGAs and large memory capacities. The platform can be used to test new methods of optimizing and accelerating applications such as data analytics over extremely large data sets. Pictured below is one of the key building blocks in this platform – one that combines an FPGA with a large amount of memory capacity.

[Figure: Smart Data Acceleration platform building block combining an FPGA with a large memory capacity]

As data-intensive applications continue to drive the industry forward, architectures from enterprise data centers to high performance computing (HPC) will need to adapt to meet growing demands for improved performance, power efficiency and TCO. With increasing industry attention turning toward the needs of these applications, we’re very interested in seeing the latest findings and directions at top industry events such as Supercomputing and Intel Developer Forum. We’re excited to be researching solutions to these and other growing industry challenges as we look to collaborate with the industry on cutting-edge future technologies and solutions.


