From The Data Center To The Mobile Edge

Exploring new system architectures will be necessary for the IoT.

At the heart of the Internet of Things is the complex interplay between the needs for both low power and high performance (LPHP), a perplexing challenge rooted in the de facto bifurcation of the IoT itself. Low-power mobile devices, systems and lightweight endpoints make up the vast majority of the forward-facing consumer infrastructure, while high-performance servers at the back end are tasked with moving, storing and analyzing vast quantities of IoT data.

This paradigm presents its own set of challenges for modern data centers, which are struggling with the limitations of the von Neumann architecture, bottlenecks caused by legacy system design and the slowing of Moore’s Law. Fortunately, the recent introduction of DDR4 has benefited data centers by delivering up to a 1.5x performance improvement over the previous generation (DDR3) while reducing power on the memory interface by 25%. Upgrading to DDR4 translates into approximately 8% power savings across the overall data center.
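
To put those figures in context, a rough back-of-envelope calculation, sketched below in Python, shows how a 25% reduction in memory-interface power could plausibly roll up to a single-digit savings at the facility level. The memory share of total data-center power used here is an illustrative assumption, not a measured figure.

```python
# Back-of-envelope arithmetic only. The 25% interface power reduction is the
# figure cited above; the memory share of total data-center power is an
# illustrative assumption, chosen so the result lands near the cited ~8%.
MEMORY_SHARE_OF_DC_POWER = 0.32        # assumption, not a measured value
DDR4_INTERFACE_POWER_REDUCTION = 0.25  # cited DDR4 vs. DDR3 interface savings

overall_savings = MEMORY_SHARE_OF_DC_POWER * DDR4_INTERFACE_POWER_REDUCTION
print(f"Estimated data-center-level power savings: {overall_savings:.0%}")  # ~8%
```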

Concurrently, the introduction of DDR4 server memory buffer chipsets is playing a critical role in helping designers realize the full potential of high-speed DDR4 designs. DDR4 buffer chips allow server designers to achieve the high speeds that DDR4 offers while enabling the higher-capacity designs that today’s Big Data applications require. Real-world benefits of DDR4 buffer chips include high memory bandwidth together with high memory capacity, as well as optimized signal integrity. These architectural benefits help optimize performance, especially for latency-sensitive applications.

In addition to DDR4 buffer chips, the semiconductor industry is examining several techniques for evolving system architectures, including near-data processing to minimize data movement and energy consumption, along with hardware acceleration to bolster performance and power efficiency. Acceleration capabilities are now implemented across a wide range of silicon, including CPUs, GPUs and field-programmable gate arrays (FPGAs). GPUs are targeted at applications such as visualization, graphics processing, various types of scientific computation and machine learning. The combination of numerous parallel pipelines with high-bandwidth memory makes GPUs the compute engine of choice for such applications.
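
As a rough illustration of the near-data processing idea, in which a filter or aggregation is pushed to the module that holds the data so that only results, rather than raw records, cross the interconnect, consider the toy Python sketch below. The StorageNode class and its pushdown method are hypothetical stand-ins, not any particular vendor's API.

```python
# Toy illustration of near-data processing: instead of moving every record to
# the host and filtering there, push the filter down to the node that holds
# the data and move only the (much smaller) result set.
from typing import Callable, Iterable

class StorageNode:
    """Stands in for a storage or memory module with local compute (hypothetical)."""
    def __init__(self, records: list[dict]):
        self.records = records

    def pushdown(self, predicate: Callable[[dict], bool]) -> list[dict]:
        # Computation runs where the data lives; only matches cross the link.
        return [r for r in self.records if predicate(r)]

def host_side_filter(nodes: Iterable[StorageNode], predicate) -> list[dict]:
    # Conventional approach: every record traverses the interconnect first.
    moved = [r for n in nodes for r in n.records]
    return [r for r in moved if predicate(r)]

def near_data_filter(nodes: Iterable[StorageNode], predicate) -> list[dict]:
    # Near-data approach: each node filters locally, the host merges small results.
    return [r for n in nodes for r in n.pushdown(predicate)]

if __name__ == "__main__":
    nodes = [StorageNode([{"sensor": i, "temp": 20 + i % 50} for i in range(1000)])
             for _ in range(4)]
    hot = near_data_filter(nodes, lambda r: r["temp"] > 65)
    print(len(hot), "records moved instead of", sum(len(n.records) for n in nodes))
```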

Meanwhile, FPGA acceleration is perhaps best suited for transaction processing, in-memory databases, financial services, real-time analytics and risk analysis, machine learning, imaging and transcoding. Moreover, when paired with CPUs, FPGAs provide application-specific hardware acceleration that can be updated over time. In addition, applications can be partitioned into segments that run most efficiently on the CPU and other segments that run most efficiently on the FPGA. FPGAs can also attach to the same types of memory as CPUs.
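
That CPU/FPGA partitioning can be sketched in outline, assuming a hypothetical offload hook in place of a real vendor runtime such as OpenCL or oneAPI. The example below shows only the dispatch pattern: the throughput-critical kernel is routed to the accelerator when one is present, with a CPU fallback otherwise.

```python
import zlib

def compress_on_cpu(block: bytes) -> bytes:
    # General-purpose, control-heavy work stays on the CPU.
    return zlib.compress(block)

def compress_on_fpga(block: bytes) -> bytes:
    # Hypothetical placeholder: in a real design this would invoke a vendor
    # runtime (for example an OpenCL or oneAPI queue) driving an FPGA kernel,
    # and that kernel can be re-flashed as the algorithm evolves.
    raise NotImplementedError("no accelerator runtime in this sketch")

def compress(block: bytes, accelerator_present: bool = False) -> bytes:
    # Dispatch: route the throughput-critical kernel to the FPGA when one is
    # available, otherwise fall back to the CPU implementation.
    if accelerator_present:
        return compress_on_fpga(block)
    return compress_on_cpu(block)

if __name__ == "__main__":
    print(len(compress(b"sensor payload " * 1000)))
```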

In conclusion, the rise of the IoT has led to the rapid proliferation of low-power mobile devices, systems and lightweight endpoints. The increasing popularity of wearables and connected appliances, along with advances in low-power design and future energy-harvesting techniques, means that even more low-power devices will come online in the coming months and years. The growth of connected devices will result in an explosion of digital data – requiring both storage and analysis – that will strain server and data center resources for the foreseeable future. To address these concerns, the semiconductor industry will continue to evolve system architectures with the goals of improving performance and power efficiency while reducing data movement and total cost of ownership.


