Big Data Needs Bigger Memory

Big Data will need to migrate from DRAM to large on-chip SRAMs.


By Rodrigo Liang

Over the last few decades, the semiconductor industry has focused its considerable technical investments on accelerating software applications.

Performance metrics for new semiconductor products are often correlated with their ability to lower the latency of accessing the data required to run specific software applications. The need for increased performance from semiconductors continues to rise as the software code base expands, striving to provide more services at your fingertips anytime, anywhere. And with the growth of interconnected applications generating, sharing and analyzing ever-growing volumes of data, the semiconductor industry is presented with unique opportunities to innovate and enable the applications of the future.

This drive toward lower data-access latency and ever-larger memory footprints has presented a new challenge for the semiconductor industry. Historically, the industry has delivered performance improvements through frequency and density gains, primarily through improvements in transistor technology — what has been popularly referred to as Moore’s Law. But the industry now faces the reality that those improvements have hit an inflection point. Memory capacity, both SRAM and DRAM, is not keeping up with the needs of big data applications and a growing user base. Interconnect technology struggles to keep pace with the growing demand for bandwidth to memory devices. Layered on top of this is power as a key constraint for most applications. Low-power requirements, driven by the demands of mobile applications, restrict the resources available in chip designs.

Furthermore, technology costs are rising as process nodes shrink from 28nm to 16/14nm and 10nm. Fewer and fewer chip designs can justify the additional costs of the smaller transistor geometries.

Semiconductor technologists have been answering the call with innovation. The early 2000s saw the emergence of the multicore revolution. It is not uncommon today for consumer products to employ four, eight, or as many as 32 64-bit processor cores to meet the needs of a throughput-computing world. The arrival of flash memory significantly changed the technology landscape, delivering lower-latency access to storage and higher-reliability storage devices. New power-management circuits and process techniques have yielded substantial improvements in power efficiency — a requirement for most semiconductor designs today.

Why not Big Data meets Big Memory on-chip?
Future demand for Big Data will only get bigger. Demand for real-time data analytics that process such raw data will only increase. Supporting this trend requires a new memory system hierarchy that improves data access through shorter latency, higher capacity and improved bandwidth. The move to shift data from slower, higher-latency storage devices into large-scale DRAM has already started and will continue as the world of Big Data and data analytics continues to grow.
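To make the latency gap behind this argument concrete, here is a minimal back-of-envelope sketch. The figures are assumptions on my part — commonly cited orders of magnitude for each tier of the hierarchy, not measurements from the article or any specific device:

```python
# Rough, commonly cited order-of-magnitude random-access latencies for
# each tier of the memory hierarchy. These are illustrative assumptions,
# not measurements of any particular product.
LATENCY_NS = {
    "on-chip SRAM (cache)": 1,         # ~1 ns
    "DRAM (main memory)":   100,       # ~100 ns
    "flash SSD":            100_000,   # ~100 microseconds
    "spinning disk":        10_000_000 # ~10 milliseconds
}

def lookups_per_second(latency_ns: float) -> float:
    """Upper bound on strictly serial random lookups per second
    when each lookup pays the full access latency."""
    return 1e9 / latency_ns

for tier, ns in LATENCY_NS.items():
    print(f"{tier:22s} ~{ns:>12,} ns  ->  {lookups_per_second(ns):.1e} lookups/s")
```

Under these assumed numbers, data resident in on-chip SRAM can be touched roughly two orders of magnitude more often per second than data in DRAM, and many orders of magnitude more often than data on disk — which is the quantitative case for pulling hot data up the hierarchy.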

As the constraints of semiconductor technology and power consumption tighten, the next step is in front of us. Data that moved from storage devices to DRAM will soon need to migrate to large on-chip SRAMs. The industry is ready for such an inflection point, and we can dream of the opportunities that such a breakthrough could bring.

—Rodrigo Liang is a senior vice president at Oracle.
