Content And Gaming Drive Design

Shift in focus has major implications for everything from architecture to time-to-market strategies.


By Pallab Chatterjee
This year’s IEDM conference will feature a non-device topic for the luncheon keynote from Masaaki Tsuruta, CTO of Sony on Interactive Gaming. The takeaway: Even in the heavy R&D and physics-centric world of devices, building for the end application has now become one of the top priorities in driving specifications.

Traditional compute systems were based on batch-mode processing, meaning they were optimized for generalized programming or scientific computing. Today's devices, in contrast, are split between machines that create media content and machines that consume it.

Mobile devices in the current generation are built around audio and video playback, audio and video capture, and motion-based gaming. That is a dramatic shift in architecture for devices originally designed as voice-based radio communicators.

The shift in content also affects the data itself. Once-small data sets have grown large in file size and in block/string size. The content business is big, and as a result it is driving changes in specifications, hardware, and usage models. In 2010, the copyright industry accounted for more than 6.4% of U.S. GDP (according to the IIPA), or roughly $930 billion in content. Exports of that content accounted for $134 billion in foreign sales, far more than sectors such as aircraft, autos, and agriculture.

Major system and architectural changes have been created for this market, such as higher speeds for USB, SATA, DisplayPort, HDMI, WiFi, Bluetooth, ZigBee, Z-Wave, and PCIe. At the component level, changes such as IPv6, Advanced Format (AF) for disk drives, and new memory interfaces are under way. AF is a shift from the existing 512-byte sectors to 4K-byte sectors for storage on rotating media. The change not only enables higher-speed access and increased densities, but also optimizes the drive for large data sets rather than high numbers of small files. As these blocks grew longer, the drives themselves grew larger to accommodate the content. The typical high-end drive now features a 1-Tbyte single platter, with densities still increasing.
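A rough back-of-the-envelope sketch shows why larger sectors favor large media files: every sector carries fixed formatting overhead (sync, gap, and ECC bytes), so an Advanced Format 4K sector amortizes that cost over eight times the payload of a 512-byte sector. The per-sector overhead figure below is an illustrative assumption, not a published drive specification.

```python
# Illustrative comparison of 512-byte vs. 4K (Advanced Format) sectors.
# PER_SECTOR_OVERHEAD is an assumed value for sync/gap/ECC bytes.

FILE_SIZE = 10 * 1024**3        # a 10-Gbyte streaming-media file
PER_SECTOR_OVERHEAD = 65        # assumed fixed overhead bytes per sector

def platter_bytes(file_size, sector_size, overhead=PER_SECTOR_OVERHEAD):
    """Total platter bytes consumed: payload plus per-sector overhead."""
    sectors = -(-file_size // sector_size)   # ceiling division
    return sectors * (sector_size + overhead)

legacy = platter_bytes(FILE_SIZE, 512)       # legacy 512-byte sectors
af = platter_bytes(FILE_SIZE, 4096)          # 4K Advanced Format sectors

print(f"512-byte sectors: {legacy / FILE_SIZE:.3f}x raw file size")
print(f"4K sectors:       {af / FILE_SIZE:.3f}x raw file size")
```

Under these assumed numbers the 4K layout spends roughly 1.6% of platter capacity on overhead versus about 12.7% for 512-byte sectors, which is the density gain the format change is after.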

Similarly, in processors the trend started with single CPU cores, with the graphics co-processor chip used simply as a paint engine. As the amount of media content on these devices increased, new processor architectures such as multicore were implemented to allow for the many parallel compute tasks needed for streaming content playback. As is typical of most hardware, power optimization comes along with functional optimization. The result is multicore for general processing, plus specialized GPU processing for high-speed shading and physics effects, in a low-power format.
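The parallel tasks in streaming playback can be sketched as a pipeline of stages that multicore hardware runs concurrently. The stage functions below are placeholders standing in for real demux/decode/post-process work, not an actual media framework.

```python
# Toy model of the concurrent stages in streaming-media playback that
# motivated multicore architectures: demux, decode, and post-process.
# Each stage here is a placeholder returning a tagged string.
from concurrent.futures import ThreadPoolExecutor

def demux(chunk):
    # Split the container stream into an elementary stream.
    return f"es({chunk})"

def decode(es):
    # Decompress the elementary stream into frames.
    return f"frames({es})"

def postprocess(frames):
    # Scaling, color conversion, and effects on decoded frames.
    return f"out({frames})"

chunks = [f"chunk{i}" for i in range(4)]
with ThreadPoolExecutor(max_workers=3) as pool:
    # Each map fans a stage's work out across worker threads.
    streams = list(pool.map(demux, chunks))
    frames = list(pool.map(decode, streams))
    output = list(pool.map(postprocess, frames))

print(output[0])  # out(frames(es(chunk0)))
```

On real silicon the stages would run on separate cores (or on the GPU for shading), overlapping in time rather than completing one stage before the next begins; the sketch only shows the task decomposition.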

These hardware changes to address large streaming content (upwards of 10-Gbyte files) have brought new memory interfaces. DDR3 and DDR4, the Hybrid Memory Cube, hybrid persistent DRAM, high-speed SPI NOR flash, MLC and TLC flash, and XDR/mobile XDR memory technologies all target higher performance and capacity within a fixed power budget. The goal is maximizing the performance/power ratio.

The diversity and quantity of content being made is driving these architectural advances, along with the firmware and operating system software that control them. These changes are now being brought to market on 18- to 24-month time frames vs. the 7- to 8-year cycles that were in place from the end of the 1950s through the 1990s.

Flexibility toward new ideas and, most importantly, understanding and investigating the subtleties of this media content are the keys to market penetration and an entrenched lifecycle for components. The standards-generation cycle is too slow to be a driver in today's world, so leaders in these markets need to embrace, review, and get involved in both the content-creation and playback spaces to design next-generation IP and systems effectively.
