System Design Enabling Surround Computing

The future is multi-platform, with natural human input and realistic output.


For a while now I have been wondering about the next killer application driving electronics. During CDNLive in Austin a couple of weeks ago, Dr. Lisa Su, at the time still Chief Operating Officer and since October 7th president and CEO of AMD, gave some answers in a keynote titled “The Trends Redefining Our Industry.” The answer may well be “surround computing.”

Su identified several key trends, first among them that pervasive computing will become surround computing. AMD CTO Mark Papermaster had defined surround computing at the Hot Chips conference as “imagining a world without keyboards or mice, where natural user interfaces based on voice and facial recognition redefine the PC experience, and where the cloud and clients collaborate to synthesize exabytes of image and natural language data. The ultimate goal is devices that deliver intelligent, relevant, contextual insight and value that improves consumers’ everyday life in real time through a variety of futuristic applications. AMD is leading the quest for devices that understand and anticipate users’ needs, are driven by natural user interfaces, and that disappear seamlessly into the background.”

Surround computing is multi-platform, from eyeglasses to room-size; fluid, with natural human input and realistic output; and intelligent, anticipating our needs. Su likened it in her keynote to the holodeck from Star Trek. According to Papermaster’s article on the topic, the implications for computer architecture and networking are profound. We will need smarter clients with realistic, natural human communication, and smarter clouds orchestrating 10 billion devices in real time. Su pointed out in her CDNLive keynote that the rapid growth of sensor networks will drive an exponential increase in available data, with a further exponential increase at the local and cloud levels caused by what AMD calls the Internet of Everything (IoE). With 3 billion Internet users already, the data crossing the Internet is expected to grow from 245 exabytes in 2010 to 1,000 exabytes by 2015, and the digital universe to reach 45 zettabytes by 2020.
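For a sense of scale, the growth rate implied by those traffic figures is easy to check with a quick back-of-the-envelope calculation. The sketch below just plugs in the numbers quoted above; the helper function is my own, not from any AMD material.

```python
# Back-of-the-envelope check of the data-growth figures quoted above:
# Internet traffic of 245 EB in 2010 growing to 1,000 EB by 2015.

def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1.0 / years) - 1.0

traffic_2010_eb = 245.0   # exabytes crossing the Internet, 2010
traffic_2015_eb = 1000.0  # projected exabytes, 2015

rate = cagr(traffic_2010_eb, traffic_2015_eb, years=5)
print(f"Implied growth: {rate:.1%} per year")  # roughly 32.5% per year
```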

The other key trends Su outlined all relate to the challenges and solutions enabling surround computing.

First, energy efficiency is critical. Information and communications technology (ICT) could account for more than 14% of global electricity consumption by 2020, while historical efficiency gains have tapered off since 2000, with supply-voltage reductions stalled at the 1V level for the last decade. Power efficiency must instead come from design and architecture. Su pointed out that between 2008 and 2014, AMD achieved a 10X platform-level improvement through a multifaceted approach that combined radical new design methodologies, software that unleashes hardware capabilities, and intelligent power management.
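The reason stalled supply voltages matter so much is the classic dynamic-power relationship P ≈ αCV²f: with V pinned near 1V, the quadratic lever is gone, so savings have to come from architecture, capacitance, and switching activity instead. Here is a minimal sketch of that relationship, with illustrative parameter values of my own choosing:

```python
# Dynamic CMOS power: P = alpha * C * V^2 * f. The V^2 term is why
# stalled voltage scaling hurts. All parameter values are illustrative.

def dynamic_power(alpha, c_farads, v_volts, f_hz):
    """Switching (dynamic) power of a CMOS circuit, in watts."""
    return alpha * c_farads * v_volts**2 * f_hz

base = dynamic_power(alpha=0.1, c_farads=1e-9, v_volts=1.0, f_hz=2e9)
# Historically, dropping V from 1.0V to 0.7V alone cut power in half:
scaled = dynamic_power(alpha=0.1, c_farads=1e-9, v_volts=0.7, f_hz=2e9)
print(f"{base:.2f} W -> {scaled:.2f} W ({1 - scaled/base:.0%} saved)")
```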

Second, Moore’s Law is slowing down. Su pointed to the “wrong trends” in area scaling and cost per gate in the move from 20nm to 14nm. Lithography begins to limit area scaling at 20nm, forcing fewer metal layers due to cost and less aggressive feature scaling. Compounded by rapidly rising lithography costs, the transition from 28nm to 20nm brings, for the first time, no improvement in cost per transistor.
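The cost-per-transistor point follows from simple arithmetic: cost per transistor is roughly wafer cost divided by transistors per wafer, so when lithography pushes wafer cost up as fast as density improves, the ratio stops falling. A toy illustration, with purely hypothetical numbers:

```python
# Toy model: cost per transistor = wafer cost / transistors per wafer.
# Both sets of numbers below are hypothetical, chosen only to show why
# rising lithography cost can cancel out a density gain.

def cost_per_transistor(wafer_cost_usd, transistors_per_wafer):
    return wafer_cost_usd / transistors_per_wafer

node_28nm = cost_per_transistor(wafer_cost_usd=4000, transistors_per_wafer=4e12)
# Density roughly doubles at 20nm, but multi-patterning lithography
# roughly doubles wafer cost as well in this hypothetical:
node_20nm = cost_per_transistor(wafer_cost_usd=8000, transistors_per_wafer=8e12)
print(node_28nm, node_20nm)  # identical: no cost-per-transistor gain
```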

[Photo: processor ecosystems converging between Intel and ARM]

Third, and especially relevant to my team and me, as system design tool requirements are heavily driven by processor architectures, Su talked about how processor ecosystems will converge and how ARM and x86 will dominate the future of computing. Su showed the graph in the photo I took and attached to this blog (also available online here), with other processor architectures squeezed between ARM and Intel, and the combined ARM and Intel semiconductor TAM approaching $80B in 2014. I had mused about this based on IDC market data in my previous blog, “Game of Ecosystems,” with IDC’s numbers arriving at a similar order of magnitude. AMD’s ambidextrous ARM/x86 approach seems well positioned to address both sides of this market.

As a final trend, Su called out heterogeneous computing as the way forward: integrating processors, DSPs, GPUs, and connectivity onto a single chip and having them work together to improve performance. As examples, Su pointed to every tablet shipped since 2012, 62 of the top 500 supercomputers, the Sony PlayStation 4, and the Microsoft Xbox One. Su then talked about how chip- and system-level design can make it easier for software developers to harness the entire compute capability of an SoC: changing the hardware to suit the software, enabling popular programming languages to go parallel, and making it easy to run faster while consuming less power. To enable all this, the HSA Foundation has assembled an impressive line-up of members, all working toward enabling heterogeneous system architectures.
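As a loose analogy for “popular programming languages going parallel,” the sketch below fans a data-parallel workload out across CPU workers using only Python’s standard library; on an HSA-style system, the same pattern would dispatch kernels to GPU or DSP queues instead. The workload itself is invented purely for illustration.

```python
# A data-parallel workload fanned out across workers; a stand-in for
# the kind of kernel an HSA runtime would dispatch to GPU/DSP queues.
from concurrent.futures import ProcessPoolExecutor

def kernel(chunk):
    """Illustrative compute kernel: sum of squares over one slice."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]  # four interleaved slices
    with ProcessPoolExecutor(max_workers=4) as pool:
        total = sum(pool.map(kernel, chunks))
    print(total)  # matches sum(x * x for x in data)
```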

Bottom line, I was very happy that Lisa Su’s presentation confirmed our direction. All these challenges are well addressed by our System Development Suite. For example, our system-level tools like the Palladium platform work well for both x86 and ARM architectures, as shown for x86 at CDNLive by AMD’s Alex Starr in a presentation called “Power Event Monitoring at the Application Level Using Hybrid Emulation,” and for ARM in our recent announcement at ARM TechCon, “ARM Achieves 50X Faster OS Boot-Up on Mali GPU Development using Cadence Palladium XP Platform with ARM Fast Models.” Low power is a key focus across the System Development Suite, especially in the Incisive and Palladium platforms. And given the cost of manufacturing silicon, system verification is a crucial step for project teams to gain the confidence to actually tape out a chip.

