Clearing Up Cloud-Based SoCs

SoCs and the cloud are now working in lock step. What does that mean for design?


By Frank Ferro
With each passing month, the cloud is taking the semiconductor market by storm, just as it did in the enterprise years ago. Take NVIDIA's recent Kepler GPU announcement for cloud computing. This device provides low-latency access to the cloud for gaming, giving gamers high performance and access to the latest content without being tied to a game console. Another example is Applied Micro's new X-Gene 64-bit 'server-on-a-chip' for powering the cloud. From the system side, "Apple puts iCloud at the heart of its OS," expanding cloud content to include more photo-sharing capabilities and adding video sharing.

Another announcement that caught my eye was technology for thin clients. I have seen the concept of thin clients come and go, but with so much content moving to the cloud, there does appear to be a real opportunity to reduce client hardware costs and create a new category of desktop and mobile devices that harness the computing power of the cloud. In the short term, though, given the impressive smartphone and tablet trajectories, I find it hard to imagine that anyone (consumers in particular) would want to give up the 'smarts' in their smart devices. So for now, I see thin clients being targeted at the enterprise and perhaps some vertical applications.

In any case, as the cloud gets even bigger, it continues to enable new hardware opportunities. And from a semiconductor perspective, having distributed computing in the hands of the consumer means billions of devices, as opposed to central computing, which means many fewer units.

I see the cloud-enabled SoC requirements falling into three categories:

  1. Devices that sit in the network (powering the cloud);
  2. Consumer products, including home and mobile devices at the edge of the network; and
  3. Devices at the extreme edge of the network, i.e., the Internet of Things.

SoCs that power the cloud will have extremely high performance and memory-system requirements in order to move, manipulate and enable fast access to data in the cloud. SoCs for consumer products, by contrast, have a different set of requirements due to the 'multitasking' nature of these devices. These SoCs need robust and reliable connections to the cloud, along with the compute power to process local content for multiple applications. At the same time, they must be power-aware, since many of these end-consumer products are mobile. And finally, SoCs for the Internet of Things have lower processing requirements, must be extremely power-sensitive (at times running for years on a battery), and must come at very low cost.

Dealing with Gigas: The first two categories, what we refer to as 'cloud-scale' SoCs, will need processor performance well above 1GHz to meet both infrastructure and consumer requirements. For devices powering the cloud, it is almost entirely about performance. These are fairly homogeneous processors, although as more functionality moves to the cloud, we will see them take on some of the characteristics of heterogeneous SoCs.

Cost is also a big concern for data centers, in terms of both physical size and heat, so these SoCs need to be very high-performance and power-efficient. SoCs for the consumer market have their own set of unique challenges, given that they have to run multiple applications and multiple operating systems at very low power and cost. Each of these applications often has very different processing requirements: audio and voice need very long talk/play times, while graphics and video push the performance limits to deliver the best screen quality. Accomplishing this requires processor speeds of 1GHz to 3GHz, GPUs delivering many GFLOPS, and memory access throughput approaching 10 to 50 GBytes/s.
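To see where throughput numbers in that range come from, here is a back-of-the-envelope estimate of the aggregate DRAM traffic a media-heavy consumer SoC might generate. The workload figures are purely illustrative assumptions on my part, not vendor specifications:

```python
# Back-of-the-envelope DRAM bandwidth estimate for a media-heavy SoC.
# All workload parameters below are illustrative assumptions.

def frame_bandwidth_gbs(width, height, bytes_per_pixel, fps, passes):
    """GB/s for one surface touched `passes` times per frame."""
    return width * height * bytes_per_pixel * fps * passes / 1e9

# Assumed concurrent workloads on a consumer 'cloud-scale' SoC:
gpu_render = frame_bandwidth_gbs(3840, 2160, 4, 60, 3)    # 4K60 render: 1 write + 2 reads
display    = frame_bandwidth_gbs(3840, 2160, 4, 60, 1)    # display scan-out read
video_dec  = frame_bandwidth_gbs(3840, 2160, 1.5, 30, 2)  # 4K30 decode, NV12 in/out
cpu_apps   = 4.0  # assumed aggregate CPU/application traffic, GB/s

total = gpu_render + display + video_dec + cpu_apps
# Real DRAM interfaces rarely sustain 100% utilization; assume ~60% efficiency.
required = total / 0.6

print(f"Estimated aggregate DRAM traffic: {total:.1f} GB/s")
print(f"Required interface bandwidth at 60% efficiency: {required:.1f} GB/s")
```

Even this modest mix of concurrent workloads lands well inside the 10-50 GBytes/s range once interface efficiency is accounted for, which is why the memory system dominates cloud-scale SoC design.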

The Integration Challenge: Given the diverse set of requirements for cloud-scale SoCs, designers are constantly being challenged to bring all this functionality onto the SoC. There are three major challenges here for SoC designers: integrating multiple heterogeneous cores, memory system design, and power management. Fortunately for designers, more vendors are providing complete IP subsystems, giving them functional blocks for tasks like video and audio processing. This greatly reduces the number of individual IP blocks that have to be managed in the system.

While each subsystem usually does a very good job at its unique task, the real challenge begins at the SoC integration level: making all these subsystems work together. This is where the design of the on-chip network and system IP is critical, ensuring that each subsystem gets the necessary bandwidth to memory so that system performance is optimized across all applications. In addition to enabling efficient memory design, the on-chip network plays a significant role in power management. Because the network sees all the traffic in the system, it can give the power manager the hardware visibility to make quick and reliable decisions about powering subsystems on and off, keeping the silicon 'dark' for as long as possible.
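As a toy illustration of that idea, the sketch below (a hypothetical model of my own, not any particular interconnect IP) shows how a network that observes every transaction can track per-subsystem idle time and flag blocks as candidates for power gating:

```python
# Sketch of NoC-assisted power gating: because the interconnect sees every
# transaction, it can track per-subsystem idle time and tell the power
# manager when a block looks safe to gate. Names and the threshold are
# illustrative assumptions.

class IdleMonitor:
    def __init__(self, idle_threshold_cycles):
        self.threshold = idle_threshold_cycles
        self.idle_cycles = {}  # subsystem name -> cycles since last traffic

    def track(self, name):
        """Register a subsystem with the monitor."""
        self.idle_cycles[name] = 0

    def on_cycle(self, active_subsystems):
        """Called once per clock with the set of subsystems that issued traffic.
        Returns the subsystems that just crossed the idle threshold."""
        gate_requests = []
        for name in self.idle_cycles:
            if name in active_subsystems:
                self.idle_cycles[name] = 0       # traffic seen: reset the counter
            else:
                self.idle_cycles[name] += 1
                if self.idle_cycles[name] == self.threshold:
                    gate_requests.append(name)   # ask the power manager to gate it
        return gate_requests

mon = IdleMonitor(idle_threshold_cycles=3)
mon.track("gpu")
mon.track("video")

# The GPU goes quiet after the first cycle; the video block stays busy.
requests = []
for traffic in [{"gpu", "video"}, {"video"}, {"video"}, {"video"}, {"video"}]:
    requests += mon.on_cycle(traffic)
print(requests)  # ['gpu']: the idle GPU crossed the threshold; video never did
```

A real implementation lives in hardware on the hot path, of course, but the principle is the same: the network's global view of traffic is what makes fast, reliable gating decisions possible.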

Get Your Head in the Cloud: Supporting the cloud is very important to the growth of the semiconductor industry. Keeping pace with changing cloud requirements demands a sophisticated SoC design methodology, or SoC platform, that is flexible enough to support the diverse set of requirements the cloud imposes. Providing IP integration support is also critical to help SoC companies execute a successful cloud-scale SoC strategy. To be true design partners with today's semiconductor innovators, we all need to get our heads in the clouds but keep our feet planted firmly on the ground.