Mobile Applications Drive New Architectures

Efforts to reduce power now having an effect on larger system designs; higher bandwidth showing up everywhere.


By Pallab Chatterjee
The push toward mobility in consumer devices is having an impact on the entire component flow.

Mobile devices are dominated by two key factors: an overriding power constraint and very high data bandwidth. The power constraints apply both to the mobile device itself and to the cloud-based servers that support it. The bandwidth issues stem from the limited processing power available and the need to switch between functions, rather than keeping a common memory load and multiprocessing the data.

The power side for mobile devices has been discussed in depth. The impact on the rest of the system is less well known. Because mobile devices have to process data on a limited power budget, the infrastructure supporting them, namely the carrier and connection network and the computing cloud the device connects to, has to pick up the slack on the processing front. New custom chipsets and processor architectures are being created to address high-volume connection tasks such as display view transcoding, security processing and authentication, and sensor/imaging data processing. These chips are making their way into the network connectivity side, with multicore now the dominant format for network processors. Also on the networking side, the addition of dedicated, power-optimized AES encryption/decryption blocks allows for secure data traffic with mobile devices on a per-block basis.
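
In software terms, the per-block handling such an AES engine performs looks roughly like the following sketch (Python, using the third-party cryptography package; the key handling and function names are illustrative assumptions, not any vendor's actual interface):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)  # in hardware, the key would live inside the engine
    engine = AESGCM(key)

    def encrypt_block(payload: bytes):
        # Each traffic block gets its own nonce and is encrypted independently.
        nonce = os.urandom(12)
        return nonce, engine.encrypt(nonce, payload, None)

    def decrypt_block(nonce: bytes, ciphertext: bytes) -> bytes:
        return engine.decrypt(nonce, ciphertext, None)

    nonce, ct = encrypt_block(b"mobile traffic block")
    assert decrypt_block(nonce, ct) == b"mobile traffic block"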

Also on the power side is the change to high-bandwidth interfaces such as 10G, 40G (organized as 4 lanes of 10G), and 100G (organized as 4 lanes of 25G). While it would appear these interfaces consume more power, the reality is that when they are implemented in pairs, the lower duty cycle and larger packet size enable lower power. For 100G interfaces, the ability to implement the 25-28G lanes in CMOS at 32nm and below offers huge power savings, as the PHY/MAC pairs actually consume less dynamic and active power than 10G lanes implemented in 40nm processes.
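
The lane arithmetic behind those interfaces is easy to check; a minimal sketch (Python, using the nominal lane rates cited above rather than the exact encoded line rates):

    def aggregate_gbps(lanes: int, lane_rate_gbps: float) -> float:
        # Raw aggregate throughput of a multi-lane interface.
        return lanes * lane_rate_gbps

    assert aggregate_gbps(4, 10.0) == 40.0    # 40G: 4 lanes of 10G
    assert aggregate_gbps(4, 25.0) == 100.0   # 100G: 4 lanes of 25G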

Data bandwidth is one of the keys behind the multicore architectures of both mobile devices and server designs. To optimally process database access, still images, video content, audio content, gaming graphics, and sensor data (touch screen, gyroscope, GPS, etc.), separate processing engines are usually employed. This is a key driver for multicore, where the task base can stay loaded and only the data sets change. To handle the diversity and volume of data sets to be processed, wide, high-bandwidth data paths are needed. Servers have moved to deep memory architectures to support cloud computing for smartphones and tablets.
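
A minimal sketch of that dispatch pattern, assuming one engine per data type with its task code kept resident so that only the data set changes (Python; the engine functions are simple stand-ins, not real processing kernels):

    # Each engine keeps its task resident; only the incoming data set changes.
    def audio_engine(samples):    # stand-in for a resident audio DSP task
        return sum(samples) / len(samples)

    def image_engine(pixels):     # stand-in for a resident imaging task
        return max(pixels) - min(pixels)

    def sensor_engine(readings):  # stand-in for a resident sensor-fusion task
        return sorted(readings)[len(readings) // 2]

    ENGINES = {"audio": audio_engine, "image": image_engine, "sensor": sensor_engine}

    def dispatch(kind, data):
        # Route each data set to the core whose task is already loaded for it.
        return ENGINES[kind](data)

    print(dispatch("audio", [1, 2, 3]))    # 2.0
    print(dispatch("sensor", [5, 1, 3]))   # 3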

Similarly, the data bandwidths of broadband and wireless networks are increasing. For broadband, there is a need to put more data per channel on existing lines. This is being done with new wide-data architectures that support multiple lanes of SerDes driving the network. To handle the wide variety of data being presented, both cross-point switch architectures and multicore internal bus architectures are changing. These new buses are externally expandable and support individualized power and data management for each core on the bus.
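
A rough software model of that per-core management, assuming a bus that can gate an idle endpoint and wake it only when traffic arrives (Python; the class and field names are hypothetical, not any specific interconnect's API):

    from dataclasses import dataclass, field

    @dataclass
    class CorePort:
        name: str
        powered: bool = True
        pending: list = field(default_factory=list)

    class Interconnect:
        def __init__(self, ports):
            self.ports = {p.name: p for p in ports}

        def send(self, dest, packet):
            port = self.ports[dest]
            if not port.powered:
                port.powered = True           # wake only the core that has traffic
            port.pending.append(packet)

        def idle(self, name):
            self.ports[name].powered = False  # gate an idle core without touching the others

    bus = Interconnect([CorePort("crypto"), CorePort("dsp"), CorePort("net")])
    bus.idle("dsp")
    bus.send("dsp", b"frame-0")               # incoming traffic wakes the DSP port on demand
    print(bus.ports["dsp"].powered, len(bus.ports["dsp"].pending))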

These different architectures are responsible for the division in the use models of the various available cores. Tensilica cores tend to be used in audio processing applications, MIPS and Freescale cores are used in network transaction and security processing, ARM cores are used as generalized CPUs for mobile devices, x86 architectures dominate the server side, and specialty DSPs abound in sensor processing. As data consumption becomes more mobile-centric, the whole ecosystem, from servers to delivery, is shifting to a true ultra-thin-client computing model.


