Figuring out how a device will be used is essential; figuring out how much can fit on a die is no longer the defining factor.
By Pallab Chatterjee
As new process technologies are developed to make devices smaller, they also drive down the operating power of the devices and systems built on them.
The goal is to reduce the power requirements of the system and thereby extend functional life on a single battery charge. This approach has worked in the semiconductor industry from 10-micron processes down to the 65nm node. Below 65nm, the rules are changing, not because of the process available to manufacture the device, but because of what people are doing with the devices.
At the 40nm node and below, billion-transistor chips are possible. They are not practical at larger geometries for a number of reasons, all related to manufacturability. The trouble with a billion-transistor chip is that it does many different things and has a great deal of computing capability. To support this with a reasonable power budget, the design must provide multiple power grids and power controls so that blocks can be turned on and off as needed. This technique extends battery life by running only what a given operation requires, and it is the default direction for general-purpose processor cores and memories in the industry.
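To make the block-gating idea concrete, here is a minimal sketch of how a power manager might sum only the blocks a given operation needs. The block names, power numbers, and operation list are purely illustrative assumptions, not figures from any real design:

```python
# Minimal sketch of block-level power gating. Block names and power numbers
# are hypothetical, chosen only to illustrate the on/off partitioning idea.

BLOCK_POWER_MW = {          # active power per block (illustrative values)
    "cpu_cluster": 450.0,
    "gpu": 600.0,
    "display": 300.0,
    "wifi": 120.0,
    "video_decode": 200.0,
}

# Which blocks each operation actually needs; everything else is gated off.
OPERATION_BLOCKS = {
    "email_sync": {"cpu_cluster", "wifi"},
    "ebook_reading": {"cpu_cluster", "display"},
    "video_streaming": {"cpu_cluster", "gpu", "display", "wifi", "video_decode"},
}

def active_power_mw(operation: str) -> float:
    """Sum the power of only the blocks left on for this operation."""
    needed = OPERATION_BLOCKS[operation]
    return sum(p for block, p in BLOCK_POWER_MW.items() if block in needed)

if __name__ == "__main__":
    for op in OPERATION_BLOCKS:
        print(f"{op}: {active_power_mw(op):.0f} mW with unused blocks gated off")
```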
For designs that do not need 1 billion devices, a proportionally large chip can still be built for the function at this node because the extra devices carry very little incremental cost. The trouble with adding extra blocks such as display drivers, graphics cores and accelerators, and connectivity is that these blocks are hard to turn off to save power. It is generally not acceptable to shut down the I/Os and connectivity of an appliance if it may be receiving or transmitting data. There is a low-power spec (IEEE 802.3az, Energy Efficient Ethernet) that describes how to power down connectivity, but it requires both sides of the connection to participate. These designs also are hampered, when trying to balance power, by the applications being run on them.
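The "both sides" constraint can be shown in a few lines. This is a simplified sketch, not the 802.3az protocol itself: the class and field names are hypothetical, and the only point it captures is that low-power idle is unusable unless both link partners advertise support.

```python
# Simplified model of the 802.3az constraint: low-power idle (LPI) is only
# available when BOTH link partners advertise Energy Efficient Ethernet
# support. Names are illustrative, not taken from the spec.

from dataclasses import dataclass

@dataclass
class LinkPartner:
    name: str
    advertises_eee: bool

def can_enter_low_power_idle(local: LinkPartner, remote: LinkPartner) -> bool:
    # One-sided support is not enough; the link stays fully powered.
    return local.advertises_eee and remote.advertises_eee

tablet = LinkPartner("tablet_mac", advertises_eee=True)
router = LinkPartner("access_point", advertises_eee=False)
print(can_enter_low_power_idle(tablet, router))   # False: the router blocks LPI
```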
Consider a tablet. When it is sending or receiving data over WiFi/3G, the display does not have to be active and can be powered down, but the connectivity block must stay on. If the content being received is streaming video, however, the full function of the tablet has to be on to display the graphics, fill the buffers, and handle the connectivity. This changes the battery use model, which is typically designed around low-duty-cycle applications. Watching a streaming movie is not a low-duty-cycle application.
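A back-of-the-envelope comparison shows why the duty cycle matters so much. The battery capacity, power levels, and duty-cycle figures below are assumptions chosen for illustration, not measurements of any real tablet:

```python
# Rough battery-life comparison of the two use models described above.
# All numbers are illustrative assumptions.

BATTERY_WH = 25.0  # assumed tablet battery capacity

def battery_hours(avg_power_w: float) -> float:
    return BATTERY_WH / avg_power_w

# Low-duty-cycle model: mostly idle, short bursts of radio + CPU activity.
idle_w, burst_w, duty = 0.3, 2.5, 0.05
low_duty_avg = (1 - duty) * idle_w + duty * burst_w

# Streaming-video model: radio, decoder, GPU and display all on continuously.
streaming_avg = 4.5

print(f"low duty cycle : {battery_hours(low_duty_avg):.1f} h")
print(f"streaming video: {battery_hours(streaming_avg):.1f} h")
```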
Another driver of power is how much resolution is needed. Modern DSLRs routinely capture 10+ Mpixel still images, and video is now almost always 1080p. These large data sets and extended streaming times tax the low-power design, and the chips are not optimized for the high-performance blocks (GPU and NIC) running at 100% duty cycle.
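Some quick arithmetic shows the scale of those data sets. The bit depths and frame rate below are common values assumed for illustration:

```python
# Rough data-rate arithmetic behind the "large datasets" point.
# Bit depths and frame rates are common values chosen for illustration.

MP = 1_000_000

# 10 Mpixel raw still at an assumed 12 bits per pixel
still_bytes = 10 * MP * 12 / 8
print(f"10 MP raw still      : {still_bytes / 1e6:.0f} MB per frame")

# Uncompressed 1080p30 video at 24 bits per pixel (before codec compression)
frame_bytes = 1920 * 1080 * 24 / 8
print(f"1080p30 uncompressed : {frame_bytes * 30 / 1e6:.0f} MB/s sustained")
```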
With the release of general-purpose cores for CPUs and GPGPUs, the low-power implementation cannot be limited to bus architectures and power-down blocks. To effectively support the design in a system (smartphone, tablet, netbook), the application has to be considered (gaming, streaming media, office functions, e-mail, web surfing) and the power profile for each application mode optimized. It is this optimization, along with steady-state performance such as displaying an e-book or streaming a video, that now drives the power partitioning and the power management methodology that should be used. The verification world now has to consider applications above the OS, and use models with sensors/MEMS, as the main power-handling constraints, not just the “How many devices can I put in the box?” mentality that has existed since the mid-’70s.
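As a sketch of what application-level power verification can look like, the snippet below checks a per-application power profile against a steady-state budget. The block list, power states, per-block numbers, and budgets are hypothetical; the point is that the check is organized around use models rather than device counts.

```python
# Sketch of application-level power-profile checking. Blocks, states,
# power numbers and budgets are hypothetical, for illustration only.

STATE_POWER_MW = {"off": 0.0, "retention": 5.0}          # non-active states
ACTIVE_MW = {"cpu": 400, "gpu": 600, "display": 300,
             "modem": 150, "sensor_hub": 20}              # active power per block

# Per-application power profile: desired state of each block.
PROFILES = {
    "ebook":     {"cpu": "active", "gpu": "off", "display": "active",
                  "modem": "retention", "sensor_hub": "active"},
    "streaming": {"cpu": "active", "gpu": "active", "display": "active",
                  "modem": "active", "sensor_hub": "retention"},
}

BUDGET_MW = {"ebook": 800, "streaming": 1600}             # steady-state targets

def profile_power(profile: dict) -> float:
    total = 0.0
    for block, state in profile.items():
        total += ACTIVE_MW[block] if state == "active" else STATE_POWER_MW[state]
    return total

for app, profile in PROFILES.items():
    power = profile_power(profile)
    status = "OK" if power <= BUDGET_MW[app] else "OVER BUDGET"
    print(f"{app:10s}: {power:6.0f} mW ({status})")
```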