Making Sense Out Of Convergence

The trend toward combining technologies has accelerated, but what happens next remains a mystery.


By Ed Sperling

Technology convergence and market consolidation have always gone hand in hand, although not necessarily in ways everyone expects.

The confluence of video and audio was first exhibited by AT&T at the 1964 World’s Fair. The rather crude videophone demonstration promised a future where people could actually see the person they were talking with. Fast forward 45 years and AT&T provides the backbone cabling infrastructure for some of that technology, but the technology has progressed far beyond anything that AT&T could have predicted at the time. And perhaps more important, AT&T no longer has the monopoly on communication.

Much has changed since then. When AT&T introduced the videophone concept, companies such as Cisco didn’t even exist to exploit the capabilities of high-definition multi-way videoconferencing. The founders of Skype—Niklas Zennstrom and Janus Friis—weren’t even born.

But what became painfully obvious to many companies is that the founders of a market aren’t necessarily its long-term beneficiaries. In the consumer space, Apple is the modern-day example. It didn’t create the MP3 player, but it did win the lion’s share of the market by setting up a system of micropayments for music, an economic model that first came into widespread use in public transportation. It also didn’t create the smart phone, but the iPhone has emerged as one of the most popular.

At the component level, much of this convergence has been enabled by Moore’s Law and the increasing amount of real estate on a piece of silicon as everything shrinks at each new process node. The fact that a single chip can now contain everything from a cell phone to an MP3 player and a GPS receiver, and still work within a strict power budget to extend battery life, would have been considered science fiction a couple of decades ago, and highly improbable at the start of this decade. But, then again, so would being able to see and count the atoms of insulation between the wires on a chip.

Blurred lines

The ability to cram more functionality onto a single piece of silicon is now changing our perception of what exactly a chip should be called. It was fairly obvious what to call a multicore chip when all of its cores shared the same bus and memory (basically a mini array of processors for applications that could either run in parallel or be virtualized). The label becomes much harder to apply when those cores perform completely different functions and may or may not use the same memory.

Intel and ARM are both targeting the same market using a variety of strategies, ranging from 32-bit microcontrollers on ARM’s side to Intel’s scaled-down Atom processor. The differences between the two are becoming harder to discern, however. The old definition held that a microcontroller kept its memory on the same chip, while a processor used external memory. That definition no longer applies, particularly with systems on a chip. In fact, one chip can contain multiple 32-bit microcontrollers, said Dominic Pajak, product manager for ARM’s new Cortex-M0 microcontroller. He noted that a 32-bit microcontroller handling multiple functions actually uses less power than an 8-bit microcontroller, because it finishes the same work in fewer clock cycles, which lowers its duty cycle.
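
That argument is easiest to see with a back-of-the-envelope energy calculation. The sketch below is purely illustrative: the active-power figures, cycle counts and clock rates are assumptions rather than measurements of any real part, but they show how finishing a job in fewer cycles can more than offset a higher active power.

```c
/* Illustrative sketch of the "fewer cycles" argument.
 * All numbers are hypothetical, chosen only to show why a faster
 * 32-bit core can come out ahead on energy per task even though it
 * draws more power while active. */
#include <stdio.h>

int main(void) {
    /* Hypothetical 8-bit MCU: lower active power, but many more
     * cycles to churn through the same multi-byte workload. */
    double p8_mw  = 5.0;        /* active power, mW (assumed)  */
    double c8     = 400000.0;   /* cycles per task (assumed)   */
    double f8_hz  = 8e6;        /* clock frequency (assumed)   */

    /* Hypothetical 32-bit MCU: higher active power, far fewer
     * cycles because it handles 32 bits at a time. */
    double p32_mw = 12.0;
    double c32    = 60000.0;
    double f32_hz = 24e6;

    /* Energy per task = power * time = power * (cycles / frequency). */
    double e8_uj  = p8_mw  * (c8  / f8_hz)  * 1000.0;  /* microjoules */
    double e32_uj = p32_mw * (c32 / f32_hz) * 1000.0;

    printf("8-bit : %.1f uJ per task\n", e8_uj);   /* ~250 uJ */
    printf("32-bit: %.1f uJ per task\n", e32_uj);  /* ~30 uJ  */
    return 0;
}
```

With these assumed numbers the 32-bit part spends roughly an order of magnitude less energy per task, and because it finishes sooner it can spend the rest of the time in a low-power sleep state.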

“This works exceptionally well in Zigbee applications where you have remote sensors such as utility meters, tire pressure sensors, voltage monitors, as well as in smart phones, cameras, and a whole range of consumer goods,” Pajak said. “There’s even an integrated interrupt controller so you can write routines in C and you don’t have to write assembly code. That makes time to market much faster. And in the design you can have multiple power domains so you can power down a part of the chip.”
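
For readers who haven’t worked with the Cortex-M interrupt model, the fragment below sketches what “routines in C” means in practice. Because the integrated interrupt controller (the NVIC) saves and restores registers according to the C calling convention, a handler is just an ordinary C function whose name matches an entry in the vector table. The device header, IRQ number and handler name here are hypothetical vendor-specific placeholders, not part of any particular SDK.

```c
/* Minimal sketch of a C-only interrupt routine on a Cortex-M0-class part.
 * "device.h", TIMER0_IRQn and TIMER0_IRQHandler are hypothetical
 * vendor-specific names used only for illustration. */
#include <stdint.h>
#include "device.h"            /* vendor header: IRQ numbers, CMSIS core */

volatile uint32_t ticks;       /* shared with the handler, hence volatile */

void TIMER0_IRQHandler(void)   /* plain C, no assembly wrapper needed */
{
    ticks++;
    /* clear the peripheral's interrupt flag here (vendor-specific) */
}

int main(void)
{
    NVIC_SetPriority(TIMER0_IRQn, 1);   /* CMSIS-Core NVIC calls */
    NVIC_EnableIRQ(TIMER0_IRQn);

    for (;;) {
        __WFI();               /* sleep until the next interrupt fires */
    }
}
```

The same structure also fits the power point Pajak raised: the main loop simply sleeps and lets the interrupt wake the core only when there is work to do.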

Intel’s argument is roughly the same, although its approach is to leverage the Intel Architecture and the code—or at least the coding process—with which many software developers are very familiar. Jonathan Luse, director of marketing for the low-power embedded products division of Intel, said Intel has targeted a variety of new markets now dominated by 32-bit microcontrollers.

So far Intel’s biggest challenge has been the power budget. The Atom processor runs at about 2 watts, compared with some microcontrollers that supposedly run at a fraction of that number. But the real numbers are a bit fuzzier. A single-lane PCI Express link, for example, can add 0.5 watts to the total.

“There’s a whole space below where we are today that we have targeted,” said Luse. “That includes billions of units. Some of that is a space that would be relevant to the Intel Architecture. There are also other markets where it’s largely silicon by the pound. That’s not so interesting.”

He noted that at 2 watts there is room to drive out some power. But he also cautioned that stripping out power isn’t as clean as it looks, because removing power at the processor level sometimes causes penalties at the board level.

New opportunities

For system-level design, however, this opens up far greater opportunities to play across a variety of markets that were largely off limits to many chip makers. Virtually all the major chip companies now have a presence in the 32-bit microcontroller space, whether with dedicated microcontrollers, low-power FPGAs such as Actel’s new 65nm platform, or Intel’s Atom processor.

There also is an opportunity to use technology differently. Smart grids, for example, have opened new markets for both 2.4GHz and 900MHz communications. Mark Strzegowski, senior product manager for Analog Devices’ metering group, said the 900MHz technology using Zigbee and powerline works better in multi-unit apartment buildings because the reinforcing bars (rebar) used in commercial construction interfere with 2.4GHz wireless signals.

Opportunities also have opened up in adding intelligence to meters, where both Analog Devices and Cypress Semiconductor have created solutions that merge discrete logic and communication with existing technology. Those opportunities will continue as convergence, or at least the mashup of technologies, continues on a grand scale. What is less obvious is how quickly those collisions will happen, in which markets, and what will be displaced by those changes.


