Smarter Things

The Internet of Things includes more than just household appliances. But what does it mean for SoC design?

By Ed Sperling
SoC design has largely been a race to the next process node in accordance with Moore’s Law, but it’s about to take a sharp turn away from that as the Internet of Things becomes more ubiquitous.

Much has been made of the Internet of Things over the past couple of years, with visions of home networks in which a smart refrigerator reminds consumers that the milk has expired. While this gee-whiz stuff grabs headlines, the reality is that the chips driving these devices are hardly at the leading edge of design. Some date back to half-micron processes, where layout-dependent effects, electromigration and finFETs aren’t even relevant.

But that’s only part of the picture. The real challenge in making all of this work is much less about the wonders of technology and more about basic efficiency and cost in familiar areas such as network management, I/O, integration with mobile devices, and software that can fuse it all together. And the real killer application appears to be the automobile, where eking out an extra 5 or 10 miles per gallon matters to consumers, and where staying connected by voice, without taking your hands off the wheel or your eyes off the road, is critical to safety.

“This enters into the world of complex systems because you have to be able to simulate large numbers of things interacting together,” said Wally Rhines, chairman and CEO of Mentor Graphics. “The complexity increases at least as the square of the number of components. If you count the interactions, as long as you have a limited number of air-pressure sensors, light sensors, motion sensors, you can have dedicated signal processing for each of those sensors. As they start interacting, those interactions have to be verified and analyzed.”
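
To put rough numbers on Rhines’ point, consider a simple pairwise model (an illustration, not something from the article): with n sensors each reporting only to a host there are n links to verify, but once sensors also interact with one another, every pair adds a link and the count grows roughly as the square of n. A minimal Python sketch:

    # Illustrative sketch: counting interaction paths in a sensor network.
    # With n independent sensors feeding a host there are n links to
    # verify; once sensors also talk to each other, every pair adds a
    # link, so the total grows roughly as n squared.

    def interaction_paths(n_sensors, peer_to_peer):
        """Return the number of links that would need verification."""
        host_links = n_sensors                         # sensor-to-host
        if not peer_to_peer:
            return host_links
        peer_links = n_sensors * (n_sensors - 1) // 2  # every sensor pair
        return host_links + peer_links

    for n in (10, 100, 1000):
        print(n, interaction_paths(n, False), interaction_paths(n, True))
    # 100 sensors: 100 links to verify host-only, 5,050 once peers interact.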

Adding to the challenge, what is developed today may not be used for the same purpose next year.

“You put out a network of 100 sensors and you’re worried about the interaction between them and the host,” said Rhines. “And then you decide that you’re going to do some other kind of interaction between sensors. That adds to the complexity.”

The Internet of Things also relies heavily on analog at the front end, the part that senses and measures what’s going on around us, tied into a number-crunching digital back end. From a design standpoint, this is nothing new. The challenge is managing all the data collected by those analog sensors without overloading the networks and servers that have to make sense of it, and doing so efficiently and with sufficient performance.

“You can’t just transmit all the data that you collect,” said Frank Schirrmeister, group director of product marketing for system development in the system and software realization group at Cadence. “It’s more of the client/server approach. The bandwidth is not there to deal with that. Bandwidth is certainly growing, but the amount of data is growing much faster than the bandwidth. That means you have to be smarter at the node. This is why every LED has about 800 bytes of software code. In the future, they’ll probably also be adding IP addresses, which will require more code.”
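
What “smarter at the node” can look like in practice is a node that transmits events rather than raw samples. The sketch below is a hypothetical edge filter, not anything Schirrmeister described; the threshold scheme and the numbers are purely illustrative:

    # Hypothetical edge-node filter: transmit a reading only when it
    # deviates from the last reported value by more than a threshold,
    # so the network carries events rather than the raw sample stream.

    def filter_readings(samples, threshold=0.5):
        """Yield (index, value) pairs worth transmitting upstream."""
        last_sent = None
        for i, value in enumerate(samples):
            if last_sent is None or abs(value - last_sent) > threshold:
                last_sent = value
                yield i, value

    raw = [20.0, 20.1, 20.1, 20.2, 21.5, 21.6, 23.0, 23.0]
    sent = list(filter_readings(raw))
    print(len(sent), "of", len(raw), "samples transmitted:", sent)
    # 3 of 8 samples transmitted: [(0, 20.0), (4, 21.5), (6, 23.0)]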

This doesn’t necessarily require new EDA tools, but it may require thinking about how to use them differently. Simulations need to be run not just for what is there today, but in a virtualized environment to understand future uses and the corner cases that might result. Just as traffic on a chip can cause issues, traffic between chips on an ad hoc basis can cause similar problems. To a large extent, this requires thinking in terms of network-connected platforms rather than chips, but platforms cheap enough and small enough to fit in lots of different devices.

“You see this with the ARM M series,” said Schirrmeister. “Do you use a regular microcontroller, or license the M and configure it to your needs? Platforms like this would help address future changes.”
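
One way to make the simulation of future uses described above concrete is to stress a platform model with traffic patterns it wasn’t originally designed around. The toy sketch below assumes a hypothetical link-capacity model: it randomly rewires which nodes exchange data, the ad hoc case described above, and reports any links that end up overloaded. All of the numbers are illustrative.

    # Toy sketch of ad hoc inter-device traffic: randomly reassign which
    # nodes exchange data in each simulated deployment, then check
    # whether any shared link exceeds its capacity. Numbers are made up.

    import random
    from collections import Counter

    NODES = 20          # hypothetical devices on the network
    LINK_CAPACITY = 4   # maximum flows a shared link can carry

    def overloaded_links(n_flows, seed):
        """Assign random point-to-point flows; return overloaded links."""
        rng = random.Random(seed)
        load = Counter()
        for _ in range(n_flows):
            a, b = rng.sample(range(NODES), 2)
            load[tuple(sorted((a, b)))] += 1   # undirected link
        return [link for link, flows in load.items() if flows > LINK_CAPACITY]

    for seed in range(3):
        print("deployment", seed, "has", len(overloaded_links(200, seed)), "overloaded links")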

Flexibility rules
One way of approaching the configurability unknowns is through derivatives. If an ASIC costs $30 million to design, a derivative might cost $10 million. But a platform derivative might be only $5 million, according to Naveed Sherwani, president and CEO of Open-Silicon.

“These are all niche markets,” Sherwani said. “So from a chip development standpoint, the volume is not there. It will have to involve derivatives. Those are the chips that will talk to the Internet.”

But flexibility is the key here. Because no one is certain how the technology will be used today, or how that usage will change in the future, designs have to account for both. Corner cases may involve unknowns and best estimates rather than fixed numbers. The Internet of Things involves everything from cars to chips used inside the human body, or even inside livestock. As a result, being able to reconfigure connectivity will be essential to avoiding obsolescence, particularly in the beginning as the concept rolls out.

“You need a waterfall approach because there is so much heterogeneous stuff that needs to be connected,” said Kurt Shuler, vice president of marketing at Arteris. “A lot of this is about connectivity, and standards will be important. But the other piece of this is that it will also involve humans interacting with machines. So it won’t just be one phone controlling the entire house. It will be lots of things working independently and together. So you need processing power for the human interactions—which is where a network on chip matters—but you probably won’t need so much in a device that is simply passing data along. And if you have cars talking to each other, you’ll need lots of buses.”

Conclusion
None of this will sort itself out overnight, and adoption may not be as straightforward as some proponents anticipate. Still, little by little, more things will be connected to more things, and controllable by more devices from more places.

Sun Microsystems and Novell promoted crude versions of these ideas as far back as the early 1990s, when they believed that networked devices could talk to one another and possibly share processing capabilities. More than two decades later, that networking is wireless, processing power is plentiful, ubiquitous and inexpensive, and connectivity is almost a requirement.

But how all of this technology is used, and potentially used together, remains uncertain. Things talking to things makes sense in some areas; in others, the promise is undoubtedly overhyped. For technology to take off it has to be inexpensive enough for consumers to accept, easy enough to use that they don’t have to learn to program it, and practical enough that they actually will use it. The Internet of Things will succeed in some areas and fail miserably in others. But from a design standpoint, the more flexibility and connectivity built into devices and platforms, and the more that tools can identify potential conflicts across devices and networks rather than just within a single chip, the less impact those failures will have, and the easier it will be to capitalize on the successful segments while minimizing the downside of the less successful ones.


