Ambient Computing: Interdependencies Rule

Computers that are always ready and smart enough to anticipate what you want aren't science fiction anymore. But they do change the definition of a system.

By Ann Steffora Mutschler
Ambient computing: Just the concept conjures up images of a Star Trek-like 'Computer' that is ever at the ready, awaiting a query at any moment, and able to discern as well as perform significant tasks. Apple's Siri only gets partway there, but it is significant because the concepts that make the technology possible behind the scenes draw on a multidisciplinary, interdependent approach.

For a human being, ambient computing means that whatever is nearby, whether a refrigerator, a phone, a watch or a pair of sunglasses, contains a computer that is simply there and ready. These devices don't need to boot up, and they can communicate with other devices. "But," cautioned Kurt Shuler, director of marketing for Arteris, "that's really challenging for the industry right now."

One of the reasons, not surprisingly, is power, he said. “From a human interface design standpoint, even though it may be in super sleep mode 99% of the time, when a human being says, ‘Hey, I want to open my fridge,’ it’s got to happen right away. We still haven’t figured out how to do that really well yet.” Shuler suspects this is because chip guys develop independently from software guys who develop independently from device guys, with Apple being one of the few companies that does all three.

But it’s not all science fiction. Cary Chin, director of technical marketing for low-power solutions at Synopsys, observed that we really haven’t been this close on many fronts of ambient computing for a long time and that many things have happened just in the last couple of years.

"This idea of 'always on' is just one of the things," Chin said. "Clearly the idea of 'always available' computing is one of the first requirements, and that has a lot to do with all the low-power, energy-efficiency-related things. The vast majority of systems really should be, by default, 'off,' but have enough 'on' that the rest of the system can sit in extremely low-power standby, wake up very quickly whenever it's needed and then go back to sleep. These ideas fit exactly with the idea of a larger system being always available. More recently, within the last few years, integrating that with a mobile solution is another piece."

Chin sees four requirements for ambient computing to become a reality:

  1. Always Available. He said this is an area where we are doing great. "There's no doubt that within the next few years more stuff will have happened. This idea of low-power, always-on standby is clearly the way." His view of the not-too-distant future is that many devices won't have on-off switches anymore, because it is getting harder and harder to distinguish between on and off. Most things will be always on, but not wasting energy when they don't have to be.
  2. Communications. "There's a ton of stuff going on there, obviously, with mobile devices, but a lot of it has to be more in the context of extreme low-power communications, and this is an area that has taken huge leaps in the last few years," Chin asserted. For example, the latest iPhone supports Bluetooth 4.0, which adds a low-energy mode that allows more passive devices to be powered for years. Google, for its part, is backing NFC, the near-field communication standard.
  3. Human Interface. This is the man-machine interface, and where Apple’s Siri comes in, which is making great strides toward popularizing a natural language interface. Along with this is the transition to a touch interface, driven by smartphones and tablets. “In this whole human interface thing, we’re kind of in the next revolution and touch is the next piece. I can really envision a combination of a natural language interface combined with either not necessarily even a touch interface, but really this idea of an almost Wii-like interface where you can do these commands in the air because that would make a lot more sense with regard to just entering stuff onto the computer,” he predicted.
  4. Improving Machine Learning and AI. Chin noted that for an entire generation, the dividing line between humans and computers has stayed in pretty much the same place. "We haven't really moved that forward. Things move much faster now—computers are way faster, much more storage—but basically the dividing line between what the human is expected to do and process versus what the computer is expected to do hasn't really changed in the last 30 years, pretty much." Here again, he points to the Siri interface as having made big strides in this area; it is almost a mini version of the IBM Watson computer that plays Jeopardy (http://www-03.ibm.com/innovation/us/watson/index.html). The next step is moving the interface forward to a point where it isn't based on a literal command, or even on a command plus a bunch of aliases. Instead, the machine interprets the user's intent and figures out what commands, parameters, engines, and so on are needed.
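Chin's last point, shifting from literal commands to interpreted intent, can be sketched in a few lines. The command names, keywords and scoring below are invented purely for illustration; this is not how Siri or Watson actually work, just the simplest possible contrast with an exact-match command interface.

```python
# Toy intent interpreter: instead of requiring an exact command string,
# score each known command by keyword overlap with the user's utterance
# and pick the best match. All names here are hypothetical.

COMMANDS = {
    "set_alarm":    {"keywords": {"wake", "alarm", "remind"}},
    "send_message": {"keywords": {"tell", "text", "message", "send"}},
    "get_weather":  {"keywords": {"weather", "rain", "forecast", "temperature"}},
}

def interpret(utterance):
    """Return (command, score) for the best-matching intent, or (None, 0)."""
    words = set(utterance.lower().split())
    best, best_score = None, 0
    for name, spec in COMMANDS.items():
        score = len(words & spec["keywords"])
        if score > best_score:
            best, best_score = name, score
    return best, best_score

cmd, score = interpret("what's the weather forecast for tomorrow")
# "weather" and "forecast" both hit get_weather, so it wins.
```

A real system replaces the keyword sets with statistical language models and adds parameter extraction, but the shape is the same: map free-form input to a command the machine already knows how to execute.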

No more lone wolves
What this means for system architects of the very near future is that they can’t work independently any longer. “It used to be when you had a complex chip design, you’d have your test expert, you’d have your power architect, you’d have your timing closure person, you’d have separate experts that would worry about their axis of the chip and would all work sort of independently to get it done,” said Mike Gianfagna, vice president of marketing for Atrenta. “It doesn’t work that way any more because the minute you lower power you potentially mess up testability, and the minute you change testability, you might mess up your synchronization schemes for the clocks. So everything is interdependent. You can’t have a team of people working independently and somehow get it done. The experts need to be enabled to work collaboratively and understand the implications of what they do on one thing and how it affects something else. This requires more concurrent engineering and requires the various optimization tools to work in concert with each other and concurrently.”

He noted that the industry has talked about concurrent engineering for a very long time, but it hasn't been a need-to-have. Where it really becomes a need-to-have is around 22nm because, "You just can't get there from here. You've got to co-optimize everything or you can't close the design. Concurrent engineering and the need to balance all these things simultaneously become critical. You still have your experts, but the experts need to be able to work more collaboratively, and that only works if the tools can give you real-time feedback: if you change one thing, what happens to timing, power, area and testability."
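Gianfagna's point about interdependence can be sketched abstractly: any expert's move changes one metric but perturbs the others, so every move has to be validated against all constraints at once rather than per-axis. The metrics, budgets and deltas below are made up for illustration and do not come from any real flow.

```python
# Toy model of concurrent design closure. A design closes only when
# power, timing slack, and test coverage all meet budget at the same
# time. All numbers are invented.

BUDGET = {"power_mw": 100.0, "slack_ns": 0.0, "test_coverage": 0.95}

def meets_all(design):
    """True only if every axis meets its constraint simultaneously."""
    return (design["power_mw"] <= BUDGET["power_mw"]
            and design["slack_ns"] >= BUDGET["slack_ns"]
            and design["test_coverage"] >= BUDGET["test_coverage"])

def apply_move(design, deltas):
    """Apply one expert's change; side effects on other axes ride along."""
    updated = {k: design[k] + deltas.get(k, 0.0) for k in design}
    return updated, meets_all(updated)

design = {"power_mw": 110.0, "slack_ns": 0.2, "test_coverage": 0.96}

# The power expert inserts clock gating: power drops below budget, but
# the extra gating logic eats timing slack and hides some test points,
# so the design still does not close.
design, ok = apply_move(design,
                        {"power_mw": -15.0, "slack_ns": -0.3,
                         "test_coverage": -0.02})
```

Working independently, the power expert would declare victory at 95mW; only a check across all axes at once reveals that the move broke timing and test, which is exactly the real-time cross-axis feedback described above.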

In essence, to make ambient computing truly a reality, all parts of the ecosystem—from device to network to cloud—are completely reliant on each other for success. Realizing ambient computing requires some lateral thinking and reinvention in the entire electronics industry. But this is exactly what we will see in the years to come.

Additional reading:
Ambient Computing Blog: Where the Wild Things Are
A look at Apple’s Siri
How Speech Recognition Will Change the World


