Move Over AI, Here Comes AmI

Ambient Intelligence is poised to become one of the fundamental platforms of the IoT.


Artificial Intelligence has received lots of attention, but a new term called Ambient Intelligence (AmI) is emerging as a cornerstone of the Internet of Things.

The term is relatively unknown, but not all that new. AmI dates back 16 years—just a few years after the Internet saw its first widespread commercial adoption—when the concept first emerged as a collection of intelligent electronic environments, responsive and sensitive to our desires, requirements, and needs. It entails ubiquitous sensors embedded into every nook and cranny of our world, heavily populated by gadgets and systems with powerful capabilities drawn from nano-, bio-, information, and communication technology (NBIC).

AmI got its real start in 1998 at Royal Philips of The Netherlands. A consortium of individuals, including Eli Zelkha and Brian Epstein of Palo Alto Ventures (who, with Simon Birrell, coined the name ‘Ambient Intelligence’), described it as “a world where homes will have a distributed intelligent network of devices that provide us with information, communication and entertainment.”

In 1999, Philips joined the Oxygen Alliance, an international consortium of industrial partners within the context of the MIT Oxygen project, aimed at developing technology for the computer of the 21st century. In 2000, plans were made to construct a feasibility and usability facility dedicated to Ambient Intelligence. This HomeLab officially opened in April 2002.

While the smart home was the original vision, AmI has “left the house” for a much more ubiquitous positioning, thanks to Internet dust (see related story). AmI has evolved into a vision of how people interact with technology—everywhere. It is a seamless environment of computing, advanced networking technology such as Internet dust, and intelligent interfaces. It is aware of the specific characteristics of human presence and personality, takes care of needs, and is capable of intelligently responding to spoken or gestured indications of desire. It even can engage in intelligent dialogue (although this will take a while to develop to a level that is realistic).

“One of the more important aspects of this intelligent environment and the IoT is connectivity,” says Ian Morris, principal applications engineer for RF connectivity solutions at NXP Semiconductors. “There are a lot of solutions being developed that will connect IoT devices, both in the home and outside.”

Internet dust
Because sensor and circuit technology has come such a long way, interface devices for the IoT have become tiny – about 1mm². And power for them can be supplied by a number of platforms: batteries, solar, direct connection, even energy harvesting. That means they can be fitted to virtually any product, and for any application.

“There are some very interesting things going on in the area of these low-power, ubiquitous IoT sensors,” notes Morris. “There are really three key technologies. The first one is connectivity, which can be IP, low-power Wi-Fi, ZigBee, RFID, or similar. The second is security, especially if these things are sprinkled all over the place. The third is power, which is really the key to all of this.”

It is reasonable to expect that soon everything can be created with some measure of embedded intelligence—wearables, currency, appliances, vehicles, the paint on our walls and the carpets on our floors—and sensors will even monitor some things that can’t embed intelligence themselves (air, water). Expect that networks of tiny sensors and actuators, which some have termed “smart dust,” will be prolific. However, there are issues in all of the key technologies just discussed that will need to be addressed before that ubiquity happens.

The AmI difference
What makes AmI stand out is that it will provide personalized services, largely via big data, on a scale that will dwarf anything we have seen so far. AmI will surround us with intelligent objects that will understand us, at least to some degree, because the dust and other objects will continually feed information to the “cloud” for analysis and tweaking to our particular environment and circumstances. It also will be able to preemptively assess what we want to do, thereby providing a smooth progression of the actions we want to take.

There are some great visions for AmI. Some may be a bit of a stretch for now, but others can certainly be envisioned. For example, the computing and communications we now have will be interfaced to the sensors and devices on the IoT. The next level is a network capable of both recognizing and responding to the presence of different individuals and entities in a seamless, inconspicuous, and transparent way via a continuous loop of actions (see Figure 1) that begins and ends with sensing.

Figure 1. The flow of data from input to result

The number of objects that sensors can attach to is limitless. In addition, sensors can be mobile – free-floating or detachable. Examples of sensors:

Ambient and wireless

  • Motion (cabinets and drawers, people, animals, bath fixtures)
  • Atmosphere (fire/smoke, carbon monoxide, light)
  • Appliances and plumbing
  • Locks (temperature, sound detection, proximity)

Wearables

  • Health
  • Exercise
  • Clothing
  • Location
  • Virtual

This doesn’t even begin to scratch the surface, but one can get the idea of how many other categories and sub-categories can be sensed.

The elements
Sensing. The first element that needs to be in place is the sensor—and not just any sensor. With AmI, the network must be able to respond to real-world stimuli. Components must integrate agile agents that perceive and respond intelligently, not simply pick from a series of scenarios in a database full of theoretical algorithms (which wouldn’t be realistic for dust, or micro-type sensors with limited resources, anyway).

Once the data is captured, intelligent analysis is applied. This is done at a centralized system of one sort or another if the sensor itself is only used to capture, store, and forward data. If the system is distributed, the sensors will have some type of onboard processing power that will pre-process data to whatever degree is designed into the system.
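The distributed case can be sketched in a few lines: an edge node buffers raw readings, then forwards only a compact summary to the central system. The class name, window size, and summary fields here are illustrative assumptions, not a specification from any particular AmI platform.

```python
# Minimal sketch of a distributed sensor node that pre-processes
# readings locally and forwards only summaries (hypothetical design).

class EdgeSensorNode:
    def __init__(self, window=10):
        self.window = window      # readings collected per summary
        self.buffer = []

    def read(self, value):
        """Capture one raw reading; emit a summary when the window fills."""
        self.buffer.append(value)
        if len(self.buffer) >= self.window:
            summary = {
                "mean": sum(self.buffer) / len(self.buffer),
                "min": min(self.buffer),
                "max": max(self.buffer),
            }
            self.buffer.clear()
            return summary        # would be forwarded to the central system
        return None               # nothing to send yet

node = EdgeSensorNode(window=3)
node.read(20.0)                   # returns None
node.read(21.0)                   # returns None
result = node.read(22.0)          # {'mean': 21.0, 'min': 20.0, 'max': 22.0}
```

A store-and-forward node would simply transmit `self.buffer` unprocessed; the degree of on-board reduction is a design decision per application.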

The type of network depends largely on the application. Mobile dust networks, such as those used to monitor a forest fire, likely will just report to the central station. Fixed networks, such as weather sensors, likely will have some local processing power integrated.

In any network that is somewhat ubiquitous, the data set will generally consist of multiple volumes of multidimensional data. Because systems cannot be made 100% reliable, the system must be able to discern, intelligently, between essential data and non-essential or erroneous data from a noisy sensor, or from interference of some sort. There also may be missing data from a defective sensor. For example, if a sensor fails, the data set, or some segment of it, may be incomplete. The data even can contain multi-dimensional temporal or spatial information.

This is where big data analysis techniques are useful. Large volumes of sensor data are collected from disparate sources, and part of the data may be erroneous or missing. Synthesizing it to produce accurate and rational results requires new methodologies and models (see the big data article) that are now being developed under the big data umbrella. However, today most sensor data fusion is done with Kalman filters or probabilistic approaches.
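A one-dimensional Kalman filter illustrates the idea in miniature: noisy readings are blended with a running estimate, and missing samples (a dropped packet, a dead sensor) simply skip the update step while uncertainty grows. The noise variances here are made-up values for illustration; a real fusion system would tune them and track multidimensional state.

```python
# One-dimensional Kalman filter fusing noisy scalar sensor readings.
# None entries model missing samples from a failed or dropped sensor.

def kalman_1d(measurements, process_var=1e-3, meas_var=0.5):
    """Return a filtered estimate for every time step."""
    x, p = 0.0, 1.0               # initial state estimate and variance
    estimates = []
    for z in measurements:
        p += process_var          # predict: uncertainty grows over time
        if z is not None:         # update only when a measurement arrived
            k = p / (p + meas_var)    # Kalman gain
            x += k * (z - x)
            p *= (1 - k)
        estimates.append(x)
    return estimates

readings = [5.1, 4.9, None, 5.3, 5.0]   # None = dropped packet
estimates = kalman_1d(readings)          # converges toward ~5.0
```

Even with the gap in the data, the filter produces an estimate at every step, which is exactly the property a ubiquitous, imperfect sensor network needs.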

One example of this is the MavHome smart home project. Collected motion and lighting information alone results in an average of 10,310 events each day. In this project, a data mining pre-processor identifies common sequential patterns in this data, then uses the patterns to build a hierarchical model of resident behavior.
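A toy version of such a pre-processor can be written as a frequency count over adjacent event pairs. The event names and threshold below are invented for illustration; MavHome’s actual miner discovers longer sequences and builds a hierarchical model, which this sketch does not attempt.

```python
# Toy sequential-pattern pre-processor: count adjacent event pairs in
# a motion/lighting event stream and keep the frequent ones.

from collections import Counter

def frequent_pairs(events, min_count=2):
    """Return event bigrams occurring at least min_count times."""
    pairs = Counter(zip(events, events[1:]))
    return {p: n for p, n in pairs.items() if n >= min_count}

stream = ["hall_motion", "kitchen_light_on", "kitchen_motion",
          "hall_motion", "kitchen_light_on", "kitchen_motion",
          "bedroom_motion"]
print(frequent_pairs(stream))
# {('hall_motion', 'kitchen_light_on'): 2,
#  ('kitchen_light_on', 'kitchen_motion'): 2}
```

Patterns like “hall motion is followed by the kitchen light going on” are the raw material from which a behavior model of the resident can be assembled.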

However it is approached, assessment algorithms must be real-time responsive, adaptive, and have the ability to apply a variety of reasoning types, including recognition, user modeling, activity analysis, decision making, and spatial-temporal reasoning.

Modeling. One of the features that AmI integrates is the ability to differentiate between general computing algorithms and specific ones that can adapt to or learn about the user. Such “learning” systems do exist and are fairly adept at this. Even so, the problem with these systems is that, to do it with any amount of efficiency, they require a deep well of hardware and software resources. That works in many cases, and will work to some degree in AmI. But the agile systems envisioned in AmI will need to do this efficiently and accurately in a small form factor, with the ability to refine and adapt on the fly.

The volume of data generated by sensors can challenge modeling algorithms. Adding audio and visual data into the model increases the data quantity by an order of magnitude, at least, but it also adds another dimension of sensed data. For example, video data can be used to find intertransaction (sequential) data in observed behavior or actions, which is useful in identifying and predicting errant conditions in an intelligent environment.

One of the most promising applications in AmI is identifying social interactions, especially with the proliferation of social networking technologies. This has broad implications, all the way from predictive crowd behavior to corporate meeting environments.

Prediction and recognition. These arguably are the two top elements of reasoning in AmI environments. Prediction is accomplished by attestation, from which comes intelligence, which in turn can be used for recognition and, ultimately, prediction. Theoretically, sufficient reiterations of this cycle will increase the intelligence within the networks to near human capability.

For example, in theoretical AmI models such as the Neural Network House the networks use prediction and recognition to control home environments, on the fly, by predicting the location, routes, and activities of the residents, based on previous recognition and other elements, as well. A number of prediction algorithms have been developed that can predict activities for single, as well as some multiple resident cases. These algorithms are relatively adept at predicting resident locations, even some resident actions. The AmI network can, with a reasonable degree of accuracy, anticipate the resident’s needs and even assist, or automate performing the action.
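A common baseline for this kind of location prediction is a first-order Markov model: learn how often the resident moves from each room to the next, then predict the most frequently observed transition. The class and room names below are hypothetical; the published algorithms are considerably more sophisticated.

```python
# First-order Markov predictor of resident location (a minimal sketch).

from collections import defaultdict, Counter

class LocationPredictor:
    def __init__(self):
        # transitions[room] counts which rooms were entered next
        self.transitions = defaultdict(Counter)

    def observe(self, path):
        """Learn transition counts from an observed sequence of rooms."""
        for a, b in zip(path, path[1:]):
            self.transitions[a][b] += 1

    def predict(self, current):
        """Most frequently observed next room, or None if unseen."""
        nxt = self.transitions.get(current)
        return nxt.most_common(1)[0][0] if nxt else None

p = LocationPredictor()
p.observe(["bedroom", "bathroom", "kitchen", "living_room"])
p.observe(["bedroom", "bathroom", "kitchen", "office"])
print(p.predict("bedroom"))   # bathroom
```

With a prediction in hand, the network can act preemptively, such as warming the bathroom before the resident arrives.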

Decision Making. Part of the AmI platform is AI and fuzzy logic. Neural networks are a key element in the decision-making process. Temporal reasoning can be implemented in conjunction with rule-based algorithms to perform any number of functions, from identifying safety concerns, to analyzing medical data and adjusting medications, to diet planning based upon wearable sensor data, to environmental comfort settings.

Temporal and Spatial Components. These are the support elements, and they are crucial to AmI. A wide collection of algorithms has been developed and honed to deal with the various segments of spatial, temporal, and spatio-temporal reasoning. Such algorithms are another element of the network, allowing AmI to understand the activities in an AmI application.

Any intelligent system relies on either an explicit or an implicit reference point of where and when the events of interest occur. For any network to be able to decide on actions, preemptively or in real time, an awareness of what the targets are is essential. This is where space and time come into the equation. For example, assume a situation is developing where someone left a stove burner on and the temperature around the stove rises. In this scenario, time and temperature have to be correlated to assess the situation, relative to the rate of rise of heat vs. time, location, and perhaps even air quality. The network has to understand that this condition is different than, say, the heat coming on, which may produce a similar condition if there is a heating duct near the stove.
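The stove scenario reduces to rate-of-rise reasoning: alarm only when temperature climbs faster than some threshold, so the gentler warm-up from a nearby heating duct is not mistaken for a burner left on. The threshold and sample values below are illustrative assumptions.

```python
# Rate-of-rise check for the stove scenario: correlate temperature
# with time and flag only a steep climb (thresholds are illustrative).

def rate_of_rise_alarm(samples, max_rise_per_min=5.0):
    """samples: list of (minutes, deg_C) pairs, in time order.
    True if any interval's slope exceeds the allowed rise per minute."""
    for (t0, c0), (t1, c1) in zip(samples, samples[1:]):
        if (c1 - c0) / (t1 - t0) > max_rise_per_min:
            return True
    return False

stove_on = [(0, 22.0), (1, 30.0), (2, 41.0)]    # burner left on: steep
duct_on  = [(0, 22.0), (5, 26.0), (10, 29.0)]   # duct warm-up: gradual
print(rate_of_rise_alarm(stove_on))   # True
print(rate_of_rise_alarm(duct_on))    # False
```

A fuller system would add the spatial dimension as well, such as weighting the slope by sensor distance from the stove or cross-checking an air-quality sensor.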

Conclusion
There are, of course, many more elements to AmI, but space and time limit what can be discussed in a paper of this type.

One of the issues that has prevented large-scale development in fields such as neural networks, AI and AmI, is the tremendous processing power required to develop such “intelligence.” However, the current state of technology is about to change all of that. Semiconductor technology is finally crossing the thresholds of capacity, performance, size, and integration. The next few iterations of Moore’s law will see tremendous achievements in technology to support AI, AmI, the Internet and Smart Dust, big data, and the interconnect that goes with it.

After decades of promise, electronic intelligence is finally poised to become a reality—with all the good and bad that will bring.