Rethinking The Sensor

As data gathering becomes more pervasive, what else can be done with this technology?


Sensor technology is beginning to change on a fundamental level as companies begin looking beyond a human’s five senses, on which early sensors were modeled, to what can be done with those sensors for specific applications.

In some cases, sensors don’t have to be as accurate as the sight, smell, touch, taste and hearing of a person. In others, they can be augmented to far exceed human limitations. And while the human brain remains more efficient and effective at certain operations, such as adding context around sensory data, sensors connected to digital logic can react more quickly and predictably to known stimuli.

Most early vision technology, for example, came out of medical research. The primary goal was to cure blindness or compensate for impaired vision. Machine vision has a different purpose. Rather than striving for visual acuity as good as or better than a person’s eyesight, current efforts add the ability to sense in non-visible spectra, such as infrared imaging, or to use radar to detect objects around corners and others hidden from human view.

“We limit ourselves if we think of this as human vision,” said Lucio Lanza, managing partner of Lanza techVentures. “Once you start thinking of this as machine vision, which is perception of different phenomena, it opens up a whole new level of opportunity.”

There has been much work in embedded vision in the automotive and robotic sectors. Each has radically different goals than human vision. But they also have distinctly different goals from each other. A robot needs to recognize patterns in a person’s face and in its surroundings in order to distinguish a human from a statue, or a ledge or irregularity in a floor from a carpet. A car needs to recognize the speed at which another car is moving, other objects that might cross its path, and how to react in a tiny fraction of a second.

This becomes harder with conventional vision approaches. Craig Forest, CTO of Arteris, noted that sensors can be blinded, just like people, with the worst problems occurring at dusk. “A lot of work is required to keep data reliable. If there is an error, you need to make sure you propagate that back to the ingress point.”

But people can put on sunglasses, put up their hand, and still comprehend what’s going on. A machine cannot unless that is programmed into the hardware and software and the issues are taken into account at the architectural level. So automatic garage door openers and interactive traffic lights regularly process faulty data when blinded by the sun, while a person would not be fooled.
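In practice, that means the system has to detect when its own input is untrustworthy. A minimal sketch of such a check, assuming an 8-bit image sensor (the threshold, interface and names below are invented purely for illustration):

```python
# Hypothetical illustration: flag image frames that are likely "blinded" by
# direct sunlight so downstream logic can distrust them rather than act on them.

SATURATION_LEVEL = 255   # maximum value of an 8-bit pixel (assumed sensor depth)
BLINDED_FRACTION = 0.30  # invented: fraction of saturated pixels we call "blinded"

def frame_is_blinded(pixels: list[int]) -> bool:
    """Return True if enough pixels are saturated that the frame is unreliable."""
    saturated = sum(1 for p in pixels if p >= SATURATION_LEVEL)
    return saturated / len(pixels) >= BLINDED_FRACTION

def process(pixels: list[int]) -> dict:
    if frame_is_blinded(pixels):
        # Propagate the error back to the ingress point, per the reliability
        # concern above, instead of acting on bad data.
        return {"valid": False, "reason": "sensor saturated"}
    return {"valid": True, "data": pixels}
```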

The solution is to add more intelligence into image sensors, and this is where many semiconductor companies are focusing their efforts. For one thing, it’s one of the most lucrative areas for chips these days, and margins continue to hold due to high demand and constant changes in the technology.

“There is massive growth in this area for surveillance and automotive applications,” said Wally Rhines, chairman and CEO of Mentor Graphics. “It now represents 3% of the total available market for semiconductors. That could exceed 5%.”

The upside is based not only on the need for more image processing, but the need for surrounding logic that is sophisticated enough to recognize patterns.

“This is why there is so much interest by Google, Facebook and Microsoft in computer vision and convolutional neural networks,” said Jen-Tai Hsu, vice president of engineering at Kilopass. “The IoT is a revolutionary trend. The entire view of technology is different. It’s not just a memory or a processor, and it’s not just about computing in speed or power.”

What exactly is that smell?
Research is underway for all five electronic senses, but the real money so far has been in the image processing arena. Olfactory sensing until very recently was given short shrift, largely because what existed in the past was good enough for industrial purposes such as detecting gas leaks. That is changing as new opportunities open up in the industrial and medical fields.


“A dog feels an earthquake before a human,” said Lanza. “It also can smell cancer. If you can sense what’s going on with human skin, from the smell or sweat, you can get enormous amounts of very valuable information.”

The existing methods for analyzing odors rely primarily on mass spectrometry. Air or gas samples are ionized and then run through a magnetic field and down a column to separate them out based upon their mass-to-charge ratio. Like most mechanical approaches, the equipment is large and cumbersome and hasn’t changed significantly in decades.

Researchers are now working on a different approach called rotational spectroscopy, which can be implemented on CMOS instead of requiring a special machine.

“With rotational spectroscopy, you collect the molecules in a gas state and vibrate and spin them,” said Kenneth O, professor at the University of Texas at Dallas and director of the Texas Analog Center of Excellence, which is funded by Semiconductor Research Corp., Texas Instruments and Samsung. “Depending on the shape of the molecule, there is a preferred axis of rotation and energy state. If you change the energy state, you can transition from one rotational state to another.”
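For reference, the textbook rigid-rotor model (standard physics, not specific to this group’s device) shows why those rotational states form a molecular fingerprint:

```latex
% Rotational energy levels of a rigid rotor: B is the rotational constant,
% fixed by the molecule's moment of inertia (its shape and mass distribution).
E_J = h B \, J (J + 1), \qquad J = 0, 1, 2, \dots

% A transition between adjacent rotational states absorbs or emits at:
\nu_{J \to J+1} = \frac{E_{J+1} - E_J}{h} = 2 B (J + 1)
```

Because B differs for every molecular geometry, the set of transition frequencies is unique to each molecule.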

Those transitions emit electromagnetic waves that, unlike mass spectrometry readings, can be measured in extremely narrow bands. The result is that in mixes of multiple gases, the delineations between frequencies are sharp enough to pick out several gases rather than just one.

“The goal here is specificity at an affordable cost,” said O. “Right now it costs about $150 per line (single gas identification) and machines run as much as $80,000. If we can implement this on CMOS, we can sell chips for as little as $500 to $1,000 using sensors tuned for one molecule. And this isn’t leading-edge technology. We’ve built prototypes at 65nm.”


Electronic nose. Source: University of Texas at Dallas

He said that with mass spectrometry, even the best equipment has trouble detecting different molecules because the lines between them are too wide. That leads to overlap and some guesswork. “If you have many different molecules, it’s difficult to detect one. But with diseases or indoor air monitoring, for example, that’s exactly what you need to do.”
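Conceptually, identification then reduces to matching measured peak frequencies against a library of known molecular lines, and the narrow lines are what make a tight matching window workable even in a mixture. A toy sketch, with line positions and tolerance invented for illustration:

```python
# Hypothetical illustration: match measured absorption-line frequencies (in GHz)
# against a small library of known molecular lines. Narrow lines allow a tight
# tolerance, so several gases can be picked out of one mixture unambiguously.

LINE_LIBRARY = {              # line positions invented for illustration
    "water":    [183.3, 325.2],
    "ammonia":  [572.5],
    "methanol": [165.1, 304.2],
}
TOLERANCE_GHZ = 0.05          # a tight window, possible only with narrow lines

def identify(measured_lines: list[float]) -> set[str]:
    """Return the molecules whose known lines appear in the measurement."""
    found = set()
    for molecule, lines in LINE_LIBRARY.items():
        if any(abs(m - line) <= TOLERANCE_GHZ
               for line in lines for m in measured_lines):
            found.add(molecule)
    return found

print(identify([183.28, 304.23]))   # -> {'water', 'methanol'}
```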

The technology proved so accurate that one student went to a tavern and consumed a couple drinks to analyze his blood alcohol content (BAC), and the researchers discovered the tavern was diluting its liquor because his BAC was below what it would have been if he had consumed a single beer. That same capability can be used to measure everything from blood glucose levels to how much marijuana a driver has consumed and when.

The brain behind the senses
Similar work is underway to digitize the other senses so the data can be effectively processed and mined. But the bigger question is what else can be done with all of this data.

“There are a lot of places for growth in the near-term,” said Mentor’s Rhines. “Longer term, there will be an inevitable infusion of knowledge downward. This has been the pattern from mainframe to minicomputer to terminals, and over time to peripherals like network controllers and optical drives. It always starts with the intelligence being centralized, which is why we have massive data centers and why everyone is designing a gateway. But that’s an intermediate intelligence. Ultimately, it will diffuse down to semiconductors and actuators. When we finally get to the IoT, there will be a lot of device intelligence.”

That will be required for a couple of reasons. For one thing, it’s faster to process data locally than to send everything to the cloud. And second, much of this data needs to be mobile, where connections are not always guaranteed.

“There are times when you’re in a good, fast low-latency network and times when you are not,” said ARM CEO Simon Segars. “It doesn’t all go into the cloud. There’s a tradeoff between what’s remote and what’s being done offline. Nothing is more frustrating than when you’re talking into your device and there is no network connection. But it does require more compute power, different types of processing. We’ve spoken about moving a big CPU to a more distributed computing environment where you have accelerators for specific functions. That’s how you get power efficiency.”
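One way to picture that tradeoff is a simple dispatch policy: offload when a good, low-latency connection is available, and fall back to local processing when it is not. A hypothetical sketch (not any vendor’s API; all names and thresholds are invented):

```python
# Hypothetical illustration of the local-vs-cloud tradeoff described above.

MAX_LATENCY_MS = 50.0   # invented: beyond this, round-tripping isn't worth it

def handle(sample: str, network_latency_ms: float | None) -> dict:
    """Route a sensor sample to cloud or local processing."""
    if network_latency_ms is not None and network_latency_ms <= MAX_LATENCY_MS:
        return send_to_cloud(sample)    # full-accuracy remote processing
    return process_locally(sample)      # smaller on-device model or accelerator

def send_to_cloud(sample: str) -> dict:
    return {"where": "cloud", "result": f"cloud-processed {sample}"}

def process_locally(sample: str) -> dict:
    return {"where": "edge", "result": f"locally processed {sample}"}

print(handle("audio-frame-1", network_latency_ms=None))   # offline -> edge
print(handle("audio-frame-2", network_latency_ms=20.0))   # good link -> cloud
```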

That puts a lot of pressure on chip architects to continue finding ways to make processing more efficient at every level, including the sensor. Sensors will play a key role in this shift, and the interaction between sensor, processor, software and overall architecture will need to evolve in ways that make all of this more efficient and more portable.

“If a sensor is always on, you can customize it to poll the data,” said Amy Wong, director of marketing for Marvell’s IoT Business Unit. “So if you have a sensor library, you write that to an API and you can customize a wearable or a sensor for something like calculating blood pressure. It’s not just about the chip, though. It’s integration, framework and software development, and it’s being able to do some of the work in batch. So if you look at a watch, it’s idle 95% of the time. The CPU and other pieces can be designed to do things differently. If you calculate that over the battery life, you can extend the battery life by at least 20%. This is a process, and you need to understand the process as well as the architecture.”
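The batching idea Wong describes, letting cheap sensor reads accumulate while the CPU sleeps and waking only to process the whole buffer at once, can be sketched as follows (the sensor interface and numbers are invented for illustration):

```python
# Hypothetical illustration of duty-cycled, batched sensor processing: a
# low-power buffer accumulates readings while the CPU sleeps, and the CPU
# wakes only once per batch to do the expensive work in a single burst.

import random

BATCH_SIZE = 32   # invented: wake the CPU once per 32 samples

def read_sensor() -> float:
    """Stand-in for a cheap, low-power hardware FIFO read."""
    return 80.0 + random.random() * 40.0   # e.g., a heart-rate-like signal

def process_batch(batch: list[float]) -> float:
    """The 'expensive' computation, amortized over the whole batch."""
    return sum(batch) / len(batch)

buffer: list[float] = []
for _ in range(64):
    buffer.append(read_sensor())          # cheap; the CPU could be asleep here
    if len(buffer) >= BATCH_SIZE:
        print(f"batch average: {process_batch(buffer):.1f}")   # one burst of work
        buffer.clear()
```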

Context and issues
There are enough new capabilities being developed in sensors to generate plenty of new ideas, but the reality is they have to run in the context of a system, which in turn needs to function in the context of other systems.

“We have moved from connecting computers to connecting people, which is what’s happening today,” said Lanza. “At a certain point, we are going to be connecting things and we will need a set of rules to find which things are good things and what security is needed based on what we think is the right way to do things. We will have a lot of work to make a society of things consistent.”

This is a massive job, and it affects how data is collected and shared and what is ultimately done with that data.

“The proliferation of sensors and the growth rate of data will be enormous,” said Steven Woo, vice president of enterprise solutions technology and distinguished inventor at Rambus. “This is more data than can be moved back to the data center, though. It will require more edge computing, where there will be filters or pre-processing. So you basically can have simple processing to get to more meaningful data.”

Woo said that could require a different way of looking at data for edge devices, as well. “You may start to see more machine learning in the end points, where you scan information and learn the important events about that data and send along consolidated information. There are ways you can do that with reasonable security back and forth over the air.”
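A toy version of that endpoint filtering, scanning the raw stream and forwarding only a consolidated summary of the important events, might look like this (the threshold and field names are invented):

```python
# Hypothetical illustration of edge pre-processing: rather than shipping the
# raw stream back to the data center, forward only a small event summary.

THRESHOLD = 75.0   # invented: readings above this count as "important events"

def summarize(readings: list[float]) -> dict:
    """Reduce a raw window of readings to a small, meaningful summary."""
    events = [r for r in readings if r > THRESHOLD]
    return {
        "samples": len(readings),
        "events": len(events),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

window = [20.1, 74.9, 88.2, 91.0, 30.3, 77.7]
print(summarize(window))   # send this dict upstream, not the raw samples
```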

Designing sensors inside of systems isn’t a straight line, either. There is no clear roadmap for how this technology will be used and what can be done with it. That will require massive innovation at every level, and while that makes system engineering much more interesting, it also adds levels of uncertainty into the tech business that haven’t been seen since the introduction of the PC.

“In the old days, Intel said what its next processor would be and the rest of the industry followed,” said Kilopass’ Hsu. “In the future, people will define technology, not the high-tech companies. IoT is the next big thing, and we will find more and more applications that will need high tech. But IoT is also difficult to unify, so it will be very difficult for today’s dominant players to fill that market.”



