How Will Future Cars Interact With Humans?

Adapting technology to the driver can have very different results than training the driver to use technology.


Future automobiles may come with a set of controls very different from what we’re used to now. Mechanical knobs and switches already are being replaced by touchscreens, but that’s just the beginning.

There are a multitude of other possible ways in which drivers can interact with their vehicles, and the list is growing as technology drives down the cost of this new human-machine interface (HMI). Yet new technology also raises new issues. There are no standards for how these interactions will work, and it may become more difficult to operate an unfamiliar car.

“The options are touch, voice, and gestures,” said Frank Schirrmeister, senior group director, solution marketing at Cadence. That also includes facial recognition, eye detection, and maybe even brain activity as possible ways of interacting.

Different styles of interaction will have multiple possible approaches, but the ones that are most effective and affordable gradually will win out.

Different modes of interaction
Many of us are used to mechanical controls. While radio controls switched from purely mechanical to electronic many years ago, their basic operation remained the same — buttons and dials that worked electrically rather than mechanically.

More recently, higher-end vehicles have used touchscreens on the center console as a single, unified way of controlling many electronic functions. Radio, phone operation, navigation, and other tasks all flow through that interface. So “touch” has replaced traditional controls even as new capabilities not available in earlier cars have emerged.

Voice control allows commands to be issued vocally, with electronics interpreting and executing those commands. Gesture recognition allows hand signals to be used, with a different set of electronics interpreting and executing the driver’s instructions.


Fig. 1: A child playing with a simple non-automotive gesture-recognition system. Source: Comixboy at English Wikipedia

There also are some modes intended for more specialized purposes. Among them:

  • Facial recognition may identify a driver and make any necessary adjustments to the vehicle to accommodate driver preferences. It also could be used for security purposes.
  • Some cameras may monitor driver eye movements in order to detect sleepiness. “There is technology now that gives you a little poke when it realizes that your eyes are closing while you’re driving,” said Schirrmeister. (One common way of detecting eye closure is sketched after this list.)
  • Geoff Tate, CEO of Flex Logix, noted that in addition to monitoring the driver, technology will be used to observe other behavior. “Vision AI will be used for things such as detecting if all occupants have seat belts on or if seats are unoccupied so air bags don’t need to deploy,” he said.
  • The car may communicate with wearables. “Increasingly, we will be able to leverage a slew of mobile and wearable devices to determine driver stats and passenger health, in general, or in an accident to assess the extent of the trauma,” said Dan Cermak, vice president of architecture and product planning at Ambiq.
  • We may get some help in unlocking our vehicles. “We see a demand for key fobs to determine proximity, biometrics to unlock the car, and other advanced features,” continued Cermak.
  • The most advanced mode under research is the ability to “read the mind” – use thoughts to control the vehicle.
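
Schirrmeister doesn’t say how the eye-closure detection works, but a common published approach computes an “eye aspect ratio” (EAR) from facial landmarks and alerts when it stays low for too long. The Python sketch below illustrates that idea only; the landmark layout, threshold, and frame count are hypothetical tuning values, not taken from any production system.

```python
import math

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR from a common 6-point eye landmark layout: p1/p4 are the
    corners, p2/p3 the upper lid, p6/p5 the lower lid. Roughly 0.3
    for an open eye, approaching 0 as the eye closes."""
    vertical = math.dist(p2, p6) + math.dist(p3, p5)
    horizontal = math.dist(p1, p4)
    return vertical / (2.0 * horizontal)

CLOSED_THRESHOLD = 0.2  # hypothetical tuning value
CLOSED_FRAMES = 48      # ~1.6 s of closed eyes at 30 fps before alerting

def update(ear, closed_count):
    """Count consecutive low-EAR frames; return (alert, new_count)."""
    closed_count = closed_count + 1 if ear < CLOSED_THRESHOLD else 0
    return closed_count >= CLOSED_FRAMES, closed_count
```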


Fig. 2: A driver monitoring system on a Lexus. Source: C Ling Fan/Wikimedia

Each technology has its challenges
The move away from mechanical controls is partly driven by weight concerns. “Fewer buttons result in large weight savings due to fewer mechanical buttons and cable harnesses,” said Daniel Goehl, co-founder and chief business officer at UltraSense. That matters because extra weight reduces the range available on a single battery charge, so newer, lighter technologies provide better energy conservation.

Still, controlling a vehicle is becoming more difficult with advancing technology. The old dials and knobs were, for the most part, intuitive and obvious. They may have been located in different places in different cars, but they were easily identified within a few seconds of looking around in an unfamiliar vehicle.

That became more difficult even before electronics became predominant. Some cars from the end of the push-button era placed all controls in a row of identical buttons, distinguished only by inscrutable symbols. A driver would not only have to spend significant time scanning the row to find the right button, but might not recognize all of the symbols, which posed a serious distraction while moving. And it’s only getting more complicated.

“When you look at the vehicle of the future, you can imagine that there’s no need for someone to be sitting in a driver’s seat,” said Willard Tu, senior director of the automotive business at Xilinx. “There may be no steering wheel in that vehicle in the future. That will start to cause a lot more innovation in the in-cabin monitoring space, which will end up becoming an entertainment hub, because you’re going to be sitting there and you’ll want to entertain yourself because you’re being transported by the vehicle. But to make this a reality, things need to be a lot more intuitive. Right now, most of the money’s being spent on exterior sensing to help make the car autonomous, but then the next innovation will be in-cabin monitoring, trying to figure out how to interface with the consumer.”

Touch is one such area, which has evolved as it has moved from pressing buttons to using a touchscreen. These screens tend to mimic software in that they’re structured with a series of menus and pages for different functions. This may evolve further to so-called “smart surfaces,” where touch-sensitive surfaces are integrated seamlessly into and around the dashboard. “A touch surface is now being molded into the center console and center steering-wheel panels with button controls,” noted Goehl.
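
As a rough illustration of that menu-and-pages structure, consider the toy tree below. The menu names are invented; the point is that every extra level is one more tap, and one more glance away from the road.

```python
# Hypothetical menu tree of the kind a center-console touchscreen presents.
MENU = {
    "Audio": {
        "Radio": {"Tune": None, "Presets": None},
        "Media": {"Bluetooth": None, "USB": None},
    },
    "Navigation": {"Destination": None, "Map View": None},
}

def depth(node):
    """Number of taps needed to reach the deepest function."""
    if not isinstance(node, dict):
        return 0
    return 1 + max(depth(child) for child in node.values())

print(depth(MENU))  # -> 3 taps to reach "Tune" while driving
```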

Conventional touchscreens use capacitive sensing. This limits the materials that can be used, and the degree of integration into curvilinear surfaces. “The problem with capacitive is it can be used only with thin plastic as the surface material,” said Goehl. “A capacitive panel can also false trigger if you rest your hand on it in the wrong place or you accidentally spill liquids.”

New technologies are being developed to address this, including ultrasound, coupled with the ability to detect the force with which a surface is touched. “Ultrasound can penetrate virtually any material and any material thickness,” added Goehl. “Z-Force can detect the deflection of the surface material.”

Both voice and gesture also have the potential to eliminate some of the touchscreen challenges — in exchange for others. For instance, each comes with a vocabulary of either words or gestures. Those vocabularies must be memorized for the controls to work, and different cars will use different technologies built with different vocabularies.
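
To make the vocabulary problem concrete, the sketch below imagines two cars whose recognizers emit token labels that map onto actions. Every name here is hypothetical; the point is that the same action can require a different memorized input in each vehicle.

```python
# Two hypothetical per-vehicle vocabularies for the same set of actions.
VOCAB_CAR_A = {"rotate_cw": "volume_up", "rotate_ccw": "volume_down",
               "swipe_left": "previous_track"}
VOCAB_CAR_B = {"point_up": "volume_up", "point_down": "volume_down",
               "two_finger_tap": "previous_track"}

def dispatch(vocab, token):
    """Map a recognized token to an action; unknown tokens are ignored."""
    action = vocab.get(token)
    if action is None:
        print(f"Unrecognized input: {token!r}")
    return action

dispatch(VOCAB_CAR_A, "rotate_cw")  # -> 'volume_up'
dispatch(VOCAB_CAR_B, "rotate_cw")  # ignored: this car expects 'point_up'
```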

Voice recognition on phones has improved beyond the point where “Call Bob” is interpreted as “Call Mom.” But much of today’s advanced voice recognition uses the cloud to assist in the more compute-intensive aspects. It’s not clear whether this technology will be appropriate in a moving vehicle, where low latency is required for some functions.

In particular, having “always-on” voice recognition has not been approved for native implementation in cars. “With the rapid proliferation of smart speakers in recent years, more voice-enabled applications are fully integrated with devices such as Alexa to enable seamless transition between home and vehicle,” said Cermak. “However, a push-button is still required to initiate the voice command application.”

For the time being, drivers wanting always-on voice must rely on after-market devices. “There have also been devices such as cellphone cradles or dashboard stick-on devices that provide the additional desired functionality, including always-on voice commands, navigation, and Alexa-related services,” said Cermak.

But before a command can be processed, it has to be correctly received. A car is a hostile, noisy environment. “You have engine noise and you have noise in the cabin — people talking and playing music — and it makes recognizing the voice command nearly impossible,” said Yipeng Liu, product marketing group director, Tensilica audio/voice IP at Cadence.

The voice-recognition system must filter out road noise, engine noise, cabin fan noise, the radio, and the children shrieking in the back seat, all before it can even try to execute a command. As a result, audio developers are working hard to improve the automotive voice experience. “The Amazon Echo has multiple microphones that you stick on your dashboard,” said Liu. “Those microphones help you to pick up the signal. And it does better noise cancellation, it does better beamforming, and that helps you to communicate with Alexa better.”
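
Liu names beamforming without describing an implementation. The simplest textbook form is delay-and-sum: time-align each microphone’s signal for a wavefront arriving from the talker’s direction, then average, which reinforces the voice while partially canceling noise from other directions. The sketch below assumes a plane wave and whole-sample delays, both of which real automotive systems refine considerably.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air

def delay_and_sum(signals, mic_positions, direction, fs):
    """Minimal delay-and-sum beamformer (plane-wave assumption).

    signals:       list of equal-length 1-D arrays, one per microphone
    mic_positions: (n_mics, 3) array of positions in meters
    direction:     unit vector pointing from the array toward the talker
    fs:            sample rate in Hz
    """
    # Mics closer to the talker receive the wavefront earlier.
    advance = mic_positions @ direction / SPEED_OF_SOUND  # seconds per mic
    # Delay the early mics so all channels line up with the latest arrival.
    shifts = np.round((advance - advance.min()) * fs).astype(int)
    aligned = [np.roll(sig, s) for sig, s in zip(signals, shifts)]
    # Averaging reinforces the aligned voice and attenuates off-axis noise.
    return np.mean(aligned, axis=0)
```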

In some situations, gestures may be an easier way to interact with the controls. “Voice recognition cannot help you in all circumstances,” said Szukang Hsien, executive business manager, automotive business unit at Maxim Integrated. “If your baby or your passenger is sleeping, it’s very awkward for someone to suddenly talk.”

While voice has microphones as a standard way of receiving the signal, gestures are more varied in how they are processed. “Some of them are adopting time-of-flight, some of them are adopting the use of ultrasound, and some of them are based on infrared,” said Hsien. Costs vary widely. Hsien contends that, because of the complexity of a time-of-flight camera, the cost can be 10 times that of an infrared solution.

Gestures have other benefits, particularly in robo-taxis. “Under COVID, you wouldn’t want to touch everything in the vehicle, so you’ll use gesture or augmented reality to take over,” said Xilinx’s Tu. “It will be a lot more about providing content to the consumer, who’s a passenger. Does he want to watch a movie? Does he want to hear the news? As he’s driving around, does he want to know about some promotional deals? All these things can be unlocked with augmented reality. It can also be unlocked with a lot more sensors. Inside the vehicle, you’re going to have several types of cameras. You’re going to have a regular RGB camera, you’re going to have an IR time of flight camera, and you’ll probably have a short range imaging radar.”

But these systems will have to do some interpretation, because there is no single natural way to gesture a command. It’s very culturally dependent – and it may change across generations. People raised on older volume dials will see rotation as a natural gesture – clockwise for “up” and counter-clockwise for “down.” People raised with more recent linear sliders may feel a directional gesture is more appropriate – for instance, pointing a finger up for “louder,” or maybe pointing to the right, the way many electronic widgets work.

So just as users of voice control have to learn the commands for their systems, users of gesture control need to learn which gestures the system understands. There’s no way to rely on “intuitive,” because there is no approach that’s intuitive to all. In addition, not all gestures are equally easy to recognize. For example, Maxim said its IR approach can recognize rotation better than other approaches. If rotation is indeed the preferred gesture, then other systems’ lack of that gesture may not be a result of a customer preference calculation. Instead, it may be due to a technology limitation.

Reception varies, but must be carefully considered
Touch tends to be very intuitive because we’re used to actuating controls with our hands. “If you touch something, the tactile control loop is so much better,” said Michael Frank, fellow and chief architect at Arteris IP. “You know that you turned it on. You feel that click.”

But touchscreen controls can be hard to use. One must navigate pages to find the desired function, all while driving. It can be even more frustrating when, for safety reasons, some features become unavailable while the car is moving – even if it’s a passenger trying to use them.

“I’ve always been proud of myself for being able to figure things out,” said Liu. “But it took 30 minutes to understand how to tune this radio. I forget how many button pushes. The car is moving. It won’t allow me to use the touchscreen. After many button pushes and knob turns, it gets me to a menu where it’s still not intuitive. It took me probably five levels deep just to try to switch my radio channel. So I don’t feel any safer.”

It also can be difficult to touch the desired spot reliably – especially when a road isn’t perfectly smooth. A poorly timed bump means you touch the wrong thing, and now you’re faced with figuring out what you did wrong and trying to find your way back to what you originally wanted to do – all while driving. “We heard from a customer that a long display is currently the trend inside the dashboard, and it’s very hard for [drivers] to touch the long display,” said Hsien.

With voice, language can be an issue. Cars today have more limited voice-processing capacity than cloud-based systems, so accents often are not recognized as easily as “pure” speech. “The way I said certain things, and then it was amplified through the car microphone not being ideal, and then noise coming in, it didn’t work for me,” said Schirrmeister. “Siri didn’t understand me, and then got offended.”

The system also may need to be trained to an individual’s voice. “I’ve done this training for speech recognition for my car,” said Arteris IP’s Frank. “It can take 15 minutes to play with it. And it’s not straightforward.”

In addition, there may be very specific syntax required that may not be intuitive. “My car has a navigation system that’s supposed to have voice control, but I tried it twice and I gave up,” said Liu. “You have to say the words in a certain order. Otherwise, it won’t be able to recognize what you said. Even when you do say it in the required order, it still may not recognize you.”
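
Liu’s complaint matches how rigid, grammar-based recognizers behave: the command is accepted only if the words arrive in the template’s exact order. A toy illustration, with an invented command template:

```python
import re

# Hypothetical rigid command template: words must appear in this exact order.
NAV = re.compile(r"^set destination to (?P<city>[a-z ]+)$")

def parse(utterance):
    m = NAV.match(utterance.lower().strip())
    return m.group("city") if m else None

print(parse("Set destination to Munich"))   # -> 'munich'
print(parse("Munich, set destination to"))  # -> None: wrong word order
```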

For forward-looking features, it’s important to look at how today’s youth wants to interact. Ford predicts that by 2022, 90% of new vehicles will have on-board voice recognition, with cloud-based versions available on 75% of cars. But a Research World study found that, after eliminating those with no preference, voice is preferred by 75% of 11- to 14-year-olds, but only 44% of 15- to 18-year-olds.

J.D. Power also did a study on driver acceptance of advanced interaction features. In its results, gesture recognition fell to the bottom of the pack. But this data point may be tricky to interpret. For example, 16% of users tried and then abandoned gestures. Is that because they didn’t like gestures as a mode? Or is it because they had systems that implemented it poorly?

The answer may lie in another number. Some 61% said they used gestures less than half the time. And, in general, the main issue was that it worked inconsistently or was inaccurate. It could well be that more people would use gestures if they worked better.

Advanced driver-assist systems (ADAS) are another way in which the car and the driver interact, and some find those notifications annoying. It can feel like they’re micro-managing the driver, checking and second-guessing every last little detail, pulling the driver’s attention away from driving to “alerts” that aren’t serious. “When I’m trying to cut a curve, because I’m crossing the lines, it goes, ‘vroom! vroom!’” said Frank.

In other cases, the idea is fine, but the systems aren’t smart enough. “I’m standing in front of a traffic light and pedestrians pass,” Frank said. “The proximity sensors go, ‘bing, bing, bing, bing,’ because the software in the car is not capable of understanding I have my foot on the brake, and that I’m not going to run over that pedestrian in front of me.”

A cellphone also may act as a stand-in for a direct connection to the car. Schirrmeister found challenges with a driving app on his phone from his insurance company — something that could have a real impact on what a driver pays for insurance. “It doesn’t know that I’m now a passenger in my friend’s Porsche who constantly goes at 90 mph. I literally looked at it the day after, because it gave me an alert. It said, ‘Frank, you have been a bad boy.'”

There’s also concern that ADAS could do to drivers what GPS already has done. Many people who rely solely on GPS have completely lost their sense of where they are. They no longer follow natural landmarks along a route. Instead, they blindly follow GPS. “People who grew up with maps have a much better understanding of the topology of our environment,” said Frank.

Could that also happen with ADAS? With systems watching each detail, drivers stop worrying about those details because they assume the system will alert them if something is wrong. “If you’re used to driving cars that are not completely automated, or helping you with every little thing that you should have known by heart already, you get distracted by a red light,” noted Frank. “I see a red light in my dashboard, and the first thing I do is to redirect my attention from the road to this thing.”

All of these issues have led to abandonment by some. “Poor early implementations may tarnish an entire technology,” he noted. “Some years later, when systems have matured, users may be reluctant to use them because of past experiences, even though those past issues have been fixed.”

Finally, the most futuristic mode – brain-reading – was picked by youth as the way they’d prefer to control their cars. “They basically said, ‘I really don’t want to do anything, I just want to think about it. And I want it to happen,’” said Schirrmeister.

Who’s in charge?
A fundamental issue here is that drivers increasingly will need to be trained to use their systems. How will we learn the commands or gestures that the vehicle accepts? Do we expect drivers to study a manual or pass a test before they can use the car?

Then there’s the question of knowing how to operate an unfamiliar vehicle. There always have been differences between vehicles, but when renting a car, you can sit down inside, give a quick scan to see where the gear lever and heater and radio are located, and in a few seconds, you’re ready to go.

If you have to study a manual to know how a rental car works, as compared to your personal car, will that happen? Will safety be threatened by drivers distracted as they try to figure out how to do something as simple as turning up the heat?

“I twice rejected an upgrade in Frankfurt Airport to a beautiful, big Mercedes car and stayed with my mid-level Audi because I’m driving Audi at home,” said Frank. “If I want to go from Frankfurt to some city I’ve never been to, how do I set up the navigation system? How do I find the maps? Some of these user interfaces are totally convoluted.”

That raises the question of whether a standard set of commands or gestures is needed. “Is it commercially feasible to make enough money with technology that’s standardized across vehicles?” asked Schirrmeister. “We don’t switch vehicles that much yet.”

If voice control evolves to the point of understanding natural language, then studying voice commands no longer may be necessary. But natural-language processing today usually requires a connection to the cloud. “When you’re in the car, you may not have a network connection,” said Liu. “It’s still a challenge to have natural language processing work locally.”

If gesture remains as a mode, however, it appears unlikely that one will be able to hop into a car and know how it works. There is no natural-language equivalent for gestures.

Part of the problem is that the technology currently works in reverse – the machine sets the rules, and it’s up to the human to learn them. “Today it’s the machine that’s controlling what the human can do,” Liu said. “So if you’re looking far into the future, the machine should be learning human behavior.”

You can imagine sitting in a new car and spending a few seconds as the car asks which gesture you’d like to use for various important functions. The car then adapts to the human rather than vice versa.
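
A minimal sketch of that inverted model might look like the following, where the car runs a brief enrollment step and binds whatever gesture the driver demonstrates to each function. The recognizer interface and all names here are assumptions for illustration.

```python
def enroll(functions, capture_gesture):
    """Bind a driver-chosen gesture to each function.

    `capture_gesture` stands in for whatever gesture recognizer the car
    uses; it should return a stable label for the demonstrated gesture.
    """
    mapping = {}
    for fn in functions:
        print(f"Please demonstrate the gesture you'd like for: {fn}")
        mapping[capture_gesture()] = fn
    return mapping

# Usage, with a stand-in recognizer that plays back canned labels:
gestures = iter(["rotate_cw", "rotate_ccw", "swipe_left"])
profile = enroll(["volume_up", "volume_down", "previous_track"],
                 lambda: next(gestures))
# profile == {'rotate_cw': 'volume_up', 'rotate_ccw': 'volume_down',
#             'swipe_left': 'previous_track'}
```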

Conclusion
If the goal of these systems is to keep eyes on the road and hands on the wheel, then extensive testing with real drivers in realistic situations should help to weed out confusing ways of doing things. The challenge is that, if done at the point where there’s a real car to be driven on a real track, significant investment already has been made. It’s extremely challenging to redo an interface. It’s much easier to train drivers.

Realistic simulations can prove what does and doesn’t work for users. No one answer will work for everyone. But if significant numbers of drivers abandon the systems, then that investment was wasted. Providing different interaction alternatives – all well proven, with continued improvements in flexibility and accuracy – that suit individual preferences and scenarios will be the best way to give users a good experience and keep them coming back. Engineers working on audio and visual technologies are helping to improve the underlying technology. But it will be up to those planning the interfaces to put those technologies to the best possible use.


