Automotive’s Unsung Technology

Audio technology is making big strides alongside autonomous vehicles and vehicle-to-infrastructure communication.

Sound systems are becoming a critical design element in vehicles, and not just for music. Thanks to evolving technology, automotive audio has reached a point where it is taking on a much broader role for applications both within and outside the vehicle.

Most people associate automotive audio with the car radio, which has been a fixture in cars for decades. But in the future, these systems also will listen and respond in real time—and they will play music more selectively in different parts of the vehicle.

“Traditionally, audio in the car was one-sided, where humans would listen to information or to music or to entertainment,” said Anil Khanna, senior manager for the automotive audio business line in the automotive business unit at Mentor, a Siemens business. “You have your radio on and you are listening to music while you get voice commands through GPS prompts. Now, audio is becoming bi-directional. You can see that very clearly on the consumer side with the emergence of technologies like Amazon Echo, where a person is commanding a device to do something. That’s happening in the automotive audio space, as well.”

Indeed, audio is a very important part of the automobile experience today, and it will become even more critical as cars begin to implement driver-assistance and eventually autonomous driving capabilities.

“The audio subsystem plays a role in infotainment, noise control, sound design, and communications,” said Gerard Andrews, product marketing director, audio/voice IP, Tensilica products at Cadence. “The infotainment system usually consists of a variety of audio decoders and post-processing technologies that provide a compelling user experience. An immersive audio experience is so important to many car buyers that often the audio system is allowed to be branded by a company other than the car manufacturer.”

At a time when automotive electronics are experiencing explosive growth, this segment is just beginning to gain traction—but not for the obvious reasons. Last November, the National Highway Traffic Safety Administration (NHTSA) issued a mandate requiring new electric vehicles and hybrid electric vehicles to have an active pedestrian alert system by the fall of 2018. That means dedicated audio technology, which didn’t even exist in cars until about two years ago.

Since then, technologies such as Analog Devices’ Automotive Audio Bus (A2B) have joined established options like MOST (Media Oriented Systems Transport), Ethernet AVB, HDBaseT, and others. MOST is the legacy audio bus technology and is found in almost every car brand around the world. It has gone through three generations: MOST 25, MOST 50, and MOST 150. The number denotes the bandwidth, so MOST 150, introduced in 2007, can carry audio (or video) data at speeds of 150 megabits per second.


Fig. 1: Analog Devices’ A2B schematic. Source: ADI

“Even now, some carmakers use analog audio lines to transmit audio,” Khanna said. “Think of the home theater system. There is an amplifier in the front under the TV, two speakers behind you on the floor, all connected through cables. Every speaker has a serial left and right, and there are cables running all over the room. That’s exactly how it is in the cars today with legacy analog audio transmission. It’s either that or MOST, and that’s it. High-end audio systems use Ethernet AVB (audio video bridging). Again, that’s a borrowed bus. It’s not dedicated to audio. It transmits audio and video. Until A2B came along, which is solely dedicated to transmitting 32 channels of digital audio, there were 32 wires going all over the car. Now this is done over two pieces of wire.”
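A quick back-of-the-envelope check shows why a full 32-channel digital audio payload fits comfortably on a dedicated two-wire bus. The assumptions here are illustrative only—24-bit/48 kHz PCM, a nominal A2B line rate on the order of 50 Mbps, and MOST 150’s 150 Mbps—and real buses also carry framing, control, and clocking overhead that is not counted.

```python
# Rough audio-payload budget (illustrative assumptions, not vendor specs).
SAMPLE_RATE_HZ = 48_000
BITS_PER_SAMPLE = 24
CHANNELS = 32

payload_mbps = SAMPLE_RATE_HZ * BITS_PER_SAMPLE * CHANNELS / 1e6
print(f"32-channel PCM payload: {payload_mbps:.1f} Mbps")   # ~36.9 Mbps

# Compare against nominal line rates (protocol overhead ignored).
for bus, line_rate_mbps in [("A2B (assumed ~50 Mbps)", 50), ("MOST 150", 150)]:
    print(f"{bus}: {line_rate_mbps - payload_mbps:.1f} Mbps of nominal headroom")
```

Even allowing generous margins for overhead, the audio payload alone stays well under the nominal rates of either bus—which is why audio by itself does not stress modern in-vehicle networks.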

Going forward, requirements such as higher bandwidth are driving the advancement of new and emerging bus technologies like Ethernet AVB. Those bandwidth demands come primarily from video. Audio needs very little bandwidth, but as soon as video is added the requirements jump dramatically, and real-time video brings latency and guaranteed timing into the discussion as well. For example, an in-vehicle infotainment (IVI) system with rear-seat entertainment may have up to two screens in the back of the vehicle plus one screen in the front for the driver’s IVI head unit. Transmitting video throughout the car requires a technology that can support higher bandwidth for both audio and video.
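The scale of that gap is easy to see with a rough comparison. The resolutions, frame rates, and bit depths below are assumed for illustration; real IVI systems carry compressed video, which narrows the gap considerably but still leaves video dominating the bandwidth budget.

```python
# Rough comparison of audio vs. raw video bit rates (illustrative numbers only).
def pcm_bps(sample_rate_hz, bits, channels):
    return sample_rate_hz * bits * channels

def raw_video_bps(width, height, bits_per_pixel, fps):
    return width * height * bits_per_pixel * fps

stereo_audio = pcm_bps(48_000, 24, 2)               # ~2.3 Mbps
video_1080p30 = raw_video_bps(1920, 1080, 24, 30)   # ~1.49 Gbps uncompressed

print(f"Stereo PCM audio:      {stereo_audio / 1e6:8.1f} Mbps")
print(f"Raw 1080p30 video:     {video_1080p30 / 1e6:8.1f} Mbps")
print(f"Ratio (video / audio): {video_1080p30 / stereo_audio:.0f}x")
```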

High bandwidth is required for surround-view cameras, as well. “Some cars have a bird’s eye view, so you have input coming from four different cameras,” Khanna said. “In this case, you want to be real-time because when you’re parking your car, you want to see what’s around you.”

Video not required
But there also is a growing number of advanced automotive applications that do not require video, and there are good reasons to separate them.

“Applications exist today, and applications are being created just because now we have this dedicated bus,” Khanna said. “For example, the hands-free system. Today, there are microphones next to the driver. You dial by name. Typically, when we think of hands-free we always think of the driver as the primary person who is talking on the phone and no one else is talking. But all of that is changing now. We are seeing microphones proliferate in the car where there are personal audio zones set up. Literally, you will see four audio zones set up. There will be some interference, but not a lot, with the help of software algorithms and noise-canceling and noise-reduction algorithms. So you can form personal zones where someone could be watching their own favorite TV show, somebody could be talking on the phone, and they would have minimal interruption between the two. I’m not saying it’s coming out next year, but I can assure you that research is being done and there are companies that are doing this. We are definitely looking into that, as well.”

Harman is one of them. It already touts its ISZ (individual sound zone) technology, which is entirely audio-based, using a combination of microphones and noise-cancellation algorithms to create personal listening zones.

YT Wang, CTO and president of Archband Labs, points to additional areas of activity:

  • The move from analog to digital. Traditionally, car audio signals have been distributed over high-quality analog cables, which are expensive and susceptible to environmental noise. Digital distribution supports more functions at higher quality while keeping overall performance and cost in line with market demands.
  • Configurable and expandable channels. With more features packed into personal cars today, one trend is to build high-quality, configurable audio channels for recording and playback. For example, a driver may want to listen to the radio while a passenger uses an AR/VR/TV system. Optimized channel partitioning is essential here, and audio IP needs to be expandable to multiple channels to simplify SoC integration and process porting.
  • Hands-free intelligent voice command. IP is now available to support hands-free operation with low-power voice detection. This technology already can detect human voices in noisy conditions such as high wind. When voice activity is detected, the IP wakes up the SoC’s natural-language search engine for keyword detection. If a keyword is found, the whole SoC is woken up to take in the driver’s commands and play back machine-learning responses (see the sketch after this list).
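As a rough illustration of that wake-on-voice flow, the sketch below gates an expensive keyword detector behind a cheap, always-on energy-based voice activity detector. The frame size, threshold, and placeholder detector are assumptions made for illustration, not a description of any vendor’s IP.

```python
import numpy as np

FRAME_SIZE = 480            # 10 ms frames at 48 kHz (assumed)
ENERGY_THRESHOLD = 0.01     # would be tuned per microphone/gain in a real system

def frames(signal, size=FRAME_SIZE):
    for start in range(0, len(signal) - size + 1, size):
        yield signal[start:start + size]

def voice_activity(frame):
    """Low-power stage: flag frames whose RMS energy exceeds a threshold."""
    return np.sqrt(np.mean(frame ** 2)) > ENERGY_THRESHOLD

def keyword_detected(frame):
    """Placeholder for the expensive stage (e.g., a trained keyword spotter)."""
    return False  # a real detector would run a model here

def process(signal):
    for frame in frames(signal):
        if voice_activity(frame):        # only then spend power on detection
            if keyword_detected(frame):
                print("Keyword found: wake the full SoC pipeline")

# Example: one second of low-level noise never triggers the expensive stage.
process(np.random.default_rng(0).normal(0, 0.001, 48_000))
```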

Cadence’s Andrews also sees automotive audio moving toward using the audio subsystem to run active noise cancellation algorithms that reduce cabin noise.
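The core idea behind such algorithms can be shown with a simplified, single-channel adaptive noise canceller based on LMS. Production cabin ANC is considerably more involved—typically filtered-x LMS with secondary-path models, multiple reference microphones, and loudspeaker outputs—so this is only a sketch of the adaptive-filtering principle, not anything Cadence or an OEM ships.

```python
import numpy as np

def lms_cancel(reference, primary, taps=32, mu=0.01):
    """Estimate the noise in `primary` that is correlated with `reference`
    and subtract it, returning the residual (cleaned) signal."""
    w = np.zeros(taps)
    error = np.zeros(len(primary))
    for n in range(taps, len(primary)):
        x = reference[n - taps:n][::-1]   # most recent reference samples first
        y = w @ x                         # filter output = noise estimate
        error[n] = primary[n] - y         # residual after cancellation
        w += mu * error[n] * x            # LMS weight update
    return error

# Example: road noise (reference) leaks into the cabin through an unknown
# filter; the desired signal is a 200 Hz tone buried in that noise.
rng = np.random.default_rng(1)
fs = 8_000
t = np.arange(fs) / fs
road_noise = rng.normal(0, 1, fs)
leaked = np.convolve(road_noise, [0.6, 0.3, 0.1], mode="same")
cabin_mic = 0.1 * np.sin(2 * np.pi * 200 * t) + leaked

cleaned = lms_cancel(road_noise, cabin_mic)
print(f"Cabin-mic power: {np.var(cabin_mic):.3f}, "
      f"residual after cancellation: {np.var(cleaned[2000:]):.3f}")
```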

Converging problems and context
Being able to isolate physical stimuli and decipher their meaning is an ongoing quest inside many automotive systems. A simple solution in audio is to match a sound against a database of known words or phrases. The same can be done with simple images in a non-safety-critical video subsystem. A more complex approach is to use neural networks to react in a way that is closer to how a human perceives and responds to stimuli. The problems are similar across a number of sensor types, though, and the solutions depend upon the end goals.

No matter how these problems are tackled, though, the common denominator is that data needs to move faster, and it needs to be interpreted more quickly so that appropriate action can be taken.

“There are two performance metrics for on-chip and chip-to-chip communication,” said Steve Mensor, vice president of marketing at Achronix. “One is bandwidth, and that can be inside the chip or across a PCB. The second is elimination of the latency as data moves to the end destination.”

This sounds simple enough in isolation, but many of these problems need to be addressed at a system level. Sensor fusion can create interference because multiple signals are competing for resources. In addition, system updates—particularly over-the-air software or firmware updates—can affect the overall performance the same way software updates can slow a computer or mobile device. All of that needs to be considered in the design.

“Two things to remember here are that any source of information is never perfect, and anything new will change,” said Mensor.

For car companies, that may be a good thing. For engineers developing any subsystem, it can cause serious headaches.

“The car itself is good for 20 years, but now you have a car that is three or four years old that is missing all of this new technology,” said Charlie Janac, chairman and CEO of ArterisIP. “So one of the unintended consequences is a faster upgrade cycle. Or maybe you lease a car for three or four years and then you turn it in for a new model.”

Car systems also need to have resilience built into them because at some point some component will fail. “But many systems are not designed for reliability,” Janac said. “If your smartphone fails, you may be annoyed, but you go out and buy another one. A smartphone won’t kill you, but a car can. These systems have to be thought through for functional safety and resilience all the way through from the lowest denominator, which is the IP,  to the user. If you look at the satellite electronics, that doesn’t fail very often. But it does get bigger and it costs more.”

Conclusion
In a car, audio is just one piece of a complex collection of interconnected systems. The attraction of a separate audio bus is that it limits unexpected or unwanted interactions between systems. And as cars become more autonomous, it also may be the system that people interface with the most, which is why noise cancellation and personalized audio zones are getting so much attention.

“Road noise cancellation is something we will likely see within the next three years,” said Khanna. “We know that some top OEMs are looking at this very seriously. Previously, the barrier to adoption was the high cost of the individual components. A2B has changed that. Just like any technology, first the technology comes out and then people wait and see how this is going to be adopted. Once they feel like it’s getting traction then you will see the Tier 2 players jump in who make components for the technology that is required. There are companies out there that have already started to manufacture A2B-compliant microphones. They are microphones—tiny, the size of a penny or a quarter. These tiny things are daisy-chained together to significantly reduce the amount of wiring in the car, which reduces the weight of the car.”

And as voice, and especially natural-language communication, becomes a more important user interface in vehicles, the audio subsystem will only grow in importance. Voice recognition technology is already found in many cars. The next step is to continually refine that technology, allowing much more complex interactions with a minimum of disruption.


