Innovations In Sensor Technology

Better data management, increased accuracy, and lower-power approaches top the list.


Sensors are the “eyes” and “ears” of processors, co-processors, and computing modules. They come in all shapes, forms, and functions, and they are being deployed in a rapidly growing number of applications — from edge computing and IoT, to smart cities, smart manufacturing, hospitals, industrial, machine learning, and automotive.

Each of these use cases relies on chips to capture data about what is happening in our analog world, and then digitize the data so it can be processed, stored, combined, mined, correlated, and utilized by both humans and machines.

“In the new era of edge processing across many different application spaces, we see multiple trends in the development of next-generation sensor technology,” said Rich Collins, director of product marketing at Synopsys. “Sensor implementations are moving beyond the basic need to capture and interpret a combination of environmental conditions, such as temperature, motion, humidity, proximity, and pressure, in an efficient way.”

Energy and power efficiency are critical for these applications. Many rely on batteries, share resources among multiple sensors, and often include some type of always-on circuitry for faster boot-up or to detect motion, gestures, or specific keywords. In the past, these functions typically were built into the central processor, but that approach is wasteful from an energy perspective.

“In a variety of different use cases in the automotive, mobile and IoT markets, developing a flexible system optimized with dedicated processors, as well as hardware accelerators offloading the host processor, seems to be emerging as a basic requirement,” Collins said. “Communication of sensor data also is becoming an essential feature for many of these implementations. And since communication tasks are often periodic in nature, we see the same processing elements leveraged for the sensor data capture, fusion processing, and communication tasks, enabling more power-efficient use of processing resources.”
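
As a rough illustration of the offloading pattern Collins describes, the sketch below models an always-on sensor hub that runs a cheap activity check on every sample and wakes the host processor only when a threshold is crossed. The thresholds, sampling rate, and function names are hypothetical, not any vendor's API.

```python
import random
import time

WAKE_THRESHOLD = 0.8      # hypothetical activity level that justifies waking the host
SAMPLE_PERIOD_S = 0.05    # the always-on hub samples at 20 Hz

def cheap_activity_score(sample):
    """Lightweight check the always-on hub can afford to run on every sample."""
    return abs(sample)

def wake_host_and_process(sample):
    """Stand-in for handing the event to the host CPU/NPU for full processing."""
    print(f"host woken: full processing of sample {sample:+.2f}")

def sensor_hub_loop(num_samples=100):
    wakes = 0
    for _ in range(num_samples):
        sample = random.gauss(0.0, 0.5)       # stand-in for a motion or audio reading
        if cheap_activity_score(sample) > WAKE_THRESHOLD:
            wake_host_and_process(sample)     # rare, power-hungry path
            wakes += 1
        time.sleep(SAMPLE_PERIOD_S)           # the host sleeps the rest of the time
    print(f"host woken for {wakes} of {num_samples} samples")

if __name__ == "__main__":
    sensor_hub_loop()
```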

This is especially true for the automotive industry, which is emerging as the poster child for sensor technology as it transforms from mechanical and analog to electrical and digital. A new car’s electronic control units (ECUs) now control everything from acceleration and deceleration to monitoring the vehicle inside and out. Advanced driver assistance systems (ADAS) rely on different types of sensors mounted on the vehicle to gather the data those ECUs require for making decisions.

That will become more critical as vehicles add increasing levels of autonomy over the next decade or so. Sensors will play important roles in meeting safety, security, and convenience objectives.

Data in motion
On the automotive front, advancing from SAE Level 4 (high automation) to Level 5 (full automation) will depend on a process in which individual smart ECUs use sensor-collected information to make decisions. That process has three major steps (the first two are sketched in simplified code after the list):

  • Sensor electronics must accurately turn analog signals into digital data bits.
  • Each ECU must be able to interpret that data to make the appropriate decisions, such as when to accelerate or decelerate.
  • At the fully autonomous driving stage, machine learning will be applied, enabling self-driving vehicles to navigate in various environments.
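
Here is a minimal sketch of the first two steps, using made-up scaling factors and thresholds rather than any real ECU's logic:

```python
def adc_convert(voltage, v_ref=3.3, bits=12):
    """Step 1: quantize an analog sensor voltage into a digital code."""
    code = round((voltage / v_ref) * (2**bits - 1))
    return max(0, min(2**bits - 1, code))

def code_to_distance_m(code, full_scale_m=200.0, bits=12):
    """Map the digital code back to an engineering unit (meters to the object ahead)."""
    return (code / (2**bits - 1)) * full_scale_m

def ecu_decision(distance_m, speed_mps, min_gap_s=2.0):
    """Step 2: a toy following-distance rule -- decelerate if the time gap is too small."""
    time_gap_s = distance_m / max(speed_mps, 0.1)
    return "decelerate" if time_gap_s < min_gap_s else "maintain"

code = adc_convert(1.1)                 # hypothetical range-sensor output voltage
distance = code_to_distance_m(code)
print(code, round(distance, 1), ecu_decision(distance, speed_mps=30.0))
```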

In effect, the vehicle needs to “see” or “sense” its surroundings to stay on the road and to avoid accidents. Today, these signals come from visible light, radio waves (radar), infrared (IR), and ultrasound, and they must be converted to digital data bits.

A vehicle uses a combination of sensors — for example, lidar, radar, and/or HD cameras — to detect objects, including other moving vehicles, and determine their distance and speed.
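
As a simple illustration of how distance and speed fall out of repeated measurements, the snippet below derives a closing speed from two successive range readings. The numbers are synthetic, and a real radar would use Doppler processing and filtering rather than raw differencing.

```python
def closing_speed_mps(range_t0_m, range_t1_m, dt_s):
    """Estimate relative (closing) speed from two successive range measurements."""
    return (range_t0_m - range_t1_m) / dt_s

# Two radar returns 100 ms apart: the target moved from 52.0 m to 51.2 m away.
v_rel = closing_speed_mps(52.0, 51.2, 0.1)
print(f"closing speed: {v_rel:.1f} m/s")   # 8.0 m/s, roughly 29 km/h faster than us
```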

Automakers have not yet standardized on which sensors are best for autonomous driving. For example, Tesla is taking a vision-only approach in its newer vehicles, combining AI supercomputers with multiple HD cameras. One such supercomputer consists of 5,760 GPUs capable of 1.8 exaFLOPS (EFLOPS), with a connection speed of 1.6 terabytes per second (TBps) and 10 petabytes of storage. Essentially, Tesla’s aim is to use machine learning to simulate human driving, an approach that requires intensive training and learning.

Fig. 1: Different attributes of automotive sensors. Source: IDTechEx

Not everyone agrees this is the best approach. Because sensors have limitations, other carmakers believe it is best to combine multiple sensors, relying on sensor fusion to come up with the most accurate information for decision-making.

“We are currently seeing the use of a combination of one or more of sensors to assist driving,” said Amol Borkar, director of product management and marketing for Tensilica Vision and AI DSPs at Cadence. “If we consider external viewing, then ADAS or autonomous vehicle applications typically use the data/information gathered from these sensors to make decisions about various elements in the vehicle’s surroundings. These could include street signs, pedestrians, objects and debris, traffic lights, road and lane markings, etc. By using a combination of sensors, there is more visibility into the environment than a single sensor can provide. For example, a camera by itself has difficulty seeing the road in rainy and snowy conditions, but by pairing it with short/long range radar and/or lidar, that problem mostly goes away.”
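
One common way to combine overlapping measurements is to weight each sensor by how much it can be trusted. The sketch below uses inverse-variance weighting with made-up noise figures; production sensor fusion typically relies on Kalman filters or learned models, but the intuition is similar.

```python
def fuse(measurements):
    """Inverse-variance weighted fusion of independent estimates of the same quantity.

    measurements: list of (value, variance) pairs, one per sensor.
    Returns the fused estimate and its variance.
    """
    weights = [1.0 / var for _, var in measurements]
    fused = sum(w * val for (val, _), w in zip(measurements, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var

# Hypothetical range-to-object estimates: the camera is noisy in rain, the radar is not.
camera = (48.0, 25.0)   # 48 m, large variance (rain degrades the image)
radar  = (51.5, 1.0)    # 51.5 m, small variance
lidar  = (51.0, 2.0)

estimate, var = fuse([camera, radar, lidar])
print(f"fused range: {estimate:.1f} m (variance {var:.2f})")
```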

Correctly interpreting this data can be a matter of life and death. When humans drive in the rain and spot an object at night on the freeway, they typically slow down because it is hard to tell what the object is. They need to know how big the object is, and whether it will damage the vehicle if they keep driving. But a supercomputer asking the same questions may come to a different conclusion.

In a well-publicized fatal crash in 2016, Tesla’s Autopilot system failed to distinguish the white side of a truck from the bright sky behind it. The accident illustrates how much depends on the interplay among sensor data, autonomous driving interpretation, and decision-making. Had the system registered any uncertainty about what it was seeing, it could have slowed down and potentially avoided the collision.

Sensor data interfaces and architecture
Under the hood of modern vehicles is a network of electronic control modules (ECMs) handling many different functions. Sensors are connected to these ECMs, which in turn are connected to each other. Traditionally, the controller area network (CAN) bus has served as the central data path, with different application protocols running on top of it. There is currently no application programming interface (API) standard for these ECMs, although development of the OpenXC platform, led by Ford, aims to fill that gap. With OpenXC, proprietary CAN messages are translated into the OpenXC message format and sent to a host device over interfaces such as USB, Bluetooth, or 3G/Ethernet/Wi-Fi.
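
Conceptually, OpenXC is a translation layer: a proprietary CAN frame is decoded and re-emitted as a simple, openly documented message. The sketch below shows that shape with a hypothetical CAN ID and scaling rule; real OpenXC firmware decodes frames according to the vehicle's actual CAN database.

```python
import json
import struct

# Hypothetical decode rule: CAN ID 0x3E9 carries vehicle speed as a 16-bit value in 0.01 km/h.
def translate_to_openxc(can_id, data_bytes):
    """Translate one proprietary CAN frame into an OpenXC-style {"name": ..., "value": ...} message."""
    if can_id == 0x3E9:
        raw, = struct.unpack_from(">H", data_bytes, 0)
        return json.dumps({"name": "vehicle_speed", "value": raw * 0.01})
    return None  # unknown frames are simply not exposed to the host device

frame = bytes([0x1A, 0x2C, 0, 0, 0, 0, 0, 0])   # raw payload of a made-up speed frame
print(translate_to_openxc(0x3E9, frame))
# {"name": "vehicle_speed", "value": 67.0}
```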

An emerging mobile industry processor interface (MIPI) standard supports sensor connections with enhanced automotive interfaces. The organization behind MIPI is the MIPI Alliance. Founded in 2003, the 400-member, international organization has released multiple standards. Members include mobile device manufacturers, software providers, semiconductor companies, IP tool providers, automotive OEMs, Tier 1 suppliers, and test equipment companies. Companies participating as board members include Intel, Texas Instruments, Qualcomm, and Robert Bosch GmbH.

The MIPI Alliance’s plan is to develop a comprehensive set of interface specifications for mobile and mobile-influenced devices, including automotive sensor interfaces. These specifications will cover various interfaces within the vehicle, including:

  • Physical layer
  • Multimedia
  • Chip-to-chip/IPC
  • Control and data
  • Debug and trace
  • Software integration

Fig. 2: The MIPI Alliance specifications will cover the various interfaces within the vehicle. Source: MIPI Alliance

New sensor interface standard
The MIPI Alliance released the A-PHY v1.1 standard for review on Oct. 11, 2021, and it is expected to be adopted within the next few months. This standard, which focuses on the automotive serializer-deserializer (SerDes) physical layer interface, doubles the downlink data rate of the previous version to 32 Gbps. The uplink data rate also doubles, to 200 Mbps.

An industry first, A-PHY provides a long-reach, asymmetric SerDes interface between automotive image sensors, displays, and the ECUs for ADAS, in-vehicle infotainment (IVI), and autonomous driving systems (ADS). It also is expected to be adopted as the IEEE 2977 standard.
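
To put the 32 Gbps figure in perspective, a back-of-the-envelope budget for uncompressed camera traffic looks like this (illustrative resolutions, ignoring protocol overhead):

```python
# Rough link budget: how many uncompressed camera streams fit in an A-PHY v1.1 downlink.
link_gbps = 32.0

def stream_gbps(width, height, bits_per_pixel, fps):
    return width * height * bits_per_pixel * fps / 1e9

cam_1080p60 = stream_gbps(1920, 1080, 24, 60)   # ~3.0 Gbps per camera
cam_4k30    = stream_gbps(3840, 2160, 24, 30)   # ~6.0 Gbps per camera

print(f"1080p60: {cam_1080p60:.1f} Gbps -> {int(link_gbps // cam_1080p60)} streams per link")
print(f"4K30:    {cam_4k30:.1f} Gbps -> {int(link_gbps // cam_4k30)} streams per link")
```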

Fig. 3: The ECU is connected to sensors and displays. Source: MIPI Alliance

“As radar and lidar sensors are widely used for ADAS-type applications, we expect MIPI to continue to make a significant contribution in standardizing sensor interfaces,” said Ashraf Takla, CEO and founder of Mixel. “Additionally, image sensors are used to generate video of the surroundings for the drivers, as in the case of backup cameras. Those image sensors all use MIPI interfaces. The MIPI D-PHY (and more recently MIPI C-PHY), with a MIPI CSI-2 TX or MIPI CSI-2 RX controller, is used on both the sensor and processor sides of the link. As of today, CSI-2 is still the de facto standard for many camera and sensor applications. With the number of sensors in cars continuing to increase in volume and in complexity, we expect the need for MIPI standards to continue to increase.”

Fig. 4: The MIPI specifications provide detailed vehicle interfaces, including chip-to-chip connections. Source: Mixel

From there, connecting the sensors is relatively straightforward. “There are two ways to approach this,” said Cadence’s Borkar. “If the sensor is part of a larger solution — for example, the sensor and compute unit are in the same package, as in a driver monitoring system (DMS) or most ADAS solutions — then the sensor typically interfaces using MIPI and SPI, which are well-known industry standards for high-speed data. The data is received from the sensor, and then processed by a neural processing unit (NPU) or digital signal processor (DSP). There are some solutions, like 360-degree or surround-view vehicle camera systems, which use an array of four or more sensors to build a 360-degree view around the car. In such cases, the sensors that are away from the compute unit typically use CAN bus or high-speed Ethernet to send the images or data to the compute unit for further processing.”

Sensor outlook
Automotive sensor technologies are not perfect. That is why the sensor fusion approach of using multiple sensors is preferred by many. There are still challenges to overcome as we move toward fully autonomous driving.

“Automotive sensors will continue to improve to address their shortcomings,” said Ted Chua, director of product management and marketing for Tensilica DSPs at Cadence. “Radar sensor makers are working to improve key parameters, such as angular resolution and object classification capability. Lidar sensor makers are focusing on innovative solutions to aggressively reduce cost and enable mass manufacturing. As mentioned above, in future AVs all the sensors will need to work together cohesively. Hence, the signals from the various sensors will need to be fused together for decision-making.”
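
As a back-of-the-envelope illustration of why angular resolution is hard to improve, a radar's beamwidth scales roughly with wavelength divided by antenna aperture. The numbers below are illustrative and not tied to any specific product.

```python
import math

C = 3.0e8  # speed of light, m/s

def angular_resolution_deg(freq_hz, aperture_m):
    """Rough diffraction-limited angular resolution: theta ~ lambda / D (radians)."""
    wavelength = C / freq_hz
    return math.degrees(wavelength / aperture_m)

def cross_range_m(freq_hz, aperture_m, range_m):
    """How far apart two objects must be (laterally) to be separated at a given range."""
    return range_m * math.radians(angular_resolution_deg(freq_hz, aperture_m))

# A 77 GHz radar with a 10 cm antenna aperture, looking 100 m down the road.
print(f"resolution:  {angular_resolution_deg(77e9, 0.10):.1f} deg")
print(f"cross-range: {cross_range_m(77e9, 0.10, 100.0):.1f} m")
```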

All of this will take time, though. “Today, most sensors are ASIL-B certified for functional safety,” Chua said. “But for L4 and L5 autonomous vehicles, will ASIL-B sensors still be sufficient given the safety goals? After all, in an AV there is no human behind the wheel to be the back-up for safety requirements. Do the future sensors need to be ASIL-D certified? How do we smartly design the ASIL-D requirement for AVs in a cost-effective way? These are just the tip of the iceberg of considerations and challenges we must address to achieve full autonomy for our future AVs.”

Future advances
For the chip industry, this poses a challenge, but it’s a potentially lucrative one. Future sensors will need to be more compact, lightweight, accurate, and reliable. But to achieve driving safety, automotive data, including the data generated by the sensors, must be secured both at rest and in motion.

“There are two main ways sensors send their signals to the processing units that will analyze their data — via a network or by a direct wire,” said Thierry Kouthon, technical product manager at Rambus Security. “Following signal receipt, the processing units make decisions that will influence the behavior of the vehicle or alert/notify the driver. In the case of a direct wire, technologies like MIPI A-PHY can be used with high-bandwidth sensors such as lidar, radar, and cameras. Today, MIPI does not support authentication and confidentiality, which are necessary building blocks of a sound security solution. In the context of a vehicle, deciding if the high-speed connection between the sensor unit and the processing unit must be protected using authentication and confidentiality is left to the manufacturer. Manufacturers that want to enforce a secure connection will use network protocols such as Automotive Ethernet. Automotive Ethernet can be protected using MACsec (IEEE 802.1AE). MACsec has the advantage of operating at line rate with virtually no impact on performance or latency. Other higher-level protocols are also eligible, such as IPsec. Both MACsec and IPsec offer authentication and confidentiality.”
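
Both MACsec and IPsec are built on authenticated encryption. The snippet below is not MACsec; it simply illustrates the two properties Kouthon mentions, confidentiality and authentication, applied to a made-up sensor payload using AES-GCM from the widely used Python cryptography package.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # in a vehicle this key would be provisioned securely
aead = AESGCM(key)

payload = b"radar: object at 51.2 m, closing 8.0 m/s"   # made-up sensor report
nonce = os.urandom(12)
header = b"frame-0001"                                   # authenticated but not encrypted

ciphertext = aead.encrypt(nonce, payload, header)        # confidentiality plus integrity tag
recovered = aead.decrypt(nonce, ciphertext, header)      # raises if anything was tampered with
assert recovered == payload
```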

Conclusion
From the CAN bus to MIPI, sensor interface development has come a long way, and 5G and V2X are expected to become part of the sensor interface picture as well. V2X envisions vehicles connected to one another to increase safety. Today, when an accident occurs, say, 1,000 yards ahead on a freeway, drivers cannot see what is happening and may not be able to slow down or stop in time to avoid a collision.

“As we move toward the world of the connected car and V2X, we will need additional capabilities beyond what cameras, lidar, radar, IR, and ultrasound can offer today,” said John Stabenow, director of product engineering at Siemens EDA. “To achieve driving safety, car sensors will need to be able to perform face recognition, hand gesture detection, and more, to handle the many unpredictable actions in front of the vehicle and its many blind spots. Additionally, many new and compact sensors will be developed to detect things such as air leaks from the tires, beyond just tire pressure, as well as the tire tread depth remaining.”

With V2X, information from the vehicles close to the accident site will be communicated to vehicles before they reach the accident site. Those vehicles then will automatically slow down without intervention from the driver, as vehicle smart sensors communicate with one another. Most likely, this will be accomplished via 5G, with the automotive sensors behaving as IoT nodes do in edge processing.

“The network topology and architecture for a vehicle is evolving,” said Cadence’s Chua. “We are trending towards a zonal network and architecture. The interface/bus selection will depend on various factors required for the proper operation of the ECU and gateway. These factors include the amount of data load, latency requirements, reliability of data transmission, etc. A zonal ECU will use interfaces such as PCIe, MIPI, CAN, etc., to connect to sensors, or zonal endpoints. Ethernet can be used to connect between a zonal ECU and the central ECU/gateway. A V2X gateway could use 5G to connect to the cloud. As the vehicle is connected, there is a likelihood of Wi-Fi on the vehicle to provide broadband connections to devices in the cabin or around the vehicle.”

We are seeing only a glimpse of what future driving will look like. Sensor technologies already do a great deal for human drivers, and they will do more for autonomous driving as they continue to develop. This is just the beginning, and as more sensors and communication channels are added, opportunities will open up well beyond the vehicle itself.



