Where Should Auto Sensor Data Be Processed?

An explosion in data and questions about how to best utilize it are slowing the rollout of autonomous vehicles.

Fully autonomous vehicles are coming, but not as quickly as the initial hype would suggest because there is a long list of technological issues that still need to be resolved.

One of the basic problems that still needs to be solved is how to process the tremendous amount of data coming from the variety of sensors in the vehicle, including cameras, radar, LiDAR and sonar. That data is the digital representation of the outside world, and it needs to be processed in real time under various processing scenarios. The problem is there isn’t one agreed-upon way to do that, and today there is no single processor or architecture that can do it all.

“What’s the combination of CPU and GPU?” asked Vic Kulkarni, vice president and chief strategist of the Semiconductor Business Unit at ANSYS. “That is a common question from all designers at the moment because if you look at pure GPU for processing this information, it consumes too much power. As a result, one thought within the design community of our customers is to create a combination of GPU and CPU, and then dynamically assign workloads from these three data streams coming into a sensor fusion network, and then pass that information to the ECU. The industry right now doesn’t have a good recognized acceptable solution, but various design teams are definitely working on that. The workload and power management — there’s too much power consumed by ADAS processing.”
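
As a rough illustration of the kind of dynamic workload assignment Kulkarni describes, the sketch below splits incoming sensor streams between a GPU and a CPU under a shared power budget. The stream names, compute costs and power numbers are hypothetical placeholders, not figures from any production system.

```python
# Illustrative sketch: dynamically assign incoming sensor streams to a CPU or
# GPU based on estimated compute load and a shared power budget, before the
# results are handed to sensor fusion. All names and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class SensorStream:
    name: str          # e.g. "camera", "radar", "lidar"
    gflops: float      # estimated compute demand of this frame/batch
    gpu_watts: float   # estimated power if processed on the GPU
    cpu_watts: float   # estimated power if processed on the CPU

def assign_workloads(streams, power_budget_watts):
    """Greedy split: the heaviest streams go to the GPU while the power budget
    allows; the rest fall back to the CPU."""
    plan, power_used = {}, 0.0
    for s in sorted(streams, key=lambda s: s.gflops, reverse=True):
        if power_used + s.gpu_watts <= power_budget_watts:
            plan[s.name] = "GPU"
            power_used += s.gpu_watts
        else:
            plan[s.name] = "CPU"
            power_used += s.cpu_watts
    return plan, power_used

streams = [
    SensorStream("camera", gflops=120.0, gpu_watts=18.0, cpu_watts=35.0),
    SensorStream("radar",  gflops=15.0,  gpu_watts=4.0,  cpu_watts=6.0),
    SensorStream("lidar",  gflops=60.0,  gpu_watts=10.0, cpu_watts=20.0),
]
print(assign_workloads(streams, power_budget_watts=25.0))
```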

Indeed, when it comes to processing the sensor data, a number of current approaches allow for scaling between different ADAS levels, but the best way to do that is still up for debate.

“There must be an architecture they can do that with, and the question is, ‘How do you do that?'” said Kurt Shuler, vice president of marketing at Arteris IP. “There’s a lot of interest in getting more hardware accelerators to manage the communications in software, and directly managing the memory. For this, cache coherence is growing in importance. But how do you scale a cache coherent system? This must be done in an organized way, as well as adding a whole bunch of masters and slaves to it, such as additional clusters.”
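
To see why scaling a cache-coherent system gets harder as masters are added, here is a toy directory-based coherence model. It is an illustration only, not any vendor's protocol: the directory must track a sharer set per cache line, so the invalidation work triggered by a write grows with the number of masters touching that line.

```python
# Toy directory-based coherence bookkeeping (illustrative only): the directory
# tracks which masters share each line, so the invalidation traffic for a write
# grows with the number of masters added to the system.

class Directory:
    def __init__(self, num_masters):
        self.num_masters = num_masters
        self.sharers = {}   # line address -> set of master IDs holding it

    def read(self, master_id, addr):
        # A read adds the master to the sharer set for that line.
        self.sharers.setdefault(addr, set()).add(master_id)

    def write(self, master_id, addr):
        # A write must invalidate every other sharer before proceeding.
        invalidations = self.sharers.get(addr, set()) - {master_id}
        self.sharers[addr] = {master_id}
        return len(invalidations)   # snoop/invalidate messages generated

# Doubling the masters that touch a hot line roughly doubles the invalidations.
for n in (4, 8, 16):
    d = Directory(n)
    for m in range(n):
        d.read(m, addr=0x1000)
    print(n, "masters ->", d.write(0, 0x1000), "invalidations")
```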

The technology is only one piece of the puzzle. The automotive ecosystem is still adjusting to autonomous development and approaches. In addition to the traditional automotive OEMs and tiered suppliers, a whole new level of suppliers is entering the market.

“A couple of the big Tier 1s have spun out what they’re calling Tier 0.5 suppliers [Aptiv and Magna], which are trying to produce a more comprehensive, holistic solution that they can sell directly to the OEMs,” said David Fritz, senior autonomous vehicle SoC leader at Mentor, a Siemens Business. “The OEMs put their chassis around it, and call it done. That’s the theory. The danger in that, of course, is becoming the next Foxconn of automotive, differentiating on cupholders and leather. But the other side is that you could potentially get to market quicker.”

In discussions with one of the Tier 0.5 suppliers about whether sensor fusion is the way to go, or whether it makes more sense to do more of the computation at the sensor itself, one CTO remarked that certain types of sensor data are better handled centrally, while other types are better handled at the edge of the car, namely at the sensor, Fritz said.

“Since then, some of the sensor companies have said they are performing a lot of the computation as part of the deliverables of their sensors,” he noted. “What that means for a camera sensor, for example, is that it needs to differentiate from all the others. So instead of passing raw camera sensor data to some centrally located unit that’s also handling LiDAR and radar and everything else, it processes that data itself because it knows the nuances of the sensor data. There may be a small Arm processor on the sensor, and because it can do a lot of this processing at very low power, you don’t need a big GPU anymore. So instead of passing raw data, the sensor passes objects with labels and distances. Now suppose you are a LiDAR or radar vendor and you are doing the same thing. Then what the central processing system gets is this set of objects and labels, such that, ‘This is a person, this is a bicycle, this is a car. It’s this far away.’ The decision-making process is now separated from the perception process.”
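
A sketch of what that object-level interface might look like, with hypothetical type and field names: instead of raw frames, each smart sensor emits a compact list of classified objects with distances and confidences, and the central unit consumes only that.

```python
# Hypothetical object-level message a "smart" sensor might emit instead of
# raw frames: classification label, distance, confidence, and the source sensor.
from dataclasses import dataclass
from typing import List

@dataclass
class DetectedObject:
    label: str            # "person", "bicycle", "car", ...
    distance_m: float     # estimated range to the object
    confidence: float     # 0.0 .. 1.0
    source: str           # which sensor produced it: "camera", "lidar", "radar"

def camera_frame_to_objects(frame) -> List[DetectedObject]:
    """Runs on the small processor at the sensor (stub only): the heavy
    perception work stays at the edge, and only the object list crosses
    the in-vehicle network."""
    ...

# The central decision-making unit now receives something like:
scene = [
    DetectedObject("person",  distance_m=12.4, confidence=0.97, source="camera"),
    DetectedObject("bicycle", distance_m=30.1, confidence=0.88, source="lidar"),
]
```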

That allows the engineering team to focus on the hardest part of the problem. “If LiDAR tells me there’s no yield sign, but the camera says there’s a yield sign, what do you do? If radar says there’s something in front of me, but the camera and LiDAR don’t see it, what do you do? Those are the hard problems,” Fritz said. “Now, with all of this data from a camera, for example, you’ve driven through all these streets, the camera sensors are collecting a lot of data, and you kept it. What would you do with the data in that arena? I’d use that data to train the sensors themselves to do a better job of what we’re calling classification and object detection. The massive data means more to sensor vendors that are taking this leap than to those that are trying to do mass computations, raw sensor fusion, and all that stuff in a centralized location.”
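
A minimal sketch of one possible policy for that decision layer, illustrative only and not Mentor's approach: when sensors disagree, a confidence-weighted vote with assumed per-sensor trust levels decides whether an object is treated as present.

```python
# Illustrative conflict-resolution policy (one of many possibilities):
# treat an object as present if the confidence-weighted evidence from the
# sensors that report it clears a fixed threshold.

def object_is_present(reports, threshold=1.2):
    """reports: list of (sensor_name, confidence) for the same candidate object.
    Returns True if the combined evidence clears the threshold."""
    weights = {"camera": 1.0, "lidar": 1.0, "radar": 0.6}   # hypothetical trust levels
    score = sum(weights.get(sensor, 0.5) * conf for sensor, conf in reports)
    return score >= threshold

# Camera sees a yield sign with high confidence, LiDAR reports nothing:
print(object_is_present([("camera", 0.95)]))                  # False -> needs corroboration
print(object_is_present([("camera", 0.95), ("lidar", 0.90)])) # True
```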

Passing terabytes of data through to that central location every second can bog down the network inside a car, add weight to the vehicle, and increase power consumption. “If that can be done at the sensor itself economically, that’s the direction I would like to see this go. And that’s the kind of feedback we’re getting from the OEMs, and the 0.5s are saying they’re thinking of similar things. The whole world is going in that direction. I just don’t know how quickly the transition is going to happen,” he said.
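
A back-of-envelope comparison, using assumed camera parameters, shows the scale of the reduction when object lists replace raw frames on the in-vehicle network.

```python
# Back-of-envelope bandwidth comparison (all numbers are assumptions):
# raw frames from a multi-camera rig vs. a per-frame object list.

cameras        = 8           # surround-view style setup
megapixels     = 2e6         # 2 MP per camera
bytes_per_px   = 3           # 24-bit RGB, uncompressed
fps            = 30

raw_bytes_per_s = cameras * megapixels * bytes_per_px * fps
print(f"Raw video: {raw_bytes_per_s / 1e9:.1f} GB/s")        # ~1.4 GB/s

objects_per_frame = 50       # detected objects per camera per frame
bytes_per_object  = 64       # label, distance, confidence, bounding box, etc.

object_bytes_per_s = cameras * objects_per_frame * bytes_per_object * fps
print(f"Object lists: {object_bytes_per_s / 1e6:.1f} MB/s")  # ~0.8 MB/s
```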

Timing is a question everyone is asking these days, and at least in some cases rollout schedules are being delayed.

“A year ago, if you talked to 10 automotive customers, they all had the same plan,” said Geoff Tate, CEO of Flex Logix. “Everyone was going straight to fully autonomous, 7nm, and they needed boatloads of inference throughput. They wanted to license IP that they would integrate into a full ADAS chip they would design themselves. They didn’t want to buy chips. That story has backpedaled big time. Now they’re probably going to buy off-the-shelf silicon, stitch it together to do what they want, and they’re going to take baby steps rather than go to Level 5 right away. So now we have people telling us our inferencing chip looks very attractive for automotive applications. They’re not as aggressive about trying to do it all themselves. Six months ago, automotive looked like an IP play for us. Now it looks as if there is a potential for chip sales.”

Mind the gap
To put this in perspective, much of the leading edge in the industry is currently somewhere between ADAS Level 2 and Level 3. These systems predominantly contain cameras and radar. Outside the vehicle, the cameras are typically standard RGB color cameras. Inside the vehicle cabin, monochrome cameras with near-infrared lighting are more common, serving different types of camera systems.

Then, in some high-end cars and fleet trucks, there are also depth-sensing cameras, generally a two-camera configuration with a fixed baseline, mounted inside the car or pointing outside at the road. Cameras and radar have been the predominant ADAS sensing technologies to date, forming the main vision system, supplemented by ultrasonic sensors used for parking and backup, including those mounted at the rear of the vehicle.

“In the future, what we’re seeing is the cameras going higher in megapixels,” noted Pradeep Bardia, product marketing group director for AI products in the IP Group at Cadence. “What used to be 1 megapixel cameras are now 2 megapixel cameras, and they’re moving to a higher range of cameras—4 megapixel and higher for better resolution, just as the mobile world has done for smartphones, but at a slower pace. Advanced vision applications will include surround view camera systems, which are outside cameras mounted across the top of the vehicle with anywhere from 4, 8, 12 or 16 cameras, depending on what level of car it is.”

Inside the cabin, apart from the surround view, there are driver monitoring systems in development that fall under the ADAS umbrella. “Driver monitoring will include a driver-facing camera — a single camera mounted in the instrument cluster or in the rearview mirror on the top, at a distance of 70 to 100 centimeters,” Bardia said. “Also inside the cabin, some companies are going beyond the single camera and are mounting three cameras or more inside the car because they want to monitor how many people are sitting in the car. Who’s sitting in the second row? Who’s sitting in the third row? Some designs include as many as 5 cameras inside the cabin to monitor the size, weight and gender of the passengers so airbags can be deployed more safely.”

All of this adds up to a tremendous amount of data that must be processed.

There are multiple strategies for how to process this data. Bardia pointed to two main ones: a distributed ECU model and a retrofit model. “In a distributed ECU model, the ECUs sit on the edge. Where the data is being captured, there are edge-based endpoints or ECUs. In the other model we’re seeing, standard ECUs are being retrofitted with an AI IP engine, or a bolt-on co-processor. For example, there are many ECUs that are deployed in the market that Tier 1s have been working on for many years. They have all of their home control/home code/housekeeping software, which is very important software from a safety and security viewpoint. Then they want to bolt an AI engine on top of that, or an AI add-on processor, to handle some of the migration to sensor fusion, as well as to AI networks.”
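
A sketch of the retrofit idea, with entirely hypothetical class and method names: the proven legacy ECU code path is left untouched, and only the new perception traffic is routed to a bolted-on AI engine behind a narrow interface.

```python
# Sketch of the "bolt-on" retrofit model (all class and method names are
# hypothetical): the existing, safety-qualified ECU code path stays as-is,
# and new AI workloads are delegated to an attached accelerator.

class LegacyECU:
    """Stands in for the Tier 1's existing housekeeping/control software."""
    def handle(self, message):
        return f"legacy path handled {message!r}"

class AICoprocessor:
    """Stands in for the bolted-on AI engine (AI IP block, NPU, etc.)."""
    def infer(self, sensor_frame):
        return {"objects": [], "latency_ms": 4.0}   # placeholder result

class RetrofittedECU:
    def __init__(self):
        self.legacy = LegacyECU()
        self.npu = AICoprocessor()

    def dispatch(self, message):
        # Only the new perception traffic goes to the co-processor;
        # everything else stays on the original, well-tested path.
        if message.get("type") == "sensor_frame":
            return self.npu.infer(message["payload"])
        return self.legacy.handle(message)

ecu = RetrofittedECU()
print(ecu.dispatch({"type": "diagnostic", "payload": "door status"}))
print(ecu.dispatch({"type": "sensor_frame", "payload": b"\x00" * 16}))
```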

When doing this, the Tier 1s are being careful not to disturb the current software installed base or the system installed base, he said. “They don’t want to redesign everything from scratch. They’re trying to make sure they have an existing ECU they’ve been shipping for many years and are designing into millions of cars today. What they’re trying to do now is determine, if they migrate to AI and need 5 to 10 TOPS (tera operations per second) of performance in real-time, whether they can do a retrofit of an existing design inside the same ECU mechanical box form factor, keeping the same thermal, mechanical, electrical and power requirements. They’re being very cautious, but this is an area where I see some designs looking at early deployments.”

On the other hand, there are some Tier 1s and OEMs designing brand new ECUs from scratch, Bardia said.

Data collection
One way to collect data is to implement a perception system, Kulkarni said.


Fig. 1: Hardware in the Loop use case. Source: ANSYS

A perception system is provided by a supplier in the form of a Mobileye physical platform running some software (the red box, upper left, in Fig. 1) and a camera. The aim is for the OEM to understand how this perception system will interact with the rest of the vehicle under multiple driving conditions. The physical camera is abstracted by a simulation model (the green Camera Sensor box) covering all of its optical aspects (lens, color filter, image sensor, etc.), placed at a given position in a virtual car. This virtual car is placed in a virtual environment.

The simulated sensor is placed in a realistically lit world and can produce a synthetic camera image. This image needs to be sent to the EyeQ ECU in the same format the physical camera would provide. In the other direction, the EyeQ platform can control the physical camera and will issue commands that modify parameters influencing the returned image. The role of the yellow box is to act as a proxy between the simulation I/Os and the EyeQ ECU I/Os, which carry fundamentally the same information in different formats.

On the other side, the EyeQ ECU and its software will interact with other systems, typically provided by the OEM or other suppliers. The physical connection is made through the CAN bus. If these systems are physically part of the setup, they can be connected directly through their CAN interfaces, although that is not pictured in this use case. Here, the other systems are simulated and connected to the simulation bus (the blue box). To connect to the physical CAN port of the EyeQ ECU, a physical adapter is needed (the yellow Bus Interface box). This in turn influences the global simulation through the vehicle dynamics.
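
A simplified sketch of the proxy role described above, with hypothetical class and function names standing in for the actual camera-input and CAN interfaces: it reformats synthetic frames from the simulator into the layout the physical camera would deliver, mirrors ECU camera commands back into the simulated camera model, and bridges traffic between the physical CAN port and the simulation bus.

```python
# Simplified sketch of the proxy ("yellow box") role in the HIL setup above.
# All class and function names are hypothetical stand-ins; a real setup would
# use the ECU vendor's actual camera-input and CAN interfaces.

class SimulatedCamera:
    """Stands in for the optical camera model driving through the virtual world."""
    def __init__(self):
        self.exposure = 1.0
    def render_frame(self):
        return [0] * 16                      # placeholder synthetic image
    def apply_command(self, command):
        self.exposure = command.get("exposure", self.exposure)

class Proxy:
    """Translates between simulation I/Os and the EyeQ ECU's physical I/Os:
    same information, different formats and transports."""
    def frame_to_ecu(self, frame):
        return bytes(frame)                  # reformat into the camera's byte layout
    def command_to_sim(self, camera, ecu_command):
        camera.apply_command(ecu_command)    # mirror ECU camera commands into the model
    def can_to_sim_bus(self, can_message):
        return {"bus": "sim", "payload": can_message}   # bridge physical CAN to the simulation bus

cam, proxy = SimulatedCamera(), Proxy()
ecu_input = proxy.frame_to_ecu(cam.render_frame())
proxy.command_to_sim(cam, {"exposure": 0.8})
print(len(ecu_input), cam.exposure)          # 16 bytes forwarded, exposure updated
```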

Kulkarni stressed that this use case is particularly important because perception system providers typically will not give the OEM access to their IP, and will only provide a physical platform. However, the OEM has to integrate multiple perception systems (perhaps camera + radar + LiDAR) and understand how all of this interacts with the environment and other components. Without this use case, the only possibility is physical testing (putting the systems in a real car and driving it). “With what we propose, millions of virtual miles can be driven in multiple conditions, and feedback sent to the perception system provider, which can modify its system several times before it is actually tried out on a real road,” he said.

Managing IP in autonomous vehicles
Another important aspect of autonomous vehicle development is managing the numerous pieces of hardware and software IP that comprise ADAS systems, said Amit Varde, director of product management at ClioSoft. “The engineering/realization of functional safety requirements often spans implementation fabrics including software, FPGA and ASIC. Exploring the suitability of existing IPs and the management of new or revised IPs can be daunting. Engineers need tools to explore what IP is available to them as implementation candidates, as well as ways to collaborate and leverage engineering resources within their companies.”

Because automotive electronics are changing rapidly, the ability to re-use tested IP that meets safety requirements and standards is critical. This is where IP management systems that can track all of the IP data, test benches, documentation and metadata become crucial in making re-usability a reality, especially in automotive electronics, Varde said.

“An ecosystem built around the IP that provides ready access to open issues, alerts users of new releases or critical information, and which leverages a community of users to get help on demand, is indispensable to design success,” he added. “These tools must manage IP asset revisions, inclusive of testing harnesses and testing results, so that engineers can easily access IP with appropriate approvals, and understand the bill-of-materials that each IP asset contains. They need to track where the IP will be used and integrate with defect-management tools. Injection of defects must encompass not only deviance from conformance and severity, but which IP are affected and where they have been used to send alerts/notifications appropriately.”
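
A sketch of the kind of bookkeeping Varde describes, with hypothetical field names: each IP asset tracks its revisions, test results and where each revision has been used, so a defect filed against one revision can be routed to every affected design.

```python
# Sketch of IP asset bookkeeping (field and class names are hypothetical):
# each asset tracks revisions, test results, and where each revision is used,
# so a defect filed against a revision can be routed to every affected design.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class IPRevision:
    version: str
    test_results: Dict[str, bool] = field(default_factory=dict)
    used_in: List[str] = field(default_factory=list)   # designs using this revision

@dataclass
class IPAsset:
    name: str
    revisions: Dict[str, IPRevision] = field(default_factory=dict)

    def report_defect(self, version, description):
        """Return the list of designs that must be alerted about the defect."""
        rev = self.revisions[version]
        return [(design, f"{self.name} {version}: {description}") for design in rev.used_in]

can_ctrl = IPAsset("can_controller")
can_ctrl.revisions["2.1"] = IPRevision("2.1", {"iso26262_suite": True},
                                       used_in=["adas_ecu_a", "gateway_b"])
print(can_ctrl.report_defect("2.1", "spurious bus-off under high load"))
```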

Conclusion
Fully autonomous vehicles manufactured at production volumes are coming at some point, but there are big technical and logistical hurdles to solve before that happens. That includes everything from finding the best way to process data streams coming in from a variety of camera, radar, LiDAR and sonar sensing systems, to getting the entire ecosystem in sync on sharing critical information so bugs can be worked out of the development process and the technology itself.

Autonomous driving is still on the horizon, but it may take some time before the electronics industry can figure out just how far away that horizon really is.

—Ed Sperling contributed to this report.


