Radar, AI, And Increasing Autonomy Are Redefining Auto IC Designs

Adding more intelligence into vehicles is increasing reliance on some technologies that were sidelined in the past.


Increasing levels of autonomy in vehicles are fundamentally changing which technologies are chosen, how they are used and interact with each other, and how they will evolve throughout a vehicle’s lifetime.

Entire vehicle architectures are being reshaped continuously to enable the application of AI across a broad swath of functions, prompting increasing investment into technologies that were sidelined in the past, and tighter integration of perception sensors that will be needed by both human drivers and machines. So while all technologies will see improvements, particularly around safety, the mix and application of these technologies is changing.

One of the most pronounced changes involves radar — an acronym for RAdio Detection And Ranging — a technology that is both well entrenched and well understood by most engineers. In the past, radar has been pushed aside due to its lack of resolution. But increasingly it has become a poster child for what’s changing in vehicles.

Radar vs. lidar vs. cameras
Unlike cameras and lidar, which are based on optical principles, radar is based on radio wave propagation. “Radar works on a principle of continuous radio transmissions, where the frequency increases/decreases over time in a known pattern,” said Amit Kumar, product management director for the Compute Solutions Group at Cadence. “The radio waves travel until they hit an object and are reflected back to a radar receiver. Compared to vision-based systems, radar has the advantage of being able to inherently detect Doppler, or a frequency shift. This helps the backend system calculate the object’s distance, direction, and speed, which forms the basic element of several ADAS functions a vehicle will be tasked to perform.”
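As a rough illustration of the relationships Kumar describes, the sketch below converts an FMCW beat frequency and Doppler shift into range and radial velocity. The chirp parameters and measured frequencies are assumptions made up for the example, not values from any particular sensor.

```python
# Minimal FMCW radar back-of-the-envelope sketch.
# All chirp parameters and measured tones are illustrative assumptions.

C = 3.0e8  # speed of light, m/s

def fmcw_range(beat_freq_hz, chirp_bandwidth_hz, chirp_duration_s):
    """Range from the beat frequency of a linear FMCW chirp: R = c * f_b * T / (2 * B)."""
    return C * beat_freq_hz * chirp_duration_s / (2.0 * chirp_bandwidth_hz)

def fmcw_velocity(doppler_shift_hz, carrier_freq_hz):
    """Radial velocity from the Doppler shift: v = c * f_d / (2 * f_c)."""
    return C * doppler_shift_hz / (2.0 * carrier_freq_hz)

if __name__ == "__main__":
    # Hypothetical 77 GHz chirp: 1 GHz sweep over 50 microseconds.
    B, T, fc = 1.0e9, 50e-6, 77e9
    print(f"range    ~ {fmcw_range(2.0e6, B, T):.1f} m")     # 2 MHz beat tone -> ~15 m
    print(f"velocity ~ {fmcw_velocity(5.0e3, fc):.1f} m/s")  # 5 kHz Doppler shift
```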

A classic radar processing chain starts with a multi-antenna front end (multiple input, multiple output, or MIMO), which transmits radio waves and receives their reflections. From those returns, range FFTs (fast Fourier transforms), Doppler FFTs, and angles (azimuth and elevation) are extracted and processed on a processing block, such as a DSP or hardware accelerators. A 3D point cloud is then generated, which tells the vehicle where it is with respect to the surrounding environment and identifies the various objects around it within that frame.
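A heavily simplified sketch of that chain is shown below, assuming a synthetic ADC data cube of shape (chirps × samples × receive antennas). The crude thresholding at the end stands in for the CFAR detection and angle estimation a production DSP or accelerator would perform.

```python
# Simplified radar processing chain: range FFT -> Doppler FFT -> detection.
# The data cube is random noise here; a real front end would supply ADC samples.
import numpy as np

n_chirps, n_samples, n_rx = 128, 256, 4
cube = np.random.randn(n_chirps, n_samples, n_rx)  # placeholder ADC data

# 1. Range FFT along the fast-time (sample) axis.
range_fft = np.fft.fft(cube, axis=1)

# 2. Doppler FFT along the slow-time (chirp) axis.
doppler_fft = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)

# 3. Non-coherently combine receive antennas into a range-Doppler map.
rd_map = np.abs(doppler_fft).sum(axis=2)

# 4. Crude detection: keep cells well above the mean (a stand-in for CFAR).
threshold = rd_map.mean() + 4 * rd_map.std()
doppler_bins, range_bins = np.nonzero(rd_map > threshold)
print(f"{len(range_bins)} candidate detections")
# Each detection's range/Doppler bin, plus per-antenna phase for angle
# estimation, would feed the 3D point cloud described above.
```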

“As automated driving (AD) technology matures and consumers become more informed, the specific conditions in which AD functions can safely operate, also known as Operational Design Domain (ODD), will define their utility and desirability,” noted Guilherme Marshall, director of automotive go-to-market in EMEA at Arm. “For instance, in parts of the world where fog, rain, or snow are common, you want your car to include these conditions in its AD ODD. However, despite recent technological advances in image-based perception, poor visibility dramatically reduces its performance.”

Still, radar is a cost-effective way to complement cameras. In a primary compute channel, camera and radar multimodality can enhance perception performance in challenging conditions. At SAE Level 3 and above, high-definition radar (and/or lidar) also can be used for redundancy.
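As a toy illustration of that kind of cross-checking, the sketch below gates a decision on camera and radar range estimates agreeing. The detection values and the 2 m agreement gate are invented for the example; a real Level 3 stack would use far more sophisticated fusion and tracking.

```python
# Toy cross-check between camera and radar detections, purely illustrative.
# Distances, confidences, and the 2 m gate are assumptions for the sketch.
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "camera" or "radar"
    distance_m: float  # estimated longitudinal distance to the object
    confidence: float  # 0..1

def confirmed(camera: Detection, radar: Detection, gate_m: float = 2.0) -> bool:
    """Treat an object as confirmed only when both modalities agree in range."""
    return abs(camera.distance_m - radar.distance_m) <= gate_m

cam = Detection("camera", distance_m=41.8, confidence=0.62)  # degraded in fog
rad = Detection("radar", distance_m=42.5, confidence=0.94)
print("confirmed object" if confirmed(cam, rad) else "single-sensor detection only")
```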

In a few years, SAE Level 3- and Level 4-capable cars may ship with as many as nine radars, including those used for interior sensing. “To control the bill-of-materials while also paving the path for software-defined sensors, OEMs increasingly seek to ‘dumb-down’ radar sensor nodes. In future E/E architectures, radar pre-processing algorithms may be performed more centrally, in existing HPCs such as the zonal and/or AD controllers,” Marshall said.

Depending on the type of radar instantiated on a vehicle, the ADAS functions it can support will differ, as outlined in the list below and the sketch that follows:

  1. Long-range radar (LRR) enables functions like object detection at longer distances (e.g., 300m over a narrow angular region) and can help with automatic emergency braking (AEB), collision warnings, and adaptive cruise control.
  2. Medium-range radar (MRR) can typically detect objects up to 150m and has a wider angular region, which helps with cross-traffic alerts or vehicles approaching crossroads.
  3. Short-range radar (SRR) has a very wide angular region but cannot see very far. It’s used for functions that are closer to a vehicle, like cyclist and pedestrian detection, rear collision warnings, lane change assist, etc.
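The sketch below summarizes those three classes in code. The range figures follow the examples above, while the field-of-view values are rough assumptions added only for illustration.

```python
# Rough mapping of radar classes to illustrative range/coverage figures.
# Ranges follow the article's examples; field-of-view values are assumptions.
RADAR_CLASSES = {
    "LRR": {"max_range_m": 300, "azimuth_fov_deg": 20,
            "functions": ["AEB", "collision warning", "adaptive cruise control"]},
    "MRR": {"max_range_m": 150, "azimuth_fov_deg": 90,
            "functions": ["cross-traffic alert", "junction assist"]},
    "SRR": {"max_range_m": 30, "azimuth_fov_deg": 150,
            "functions": ["pedestrian/cyclist detection", "rear collision warning",
                          "lane change assist"]},
}

for name, spec in RADAR_CLASSES.items():
    print(f"{name}: {spec['max_range_m']} m, ~{spec['azimuth_fov_deg']} deg FoV, "
          f"{', '.join(spec['functions'])}")
```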

“Frequency modulated continuous wave (FMCW) radar is most popularly used today in passenger and commercial vehicles,” Cadence’s Kumar said. “It is highly accurate and can measure both distance and velocity with higher precision, thus reducing false alarms. It can also measure both range and speed of multiple objects simultaneously. When deployed together with a vision system and lidar, a complete perception sensing suite is created. Today’s vehicle architectures adopt multiple sensing redundancies, which enable a vehicle to operate safely in partial autonomous or fully autonomous modes.”

Radar is a key element of ADAS applications, which are growing both in popularity and by mandate, even though the mandates don’t specify how ADAS is to be implemented. Implementations could include camera-only systems or a mixture of cameras and radar. “Until the application can be safely implemented with just one system, either a camera-based system or a radar-based system, the industry projection is that both types of sensors will continue to move forward,” said Ron DiGiuseppe, automotive IP segment manager at Synopsys. “If we look at some of the forecasts, the volume of radars being shipped in cars continues to grow. In my opinion, both cameras and radars will be used by the OEMs, consistently moving forward, so there’s no diminishing of the use of radar.”

The number of radars in vehicles varies, depending on the OEM and the application. For certain self-driving and automated-driving implementations, Cruise includes more than 10 radars, whereas Google Waymo has six. “The number changes based on the car, but six is a standard number — one forward-looking, one reverse-looking, and then short-range radar for blind spot recognition,” DiGiuseppe said. “That number changes if you want to have interior, digital cockpit occupant monitoring. You could have two more in the interior, so it changes based on the number of applications.”

Better radar
Scalable radar schemes also are evolving alongside traditional ones. “You may have, for example, medium-range radars at the corner of the vehicle, then at the front of the vehicle, you would most likely have long-range radars, since the objective is to have a very good prediction of what’s coming,” according to Adiel Bahrouch, director of business development for silicon IP at Rambus. “The same is true for rear radar, and at the side you will probably have medium-range or short-range radar. This is all radar, but you see different technologies, as the purpose and objective of that radar is slightly different. I also have seen some solutions where developers are trying to provide a configurable platform, meaning by changing the configuration you will also change, let’s say, the range of the radar to support certain purposes or objectives, which means scalable solutions. You have a baseline, then you have all the options that you can choose from within a single package or single chipset, which OEMs really like because then the OEM will have one single point of contact, one solution, one platform that is configurable. Then, based on the purpose, you would serve certain applications.”
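One way to picture such a configurable platform is as a single chirp engine whose parameters are reprogrammed per mode. The sketch below applies the standard FMCW relations (range resolution of c/2B, with maximum range limited by the IF bandwidth the receiver can digitize) to assumed parameter sets; the profile values are illustrative, not taken from any vendor’s datasheet.

```python
# Sketch of a "scalable" radar platform: one chirp engine, per-mode parameters.
# All parameter values are illustrative assumptions, not from any datasheet.
C = 3.0e8  # speed of light, m/s

PROFILES = {
    # mode:         (bandwidth Hz, chirp duration s, max IF bandwidth Hz)
    "short_range":  (4.0e9, 40e-6, 10e6),
    "medium_range": (1.0e9, 40e-6, 10e6),
    "long_range":   (0.3e9, 60e-6, 15e6),
}

def range_resolution_m(bandwidth_hz):
    # Finer resolution requires a wider chirp bandwidth.
    return C / (2.0 * bandwidth_hz)

def max_range_m(bandwidth_hz, chirp_s, if_bandwidth_hz):
    # The highest beat frequency the receiver can digitize limits usable range.
    return C * if_bandwidth_hz * chirp_s / (2.0 * bandwidth_hz)

for mode, (bw, tc, fif) in PROFILES.items():
    print(f"{mode:13s}: resolution ~ {range_resolution_m(bw):.2f} m, "
          f"max range ~ {max_range_m(bw, tc, fif):.0f} m")
```

Reconfiguring only these chirp parameters is what lets one chipset serve short-, medium-, and long-range roles from a common baseline.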

Radar is a mature technology, but within automotive, and especially for autonomous driving, radar itself is not sufficient, so there is a need for developments and an effort to increase the performance of the radar. “When we compare cameras and radar, for example, a camera has a very rich content in terms of information, with very sharp resolution,” Bahrouch explained. “It can distinguish colors, shapes, and so forth. Radar has some unique characteristics, but from a resolution and quality point of view, it’s very poor. However, a camera has a lot of good characteristics, and it’s also cheap. OEMs like that, and it does a decent job, but only under good light and weather conditions. When it comes to extreme weather or light conditions, a camera doesn’t do a good job, and that’s exactly where radar really does well. Radar is not sensitive to light conditions or weather conditions. It’s solid. The technology is well known. But then again, when it comes to the resolution, it doesn’t do very well. For example, when we talk about 3D radar, which is the conventional radar used, 3D stands for the radars that can detect distance, angle/direction, and velocity. Radar is able to provide this information with one single measurement.”

Radar’s benefits stem from the fact that it emits electromagnetic waves and measures their reflections, deriving distance and speed from the delays and frequency shifts. And because it’s electromagnetic, it is not sensitive to light or fog or rain. But because the information is poor compared to camera or lidar, the industry has put a lot of effort into developing next-generation 4D radar, which allows height to be measured, as well. “With 3D radar, you are not able to detect whether it’s a large vehicle, small vehicle, whether it’s a building, not a building,” Bahrouch noted. “With the camera you can get that information, as well, which is why 4D radar was introduced. Also known as imaging radar, it improves the performance of conventional radar in terms of quality, in terms of resolution, but also the information, since now you can also detect or distinguish a building from a car, from a tunnel, from a pedestrian, and so forth.”
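The practical difference is the elevation measurement. With it, each detection can be placed in 3D space and its height above the road estimated, which is what lets imaging radar separate an overhead bridge or tunnel entrance from a stopped vehicle. A minimal sketch of that conversion is below; the detection values, sensor height, and clearance threshold are made up for illustration.

```python
# Converting a 4D radar detection (range, azimuth, elevation, velocity) into
# Cartesian coordinates. Detection values below are made up for illustration.
import math

def to_cartesian(range_m, azimuth_deg, elevation_deg, sensor_height_m=0.5):
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)      # forward
    y = range_m * math.cos(el) * math.sin(az)      # left/right
    z = range_m * math.sin(el) + sensor_height_m   # height above the road
    return x, y, z

# A 3D radar would report only (range, azimuth, velocity); with elevation, the
# same detection can be classified as an overhead structure vs. an in-path object.
x, y, z = to_cartesian(range_m=80.0, azimuth_deg=2.0, elevation_deg=4.0)
print(f"x={x:.1f} m, y={y:.1f} m, height={z:.1f} m")
in_path = z < 2.5  # assumed clearance threshold, purely illustrative
print("in-path object" if in_path else "overhead structure, safe to pass under")
```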

Integration with other technologies
How radar will hold up over time with increasing intelligence in vehicles remains to be seen, particularly with an increasing number of AI-driven advanced features making their way into automotive architectures. But it serves as a proxy for how much change is still to come in vehicles.

“If you’re only trying to do ADAS functionality, there’s no question radar has a role, and it has for a long time,” said David Fritz, vice president of Hybrid and Virtual Systems at Siemens Digital Industries Software. “A human cannot perceive something under certain conditions, but radar can, and that could be useful. It’s great for emergency braking, cars in your blind spot, things like that, so there’s no question that radar has a use. But as we move beyond ADAS into autonomy, the sensor array complexity is inversely proportional to the intelligence on the vehicle. If you have one of these new Focused Transformer (FoT) AI solutions, which we’re seeing pop up now, they will deduce, ‘I see snow. It’s foggy. I should probably slow down.’ But in ADAS, you still have the driver in the loop. The driver may or may not be skilled driving in the snow, and they may think just because it’s snowing they can still go 45 miles an hour because that’s the speed limit. Having extra sensing to make up that difference in experience is a good thing.”

When it comes to making vehicles more intelligent, the trend is that sensor array complexity and cost go down as the intelligence goes up. “This is because you’re competing against a human driver,” Fritz noted. “We don’t have radar. We have experience that takes that role, and if we can augment our vision, put on glasses or whatever, then we have smarter cameras. Also, there are things happening with other sensing modalities like cameras to keep them from failing in adverse weather conditions. There are lots of things happening there, and cameras themselves are probably the most cost-effective approach. We’re seeing 12 to 18 cameras on vehicles now, and if we look at things from multiple perspectives, and have the intelligence to process that information, it seems, in general, that the need for radar is going down.”

Rambus’ Bahrouch also expects to see AI technology in ADAS used to enhance camera or radar information. “Let’s say the camera is able to recognize patterns and speed labels and those kind of things, but in order to recognize them you need to train the system to be able to recognize those. That training part is where you find AI technologies to make that happen. For radars, there is some research around how to train that data, to learn from whatever history is in place, whatever information is in place, and predict what’s coming. It’s the same for camera pattern recognition, and a lot of training is happening there. Then, when we combine those different technologies, there will be also a lot of training, and AI will be playing a role.”

Looking ahead to Level 4 and Level 5 vehicle autonomy, AI will be a key technology enabler, but currently, development is heading to the extreme, even if it won’t stay that way. “We’re seeing systems with 64 and 128 CPU cores,” Fritz observed. “That is inversely proportional to the intelligence. In other words, if you have a lot of heuristics, a lot of algorithms that are all running and checking each other, then you might need a lot of CPU cores to take in that knowledge, no matter what the sensor array provides in order to make an intelligent decision. But with a lot of the new AI techniques, that’s not an effective approach. You might need maybe four or eight CPU cores, but you have other accelerators that are fine-tuned to process that information more effectively and then feed that data into the CPU complex so that it can make the intelligent decisions it needs to make. The AI model is going to change the number of cores, and it’s going to push what intelligence is necessary out to the edge of the car, closer to the sensors. We’re seeing this happening everywhere. What’s nice about that solution is that if you get into a fender bender, it’s not a $20,000 problem. You’re replacing a camera that costs $1.50, so it’s much more cost effective long-term for the vehicle and the maintenance and fleet survival.”

Connecting everything together, but differently
Simply put, this means applying AI locally, starting with the sensors. Radar, cameras, and lidar increasingly will be connected into other systems, all of which will need to be adjusted depending upon whether vehicles offer driver assistance, limited self-driving capabilities, or full autonomous driving. Touch screens, for example, need to be managed very differently at each of those levels.

“Right now, everyone thinks AI is all about asking a copilot what to do,” said Satish Ganesan, senior vice president and general manager of Synaptics’ Intelligent Sensing Division. “It’s more than that. We want to use distributed AI to enhance the usability of stuff and make things better for the end user. So we have processors that primarily do AI for different applications. In automotive, for instance, we do things like local dimming in the glass. But we also have developed technology where, if a driver is touching a passenger screen, it doesn’t react. It enhances security by saying, ‘Oh no, you can’t reach out and do stuff like that.’ If there is ice on the road, it will not let you play with anything. We detect the angle of the touch and the sensor in the seat and say, ‘Hey, it’s not coupling with the right person. We’re not going to let you touch that screen.'”

As more autonomy is added into vehicles, those interactions could change significantly. “There are two paths,” said Peter Schaefer, executive vice president and chief strategy officer at Infineon Technologies. “The first path is a continuous improvement and continuous offer of automated drive functionalities to the customer. The second is fully autonomous vehicles. We have to ready the car architecture to be a software-defined vehicle, meaning we can bring software and services into the vehicle and to the consumer throughout the lifetime of the car. As a consumer, we want upgrades after we buy a product. We also want a robust car with all this innovation coming in. We don’t want it to stall or not work anymore. It needs to be functional all along.”

—Ed Sperling contributed to this report.

Related Reading
The Uncertainty Of Certifying AI For Automotive
Making sure systems work as expected, both individually and together, remains challenging. In many cases, standards are vague or don’t apply to the latest technology.
Fundamental Issues In Computer Vision Still Unresolved
Industry and academia are addressing next steps.
Future-Proofing Automotive V2X
Questions remain about how far out the industry can reasonably plan due to the pace of change in technology and standards.


