Radar Versus LiDAR

Racing to drive down sensor costs for the automotive market.


Demand is picking up for vision, radar and LiDAR sensors that enable assisted and autonomous driving capabilities in cars, but carmakers are now pushing for some new and demanding requirements from suppliers.

The automotive market has always been tough on suppliers. OEMs want smaller, faster and cheaper devices at the same or improved safety levels for both advanced driver-assistance systems (ADAS) and autonomous driving technology. Generally, ADAS encompasses a car’s various safety features, such as automatic emergency braking, lane detection and rear object warning.

Radar, an object-detection technology used for blind-spot detection and other safety features in vehicles, is a case in point. “Over time, the radar modules have shrunk considerably. The thermal requirements are getting harder,” said Thomas Wilson, product line manager at NXP. “So, the performance requirements are going up. The size is going down. And the cost requirements are getting more and more aggressive.”

The radar modules used in cars today are clunky systems that incorporate several chips based on different processes. But seeking to reduce the size and cost, Infineon, NXP, Renesas and TI are moving towards integrated radar chipsets that combine various components on the same device.

For now, radar chipsets are targeted at select applications, but they point to an emerging trend. Instead of using different processes for various chips, IC makers are integrating radar devices using standard CMOS processes at 45nm and 28nm. Other process options include 22nm bulk CMOS and FD-SOI.

Another technology, LiDAR (light detection and ranging), is moving from bulky mechanical systems toward smaller solid-state units with more integrated components, in an effort to bring down the technology’s high cost. LiDAR uses pulsed laser light to measure distances.

There are other dynamics at play. For example, the industry is developing next-generation radar with higher resolutions, a move aimed to displace LiDAR. But LiDAR technology is not standing still.

As it turns out, there is no one technology that covers all ADAS/autonomous requirements. Today, some vehicles incorporate advanced vision systems and radar. Over time, they may also include LiDAR, meaning the various technologies will co-exist.

Each technology has its pros and cons. “LiDAR is a more expensive system compared to radar, but it is more accurate in identifying an object. LiDAR has its limitations in adverse weather conditions, such as snow, rain and fog,” said Jim Feldhan, president of Semico Research. “While radar doesn’t seem to be as affected by weather conditions, it can’t determine the size and shape of objects as accurately as LiDAR.”

To help OEMs get ahead of the curve, Semiconductor Engineering has taken a look at trends in advanced vision, radar and LiDAR and ways that vendors are attempting to bring down the costs.

Safer cars
Carmakers are embracing these safety-oriented technologies, and for good reason. In total, 94% of serious crashes are attributed to driver error, according to the U.S. National Highway Traffic Safety Administration.

So over the years, the automotive industry has incorporated more safety features in vehicles. For this, the industry is following two parallel paths—the New Car Assessment Program (NCAP) and autonomous technologies, according to NXP’s Wilson.

Over the years, Asia, Europe and the U.S. have all rolled out NCAP guidelines. In the program, cars are tested and given star ratings based on the safety of the vehicle. A five-star rating is the highest level, while one-star is the lowest.

“That five-star rating has a huge impact in car sales. More consumers are incentivized to buy cars with five-star ratings because the insurance premiums are lower. Also, they are just safer,” Wilson said.

Each region has its own NCAP criteria. But in simple terms, the basic criteria involve several ADAS technologies, including adaptive cruise control, automatic emergency braking, junction assist and lane-keep assist, according to NXP.


Fig. 1: NCAP criteria for five-star rating. Source: NXP

In adaptive cruise control, a car automatically adjusts its speed in traffic. In automatic emergency braking, the vehicle applies the brakes on its own if it senses an impending collision.

In lane-keep assist, the car automatically prevents the driver from making an unsafe lane change. And in junction assist, the car automatically brakes if the driver attempts an unsafe turn.
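
To make that logic concrete, here is a minimal, hypothetical sketch of the time-to-collision check at the heart of automatic emergency braking. The function and its 1.5-second threshold are illustrative assumptions, not any carmaker’s actual algorithm:

```python
def should_brake(distance_m: float, closing_speed_ms: float,
                 ttc_threshold_s: float = 1.5) -> bool:
    """Trigger emergency braking when time-to-collision falls below a threshold.

    The 1.5 s threshold is a hypothetical value chosen for illustration;
    production systems use tracking, filtering and far richer decision logic.
    """
    if closing_speed_ms <= 0:  # the object is not getting closer
        return False
    time_to_collision_s = distance_m / closing_speed_ms
    return time_to_collision_s < ttc_threshold_s


print(should_brake(30.0, 25.0))  # True: ~1.2 s to impact
print(should_brake(30.0, 10.0))  # False: 3.0 s to impact
```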

The NCAP roadmap is fueling the need for more sensors in the car. “For instance, automatic emergency braking will utilize both camera and radar,” said Mark Granger, vice president of automotive at GlobalFoundries. “That technology is starting to migrate from the ultra-luxury into the mid-tier models.”

On top of that, carmakers also are following a parallel path involving autonomous driving technology, a move that is driving demand for cameras, LiDAR and radar. Full self-driving technology may not reach the mainstream for a decade or longer, however.


Fig. 2: Autonomous driving architecture. Source: NXP

ADAS and autonomous technologies are only part of the equation. They must work seamlessly and without fail with the other systems in the car. “The technology boils down to two basic components: 1) connectivity of the vehicle to the Internet and 2) the vehicle’s ability to sense and interact with its surrounding environment,” said Steven Liu, vice president of marketing at UMC.

“For example, the rising adoption of vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) means a continued increase in the number of vehicular radar systems,” Liu said. “Technologies needed for these systems include the car’s anti-collision radars and global positioning systems, as well as sensors that will be needed to interact with stoplights and vehicle dispatchers. These will work in conjunction with existing systems, such as passenger comfort and infotainment control, and engine monitoring subsystems that regulate temperature, tire pressure, and gas.”

There is another critical aspect—safety. “Many of the semiconductors are now part of ADAS that are critical to the function and safety of the vehicle, where failures cannot be tolerated,” said Robert Cappel, senior director of marketing at KLA-Tencor. “The rapid push to autonomous driving features and eventually autonomous driving further drives the needs for all the semiconductor chips to work perfectly together to protect the safety of both the car’s occupants and others in the surrounding environment. This is the biggest driver of a requirement for zero defects in parts per billion within the industry today.”

More than vision
Carmakers, meanwhile, are taking various approaches to ADAS. For example, Tesla’s vehicles incorporate eight cameras, twelve ultrasonic sensors and radar. Ultrasonic sensors measure the distance to an object via sound waves.

Tesla doesn’t currently use LiDAR because the technology is too expensive. In contrast, others may incorporate cameras, radar and LiDAR.

In either case, advanced vision is a key part of the equation. “In recent years, camera-based sensors have started to perform wide ranging tasks, such as road sign detection, lane departure warning, light beam control, parking assist and even driver monitoring,” said Avi Strum, senior vice president and general manager of the CMOS Image Sensor Business unit at TowerJazz.

But can cameras provide all safety functions? “Indeed, companies such as Mobileye believe that camera-based sensors alone can provide the complete ADAS solution,” Strum said. “While camera-based sensors provide exceptional lateral resolution of the surroundings in well-lit or medium-lit conditions, their performance deteriorates rapidly in harsher conditions, such as in the dark, rain, fog or snow.”

What this means is that camera sensors must improve, namely in the areas of dynamic range and near IR sensitivity, he added.

Making waves with radar
Radar is also part of the ADAS mix. In simple terms, radar transmits electromagnetic waves at millimeter-wave frequencies. The signals bounce off objects and return to the sensor, and the radar system processes the reflected signals to determine the range, velocity and angle of an object.
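
As a rough sketch of the arithmetic involved (assuming a 77 GHz carrier; real automotive radars use FMCW waveforms and far more processing), range falls out of the round-trip time and radial velocity out of the Doppler shift:

```python
C = 3.0e8  # speed of light, m/s


def echo_range_m(round_trip_s: float) -> float:
    """Range from the round-trip time of a reflected radar pulse: R = c*t/2."""
    return C * round_trip_s / 2


def radial_velocity_ms(doppler_hz: float, carrier_hz: float = 77e9) -> float:
    """Radial velocity from the Doppler shift of the return: v = f_d * lambda / 2."""
    wavelength_m = C / carrier_hz
    return doppler_hz * wavelength_m / 2


print(echo_range_m(1e-6))          # an echo after 1 us -> 150.0 m
print(radial_velocity_ms(5133.0))  # ~10 m/s closing speed at 77 GHz
```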

In some cars, OEMs use short- and long-range radar. Adaptive cruise control and automatic emergency braking use long-range radar (LRR). For LRR, the radar module is located in the front center of the car behind the bumper.


Fig. 3: Radar for autonomous cars. Source: NXP

Front-facing LRR operates at millimeter-wave frequencies around 77 GHz, with a range of 160 to 200 meters. Typically, the front-facing radar module incorporates several components, such as a microcontroller (MCU) and an RF transceiver. In operation, the transceiver sends radar data to the MCU over a link.

MCUs, which handle the processing, are based on CMOS and are migrating from 55nm and 40nm to 28nm and beyond. The standalone transceiver, meanwhile, is typically based on a different, high-performance RF process called silicon-germanium (SiGe), while some use BiCMOS. Both technologies are built at more mature nodes.

Front-facing LRR will remain at 77 GHz, but the technology is beginning to change. As part of a recent product rollout, for example, Texas Instruments introduced a single-chip radar product that combines the MCU and transceiver on the same device. The radar chipset is based on a 45nm RF CMOS process, enabling the integration of different components.

“A single chip radar sensor solution has many advantages over a two-chip solution,” said Kishore Ramaiah, product manager at TI. “Because it is an RF CMOS solution, it features lower power and higher integration, which in turn means a smaller size and an optimized BOM.”

Front-facing LRR, however, will likely stick with discrete solutions, at least for now. “There is quite a bit of variation in the design of those LRR modules that makes me think the MCUs and transceivers will be discrete for some time,” NXP’s Wilson said. “Over time, as RF CMOS technology advances and is able to support the RF performance requirements of LRR, then it will become more competitive with SiGe. I expect them to co-exist for some time.”

The real design action is taking place in the short-range radar (SRR) module, which today operates at 24 GHz with distances from 60 to 70 meters.

Located in the back corners of the car, the SRR module is used for lane detection, lane-keep assist and related functions. Junction assist also uses SRR, with the module located in the left front bumper.

As vehicles move toward more advanced ADAS functions, SRR is evolving from 24 GHz to the higher-performance 79 GHz band. The 79 GHz band is used partly to avoid interference with the 77 GHz front-facing radar.
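
Part of the appeal of 79 GHz is the much wider sweep bandwidth available there. A minimal sketch, assuming typical figures of roughly 200 MHz of usable bandwidth at 24 GHz versus up to 4 GHz in the 77-81 GHz band, shows how bandwidth sets range resolution:

```python
C = 3.0e8  # speed of light, m/s


def range_resolution_m(bandwidth_hz: float) -> float:
    """Theoretical radar range resolution: delta_R = c / (2 * B)."""
    return C / (2 * bandwidth_hz)


print(range_resolution_m(200e6))  # 24 GHz band, ~200 MHz -> 0.75 m
print(range_resolution_m(4e9))    # 79 GHz band, up to 4 GHz -> ~0.04 m
```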

The back-corner radar module also is moving from discrete to chipset-oriented solutions. “There is market interest in RF CMOS in the integration of the transceiver with MCUs. This is not so much for the long-range forward-facing radar, but more for the corner radar, where they need to be smaller and it’s more cost sensitive,” Wilson said.

For this module, OEMs have several options. As stated above, TI offers chipsets using 45nm RF CMOS. Then, in another approach, ADI and Renesas are developing a 77/79 GHz radar device using a 28nm RF CMOS process.

Meanwhile, in another option, GlobalFoundries is offering 22nm FD-SOI, a technology that incorporates a thin insulating layer in the substrate to suppress leakage.

Like bulk CMOS, FD-SOI enables chipmakers to integrate various components, including radar chips. FD-SOI solves another problem, as well. “The power consumption of one radar unit is already quite high. And if you bring in many more radar devices, thermal issues become a big problem,” said Bert Fransis, senior director of product line management at GlobalFoundries. “Bulk CMOS has not been able to deal with that. With FD-SOI, you can bring the power consumption per radar solution down to below 1 watt.”

Then, of course, there is the traditional option by using standalone SiGe-based transceivers. “SiGe-based short-range and long-range radars address the requirements for automotive radars,” said Amol Kalburge, senior director of strategic marketing at TowerJazz. “The most important parameter for using SiGe is the full integration of power amplifiers for the transmitter side and low-noise amplifiers for the receiving side on the same chip, allowing the best performance and lower cost.”

As mentioned before, radar has its pluses and minuses. Radar is good at detecting objects at high resolution, but it can’t classify them. It can’t tell, for example, whether an object is a person or a dog. That’s why radar is paired with cameras, which help the system understand its surroundings, and it’s why there is a need for fast graphics processing and deep learning.

“For example, radar is far superior in poor weather conditions (rain, fog, snow), while LiDAR allows a detailed 3D scan of surroundings to detect and classify stationary and moving objects,” Kalburge said. “Radar sensors are fairly compact and cost-efficient, so they are already adopted aggressively by most OEMs. Resolution of today’s automotive radar solutions is sometimes not adequate for fully autonomous applications, but there are new hardware and software solutions being developed to improve the resolution.”

Indeed, the industry is working on next-generation radar. One goal is to close the resolution gap with LiDAR or even displace LiDAR. “In the future, you are going to see a race between LiDAR and radar,” GlobalFoundries’ Granger said.

In R&D, Imec is developing 140 GHz radar technology. Others are working on imaging radar. “Radar is undergoing continuous improvement,” said Marcus Monroe, a technical marketing specialist at National Instruments. “New antenna designs and advanced processing algorithms are giving radar new capabilities, which allow it to be used in areas where it was not previously used, such as pedestrian detection.”

So what is imaging radar? “Imaging radar is an application of radar to generate 2D or 3D images using the radar’s reflective energy by rapidly generating radar pulses. This has been used for years in the aerospace industry for applications, such as land mapping and weather. It has not been used for automotive radar, possibly due to power and processing constraints,” Monroe said.
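
One common way to turn raw chirps into such an image is a 2D FFT over fast time (within a chirp) and slow time (across chirps), producing a range-Doppler map. The sketch below is a toy simulation with made-up parameters, not any vendor’s pipeline:

```python
import numpy as np

C = 3e8
FC = 77e9                  # carrier frequency (Hz), assumed
B = 150e6                  # chirp sweep bandwidth (Hz), assumed
TC = 50e-6                 # chirp duration (s), assumed
N_SAMPLES, N_CHIRPS = 256, 128
FS = N_SAMPLES / TC        # ADC sample rate within one chirp
SLOPE = B / TC

# One simulated target: 40 m away, closing at 10 m/s.
R, V = 40.0, 10.0
t = np.arange(N_SAMPLES) / FS
m = np.arange(N_CHIRPS)

# Ideal beat signal: range sets the beat frequency within each chirp,
# while radial velocity advances the phase from chirp to chirp.
f_beat = 2 * SLOPE * R / C
f_dopp = 2 * V * FC / C
cube = np.exp(2j * np.pi * (f_beat * t[None, :] + f_dopp * (m * TC)[:, None]))

# Range FFT along fast time, Doppler FFT along slow time -> a 2D "image."
rd_map = np.abs(np.fft.fftshift(np.fft.fft2(cube), axes=0))
dopp_bin, range_bin = np.unravel_index(rd_map.argmax(), rd_map.shape)
print("range ~", range_bin * C / (2 * B), "m")  # one range bin = c/(2B) = 1 m
```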


Fig. 4: High-resolution radar vs. LiDAR. Source: NXP

What is LiDAR?
LiDAR continues to make progress. “LiDAR is also undergoing its own evolution in cost reduction, movement to solid-state and new continuous waveform versions,” Monroe said.

LiDAR is associated with autonomous driving, but it isn’t limited to those apps. “You are finding LiDAR being applied to ADAS vehicles in conjunction with cameras and radar,” said Anand Gopalan, chief technology officer at Velodyne, a LiDAR supplier.

This technology is different than radar. “In LiDAR, you are sending out a series of light pulses and measuring the return time-of-flight,” Gopalan said. “You are creating three-dimensional, high-resolution maps of the world around you.”
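
A minimal sketch of that time-of-flight math, plus the conversion of one return into a 3D map point (the function names and angles here are illustrative, not Velodyne’s API):

```python
import math

C = 3.0e8  # speed of light, m/s


def lidar_distance_m(round_trip_s: float) -> float:
    """Distance from the time-of-flight of a returned laser pulse: d = c*t/2."""
    return C * round_trip_s / 2


def to_point(dist_m: float, azimuth_rad: float, elevation_rad: float):
    """Turn one (distance, azimuth, elevation) return into an x/y/z point."""
    x = dist_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = dist_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = dist_m * math.sin(elevation_rad)
    return x, y, z


d = lidar_distance_m(667e-9)  # a pulse back after 667 ns -> ~100 m
print(to_point(d, math.radians(30), math.radians(2)))
```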

LiDAR presents various technical challenges. It is also expensive, with several moving parts, but that’s beginning to change. Generally, a camera sells for $30, while a LiDAR unit costs $3,000, according to Yole Développement. Some LiDAR systems, however, are moving toward the $300 price point and below, according to Yole.

There are three approaches to LiDAR—mechanical, solid-state and hybrid solid-state, according to Frost and Sullivan. Mechanical LiDAR is used in high-end industrial markets, while MEMS-based solutions are emerging, according to the firm.

Then, a slew of companies are working on smaller and more compact solid-state LiDAR systems. Solid-state LiDAR has few, if any, moving parts.


Fig. 5: Velodyne’s LiDAR systems

LiDAR makes use of several key components, namely a laser diode, a photodetector and processing elements.

Using a laser diode, the system emits a light pulse in the form of a laser at a 905nm wavelength. The laser fires a million photons, but only about one photon returns to the system, according to Frost and Sullivan.

So LiDAR makers incorporate several lasers, sometimes up to 64, to increase the photon count. Lasers that send a billion photons generate roughly 1,000 returning photons, according to the firm. Once the photons are emitted, they bounce off objects. The photodetector then senses and captures the returning signals, across either a narrow or a 360-degree field of view. Sunlight and weather conditions can degrade the signal-to-noise ratio in the detector.
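
Taking the Frost and Sullivan figures at face value (about one returning photon per million emitted) and assuming the returns are Poisson-distributed, a quick sketch shows why firing more photons matters:

```python
import math

RETURN_FRACTION = 1e-6  # assumed from the one-in-a-million figure above


def p_at_least_one_return(photons_emitted: float) -> float:
    """Poisson probability that at least one photon makes it back."""
    mean_returns = photons_emitted * RETURN_FRACTION
    return 1.0 - math.exp(-mean_returns)


print(p_at_least_one_return(1e6))  # ~0.63 with one laser's million photons
print(p_at_least_one_return(1e9))  # ~1.0 once many lasers yield ~1,000 returns
```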

After that, the distance for a given object is calculated. The data is then processed. “You have complex signal processing, which happens in a much denser processing element like an FPGA or a processor,” Velodyne’s Gopalan said.

Over time, the goal is to integrate more functions to reduce cost. “We are using our own ASIC technology to combine significant chunks of the functionality into a set of ASICs. This allows for tighter integration, smaller form factors and also cost reductions,” Gopalan said. “We don’t believe that any time in the near future you will have a single piece of silicon that has the entire LiDAR on it, but it will come very close to that. You will see a highly-integrated, multi-chip module, which will have a fully functional LiDAR.”

Other LiDAR components are also moving toward more integration, such as the laser diode driver, which supplies current to the laser diode.

Velodyne, for example, uses laser diode driver chips based on gallium-nitride (GaN) technology from Efficient Power Conversion (EPC). The switching speed of GaN is 100 times faster than that of silicon, according to Alex Lidow, chief executive of EPC.

GaN is used for power generation, laser fire and control. “Due to GaN’s fast switching speed, high voltage, and high current capability, the photon packets sent by the laser can be shorter with more photons packed in each pulse,” Lidow said. “The LiDAR system can therefore see further and with greater resolution, while creating a faster bit-map of the surroundings.”
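
One way to see the resolution benefit of faster switching: the pulse width itself blurs the range measurement by roughly c*tau/2. The pulse widths below are hypothetical, chosen only to illustrate the effect:

```python
C = 3.0e8  # speed of light, m/s


def pulse_range_blur_m(pulse_width_s: float) -> float:
    """Range uncertainty contributed by the pulse width alone: c * tau / 2."""
    return C * pulse_width_s / 2


print(pulse_range_blur_m(10e-9))  # a 10 ns pulse smears range by 1.5 m
print(pulse_range_blur_m(1e-9))   # a 1 ns pulse cuts that to 0.15 m
```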

There are a number of methods to drive laser diodes, he said, but there are two primary types: capacitive discharge (CD) and FET-controlled. GaN-based laser diode drivers represent less than 5% of the overall cost of a LiDAR system. “GaN transistors are already being replaced by GaN integrated circuits that lower system cost and improve performance,” he said. “As the system costs come down, GaN costs will come down in proportion.”

EPC’s GaN-based laser diode drivers, manufactured on a foundry basis by Episil, are based on a 350nm process. The company plans to migrate to 130nm over time.

Another key component is the photodiode, a semiconductor device that converts light into an electrical current. For LiDAR, there are four main photodiode types—pin photodiodes, avalanche photodiodes (APD), single-photon avalanche diodes (SPADs), and silicon photomultipliers (SiPMs).

“The choice of detector technology may vary depending upon the choice of wavelength,” Gopalan said. “In the 905nm region, silicon APDs continue to be the most reliable and proven technology in terms of gain and the ability to provide an optimal signal-to-noise ratio. SiPMs continue to show promise, but their SNR advantage at a LIDAR system level remains to be firmly proven out.”

So how will this all play out in the future? Cameras, LiDAR and radar will likely co-exist. “We don’t think you will see an either/or situation,” he said. “There is room for radar continuing to supplement and augment information that LiDAR provides. And you will continue to see cameras as the third sensor modality.”




6 comments

gtre derivatives says:

Solid-state LiDARs will eventually replace even the MEMS-based LiDARs, and most autonomous cars being worked on today in the research domain (targeting consumer grade) are planning on using only a front-facing 120-degree solid-state LiDAR, along with wide-angle front- and back-facing cameras and a satellite-based navigation system coupled with some form of inertial navigation. The argument that LiDARs can’t see through snow, rain and fog doesn’t hold much ground, because a LiDAR’s resolution isn’t supposed to be better than that of a human driving a car. All that will happen in snow, rain or fog is that the point cloud delivered by a 120-degree front-facing solid-state LiDAR will be less “dense,” similar to how humans perceive a scene while driving through the same conditions. Even then, its resolution will be much higher than that of a radar. Also, solid-state LiDARs (the ones that use purely phase shifting to steer the light beam plus back-end digital signal processing, similar to AESA radar technology) will in time collect enough points to form 3D point clouds that can detect and segment various objects on the street, even when driving through snow, rain or fog.

Simon Prutton says:

Nice article Mark. This is the most balanced piece on this subject that I have seen in a while. It will be interesting to see how it plays out.

Se says:

Great article, especially like the LiDAR RADAR imaging comparison.

Se says:

Hi Mark,
Do you have any good lead for imaging radar?

Mark LaPedus says:

Hi Se. Maybe start with NI at this address: [email protected]

Kishore says:

Very nice article, very helpful for an initial understanding of autonomous driving internals.
