Automotive Lidar Technologies Battle It Out

None has achieved economies of scale yet, but there may be more than one winner.


Lidar is likely to join the list of sensors that future cars use to help with navigation and safety, but it probably won’t take the form of the large rotating mirror assembly on top of today’s vehicles. Newer solid-state lidar technologies are being researched and developed, although it’s not yet clear which of them will win.

“The benefits of lidar technology are well known dating back to the DARPA Grand Challenge in 2005,” said Jon Lauckner, former General Motors CTO, in a recent presentation. “But lidar sensors haven’t really been deployed in high-volume production applications.”

Several different solid-state lidar approaches are in the works. Illuminating the target scene is important for most of them, but there are different ways to do that — micromechanical and electronic. And the different candidates all have different strengths, raising the question as to whether the automobile of the future will have a combination of lidar technologies.

The advent of lidar in automobiles follows the general trend of having multiple redundant technologies for safety purposes. Cameras and radar are already onboard, but they have their weak spots. Radar doesn’t give much specificity, and cameras have a hard time with low light or glare. Lidar offers precision in many otherwise challenging situations, so it complements the others.

There still may be a question for some as to whether having lidar in cars is a done deal. “Is there still room for lidar to play a key role, or is lidar simply a safety redundancy backup plan?” asked Ted Chua, product marketing director in the Tensilica IP group at Cadence.

But others are convinced that, at least for the most advanced vehicles, lidar will play a part. “The moment you get autonomous Level 3 and above, it’s an absolute must,” said Continental AG’s Gunnar Juergens in a presentation.

Part of the challenge is cost. Lidar currently is an expensive option, and the big push is to find something that’s more cost-effective. “The incumbent technology is complicated,” said Lauckner. “It’s bulky and very expensive, measured in the cost of several thousand dollars.”

That’s a non-starter for cars. “The OEMs we’re talking to want lidar units that are sub $500,” said Pravin Venkatesan, senior director of ASIC design at Velodyne. That presents a significant challenge.

“Driving the cost down, accommodating the advancement of computation algorithms, and the changing of environment are parts of the journey that lidar needs to go through,” said Chua.

Amr Helmy, professor of electrical and computer engineering at the University of Toronto, foresees heterogeneous packaging, combining photonic and electronic ICs based on both indium phosphide, where needed, and silicon everywhere else. “[The price] has to be in the hundreds, and the only way to do that is heterogeneous integration,” he said in a recent presentation.

The other issue is aesthetics. Current implementations involve a rotating rooftop puck, which is fine for some applications. “For the robotaxi market, people are less concerned about [aesthetics],” added Venkatesan. “If you talk about consumer vehicles, that’s where solid-state lidar becomes a new offering, where they can be integrated into the grills, into the windshields, into the sides.”

Finally, whatever the solution(s) end up being, they must be commercially robust over the long term. “There are multiple types of scanners, which you can manually go and build,” said Venkatesan. “But the reliable ones are all still mechanical scanners, which can pass the test of time.”

That said, there are others who believe only solid-state solutions can meet the rigors of the automotive environment. “What you want are components that scale well with the high-volume demands of the automotive market and are still able to meet the [automotive] vibration, shock, and temperature requirements,” said Jordan Greene, vice president of corporate development at AEYE, in a recent presentation. “Solid state is a prerequisite to achieving those things.”

Not all lidar is alike, however. There are many options, including the choice of wavelength, the means of calculating distance, and the means of illuminating the target area.

Safety first
The wavelength of light used by lidar systems falls in the infrared region, making it invisible to the naked eye. It turns out, however, that just because we may not be able to see an image doesn’t mean it’s having no effect on our eyes.

Any light that focuses on our retina can do damage if it’s too intense. Our eyes are equipped to deal with this through the blink reflex. If we see light that’s too strong, then we naturally close our eyes to protect the retina.

“If someone shines a red or green laser pointer into your eye, you look away, you blink, you try to get it out,” explained Greene. “With 900 [nm], it’s invisible to the naked eye, so it could be very dangerous if misused.”

That creates an issue with the popular 905-nm laser. It’s invisible to us, although it can still be focused on the retina. Its invisibility means that the blink reflex doesn’t kick in, making this a potentially unsafe wavelength. 1550-nm lidar is also invisible, but it “… doesn’t get past that liquid in the front of your eye, so it doesn’t get focused [onto the retina],” Greene said.

Some of the following technologies may use more than one frequency, so it’s not completely cut and dried. But longer wavelengths, or lower frequencies, will keep the lidar beams in a range that’s safe for all pedestrians, as well as others who will be exposed to lots of lidar light along the roads.
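
As a rough back-of-the-envelope illustration of that wavelength-versus-frequency relationship (the numbers below are basic physics, not figures from any vendor), the two common lidar bands sit at very different optical frequencies:

# Back-of-the-envelope sketch: c = wavelength * frequency for the two lidar bands discussed above.
C = 3.0e8  # speed of light, m/s

for wavelength_nm in (905, 1550):
    freq_thz = C / (wavelength_nm * 1e-9) / 1e12
    print(f"{wavelength_nm} nm -> ~{freq_thz:.0f} THz")

# 905 nm  -> ~331 THz (near-IR, still focused onto the retina)
# 1550 nm -> ~194 THz (short-wave IR, largely absorbed before reaching the retina, per Greene above)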

1550 nm, however, requires more exotic materials, while 905-nm devices can be made in silicon — at least for the detection part, because silicon can’t be the laser source at any wavelength. That makes the 905-nm range cheaper.

Velodyne points to a number of other downsides to 1550 nm in a blog post, citing issues with water, rain, fog, and snow, in addition to cost. As for safety, the company doesn’t really address how it achieves that at 905 nm, but it notes, “If sensors are designed to meet eye safety standards, both wavelengths can be used safely. All of Velodyne Lidar’s sensors operate at 905 nm at Class 1 level, which is the safest level.”

Calculating the distance
The first consideration for lidar is how you measure distance and cover the scene. Three fundamental approaches are in play:

  • Time of flight
  • Frequency-modulated continuous wave
  • Scanning of the scene

Time of flight
The most obvious way to detect the distance of an object (formally referred to as “ranging”) is to measure how long light takes to make a round trip from the lidar laser to the photodetector. This is referred to as time-of-flight (ToF). Modulating the lidar pulse helps to extract the reflected light from all of the other ambient light that may be received at the same time.

Direct ToF simply starts a really fast counter to detect when the return signal arrives. But the timing resolution required is extremely fine, in the picosecond range, so it’s difficult at best to do this in silicon.
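
A minimal sketch of the direct-ToF arithmetic (illustrative numbers only) shows why the timing is so demanding:

# Direct time-of-flight: distance = c * round_trip_time / 2.
C = 3.0e8  # speed of light, m/s

def tof_distance_m(round_trip_s):
    return C * round_trip_s / 2.0

print(tof_distance_m(667e-9))  # a ~667 ns round trip corresponds to ~100 m
print(2 * 0.01 / C)            # ~67 ps of timing resolution is needed for 1 cm of range resolution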

Fig. 1: With time-of-flight, the laser sends a modulated pulse whose reflection (or backscatter) is timed to determine distance. Requires the ability to do very fine time measurements if done directly. Indirectly, phase comparisons can be used. Source: Imec [1]

Instead, it’s possible to measure the phase of the return signal as compared with the original signal. That gives a distance — until you get beyond 360 degrees of phase shift. At that point, the distance is ambiguous, since you would see a similar phase shift at each multiple of the wavelength.

This approach is referred to as indirect time-of-flight (iToF), and it involves a modulated continuous wave of fixed RF frequency.

“Lidar producers don’t typically use direct time of flight because you need high-speed componentry,” said Srinath Kalluri, CEO of Oyla. “Instead, they use indirect time of flight, where you measure a quasi-continuous low-speed beam, but now you’re measuring the phase differences. In its simplest form, iToF is AMCW — an amplitude-modulated continuous wave.”

There are different ways of dealing with the ambiguity of the distance, one of which is to use multiple frequencies. That means that ambiguity happens only where the frequencies have a common multiple, and that can be controlled based on the frequencies chosen.
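
A small sketch of the phase arithmetic (the modulation frequencies here are illustrative assumptions, not vendor values) shows both the ambiguity and the multi-frequency fix:

import math

# Indirect ToF: range is derived from the phase shift of an amplitude-modulated wave.
C = 3.0e8  # speed of light, m/s

def unambiguous_range_m(f_mod_hz):
    # The phase wraps every 2*pi, so the measured range repeats every c / (2 * f_mod)
    return C / (2.0 * f_mod_hz)

def itof_distance_m(phase_rad, f_mod_hz):
    return (phase_rad / (2.0 * math.pi)) * unambiguous_range_m(f_mod_hz)

print(unambiguous_range_m(10e6))           # 10 MHz modulation -> ambiguity every 15 m
print(itof_distance_m(math.pi / 2, 10e6))  # a 90-degree phase shift -> 3.75 m (or 18.75 m, 33.75 m, ...)

# Adding a second frequency (say 12 MHz, ambiguous every 12.5 m) pushes the combined
# ambiguity out to 75 m, the first distance where both phase patterns repeat together.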

Frequency-modulated continuous wave (FMCW)
A variant on this idea modulates the frequency of the signal rather than the amplitude. The phase and frequency information in the reflections gives distance without the range ambiguity, and it also yields the speed of the object. This is FMCW.

“FMCW provides velocity, so it’s good for long-range automotive applications,” said Kalluri. “You would never use it in-cabin.”

A critical aspect of FMCW, however, is that the frequencies are not in the RF range. They’re in the optical range. That makes generation and detection far more complex, requiring more exotic materials like indium phosphide, and more complicated circuitry. While it’s a promising technology, it’s currently larger and more expensive than the alternatives, meaning it would be used only where other technologies couldn’t be used.

“The complexity and cost throughout the hardware chain for FMCW is significantly greater — you need expensive materials, expensive and novel single-frequency lasers, complex coherent photonic detectors, and high-speed, high-precision electronics and processing,” cautioned Kalluri. “In return, FMCW gives you a direct measurement of radial velocity just like a Doppler radar would. It also pushes at the extreme edge of what’s possible to do reliably in semiconductor technology and manufacturing.”
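
For illustration, here is a minimal sketch of the FMCW relationships (the chirp parameters and wavelength are assumptions for the example, not figures from Kalluri or the article): the beat between the outgoing and returning chirps gives range, and the Doppler shift gives radial velocity.

# FMCW sketch: a linear chirp of bandwidth B over time T gives a range-proportional
# beat frequency; the Doppler shift of the return gives radial velocity.
C = 3.0e8
WAVELENGTH = 1550e-9  # assumed operating wavelength, m
B = 1.0e9             # assumed chirp bandwidth, 1 GHz
T = 10e-6             # assumed chirp duration, 10 us

def range_from_beat_m(f_beat_hz):
    return f_beat_hz * C * T / (2.0 * B)

def velocity_from_doppler_mps(f_doppler_hz):
    return f_doppler_hz * WAVELENGTH / 2.0

print(range_from_beat_m(133e6))           # a ~133 MHz beat -> ~200 m range
print(velocity_from_doppler_mps(38.7e6))  # a ~38.7 MHz Doppler shift -> ~30 m/s closing speed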

Another option replaces a single scanning laser beam with multiple beams generated at different wavelengths. “There are people looking at comb-based FMCW lidar,” said Frank Smyth, co-founder and CTO of Pilot Photonics. “Instead of sweeping the laser across the range that you need, you have all of these wavelengths on simultaneously, and they have to go over a shorter sweep range. So you’re increasing your frame rate and your pixel count.”

Fig. 2: FMCW has a continuous wave with modulated frequency whose reflections can provide both distance and velocity information. Source: Imec [1]

According to Velodyne, FMCW is promising, but it’s three to four years away from being ready for commercial deployment. “FMCW meets the range [and resolution requirements], but it doesn’t meet the power or cost constraints,” said Venkatesan. And that’s only part of the story. “There is still infrastructure — a lot of custom circuits — that need to be developed to achieve scale.”

But FMCW has its believers. “Photonic integration is the critical technology for driving a lot of the cost out of it,” said Smyth. “It can be brought down to the few-hundred-dollars price point that it needs to get to in order to become ubiquitous in automotive lidar.”

iToF and FMCW appear similar in some respects, but there are some critical differences. “The key conceptual difference between the two is that, for one, the homodyne takes place at optical frequencies (FMCW — THz), while for the other it takes place at RF frequencies (iTOF — MHz),” explained Kalluri.

Fig. 3: A comparison of ToF, FMCW, and iToF (AMCW) approaches to lidar detection. Source: Oyla

Scanning the scene
Given a way to measure distance, the next challenge is to measure not just one point, but a full field. There are several ways of doing this, and the differences are what define several of the technologies.

The most obvious approach is to use a mirror. That lets today’s rooftop lidar scan around 360 degrees. But automotive OEMs are looking to bury the lidar sensors flush with the various car surfaces so they become practically invisible, similar to what’s done today with cameras and radar.

When considering cost, it’s important to realize that with alternatives — because they’re not on the roof with full-circle views — multiples will be needed around the car. The economics work only if all of those lidar units combined cost less than the single rooftop puck.

The weight of a smaller unit also is expected to be less than that of the rooftop version, but for a net benefit, the combined weight of multiple lidar units plus their connecting cables still needs to come in below that of the single mechanical version.

One way of approaching this is to use a MEMS mirror. While it won’t illuminate the range that the large rotating mirror does today, it still can be actuated quickly to scan the field of interest, much the way an old cathode-ray tube beam scans a screen.
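
A toy sketch of such a raster scan (the field of view and step counts are arbitrary assumptions) might generate the mirror positions line by line:

import numpy as np

# Raster-scan pattern for a MEMS-mirror lidar, swept line by line like a CRT beam.
H_FOV_DEG, V_FOV_DEG = 60.0, 20.0  # assumed horizontal / vertical field of view
H_STEPS, V_STEPS = 300, 100        # assumed beam positions per axis

azimuths = np.linspace(-H_FOV_DEG / 2, H_FOV_DEG / 2, H_STEPS)
elevations = np.linspace(-V_FOV_DEG / 2, V_FOV_DEG / 2, V_STEPS)

scan_pattern = [(az, el) for el in elevations for az in azimuths]
print(len(scan_pattern))  # 30,000 mirror positions per frame in this toy example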

Another approach is flash lidar, which avoids the scanning and instead illuminates the entire scene with one large laser pulse, rather than directing a small one around. An array of detectors is used to measure the backscattered light and allow for distance measurement.

Because this approach involves no scanning, it’s much faster to capture a full scene. But it takes much higher laser power to get far enough with sufficient backscattered light to do accurate detection, which is a problem for a battery-operated vehicle. In addition, any direct reflectors can overwhelm the detector array, effectively blinding the system for as long as such reflectors remain in the scene.
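
A deliberately simplified sketch (it assumes a diffuse target and ignores atmospheric loss) shows why spreading one pulse over the whole scene drives up the required laser power:

# Compare the per-pixel return of a scanned beam with that of a single flash pulse.
def relative_return(emitted_power_w, range_m, illuminated_pixels):
    power_per_pixel = emitted_power_w / illuminated_pixels  # flash spreads power across the scene
    return power_per_pixel / (range_m ** 2)                 # ~1/R^2 backscatter falloff (simplified)

scanned = relative_return(emitted_power_w=1.0, range_m=100, illuminated_pixels=1)
flash = relative_return(emitted_power_w=1.0, range_m=100, illuminated_pixels=10_000)
print(scanned / flash)  # the flash source needs ~10,000x the power for the same per-pixel return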

Optical phased arrays (OPA)
Optical phased arrays use beamforming instead of a mirror to scan the scene. That removes a mechanical element that might be affected by shock and vibration. Here, phase differences across an array of emitters are used to steer the beam over the scene in two dimensions.
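
As a sketch of the underlying relationship (the emitter pitch and wavelength below are assumptions for illustration), a linear phase ramp across the array steers the beam according to sin(theta) = delta_phi * wavelength / (2 * pi * pitch):

import math

# Optical-phased-array beam steering from a per-element phase increment.
WAVELENGTH = 905e-9  # assumed operating wavelength, m
PITCH = 2e-6         # assumed spacing between emitters, m

def steering_angle_deg(delta_phi_rad):
    return math.degrees(math.asin(delta_phi_rad * WAVELENGTH / (2 * math.pi * PITCH)))

for dphi in (0.0, math.pi / 4, math.pi / 2):
    print(f"phase step {dphi:.2f} rad -> beam at {steering_angle_deg(dphi):.1f} deg")

# Sweeping the phase step electronically sweeps the beam across the scene with no moving parts.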

Fig. 4: Optical phased arrays provide beamforming to send the laser light in a particular direction so that the target scene can be scanned. Source: Imec [1]

While OPA has been shown to work, Velodyne says it currently has poor yields, and hence a high cost. Further work will be necessary if this is to become a high-volume product.

In addition, it may not have the necessary performance. “OPA has large scattering, so it doesn’t meet the eventual resolution goal of automotive,” said Venkatesan.

One size may not fit all
While the obvious question would seem to be about which of these technologies will be the winner, there may be more than one. Forward-looking lidar must be able to handle long distances at high speed. The faster the speed, the farther ahead the lidar must be able to see.
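
A back-of-the-envelope sketch makes the point concrete (the reaction time and deceleration are assumed values, not requirements from the article): the forward lidar must cover at least the distance needed to perceive, react, and brake to a stop.

REACTION_TIME_S = 1.0  # assumed perception-plus-reaction latency
DECEL_MPS2 = 6.0       # assumed braking deceleration

def required_range_m(speed_kmh):
    v = speed_kmh / 3.6  # convert to m/s
    return v * REACTION_TIME_S + v ** 2 / (2 * DECEL_MPS2)

for speed in (50, 100, 130):
    print(f"{speed} km/h -> ~{required_range_m(speed):.0f} m of forward range")

# 50 km/h -> ~30 m, 100 km/h -> ~92 m, 130 km/h -> ~145 m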

Along the sides or rear of the car, however, the lidar won’t be looking for obstacles in the path of motion, and so it doesn’t need the same reach or laser power. A different technology with a different balance of features may be usable there. Overall, resolution and range must be adequate while minimizing cost, power, and weight.

Another challenge that lidar will share with cameras is keeping the apertures clean so that the laser signal(s) can go out unobstructed and be accurately detected. That requirement is less critical for today’s backup cameras, for instance. Those represent optional additional help for backing up, and if they’re dirty, either you don’t rely on them or you stop, get out, and clean them.

That won’t work if lidar and camera technologies are critical for autonomy, so an automated way of cleaning them may be needed. “If the system is detecting something that is covering in front, it could potentially warn the driver and the driver could go clean it up,” noted Chua. “Or [it could] use a windshield wiper or something like that.”

Willard Tu, senior director of automotive at Xilinx, agreed that the default responsibility lies with the “driver” and suggested a few ways of dealing with this. “You want to use air or water bursts,” he said. “Waymo has wiper systems on some of their lidars. In the end, I expect the customer will be responsible for cleaning them the same way we clean a rear camera. The system will provide a warning that the feature is disabled until the sensor is cleaned.”

It also may be helpful to put the camera(s) and lidar in somewhat different locations. “When redundancy comes into play, perhaps your autonomous driving vehicle is not totally crippled,” added Chua. “The radar sensor will always be able to sense [regardless of mud], and hopefully the camera and the lidar are not in the same place. If mud splashes on one area, it’s not going to cover all the other stuff.”

Further in the future, the lidar units may be further protected from the elements by placing them deeper inside the vehicle and using waveguides to direct the laser light out and the return light back in. “If you are able to take a physical component out of direct sunlight and direct heat, put it in the vehicle, and have guideways that bring that data to it, that’s a game changer,” said Chris Clark, solutions architect for automotive software and security at Synopsys.

Additionally, a full lidar engine involves more than just figuring out how to range and scan. Other chips are necessary to complete the system, and developing them takes time. “Coming up with our own chipsets for both the transmitter and receiver [takes] about two and a half years,” noted Venkatesan.

And then there’s validation. “It takes about a year and a half for the automotive qualification process,” he added. “The next step is getting approval from the customer, saying that, ‘Yes, this is a low-risk solution’ because of the other functional safety and ISO requirements. So you’re looking at about 2.5 to 3 years of qualification effort before it can reach production.”

Finally, while the current focus is on the hardware that implements the basic lidar capabilities, the long-term goal is software-defined lidar, where the specific lidar behaviors can be controlled by software. This can help the system to adapt to changing conditions, improving its performance overall.

Interestingly, while much research effort continues in academia and startups, OEMs may be showing signs of trying to short-circuit the Tier-1/Tier-2 approach. “There is a shift in the industry where we’re starting to see OEMs develop their own solutions,” said Clark.

Other promising lidar opportunities
While all eyes are on automobiles as the obvious source of high volume for lidar, other new applications are in developers’ sights. “The whole lidar industry is not just concentrated on automotive,” said Venkatesan. “There are a lot of other industries in terms of last-mile deliveries, drones, and automated delivery units.”

Xilinx’s Tu noted that security systems, robotics, and industrial automation also belong on that list.

Vehicles and vehicle systems predominate, including traffic management and off-road vehicles for industries like mining and construction. Robots and drones would use lidar for pretty much the same reasons as automobiles.

A recent application of interest puts lidar sensors on smartphones for use in conjunction with the camera. Knowing the distance to the main focal point of the image gives processing software extra information for use in computational photography. If this takes off, it will be the highest-volume lidar application. However, it needs nowhere near the power or resolution that an automotive version would.

In the surveillance and security markets, lidar could provide depth information to accompany the color information from a regular camera. Oyla does this with iToF lidar, synchronizing a lidar stream alongside the camera stream, effectively providing an RGBD (color-plus-depth) feed.

Their implementation works more like a camera, where the return signals are focused on the sensor in the same way that a DSLR camera focuses light on the CMOS sensor. “Every pixel is modulated the same way,” said Kalluri. “And that illumination beam is being imaged onto our sensor array. It’s just that each pixel, instead of being only an RGB pixel, is calculating depth.”
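
A minimal sketch of the resulting data structure (the resolutions are arbitrary, and this is not Oyla’s actual pipeline) pairs each color pixel with a co-registered depth value:

import numpy as np

# Combine a camera frame and a per-pixel depth map into a single RGBD frame.
H, W = 480, 640
rgb = np.zeros((H, W, 3), dtype=np.float32)  # color frame from the camera
depth = np.zeros((H, W), dtype=np.float32)   # per-pixel range in meters from the iToF sensor

rgbd = np.dstack([rgb, depth])  # H x W x 4 array: R, G, B, plus depth
print(rgbd.shape)               # (480, 640, 4)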

While light coming in from nearby targets would have slight path differences straight on versus from an angle, that’s not a practical issue for longer distances. “In the far field, it’s just a wash of modulated light, and that’s good enough,” noted Kalluri.

All in all, it appears that lidar is going to need a few more years of refinement before we know for sure which approaches will be moving forward for full commercial deployment. And whichever one hits high volumes first will get the first look for any other applications.

[1] Imec images from presentation “Optical Phased Arrays for Automotive Solid State Lidar Systems”, EPIC Meeting on LiDAR Technologies for Automotive, Eindhoven, Oct. 30-31, 2019
