Why Geofencing Will Enable L5

Fully self-driving cars will require AI that can learn as they drive.


What will it take for a car to be able to drive itself anywhere a human can? Ask autonomous vehicle experts this question and the answer invariably includes a discussion of geofencing.

In the broadest sense, geofencing is simply a virtual boundary around a physical area. In the world of self-driving cars, it describes a crucial subset of the operational design domain: the geographic region within which the vehicle is functional. Reaching full Level 5 autonomy means removing the “fence” from geofenced autonomous cars. Experts say that will require artificial intelligence that can make abstractions and inferences, and become smarter as it is used.
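As a concrete illustration of the "virtual boundary" idea, a geofence can be stored as a polygon of GPS coordinates, with membership decided by a standard point-in-polygon test. The sketch below is a minimal, hypothetical example; the coordinates and service area are invented, not drawn from any real deployment.

```python
# Minimal sketch (hypothetical coordinates): the geofence is a polygon of
# (latitude, longitude) vertices, and a ray-casting point-in-polygon test
# decides whether the vehicle is inside its operational design domain.

def inside_geofence(lat, lon, polygon):
    """Return True if (lat, lon) falls inside polygon, a list of (lat, lon) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        lat1, lon1 = polygon[i]
        lat2, lon2 = polygon[(i + 1) % n]
        # Toggle on each polygon edge crossed by a ray cast from the query point.
        if (lon1 > lon) != (lon2 > lon):
            lat_cross = lat1 + (lon - lon1) / (lon2 - lon1) * (lat2 - lat1)
            if lat < lat_cross:
                inside = not inside
    return inside

# Invented service area, roughly a few city blocks.
SERVICE_AREA = [(37.770, -122.420), (37.770, -122.400),
                (37.790, -122.400), (37.790, -122.420)]

print(inside_geofence(37.780, -122.410, SERVICE_AREA))  # True: inside the fence
print(inside_geofence(37.800, -122.410, SERVICE_AREA))  # False: outside the fence
```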

L5 cars do not yet exist, and they will require a major leap in technology compared with today’s high-tech vehicles. Currently, the most advanced self-driving cars operate at Level 4 autonomy, including the robo-taxis in San Francisco, Phoenix, Shenzhen, and other cities. Those vehicles have a limited operational domain and can only be fully autonomous under certain conditions, such as when the car is located within the boundaries of its geofence. By the SAE definition, L5 vehicles are capable of “operating the vehicle on-road anywhere that a typically skilled human driver can reasonably operate a conventional vehicle.” An L5 vehicle that is geofenced for a business reason (for example, because the manufacturer is authorized to sell autonomous cars in one country but not in a neighboring one) is still an L5 vehicle under the definition.
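The practical difference shows up as a gate in the software: an L4 stack must continuously verify that every operational-design-domain condition, including the geofence, still holds, and fall back to the driver or a safe stop when one does not, while an idealized L5 stack has no geographic gate at all. The sketch below is a hypothetical illustration; the condition names and speed threshold are assumptions.

```python
# Hypothetical sketch (condition names and speed limit are assumptions): an L4
# stack only offers autonomous operation while every operational-design-domain
# condition holds; an idealized L5 stack, per the SAE definition quoted above,
# has no geographic gate at all.
from dataclasses import dataclass

@dataclass
class VehicleState:
    inside_geofence: bool     # e.g. result of a point-in-polygon test on GPS
    weather_ok: bool          # precipitation and visibility within sensor limits
    map_coverage_ok: bool     # HD-map data available for the current road
    speed_limit_kph: int      # posted limit of the current road segment

def l4_autonomy_available(s: VehicleState, max_odd_speed_kph: int = 80) -> bool:
    """L4: every ODD condition, including the geofence, must be satisfied."""
    return (s.inside_geofence
            and s.weather_ok
            and s.map_coverage_ok
            and s.speed_limit_kph <= max_odd_speed_kph)

def l5_autonomy_available(s: VehicleState) -> bool:
    """L5 (idealized): no geographic or environmental gating."""
    return True

state = VehicleState(inside_geofence=False, weather_ok=True,
                     map_coverage_ok=True, speed_limit_kph=50)
print(l4_autonomy_available(state))  # False: outside the fence, fall back to the driver
print(l5_autonomy_available(state))  # True: no fence to leave
```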

Differentiating between an L4 and an L5 vehicle is straightforward. However, turning the former into the latter is a complex technological challenge. “The limits provided by geofencing have a profound effect on the capabilities that an autonomous vehicle requires, which impacts the hardware required to power the autonomous systems,” said Robert Day, director of automotive partnerships for Arm’s Automotive and IoT Line of Business. “Limiting the area to more ‘private’ locations, or even sidewalks versus roads, can significantly reduce the type of interactions the vehicle will have with other objects, such as cars, trucks, cyclists and pedestrians.”

Day said a good example of this on a smaller scale is the Kiwibot delivery vehicle, which is designed to make deliveries via sidewalks. “By reducing the speed and geographic area, the bots require fewer sensor systems to identify obstacles than the more complex road-borne vehicles, and hence require less computing power to process the sensor data and make the appropriate decisions.”


Fig. 1: A rendering of a car with advanced navigational software. Source: Arm

At the other end of the spectrum are robo-taxis, which require highly complex systems even when geofenced. “The sensor suite becomes very elaborate, with multiple sensor modalities such as radar, lidar and high-definition cameras placed around the vehicle,” Day noted. “Processing huge amounts of sensor data requires a very high-performance heterogeneous processing system, more akin to a data center. Also, being able to make driving decisions and actions in real time requires very complex software algorithms.”
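To give a rough sense of why a robo-taxi's processing looks "more akin to a data center" than a sidewalk bot's, the back-of-envelope sketch below totals raw sensor bandwidth for two illustrative sensor suites. The counts, resolutions, and rates are assumptions chosen only for an order-of-magnitude comparison, not figures from Arm or Kiwibot.

```python
# Back-of-envelope sketch (sensor counts, resolutions, and rates are illustrative
# assumptions, not figures from Arm or Kiwibot): raw sensor bandwidth of a
# sidewalk delivery bot versus a geofenced robo-taxi.

def camera_rate_mbps(count, width, height, fps, bits_per_pixel=24):
    return count * width * height * fps * bits_per_pixel / 1e6

def lidar_rate_mbps(count, points_per_second, bytes_per_point=16):
    return count * points_per_second * bytes_per_point * 8 / 1e6

# Sidewalk bot: a few low-resolution cameras, no lidar.
sidewalk_bot = camera_rate_mbps(count=4, width=640, height=480, fps=15)

# Robo-taxi: many HD cameras plus several lidars (radar omitted for brevity).
robo_taxi = (camera_rate_mbps(count=8, width=1920, height=1080, fps=30)
             + lidar_rate_mbps(count=3, points_per_second=1_200_000))

print(f"Sidewalk bot: ~{sidewalk_bot:,.0f} Mbit/s of raw sensor data")
print(f"Robo-taxi   : ~{robo_taxi:,.0f} Mbit/s of raw sensor data")
print(f"Ratio       : ~{robo_taxi / sidewalk_bot:.0f}x")
```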

Those very complex algorithms and their associated hardware are critical to an L5 vehicle breaking through the geofence. “An L5 must essentially be able to not only sense its environment, but derive meaning and make inferences about that environment when it approaches something with which it is unfamiliar,” said David Fritz, vice president of hybrid-physical and virtual systems for automotive and mil/aero at Siemens Digital Industries Software. “In other words, it will need to be able to learn on the job, because there is no way to train an AI to handle every possible situation the vehicle might experience.”

To this point, Fritz described an event several years ago in which an autonomous system undergoing testing “saw” something it had never before encountered — a multi-level parking garage. “The vehicle thought the cars parked in the parking garage were blocking the road. It thought, ‘Car stopped, go around the curb.’ In a geofenced area, you drive the car around and collect the data so the vehicle can say, ‘If the car is 10 feet off the ground, ignore it. It’s not going to hurt you.’ When a system encounters something and doesn’t know what to do about it, in many cases the car simply stops moving.”
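The behavior Fritz describes, a car that "simply stops moving" when it cannot interpret something, corresponds to a common fallback pattern: if perception confidence for an object in the planned path drops below a threshold, the planner requests a minimal-risk maneuver instead of guessing. A simplified, hypothetical sketch of that gate:

```python
# Simplified, hypothetical sketch of a perception-confidence gate: if an object
# in the planned path cannot be classified with enough confidence, request a
# minimal-risk maneuver (slow down and stop) instead of guessing.

CONFIDENCE_THRESHOLD = 0.7

def plan_action(detections):
    """detections: list of dicts with 'label', 'confidence', 'in_path' (assumed schema)."""
    for det in detections:
        if det["in_path"] and det["confidence"] < CONFIDENCE_THRESHOLD:
            # Ambiguous object apparently blocking the path: do not improvise.
            return "MINIMAL_RISK_MANEUVER"
    return "CONTINUE"

# A car parked on an upper garage level, misread as an obstacle in the road.
detections = [
    {"label": "car", "confidence": 0.45, "in_path": True},
    {"label": "pedestrian", "confidence": 0.97, "in_path": False},
]
print(plan_action(detections))  # MINIMAL_RISK_MANEUVER
```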

AI algorithms today are literal. “Once you have a literal interpretation of something, it’s hard to back out and abstract,” Fritz said. “There are lots of examples in Australia where the AI was trained for dogs, cats, cows, and deer, and when the car saw a kangaroo it didn’t know what to do and the car just drove over it. A human would never do that. Even if the human has never seen a kangaroo before, we would know it was an animal.”

Developing AI that can make those sorts of inferences requires it to think more like a human would under the same circumstances. “Right now, when we’re recognizing objects, we’re recognizing them essentially by their outline and then projecting a three-dimensional rotation,” Fritz continued. “We say, ‘An animal is something that moves. This doesn’t appear to be blown by the wind, it’s moving under its own locomotion. It’s an animal, and animals deserve certain respect.’ That’s what will need to go into the algorithm, and we’re not doing that yet. Humans do this without even thinking from the time that we’re born.”
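One way to read Fritz's point is that the missing inference is a rule over motion cues rather than over appearance: something that moves under its own power, changes direction, and stays on the ground is probably an animal, even if its outline is unfamiliar. Purely as an illustration (the cue names and thresholds below are invented), such a rule might be sketched as:

```python
# Illustrative sketch only (cue names and thresholds are invented): flag an
# unknown tracked object as "probably an animal" from motion cues rather than
# appearance, the kind of abstraction Fritz argues current AI lacks.

def probably_animal(track, wind_speed_mps):
    """track: assumed dict with ego-motion-compensated speed, turn rate, ground contact."""
    self_propelled = track["speed_mps"] > max(1.0, 0.5 * wind_speed_mps)
    changes_direction = track["heading_change_deg_per_s"] > 10.0
    on_ground = track["touches_ground"]
    return self_propelled and changes_direction and on_ground

kangaroo = {"speed_mps": 6.0, "heading_change_deg_per_s": 45.0, "touches_ground": True}
plastic_bag = {"speed_mps": 3.0, "heading_change_deg_per_s": 80.0, "touches_ground": False}

print(probably_animal(kangaroo, wind_speed_mps=8.0))     # True: treat it with caution
print(probably_animal(plastic_bag, wind_speed_mps=8.0))  # False: likely wind-blown debris
```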

Research is underway, and there are AI projects trying to figure out exactly how to process this sort of activity. “The basics are understood, but the level of complexity needed for L5 is not,” he said. “Inferencing is using the network to make decisions based on the inputs, but the machine learning for a car is the actual teaching process, which is a lot more onerous. Right now we use supercomputers to do that, or a whole bunch of PCs on a PC farm to help crunch the numbers. The result of all that processing and training then goes into the neural network implementation that handles the inputs. We have a good handle on inferencing and doing that cost effectively and doing it for low power. It’s the machine learning part where all of a sudden it increases by a couple orders of magnitude. If we need in-car learning, we need to solve the teaching-learning process to the same level we’ve done for inferencing. And that’s a ways away.”
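The "couple orders of magnitude" gap can be made concrete with a common rule of thumb: a forward (inference) pass costs roughly 2 floating-point operations per parameter per input, while training costs roughly three times that per example and must be repeated over a large dataset for many epochs. The model size, dataset size, and epoch count below are illustrative assumptions, not measurements:

```python
# Back-of-envelope sketch (model size, dataset size, and epoch count are
# illustrative assumptions): compare the per-second inference compute a car must
# sustain with the one-off training compute done offline, using the common
# ~2 FLOPs per parameter per forward pass, ~3x that per training example rule.

params = 50e6            # a modest perception network, ~50M parameters
frames_per_second = 30   # camera frame rate the car must keep up with

inference_flops_per_second = 2 * params * frames_per_second

training_examples = 10e6  # labeled frames in the training set
epochs = 50
training_flops = 3 * 2 * params * training_examples * epochs

print(f"Inference: ~{inference_flops_per_second:.1e} FLOP/s sustained in the car")
print(f"Training : ~{training_flops:.1e} FLOPs total, run offline on a compute farm")
print(f"That training run equals ~{training_flops / inference_flops_per_second / 3600:.0f} "
      f"hours of the car's entire inference budget")
```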

When AI does advance in that way, there will be a number of ethical decisions at play. If a car is going to think like a human, which human should it think like? Should the decisions the car makes mirror the values of the human who owns it?

“There are laws in certain countries that resolve some of the philosophical issues, but of course it’s not exhaustive,” said Fritz. “The law may say it’s okay to run over the kangaroo, but that might not be the decision of the driver.” He imagines a scenario in which a newly purchased autonomous car puts its owner through a number of simulations to learn the owner’s preferences regarding, say, re-routing the car around a squirrel.
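Fritz's scenario amounts to preference elicitation: show the new owner a handful of simulated edge cases, record the choices, and nudge the planner's cost weights accordingly. The sketch below is a toy, entirely hypothetical illustration of that loop.

```python
# Toy, entirely hypothetical sketch of owner preference elicitation: simulated
# scenarios are shown to the new owner, and the answers nudge planner cost weights.

DEFAULT_WEIGHTS = {"small_animal_avoidance": 0.5, "ride_comfort": 0.5}

SCENARIOS = [
    {"prompt": "A squirrel darts into the lane. Swerve gently or hold the lane?",
     "effects": {"swerve": {"small_animal_avoidance": +0.2, "ride_comfort": -0.1},
                 "hold":   {"small_animal_avoidance": -0.2, "ride_comfort": +0.1}}},
]

def elicit_preferences(answers, weights=None):
    """answers: one choice per scenario, e.g. ['swerve']; returns adjusted weights."""
    weights = dict(weights or DEFAULT_WEIGHTS)
    for scenario, answer in zip(SCENARIOS, answers):
        for key, delta in scenario["effects"][answer].items():
            weights[key] = min(1.0, max(0.0, weights[key] + delta))
    return weights

print({k: round(v, 2) for k, v in elicit_preferences(["swerve"]).items()})
# {'small_animal_avoidance': 0.7, 'ride_comfort': 0.4}
```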

Fritz predicts the industry is several decades away from that type of technology. Such learning likely will take place within the vehicle rather than in the cloud because of latency and connectivity issues. “Quantum computing would be a huge step forward, and being able to do so in a cost-effective manner will be necessary for the automotive industry to create ‘simulated neurons’ and AI trained with massive data sets. Until that happens, I don’t think we’ll ever say that a car can drive anywhere it hasn’t been and get there better than a human could.”

Even after the industry achieves L5 and addresses the thorny ethical issues that come with a hyper-advanced AI, auto OEMs and other parties may still make use of geofencing.

Paul Graykowski, senior technical marketing manager at Arteris IP, describes geofencing as a sort of “training wheels” for more advanced autonomous technology, and to some extent for the drivers themselves.

Just as training wheels protect a rider from some amount of danger, Graykowski imagines a world in which an L5 car uses geofences to prevent the human from routing the car through construction, flooded streets, or icy roads. “That type of geofence will require not only plenty of compute power in the car or in the cloud via a strong network connection, but also smart city infrastructure communicating with the vehicle.”
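In planning terms, the "training wheels" geofence is a dynamic constraint on routing: segments flagged by city infrastructure as under construction, flooded, or icy are removed from the road graph before a route is computed. The sketch below is a minimal, hypothetical illustration; the road graph and hazard-feed format are invented.

```python
# Minimal sketch (road graph and hazard-feed format are invented): remove road
# segments flagged by a smart-city feed before routing, so the car cannot be
# directed through construction, flooded streets, or icy roads.
from collections import deque

ROAD_GRAPH = {          # segment -> set of connected segments
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C"},
}

CITY_HAZARD_FEED = {"B": "flooded"}  # segment B is currently flagged by the city

def routable_graph(graph, hazards):
    """Return a copy of the graph with hazardous segments geofenced out."""
    allowed = {seg for seg in graph if seg not in hazards}
    return {seg: {n for n in nbrs if n in allowed}
            for seg, nbrs in graph.items() if seg in allowed}

def find_route(graph, start, goal):
    """Plain breadth-first search over the filtered graph."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]] - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

print(find_route(routable_graph(ROAD_GRAPH, CITY_HAZARD_FEED), "A", "D"))
# ['A', 'C', 'D'] -- the flooded segment B is avoided
```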

Additionally, governments could require auto OEMs to geofence L5 cars so the vehicles only drive through designated areas, Graykowski said, particularly as communities grapple with what it means to have both autonomous and non-autonomous cars on the road together.

“Geofencing will probably exist in some capacity for the foreseeable future,” Graykowski said.

Conclusion
Fully autonomous vehicles that can drive anywhere their occupants direct them, without anyone needing to put their hands on the steering wheel, are still a ways off. But the number of corner cases goes down significantly in highly controlled environments, such as a geofenced lane on a highway or a limited-access road. It's there that L5 will prove its value, and as more cars and roadways are enabled, this could have a significant impact on how we view autonomous technology.

To be sure, all of this is still in the future, and it will take time to work out the bugs.

Related Reading:
L5 Adoption Hinges On 5G/6G
Wireless technology is key to both inventing and selling fully self-driving cars.
Where Are The Autonomous Cars?
L5 vehicles need at least 10 more years of development.
Using 5nm Chips And Advanced Packages In Cars
Experts at the Table: Challenges and some potential solutions in ADAS and autonomous vehicle electronics.



Comments

Laur Rizzatti says:

L5 will not be rolled out extensively if it is geofenced. The argument that you can design a solution running at lower speeds and requiring fewer sensors to reach autonomy is theoretically true, but why design a suboptimal solution for cars? Why not focus on driving down the cost of sensors and designing faster compute elements to achieve the appropriate coverage? While a focus on AI is necessary, it is not sufficient. AI must be coupled with signal processing to deal with the real world (the particle filter being a case in point).
