AI is rewiring the auto industry through a combination of virtual development and testing.
Artificial intelligence is turbocharging automotive innovation, but it’s also unleashing a tangle of high-stakes risks that engineers and security experts are scrambling to contain.
The push to embed AI deep into today’s vehicles is changing how cars are built, how they handle the road, and how they keep passengers safe. But as onboard intelligence expands, so do the risks. AI systems that read sensors, chart routes, and personalize the driving experience also open the door to new threats, including everything from deceived vision algorithms to corrupted training data. Engineers are racing to build safeguards while preserving trust in machines that are taking over more of the driving.
The security stakes are immense. “Integrating AI into vehicle systems introduces new vulnerabilities due to the complexity and interconnected nature of these technologies,” noted Patrick Tiquet, vice president of security and compliance at Keeper Security. “The most critical risks stem from the reliance on AI to process sensor data for autonomous driving. Adversarial attacks could manipulate AI systems to make unsafe decisions, jeopardizing vehicle safety with the potential for serious impacts on drivers, passengers, pedestrians, and property.”
This inherent duality of AI necessitates a careful examination of its distinct roles. “There’s AI in the design process itself, and then there’s the AI that goes in the final vehicle,” explained David Fritz, vice president of hybrid-physical and virtual systems, automotive and mil/aero at Siemens EDA. “Those are two separate things with two separate approaches.”
Each comes with its benefits, as well as challenges. “People are looking to use AI as a means to make those sensors and processes more effective and more capable,” observed Mike Borza, principal security technologist at Synopsys.
Virtual proving grounds
The bedrock enabling much of this AI-driven progress, particularly during development, is the concept of a digital twin, a dynamic, high-fidelity, physics-based virtual replica encompassing the vehicle’s mechanics, electronics, software, and interactions with a simulated environment.
Fritz illustrated its practical power using a Siemens demonstration involving a Ford Mach-E at CES. The physical SUV remained stationary while its systems reacted as if navigating a bustling virtual city, controlled entirely by its cloud-hosted twin executing the full software stack. “Because that’s all in the cloud, we could do quick turns,” he said. “We can find problems. We can easily debug it. We can look deep inside what’s happening, as opposed to whether all that software was running in the vehicle itself. That would be a very, very difficult thing to do.”
This grants engineers unprecedented visibility for rapid iteration and bug fixing. In addition, these virtual environments provide a safe, cost-effective arena for exploring extreme edge cases. “We could also put that vehicle through scenarios that were much too dangerous to do in the physical world,” Fritz said, such as simulating component failures or hazardous traffic encounters.
To ensure AI-driven changes help rather than harm, Fritz asked, “If something is developed using some artificial intelligence, how do you know that didn’t make things worse or didn’t break things?”
The answer lies in the validated digital twin. On this trusted virtual platform, AI can safely explore design options and optimize for goals like safety, range, cost, or performance, with confidence that improvements will hold up in the real world.
AI in the trenches
AI’s strengths become especially clear when applied to how vehicles perceive their surroundings. In modern digital twin setups, AI plays a central role in simulating and testing these perception systems. “The most common cases where AI is used these days are in perception,” said Fritz. “We’re looking at raw data from lidar or camera or radar, and it gets processed by the perception stack, which has AI in it.”
Accurate perception, while challenging, is just the input. Subsequent planning and control modules must translate this understanding into safe driving actions.
“Understanding what is around you is critical but trying to decide what to do based on the current context, that’s much more difficult,” he said. “The way that’s done today is that you would run scenarios. You would train the AI in this digital twin environment. Then you take that artificial network and put it in the physical vehicle itself and verify that there’s correlation between what’s happening physically and what happens in the virtual world.” This continuous loop defines modern automotive verification and validation (V&V).
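The correlation step Fritz describes can be sketched as a simple acceptance gate: replay the same scenario on the physical vehicle, then compare its logged trajectory against the twin’s prediction. The function names, the 0.5 m tolerance, and the toy trajectories below are illustrative assumptions, not Siemens’ actual tooling.

```python
import math

def trajectory_rmse(virtual, physical):
    """Root-mean-square error between matched (x, y) position samples."""
    assert len(virtual) == len(physical)
    sq = [(vx - px) ** 2 + (vy - py) ** 2
          for (vx, vy), (px, py) in zip(virtual, physical)]
    return math.sqrt(sum(sq) / len(sq))

def correlates(virtual, physical, tolerance_m=0.5):
    """Accept the trained network only if twin and vehicle agree closely."""
    return trajectory_rmse(virtual, physical) <= tolerance_m

# Twin-predicted vs. logged positions (meters) for the same scenario.
virtual = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.4)]
physical = [(0.0, 0.0), (1.1, 0.1), (2.0, 0.5)]
print(correlates(virtual, physical))  # small deviation -> True
```

In practice the comparison would span many signals (steering, braking, perception outputs), but the gate is the same: no deployment until the virtual and physical behavior line up.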
Generative AI significantly amplifies the effectiveness of this virtual testing. According to Adiel Bahrouch, director of business development at Silicon IP at Rambus, the use of AI helps “accelerate the development and testing of ADAS applications by generating synthetic data that replicates real-world conditions and driving scenarios.”
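A production system would use a trained generative model for this. As a stand-in, the sketch below parameterizes driving scenarios and samples them at random, which already surfaces rare combinations (a close gap at high speed) worth testing. All parameter names and ranges here are invented for illustration.

```python
import random

def generate_scenario(rng):
    """Sample one synthetic driving scenario for ADAS testing.

    Parameter ranges are illustrative, not drawn from any standard.
    """
    return {
        "weather": rng.choice(["clear", "rain", "fog", "snow"]),
        "time_of_day": rng.choice(["day", "dusk", "night"]),
        "lead_vehicle_gap_m": round(rng.uniform(5.0, 120.0), 1),
        "ego_speed_kph": round(rng.uniform(0.0, 130.0), 1),
        "pedestrian_crossing": rng.random() < 0.15,
    }

rng = random.Random(42)  # fixed seed for reproducible test suites
scenarios = [generate_scenario(rng) for _ in range(1000)]

# Pull out the dangerous edge cases the virtual world lets us test safely.
hazardous = [s for s in scenarios
             if s["lead_vehicle_gap_m"] < 15 and s["ego_speed_kph"] > 80]
print(f"{len(hazardous)} hazardous edge cases out of {len(scenarios)}")
```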
AI: from microcontrollers to macro-architecture
AI’s impact extends deep into vehicle performance and efficiency. “One thing that’s very different when you include AI is that the software workload is nothing like what it’s been. When you add AI, the workload shifts to more computations that do the artificial neural network inferencing,” Fritz said.
Advanced applications, such as embedding neural networks in fuel injectors, can enable millisecond-level adjustments for peak combustion efficiency. For hybrids and EVs, AI dynamically optimizes energy flows. AI also can adapt to situations that aren’t explicitly programmed in, but which can be deduced by a neural network to optimize results.
Bahrouch described a system that uses detailed modeling to “optimize energy efficiency by predicting power consumption and recommending the best routes based on driving profiles, driver habits, weather conditions, and traffic data.”
AI is viewed as an important technology for advanced automotive applications, especially autonomous driving, sensor fusion, and processing complex sensor data. Robert Schweiger, group director for automotive solutions at Cadence, said that for sensor fusion, Cadence has developed a neural processing unit that uses AI to analyze point cloud data and identify objects. Then, for sensor processing, AI functions are used for processing vision and radar data. AI also plays a role in automotive zonal architectures, where AI is part of the software-defined vehicle ecosystem.
Further, end-to-end AI is expected to enhance automated driving systems with generative AI technology, which is rapidly being adopted in end-to-end models today. The promise is to address scalability barriers faced by autonomous driving (AD) software architectures. “With end-to-end self-supervised learning, AD systems are more capable of generalizing to cope with previously unseen scenarios,” said Dipti Vachani, senior vice president and general manager of Automotive Line of Business at Arm. “This novel approach promises an effective way of enabling faster scaling of operational design domains (ODD), making it quicker and cheaper to deploy AD technology from the highway to urban areas.”
At a high level, vehicle ADAS, IVI, and other systems are architected using various sensors, displays, connectivity and routing protocols, and most importantly, a processing platform. “All of this comes together to constitute a system(s) responsible for safety, connectivity, entertainment, user interface, and eventually a user experience, creating differentiation between various vehicle brands and driving the demand in the market,” noted Amol Borkar, senior director of product, head of computer vision/AI products at Cadence.
Refining the ride
Other AI-based applications are being rolled out in the industry’s relentless march toward full autonomy. For example, Driver Monitoring Systems (DMS) use image analysis to catch signs of drowsiness or distraction. Some systems also are exploring “emotional AI,” which analyzes voice or facial cues to adapt the cabin environment, adjusting lighting or temperature in response to stress indicators.
Fritz predicts autonomous shuttles that handle passenger interactions independently, even alerting passengers that they have left something behind. One idea includes projecting soothing scenes onto the windshield during dull rides. However, this kind of personalization is complicated by numerous interacting factors, such as lighting conditions, vehicle speed, road type, and individual passenger sensitivity.
“Drivers need to know when AI is in control and when they need to step in,” said Borza. “Blurred boundaries lead to over-reliance, or worse, hesitation.”
Bahrouch noted that generative AI also powers advanced “virtual companions,” offering “engaging content and human-like interactions.” These systems handle tasks like navigation, messaging, or suggesting charging stops. Meanwhile, AI-driven sensor fusion pulls together data from cameras, radar, lidar, and thermal imaging to boost ADAS performance in all kinds of conditions.
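Learned fusion networks are far richer than any hand-rolled rule, but the core idea of weighting each sensor by its confidence can be shown with classical inverse-variance fusion. The sensor variances below are made-up numbers for illustration only.

```python
def fuse_estimates(readings):
    """Inverse-variance weighted fusion of independent range estimates.

    readings: list of (distance_m, variance) tuples, one per sensor.
    A lower variance (higher confidence) earns a larger weight.
    """
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    fused = sum(w * d for w, (d, _) in zip(weights, readings)) / total
    fused_var = 1.0 / total  # fused estimate is tighter than any single sensor
    return fused, fused_var

# Hypothetical range-to-object readings: camera, radar, lidar.
readings = [(24.8, 4.0), (25.3, 0.5), (25.1, 0.2)]
distance, variance = fuse_estimates(readings)
print(f"fused distance {distance:.2f} m, variance {variance:.3f}")
```

The payoff is the same one ADAS designers chase: the fused variance is lower than the best individual sensor’s, so the system keeps working when one modality degrades in rain or glare.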
Deconstructing AI’s deepening automotive security vulnerabilities
AI also will have a significant impact on security in vehicles.
The intricate web of AI, sensors, and connectivity creates a vastly expanded attack surface within a vehicle, and security experts such as Keeper Security’s Tiquet see the new vulnerabilities it introduces as a pressing concern.
One of the biggest risks is tricking the AI into seeing something that isn’t there, or missing something that is. Sensors can be spoofed by feeding them false data. In one scenario, a hacker could cause a crash by making it appear that there is no car ahead. Tiquet called such “adversarial attacks” on sensor processing some of the “most critical risks.”
Borza said it’s already possible to manipulate training data so a system doesn’t recognize stop signs anymore. “You have to be concerned about whether the training data is complete enough and is representative of the real environment,” he said. “If it’s not, then you have untested corner cases. [Training data] needs to be secure and authentic, and you need to make sure it hasn’t been manipulated because that’s a way to put Trojan horses into the AI system.”
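Borza’s requirement that training data be secure and authentic is commonly enforced with a hash manifest checked before every training run. Below is a minimal sketch with a toy two-sample dataset; in practice the manifest itself would be cryptographically signed rather than stored as plain data.

```python
import hashlib

def manifest_for(dataset):
    """Hash every training sample into a manifest taken at sign-off time."""
    return {name: hashlib.sha256(blob).hexdigest()
            for name, blob in dataset.items()}

def verify(dataset, manifest):
    """Return names of samples whose content no longer matches the manifest."""
    current = manifest_for(dataset)
    return [n for n in manifest if current.get(n) != manifest[n]]

# Toy 'dataset': label files for two images.
data = {"img_001.label": b"stop_sign", "img_002.label": b"yield_sign"}
manifest = manifest_for(data)

# A poisoning attempt relabels a stop sign before the next training run.
data["img_001.label"] = b"speed_limit_45"
print(verify(data, manifest))  # ['img_001.label']
```

The check catches exactly the attack Borza describes: a dataset silently edited between sign-off and training.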
Encounters with such out-of-distribution situations can cause erratic AI behavior. Data poisoning, the subtle introduction of manipulated samples into training datasets, represents an insidious threat. “Without robust security measures, AI-driven systems are vulnerable to exploitation through data poisoning or software manipulation,” Tiquet said. This could create hidden backdoors or targeted failures.
AI doesn’t always need outside interference to fail. It can make mistakes on its own, especially when facing something it wasn’t trained to recognize. In those cases, it might misread the scene entirely, mistaking a dog for a cow, for example. “Those hallucinations are certainly concerning,” said Fritz, arguing that logging billions of test miles isn’t enough. The real answer, he said, is a more disciplined engineering approach that uses digital twins and AI to create scenarios “a human would not think of.”
Another lurking danger is silent data corruption, in which undetected errors alter stored model weights or critical data. These faults are exceedingly hard to pinpoint because the faulty behavior surfaces only intermittently. “If the model changes without authorization the functionality is changed unpredictably,” Borza said. “That’s a very scary situation.”
Even single-bit flips can cascade. Noting that a massive simulation might statistically “overwhelm” some random training errors, Fritz compared it to tennis technique, where relentless practice of the correct technique eventually overrides the incorrect habit. Still, protecting deployed systems requires robust hardware ECC and software integrity checks.
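How a single flipped bit cascades, and how a cheap integrity check catches it, can be demonstrated directly. Flipping one exponent bit of a serialized float32 weight changes its value by dozens of orders of magnitude, while a CRC over the buffer, used here as a software stand-in for hardware ECC, detects the corruption before the model is loaded.

```python
import struct
import zlib

def flip_bit(buf, bit_index):
    """Flip one bit in a serialized weight buffer (simulated corruption)."""
    ba = bytearray(buf)
    ba[bit_index // 8] ^= 1 << (bit_index % 8)
    return bytes(ba)

# Serialize two float32 weights and checksum them at deployment time.
weights = struct.pack("<2f", 0.75, -1.25)
checksum = zlib.crc32(weights)

# Bit 30 sits in the first float's exponent field (little-endian layout).
corrupted = flip_bit(weights, 30)
w0_before, _ = struct.unpack("<2f", weights)
w0_after, _ = struct.unpack("<2f", corrupted)
print(w0_before, "->", w0_after)  # 0.75 becomes an astronomically large value

# The integrity check catches it before the model is used.
print(zlib.crc32(corrupted) == checksum)  # False
```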
“Ultimately, all of the personality is defined by software, and the model is a kind of software,” Borza said. “If that doesn’t have integrity, the model is being perverted.”
Vision systems, synthesizing data from multiple sensors, are particularly sensitive.
Multi-layered defenses plus AI
AI may open new doors for attackers, but it’s also one of the best tools for keeping them out. Traditional security methods cannot keep up with the speed and complexity of modern threats. “AI plays a crucial role in detecting and preventing cyber-attacks in real time,” Tiquet said. AI-powered intrusion detection systems are a big part of that effort. “By continuously monitoring vehicle networks and analyzing data patterns,” these systems can identify anomalies, unauthorized access attempts, or signs of manipulation.
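One of the simplest anomaly signals such a system can monitor is per-message-ID traffic rate on the in-vehicle network. Production intrusion detection uses far richer features and learned models; the z-score sketch below, with invented rate numbers, only illustrates the baseline-and-deviate pattern.

```python
from statistics import mean, stdev

def baseline(rates):
    """Learn a per-ID message-rate profile from known-clean traffic."""
    return mean(rates), stdev(rates)

def is_anomalous(rate, mu, sigma, threshold=4.0):
    """Flag a sampling window whose rate deviates too far from baseline."""
    return abs(rate - mu) > threshold * sigma

# Messages-per-second for one CAN ID over clean one-second windows.
clean = [98, 101, 100, 99, 102, 100, 97, 103, 100, 101]
mu, sigma = baseline(clean)

print(is_anomalous(100, mu, sigma))  # normal traffic -> False
print(is_anomalous(480, mu, sigma))  # flooding/injection burst -> True
```

A real system would also track payload plausibility, message ordering, and cross-ID correlations, but the core loop is the same: learn normal, then flag deviation.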
Architecturally, these platforms employ hardened security zones that function like a safety island. This secure enclave goes beyond error correction and cyclic redundancy checks. It can continue processing critical data, and it can run AI trained to look for signs of intrusion. For the most critical systems, the AI is paired with hardware redundancy, typically dual computations. If the results don’t match, that signals a problem somewhere, which could be a malicious attack or a gamma ray. In those cases, the vehicle pulls to the side of the road and fails gracefully, as current automotive standards require.
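The dual-computation idea reduces to a small pattern: run the same function on two independent channels and treat any disagreement as a fault that routes to the safe state instead of returning a result. The brake-command functions below are toy stand-ins, and in real hardware the channels would run on physically separate lockstep cores.

```python
def lockstep(primary, shadow, inputs, on_mismatch):
    """Run the same computation on two channels and compare the results.

    Any disagreement, whether from a fault or an attack, triggers the
    fail-safe path instead of returning an answer.
    """
    a, b = primary(inputs), shadow(inputs)
    if a != b:
        return on_mismatch()
    return a

def enter_safe_state():
    return "SAFE_STATE"  # stand-in for 'pull over and fail gracefully'

def brake_cmd(x):            # primary channel
    return min(1.0, x * 0.5)

def brake_cmd_redundant(x):  # independently computed copy
    return min(1.0, x * 0.5)

def faulty(x):               # simulated corrupted channel
    return x * 0.7

print(lockstep(brake_cmd, brake_cmd_redundant, 0.8, enter_safe_state))  # 0.4
print(lockstep(brake_cmd, faulty, 0.8, enter_safe_state))  # 'SAFE_STATE'
```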
AI is also critical in securing vehicle-to-everything (V2X) communications, especially when external signals pass through the vehicle’s safety systems. But sorting out conflicting information, such as between a smart traffic light and an onboard sensor, is no easy task. “You could have a stoplight telling you there is a fire truck coming, but another vehicle says, ‘No, there’s not.’ Those situations are really complicated,” Fritz said. “You need AI. And it’s going to take some time to train it so that you’ve got five nines accuracy.”
Neural processing units (NPUs) power real-time AI defenses more efficiently than traditional GPUs. Larger NPUs now can run complex models without losing accuracy, and advanced multitasking lets them handle multiple AI jobs simultaneously. “Those are much lower-power,” he said. “They’re very high-performance, allowing computation in real-time with a reasonable amount of power consumption.”
On top of all of this, running AI at the edge — for example, smart cameras doing local detection — adds protection by keeping data processing close to the source. Still, Tiquet said: “AI is not a standalone solution. It requires human oversight.”
Functional monitoring
In automotive, safety and security typically are deployed and managed on the assumption that automotive systems operate in a repeatable, predictable way.
“With the introduction of AI into automotive systems, this repeatable and predictable nature is somewhat disrupted,” said Lee Harrison, director of Tessent automotive IC solutions at Siemens EDA. “One approach to address this challenge is to use functional monitoring. Enabling a monitoring path that bypasses the AI network provides a repeatable and predictable outcome. This does not have to feed forward into the functional system, and it can be used entirely for safety and security.”
The output of the functional monitoring element is then analyzed against the output from the AI element. “The AI element can be checked to make sure that it is operating within defined boundaries,” he said.
In one use case, the AI element determines the optimal speed for a particular road, using the cameras and sensors to control acceleration and braking. At the same time, the functional monitor carries out basic checks, for example, confirming the camera shows the vehicle is more than a critical distance behind the vehicle in front, so the AI element is not instructing the vehicle to accelerate hard. This ensures the AI element operates within the boundaries it is given.
“This basic sanity check ensures that if the AI element’s response within the vehicle systems were hacked, the vehicle would have a set of safeguards to override the operation once an alarm was triggered, maintaining the vehicle’s safety by maintaining its security,” Harrison said.
Fig. 1: Functional monitoring system. Source: Siemens EDA
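The sanity check Harrison describes can be reduced to a deterministic rule that never passes through the neural network. The threshold values and function names below are illustrative assumptions, not taken from Siemens’ monitoring IP.

```python
CRITICAL_GAP_M = 10.0  # illustrative threshold, not from any real standard

def monitor_override(ai_throttle, gap_m):
    """Deterministic check that bypasses the AI network entirely.

    If the camera reports the lead vehicle inside the critical gap,
    hard acceleration from the AI element is clamped regardless of
    what the network decided.
    """
    if gap_m < CRITICAL_GAP_M and ai_throttle > 0.2:
        return 0.0, "ALARM"  # override the AI output and raise the alarm
    return ai_throttle, "OK"

# Normal case: plenty of headway, the AI command passes through unchanged.
print(monitor_override(ai_throttle=0.6, gap_m=45.0))  # (0.6, 'OK')

# Hacked or faulty AI output: accelerating toward a close lead vehicle.
print(monitor_override(ai_throttle=0.9, gap_m=6.0))   # (0.0, 'ALARM')
```

Because the rule involves no learned component, its output is repeatable and predictable, which is exactly what makes it usable as a safety and security backstop for the AI element.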
Maintenance, supply chains, and constant vigilance
AI is also helping cars take better care of themselves. Rambus’ Bahrouch described models trained on “historical vehicle data, real-time vehicle data, and driver behavioral profiles” to “identify patterns, predict potential failures and provide reliable recommendations.” The goal: smarter, condition-based maintenance before things break down.
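The prediction step Bahrouch describes relies on models trained over rich telemetry. The underlying idea, extrapolating a degradation trend to a service limit, can be sketched with a linear fit over recent samples. The wear readings and limit below are invented for illustration.

```python
def remaining_windows(history, limit, window=5):
    """Estimate how many more sampling windows until a wear signal
    crosses its service limit, via a linear fit over recent samples."""
    recent = history[-window:]
    n = len(recent)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(recent) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, recent))
             / sum((x - x_mean) ** 2 for x in xs))
    if slope <= 0:
        return None  # signal is not degrading; no prediction to make
    return (limit - recent[-1]) / slope

# Hypothetical brake-pad wear readings (mm of material lost at each check).
wear = [1.0, 1.2, 1.5, 1.7, 2.0, 2.2, 2.5]
eta = remaining_windows(wear, limit=4.0)
print(f"service needed in roughly {eta:.1f} more checks")
```

A deployed system would feed far more signals into a learned model, but the output is the same kind of recommendation: schedule service before the failure, not after it.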
The complexity demands a secure supply chain. “Automotive is no longer just vertical,” Fritz observed. “It’s a neural network of dependencies.” A flaw introduced by one supplier can cascade. Initiatives like the “AI bill of materials” aim for traceability and integrity verification.
Still, for all its promise, AI in vehicles still faces serious hurdles, with many of them rooted in how the industry balances innovation with risk. Borza pointed to a “disconnect between security awareness and the actual implementation,” especially in price-sensitive markets where cybersecurity is often an afterthought compared to safety features. He also warned of a broader race to the bottom in consumer electronics, where lax security standards have led to large-scale vulnerabilities, such as the rise of router botnets. And even seemingly small AI failures can have an outsized impact, like a recalled parking assistant that scraped bumpers and sparked a flood of negative publicity.
As AI becomes more deeply embedded in vehicles, keeping it secure isn’t a box to check—it’s an ongoing challenge. “Security is not a destination—it’s an ongoing negotiation with uncertainty,” Borza said. “The challenge is building AI systems that operate securely and evolve securely.”
Conclusion
As artificial intelligence becomes more embedded in how we drive, design, and interact with vehicles, the auto industry faces a delicate balancing act — accelerating innovation without outpacing safety. Engineers are betting that a fusion of digital twins, smarter chips, and disciplined simulation can maintain that balance. But for now, building trust between humans and machines, and between virtual tests and real roads, remains the most critical system under development.
Related Reading
ADAS Adds Complexity To Automotive Sensor Fusion
Advancements in combining sensors enabling intelligent, distributed processing and standardized communication of object data.
Radar, AI, And Increasing Autonomy Are Redefining Auto IC Designs
Adding more intelligence into vehicles is increasing reliance on some technologies that were sidelined in the past.
Automotive OEMs Face Multiple Technology Adoption Challenges
The path to fully autonomous vehicles may be clear in concept, but fully realizing that development environment is another story.