Each new level of assistance and autonomy adds new requirements and problems, some of which don’t have viable solutions today.
Chipmakers and test/validation companies are helping lead the effort to develop self-driving cars, but they are facing a wide range of technical and even cultural barriers.
Advanced driver assist systems (ADAS) already are by far the most complex systems in modern cars, the best of which hover between Level 2 and Level 3 on the six-level (0 to 5) automation scale defined by the Society of Automotive Engineers (SAE) in its J3016 standard. Getting to Level 3, and eventually Level 4, will require deep learning and real-time decision-making, incorporating data from LiDAR, sonar, radar, vision systems, navigation, vehicle recognition and pedestrian recognition.
Reaching those upper levels also will require levels of compute power normally found in highly controlled environments. Autonomous or not, passenger cars are hostile environments. All of this sensitive circuitry has to run reliably and well after years of shaking, baking, freezing and shocking. No datacenter hardware could be expected to survive this for very long.
“ADAS systems represent the most severe requirements for reliability, because they have to survive 15 years or more to meet the requirements for electronic components of autonomous vehicles,” said Norman Chang, chief technologist at ANSYS. “This is completely different from mobile systems, or HPC systems.”
Because autonomous vehicles (AVs) are cars that generate their own heat and sit outside in the sun, the Automotive Electronics Council (AEC) requires, for example, that automotive electronics survive temperatures ranging from -40°C to 150°C. It also requires resistance to voltage variations, limited susceptibility to electrostatic discharge and electromagnetic interference, and demonstrated electromagnetic compatibility.
The cost of mistakes adds up too fast for any semiconductor company to succeed without enforcing high standards of design, manufacturing and process control, according to Tom Anderson, technical marketing consultant at OneSpin Solutions.
The biggest challenge is to meet automotive standards like ISO 26262, whose reliability requirements are routine for chipmakers supplying the aerospace industry, defense contractors and makers of implantable medical devices. But they are relatively new to most others, said Anderson.
Fig. 1: Functional safety management according to ISO CD 26262. Source: International Organization for Standardization (ISO)
Cars historically have been considered among the most hostile environments for electronics, and the list of electromagnetic compatibility standards is long. Even the language describing safety and reliability concepts, largely defined by the ISO 26262 standard, is much different in automotive contexts than in the computer industry.
Cars that drive themselves require a level of safety that goes beyond how a component operates and incorporates how the whole vehicle will behave. That, in turn, affects how the individual components will behave and the stresses they will have to deal with, said David Hall, chief marketer for semiconductors at National Instruments. “If you look at LiDAR, that’s a reuse of existing technology. But when it comes to testing, there are a lot of new requirements around optoelectronics. You normally turn the laser on for a short time and measure the power draw. But if you integrate power over a long time, the heat dissipation is high, so you have to test for that, too.”
Testing for functional safety adds a new dimension to semiconductor testing, and autonomy raises the bar further because there is no human in the loop to improvise if something goes wrong. The result is that for a 10nm chip used in the artificial intelligence brain of an autonomous vehicle, test coverage of 95% of the transistors, or even slightly more, is no longer considered acceptable.
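As a rough illustration, the classic Williams-Brown defect-level model relates fault coverage and process yield to the fraction of defective parts that escape test. The numbers below are assumptions for the sake of the sketch, not figures from any of the companies quoted here, but they show why even very high coverage still leaves escapes measured in parts per million rather than the parts per billion automakers ask for:

```python
# Williams-Brown defect-level model: DL = 1 - Y^(1 - T),
# where Y is process yield and T is fault coverage (both fractions).
# The 90% yield used here is an assumption purely for illustration.

yield_fraction = 0.90

for coverage in (0.95, 0.99, 0.999, 0.9999):
    defect_level = 1 - yield_fraction ** (1 - coverage)
    print(f"coverage {coverage:.2%}: ~{defect_level * 1e6:,.0f} defective parts per million")
```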
But each additional percentage point of coverage and reliability adds time and cost to the test process. And while carmakers traditionally spent more time testing critical systems than non-critical ones, two things have changed. First, they increasingly are looking at using non-critical systems as a failover mechanism in case a critical system fails, because that is less expensive than adding full redundancy. And second, advanced-node designs are being used in automotive applications because they typically run fastest while using the least power.
“There is not a sufficient amount of testing being done yet,” said Anil Bhalla, senior manager of marketing and sales at Astronics Test Systems. “To do it right, the testing itself will take longer. One way to deal with that is to prioritize yield for different chips, but with that approach you run the risk of multiple failures. Long-term, that may result in cost reduction, but first you have to figure out where you do the test and what exactly has to be tested. And new devices will need system-level test, where you have very precise thermal analysis.”
This transition isn’t going to happen overnight, however. “Test evolutions are typically multi-year,” Bhalla noted.
And that’s assuming everything stays the way it is today. The supply chain itself is becoming more complex, and so are the devices and materials flowing through it. Substrates include bulk CMOS, RF-SOI, FD-SOI, silicon germanium and possibly new materials for the AI chips. There also are more chip architectures, including discrete and embedded FPGAs, new types of microcontrollers, multi-core CPUs and GPUs, sometimes bundled into SoCs, as well as a variety of memory chips. And there are many more hardware-software interactions that need to be understood and tested for possible glitches in unexpected corner cases, as well as their impact on temperature, utilization of resources such as processor cycles and memory, and how those may be affected by everything from unexpected noise or premature circuit aging to a particle of radiation hitting a critical device.
Until about three or four years ago, the market for automotive semiconductors was small enough that few test/validation providers put the effort into developing complex new test cases, methodologies and equipment.
“There were four big companies in the market – STMicroelectronics, NXP, Infineon and Renesas – but the market wasn’t big enough to do all this complicated verification compared to the return on working with Apple on the next iPhone,” said Dave Kelf, vice president of marketing for Breker Verification Systems. “Now all the big chip companies have jumped in, and the chips have to be much more complex and capable to handle sensors and machine learning, so the chips are much larger and the flows designed for chips that were smaller and simpler are breaking down. So now everyone is trying to figure out how to meet these requirements that are much wider and more demanding.”
Before they can be installed, automotive semiconductors have to be demonstrated to be free of design or fabrication errors, and free of flaws that would create a safety risk if they malfunction or enter “an unexpected state,” per ISO 26262 requirements. The methodology and language are different enough that Arteris IP published ISO 26262 primers in 2014 and 2015.
The specification requires testing according to risk calculations such as the Automotive Safety Integrity Level (ASIL), which considers how likely a failure may be, how able a driver would be to recover from it and how severe the disaster if that weren’t possible.
Fig. 2: ISO 26262 asks: “If a failure arises, what will happen to the driver and associated road users?” Source: National Instruments
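For illustration, the standard’s S/E/C-to-ASIL mapping can be summarized in a few lines of code. This is a sketch of the published classification table, using the common shorthand that the ASIL rises with the sum of the three class indices; it is not a substitute for the standard itself:

```python
# Minimal sketch of the ISO 26262 ASIL determination table (Part 3).
# Severity is classed S1-S3, exposure E1-E4 and controllability C1-C3;
# the resulting level is QM (no ASIL required) or ASIL A through D.

def asil(severity: int, exposure: int, controllability: int) -> str:
    """Return 'QM' or 'ASIL A'..'ASIL D' for the given S/E/C classes."""
    if not (1 <= severity <= 3 and 1 <= exposure <= 4 and 1 <= controllability <= 3):
        raise ValueError("expected S1-S3, E1-E4, C1-C3")
    total = severity + exposure + controllability
    # Sums of 6 or less map to QM; sums of 7 through 10 map to ASIL A through D.
    return {7: "ASIL A", 8: "ASIL B", 9: "ASIL C", 10: "ASIL D"}.get(total, "QM")

# A highly severe (S3), frequently encountered (E4) hazard the driver
# cannot control (C3) lands in ASIL D, the strictest integrity level.
print(asil(3, 4, 3))  # -> ASIL D
print(asil(2, 2, 2))  # -> QM
```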
“All these problems are new—thermal, ESD, EMS—and it is even more difficult because many function modules of AI chips for deep learning are active all the time doing forward and backward propagation and powering features like image- and speech recognition, so the power consumption is always there,” Chang said. “The chips tend to be large, and when you look at the power consumption and analyze the RTL functions for power usage, you see even more problems in control of power and thermal issues.”
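The arithmetic behind that concern is the familiar first-order power approximation. If forward and backward propagation keep the switching activity high across most of a large die, the dynamic term never gets a chance to drop; this is a textbook approximation rather than an ANSYS formula:

$$P_{\text{total}} \approx \underbrace{\alpha\, C\, V_{dd}^{2}\, f}_{\text{dynamic (switching)}} + \underbrace{I_{\text{leak}}\, V_{dd}}_{\text{static (leakage)}}$$

where $\alpha$ is the switching activity, $C$ the switched capacitance, $V_{dd}$ the supply voltage and $f$ the clock frequency.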
Fig. 3: Revenue by category for top 10 automotive chip suppliers. Source: Semiconductor Business Intelligence
Chipmakers divide into their own specialties, but testers have to cover the gamut, and verifying that a chip has no design errors during systematic testing is difficult enough, said OneSpin’s Anderson. Verifying it will perform safely during testing with random errors is a serious challenge.
“You have to make sure your product will either fail safely, or recognize the problem and correct it—even from something rare like being hit by an alpha particle,” Anderson said. “That’s rare, but anyone who works with satellites will say they expect their chip will get hit with an X-ray or alpha particle five minutes after they reach orbit. Whether it’s simulation testing, or emulation, it’s hard to test enough to know that even rare things like that won’t cause a dangerous failure.”
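One common way to build that confidence is a fault-injection campaign: random faults are injected into a simulated or emulated design, and the fraction caught by the safety mechanisms is measured. The toy sketch below is hypothetical rather than any vendor’s flow; it injects single- and double-bit flips into a parity-protected 32-bit word and reports a crude detection rate, the flavor of diagnostic-coverage number that ISO 26262 hardware metrics build on:

```python
import random

# Toy Monte Carlo fault-injection campaign, for illustration only.
# Real campaigns inject faults into RTL or gate-level simulations/emulations;
# here the "design" is a 32-bit word protected by a single parity bit,
# which detects all single-bit flips but misses double-bit flips.

WIDTH = 32

def parity(word: int) -> int:
    return bin(word).count("1") & 1

def inject(word: int, n_flips: int) -> int:
    for bit in random.sample(range(WIDTH), n_flips):
        word ^= 1 << bit
    return word

def campaign(trials: int = 100_000) -> float:
    detected = 0
    for _ in range(trials):
        word = random.getrandbits(WIDTH)
        reference_parity = parity(word)
        corrupted = inject(word, n_flips=random.choice([1, 2]))
        if parity(corrupted) != reference_parity:
            detected += 1
    return detected / trials  # crude diagnostic-coverage estimate

print(f"safety mechanism detected {campaign():.1%} of injected faults")
```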
Chipmakers and testing providers may have a long way to go to establish predictable, efficient design and verification processes for automotive chips, but that is where more of their market is shifting.
By the numbers
There is good reason to solve these issues. The automotive IC market will grow at a CAGR of 12.5% to $43.6 billion by 2021, according to a May report from IC Insights. Digital chips will make up only 7.5% of the market in 2018, but that share will rise to 9.3% by 2021, according to the report. The firm estimated 45% of 2018 revenue would come from sales of general-purpose analog and application-specific automotive analog ICs, while MCUs would account for another 23%.
Fig. 4: IC Insights’ mid-year IC market forecast predicts automotive will grow twice as fast as the rest of the market, totaling $43.6 billion in 2021.
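As a quick arithmetic check on those figures, and assuming the 12.5% CAGR is measured over the four years from 2017 to 2021 (the report excerpt here does not spell out the base year), the implied starting point is roughly

$$B_{2017} \approx \frac{\$43.6\text{B}}{(1.125)^{4}} \approx \$27\text{B}.$$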
Leading drivers are electrification, connectivity, infotainment and ADAS, according to IDC, which predicted 9.6% growth in chips for the auto business, compared with a computer segment that dropped 4% and is expected to turn flat, declining a total of 0.7% by 2022. By then, carmakers will rely on a supply chain as filled with data aggregators, integrators, software developers and chipmakers as it is with traditional parts suppliers, according to a May report from Forrester.
Fig. 5: Automotive market segment revenue, Q3 2017. Source: Semiconductor Business Intelligence
The expansion of automakers’ supply chains is an opportunity for chipmakers to expand beyond the high-risk, short-cycle competition of the mobile and computer businesses into a wider ecosystem, according to KPMG Principal Scott Jones, in a presentation at Semicon West.
Nearly all new cars will have some level of connectivity and automation within a decade, but expectations for the development and acceptance of real autonomy are imprecise enough that market penetration of SAE Level 4/5 vehicles could be anywhere between 5% and 26% in 15 to 20 years, according to the 2018 Global Automotive Supplier Study from Lazard and Roland Berger.
Testing changes
New chips are spreading through more than just the ADAS and infotainment systems, however. Every aspect of engine management is enabled by microcontrollers that feed performance data back into vehicle diagnostic systems and are spreading quickly through the entire drivetrain, adding layers of monitoring and control that boost safety and performance dramatically even without autonomy.
“There has been a lot of change in the industry that has put a lot of focus on the whole automotive supply chain,” said Derek Floyd, director of business development for Advantest. “When it gets down to test, there’s not a lot of difference. The automotive industry is focused on safety standards and traceability. They’re looking for parts per billion in terms of failure rates, but phone makers also have high standards for quality. One difference is in traceability over time. That affects the whole automotive supply chain and makes a big difference in those relationships compared to the consumer electronics world. If a part fails after five years, automakers want to be able to trace the device through the supply chain to be sure any fault can be rectified in the future. They want traceability even through 10, 20 years on the market, because no one expects a phone to last 10+ years but everyone expects a car to.”
Chipmakers selling to computer-industry OEMs might replace their top-end product with a new one every 18 to 24 months, and obsolete the original in three to four years. Chipmakers selling to automakers have to plan to keep making the same components for years, unchanged except for minor corrections.
That means preserving the designs, the fabrication facilities, the equipment used to package chipsets, and making sure any third-party IP also will be available for the full run of the product. Test companies likewise have to maintain the same test capabilities and equipment to cover new vehicles, spare parts and replacement components as vehicles go through regular maintenance, and that commitment runs at least 10 years and, in the case of a successful vehicle or component, possibly 20.
Supply-chain relationships are much longer and more interdependent, and they carry a much longer commitment to both the customer and to individual products than is typical in the computer business. That kind of long-term commitment is unusual for chipmakers, but it offers tremendous revenue potential with the right adjustments to the business plan.
“Automotive customers will request, when they’re looking at suppliers, that you show you have a development program that will be in place for 10 years,” said Floyd. “Most test equipment is used for more like 20. If you got a product approved today it would probably be for the 2021 models of cars, and that’s after a significant qualification process. So automotive obviously isn’t driving super innovation at the pace of the consumer market. And you don’t get to volume [sales] for one, maybe two or three years. When you do go to volume, though, you’ll see more sales from spares, and you’ll probably see that chip proliferate into other models from the same manufacturer. When they do replace something, it’s never all-in-one. They’ll replace one component this model year and another the next, so it takes six or seven years to completely replace the electronics in one system. It’s a very long-term commitment.”
Fig. 6: Automotive electronic systems. Source: Clemson Univ. Vehicular Electronics Laboratory
—Ed Sperling contributed to this report.
Great article. Is there a similar article on test issues for AI chips?
The ultimate resolution essentially warrants a gigantic FMEA spreadsheet, for auto chip makers and OSATs, with an emphasis on distinguishing between mission-critical (a.k.a. safety-related) and non-critical design elements, as well as the corresponding failure modes.
For example, apply HAST (Highly Accelerated Temperature/Humidity Stress Test) standards to qualify non-critical or less critical specifications and performance parameters. Apply the more persistent and, hence, more time-consuming and costly THB (Temperature Humidity Bias) testing criteria to qualify safety-related, critical design elements and control parameters.
A THB test cycle typically takes a minimum of 1,000 hours to complete, whereas HAST results are usually available within 96 to 100 hours, and in some cases even sooner. Thanks to its time/cost-saving “advantage”, HAST has gained popularity over the past two decades, especially in industries such as medical, industrial, and telecommunications.
Quite a few medical and industrial companies have completely replaced THB Test Chambers with HAST ones; the latter are often 10 times cheaper than the former…
THB usually maintains the standard 85°C/85% “temperature/relative humidity” (T/RH) test condition while applying continual electrical loads to the DUTs. In comparison, HAST testing uses a higher temperature (105°C), higher relative humidity (between 85% and 90%), and a fairly high atmospheric pressure (up to 4 atm) to establish its accelerated test condition.
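For readers who want to relate the two conditions quantitatively, the Peck humidity model is the usual starting point. The sketch below uses assumed values for the humidity exponent and activation energy (n = 3 and Ea = 0.7 eV are commonly quoted), so the resulting factor is illustrative, not a qualification claim:

```python
import math

# Peck humidity-acceleration model, often used to compare THB and HAST
# test conditions. The humidity exponent n and activation energy Ea vary
# by package and failure mechanism; the defaults here are assumptions.

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def peck_acceleration(t_ref_c, rh_ref, t_stress_c, rh_stress, n=3.0, ea_ev=0.7):
    """Acceleration factor of the stress condition relative to the reference."""
    t_ref, t_stress = t_ref_c + 273.15, t_stress_c + 273.15
    humidity_term = (rh_stress / rh_ref) ** n
    thermal_term = math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_ref - 1.0 / t_stress))
    return humidity_term * thermal_term

# THB at 85C/85%RH versus the HAST condition described above (105C/85%RH).
print(f"acceleration factor ~ {peck_acceleration(85, 85, 105, 85):.1f}x")
```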
One suggestion: For future follow-ups, if any, please consider interviewing companies such as Keysight, TÜV SÜD, TE Connectivity, Tyco, or even Bosch, who should be well versed in subject matters including but not limited to, Electronic Control Unit (ECU) testing, Hardware-in-Loop (HIL) system testing, CAN Bus testing, MODIS, Vantage PRO, VERUS, etc.
I feel we have probably been here before with aircraft electronics, and the problem is being somewhat overstated. Aircraft are in service far longer than cars, run in harsher environments (temperature-wise) and have much more serious consequences for failure.
Aircraft have also been running on autopilot for a long time – I seem to recall Concorde landing on autopilot rather than manually because it was cheaper in fuel.