Auto Chip Design, Test Changes Ahead

Which tools and methodologies will work best to ensure electronics operate for extended periods of time under harsh conditions?


The automotive industry’s unceasing demand for performance, coupled with larger and more complex processors, is driving broad changes in how electronics are designed, verified and tested.

What’s changing is that these systems, which include AI-oriented logic developed at the most advanced process nodes, need to last several times longer than traditional IT and consumer devices, and they need to work under conditions that even a year ago would have been considered unrealistic. This is forcing changes from one end of the supply chain to the other, raising questions about how this will impact time to market, cost, and which approaches ultimately will work best.

“The automotive and functional safety requirements in ISO 26262 are a whole different paradigm of chip development than we did before,” said Bryan Ramirez, strategic marketing manager at Mentor, a Siemens Business. “You have to give a different level of assurance that your product can recover and operate normally, even if there is a particle from space or some other random event—or that it will fail safely. And you have to define what that safe failure is.”

This is the basis of ISO 26262 and advanced driver-assistance systems, which are aimed at increasing safety in autonomous and semi-autonomous vehicles.

“ADAS systems represent the most severe requirements for reliability, and they have to survive 15 years or more to meet requirements for electronic components of autonomous vehicles,” according to Norman Chang, chief technologist at ANSYS. “That’s a lot different than 2.5 years for mobile. There are issues with aging, NBTI and electromigration, which can be thermal-related. You can also see degradation in performance. It usually takes a lot of time to totally break down, but performance can be reduced along the way.”

It isn’t yet clear which approaches will work best under these conditions, however. For one thing, much of this technology is brand new. There is no history to show how a 7nm logic chip will behave under extreme environmental conditions over a period of years. That means reliability needs to be simulated, defects and potential defects need to be discovered with various flavors of verification, and test strategies need to be developed early enough in the design process to make sure nothing falls through the cracks.

“Adding heat and durability to existing design/test flows increases the time required and the cost,” according to Anil Bhalla, senior manager of marketing and sales at Astronics Test Systems. “The automotive test flow is a function of complexity. Where do you want to test? Either there is more focus on system-level test, or you look at what you can shift around in the flow and what you want to catch that can cause failures. If you’re characterizing a device for -40° to 150°C, most of your effort is around qualification, not production. So do you do that in the flow, or move some to final test, or can some of that happen in wafer-level test?”

More time and more cost come from the need to verify SoCs or components that may include the latest 7nm node sizes working in concert with components built on a host of other nodes, sometimes several generations older, Bhalla said.

Increasingly, OEMs and Tier-1 suppliers are demanding that automotive electronics have zero faults, which goes well beyond making sure that nothing is broken. It means a chip or module or system won’t do anything that will make a vehicle unsafe, even when something random and unexpected happens.

“You have to build in the ability to self-correct or fail safely,” said Derek Floyd, director of business development for Advantest. “You will need to validate the technology to get to level 5. That means power, analog, microcontrollers, sensors. You need to test LiDAR systems and safety sensors. And then you have to figure out which version you’re using for which product. On top of that, with automotive you need two or three suppliers.”

That complicates matters further, because various components from multiple vendors will behave differently under different operating conditions. Time is not always kind to chips or the systems in which they operate, but it’s far less kind when a car is baking in the sun or exposed to sub-zero temperatures.

“You can’t assume the part you create now will always operate correctly,” said Mentor’s Ramirez. “Silicon products will develop faults over time—whether it’s just from aging or particles from space or whatever. One of the things you have to do is provide a Failure-in-Time (FIT) rate showing how often that kind of transistor fails (in a billion hours of operation). But what it means to fail could differ depending on the [Automotive Safety Integrity Level] ASIL. You might have elements that operate at ASIL A in one component and ASIL D in another, so the demand and the testing to prevent failure would be different and you have to take that into account. The last thing you want would be for a particle from space or something random to cause an airbag to go off in your face.”
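As a rough illustration of the arithmetic behind a FIT rate (1 FIT is one failure per billion device-hours), the sketch below converts a hypothetical FIT target into expected failures over a vehicle’s service life. The rate, operating hours and fleet size are invented for the example, not drawn from any standard or datasheet.

```python
# Hypothetical FIT-rate arithmetic; the numbers below are illustrative only,
# not taken from ISO 26262 or any vendor datasheet.

FIT_DENOMINATOR_HOURS = 1e9          # 1 FIT = 1 failure per 10^9 device-hours

def expected_failures(fit_rate: float, device_hours: float) -> float:
    """Expected number of random hardware failures for a given FIT rate."""
    return fit_rate * device_hours / FIT_DENOMINATOR_HOURS

# Assume a 10 FIT component operating 8 hours/day over a 15-year vehicle life.
hours_per_device = 15 * 365 * 8                 # ~43,800 operating hours
per_device = expected_failures(10, hours_per_device)

# Scale across a hypothetical fleet of 1 million vehicles.
fleet = per_device * 1_000_000
print(f"Per device: {per_device:.6f}  Fleet of 1M: {fleet:.0f} expected failures")
```

Even a single-digit FIT rate translates into hundreds of expected field failures at fleet scale, which is why the standard forces designers to define what a safe failure looks like rather than assuming failures won’t happen.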

The nightmare for both chipmakers and automakers is the Toyota sudden-acceleration problem, which generated more than 6,000 complaints and crashes that resulted in 89 deaths. Toyota tried to settle in 2013 for $3 million after analysis showed software in a faulty electronic throttle control could open the throttle all the way, causing the car to accelerate more quickly than a driver could compensate for, even by applying the brakes with full force. The settlement ultimately cost Toyota $1.2 billion.

“That’s part of the need for documentation,” Ramirez said. “It’s all about if the OEM can provide proof that they did everything they were required to do.”

Rearranging resources
Many companies making the leap into automotive have had to put a greater emphasis on verification and test, and they have been forced to rearrange resources, according to John Bongaarts, senior product manager for semiconductor test software at National Instruments.

“We’ve seen customers making a number of changes to functionalize verification,” Bongaarts said. “In the past, a large company might have had disparate design centers producing individual components and having testing done separately by just one person on each team. Now we see functionalization and standardization of that testing that puts testing on a team that handles a broad portfolio of products. We’ve seen other adaptations, too—companies using ATE systems for verification at times, for example.”

Jim Hogan, managing partner of Vista Ventures, LLC and a leading investor in EDA companies and technologies, calls the current era Verification 3.0—a stage defined by a hybrid of existing methodologies combining simulation, emulation and formal verification. Verification 1.0, according to Hogan, revolved around simulation running on individual workstations; Verification 2.0 added formal verification and emulation in discrete roles.

The elevated stakes are driving changes in the roles and priorities of those methodologies. But the integration among the tools that apply those methods, and the efficiency of the flow in applying them, is still rough enough that what Verification 3.0 will actually look like remains very much up in the air, Ramirez said.

More formal
The additional emphasis on formal verification makes sense, however, because of the relative ease of translating between the requirements of standards such as ISO 26262 and the assertions that are central to formal verification, according to Dave Kelf, vice president of marketing for Breker Verification Systems.

Formal is also a good match because of its ability to provide mathematical proof and documentation that a particular function is present and will work as expected, and its ability to examine all the requirements of a standard and compare them against every possible response from the specific block of IP being verified, Kelf said.

“Formal creates a database of all the possible states this design can get into and all the ways to transition to the next state, on purpose or by some random occurrence. That’s why it’s so powerful for automotive—you can guarantee something won’t happen without relying on exhaustive simulation or emulation testing, where you get a model to run through every scenario and watch to see what happens,” Kelf said. “You can get up to testing a million gates, or 10,000, 20,000, 30,000 storage elements, but the number of combinations and the database are a limitation, so you end up testing blocks of IP that go together to make up an SoC rather than the whole system at once.”
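A minimal sketch of the state-space exploration Kelf describes, using a hypothetical two-way traffic-light controller as the “design”: the code enumerates every reachable state and checks that an unsafe state can never be reached, rather than sampling behaviors the way simulation does. The model and the property are invented for illustration; real formal tools operate on RTL and assertions, not Python.

```python
from collections import deque

# Toy model of exhaustive state exploration. The "design" is a hypothetical
# two-way traffic-light controller; the safety property is that both
# directions are never green at the same time.
def next_states(state):
    ns, ew = state
    # Controller cycle: the green direction steps toward red, and the other
    # direction only turns green once both are red.
    if ns == "green":                  return [("yellow", "red")]
    if ns == "yellow":                 return [("red", "red")]
    if ns == "red" and ew == "red":    return [("red", "green")]
    if ew == "green":                  return [("red", "yellow")]
    if ew == "yellow":                 return [("green", "red")]  # back to start
    return []

def is_unsafe(state):
    return state == ("green", "green")

def prove_safety(initial):
    """Breadth-first enumeration of every reachable state (the 'database').

    If no reachable state violates the property, it is proven for this model,
    with no reliance on picking the right scenarios to simulate."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if is_unsafe(state):
            return False                     # counterexample found
        for nxt in next_states(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True                              # exhaustive, so the property holds

print(prove_safety(("green", "red")))        # True: both-green is unreachable
```

The `seen` set is also where the scaling problem shows up: for a real SoC the number of reachable states explodes, which is why formal is typically applied block by block rather than to the whole chip.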


Fig. 1: ASIL levels and applications. Source: MIPS

Others agree, but with some caveats. “Formal lets you go through all the possible scenarios, so it can also help you get to 100% code coverage faster [than simulation], where you have to run through all these scenarios to collect the result,” Ramirez said. “The challenge is how to scale it. You can break a product up into smaller hierarchical blocks and verify each individually, but these automotive chips are becoming bigger and more complex faster than formal is becoming more efficient.”

There has been enough standardization of assertion formats, and enough tools are available to help convert requirements to assertions, that the verification team doesn’t have to write every bit of every set of assertions by hand, according to Tom Anderson, technical marketing consultant at OneSpin Solutions.

What works best where
“It’s a significant amount of work to have to build an alternate model of a design using assertions,” Anderson said. “There’s a great benefit in being able to test every possible state, but there’s no question it’s a lot of work. And you still have to verify a byte at a time. With simulation you can cover a much wider base, but you also risk missing a lot of the corner cases that might turn out to be important.”

Simulation can cover a much wider part of a design than formal, but often uses more system resources due to the need to keep throwing new scenarios and possible faults at the model while hoping not to miss a corner case or special circumstance that could end up as a major fault, Anderson said.
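A hedged sketch of the gamble Anderson describes: random stimulus covers a wide swath of behavior, but each run only samples the input space, so a single-value corner case can easily slip through. The 16-bit “design” and the buggy input value below are hypothetical.

```python
import random

# Hypothetical sketch of the coverage gamble in random simulation: a bug that
# only triggers on one specific 16-bit input value.
BUGGY_INPUT = 0xC0FE          # the one corner case (invented for this example)

def dut_response(value: int) -> bool:
    """Stand-in for the design under test: misbehaves only on the corner case."""
    return value != BUGGY_INPUT

def simulate(runs: int, seed: int = 0) -> bool:
    """Throw random stimulus at the model; report whether the bug went unseen."""
    rng = random.Random(seed)
    return all(dut_response(rng.randrange(0, 1 << 16)) for _ in range(runs))

# 1,000 random vectors exercise a lot of behavior but sample only ~1.5% of this
# input space, so the corner case is usually missed; an exhaustive (formal)
# check would enumerate all 65,536 values.
print("Bug missed after 1,000 runs:", simulate(1_000))
```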

Portable Stimulus and other high-level, abstracted approaches to defining assertions and translating requirements can make formal verification simpler, quicker and easier to manage, but not enough to counter the inherent, practical size limit on formal verification, Kelf said.

“You’re creating a database with every state a chip can get into and then you’re asking questions about that chip,” Kelf said. “Doing that across an entire chip is basically impossible. Most verification that people do is still with simulation and emulation. Formal was mostly used for smaller, very specific purposes, but it’s becoming more general-purpose.”

Portable stimulus helps automate and scale the verification process by taking a high-level view of the whole process and creating models that allow requirements to be translated more easily into formats required by formal, simulation and emulation test processes, Kelf said.
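As a loose illustration of that retargeting idea, the sketch below renders one abstract scenario into both a directed simulation test and a set of ordering properties a formal tool could check. This is not Accellera Portable Stimulus syntax; the scenario steps and helper names are invented for the example.

```python
# Hypothetical illustration of the portable stimulus idea: one abstract
# scenario model, rendered into different verification targets.
SCENARIO = ["power_on", "configure_dma", "start_transfer", "check_completion"]

def to_simulation_test(scenario: list[str]) -> str:
    """Render the scenario as a directed simulation test (pseudo-testbench calls)."""
    return "\n".join(f"call_driver('{step}')" for step in scenario)

def to_formal_properties(scenario: list[str]) -> list[str]:
    """Render the same scenario as ordering properties for a formal flow."""
    return [f"assert: '{a}' completes before '{b}' starts"
            for a, b in zip(scenario, scenario[1:])]

print(to_simulation_test(SCENARIO))
for prop in to_formal_properties(SCENARIO):
    print(prop)
```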

A lack of integrated toolsets capable of managing all three methods has made progress more difficult, but test-equipment and software developers have been focused during the past year or two on integrating capabilities well enough to allow verification to become a flow applied several times during the development cycle, not just once at the end.

In 2016, a Mentor-sponsored survey of verification methods and results showed that adoption of simulation-based methods such as code coverage, assertions and functional coverage, which had been growing quickly during the mid-2000s, had flattened out. The report suggested this was possibly due to the scaling limitations of simulation using these techniques.

Use of automated formal applications, which take over tasks considered very labor-intensive when handled manually, grew 62% between 2012 and 2014, while automated formal property checking grew 31% between 2014 and 2016. That implies a shift toward formal verification, but not at the cost of existing use of simulation.


Fig. 2: ASIC/IC dynamic verification adoption trends. Source: Mentor, a Siemens Business


Fig. 3: Verification engineers vs. design engineers. Source: Mentor, a Siemens Business

It is likely adoption of those techniques will flatten as well, as growth in the complexity and size of the ICs being verified outpaces the growing maturity of automated formal verification products.

The study also shows the number of engineers working on verification growing at 10.4%, while the number of designers grew only 3.6%—indicating that, efficient or not, scalable or not, demand for verification continues to grow at a healthy pace.

Growth in demand is only one piece of the puzzle, however.

“The big problem is how do we get these solutions and methodologies to scale to fully address functional safety testing of big chips,” Ramirez said. “The problems are being solved at the block level. Will they really ever work effectively with a 2-billion-gate ADAS chip? How do you address safety across the system, not just the chip? There are a lot of complexities that make it difficult to scale. I just don’t see right now how they’re going to accomplish that.”

Related Stories


Chip Aging Becomes Design Problem

Auto Chip Test Getting Harder



