Autonomous Vehicles: IC Design Flow Walk Through

While basics of design remain the same, automotive ICs rely even more heavily on advanced tools.


Automotive applications, particularly those related to AI and computer vision, are a significant driver of the current semiconductor boom. Established companies are mostly thriving, it’s true, but perhaps more interesting are all the new faces in the game.

As usual, Mentor CEO Wally Rhines is one of the great sense-makers of all this activity. Wally has been making the rounds at various industry events with one of his usual comprehensive decks that wraps hard data around the hunch many of us have that times are very good, particularly for startups. One anecdote among many stands out: VC funding of fabless AI companies exceeded $1.3 billion in Q2 of this year, an increase of nearly 4.7x over the previous quarter and more than 9x the average quarterly rate of the last six years. A version of the presentation is available here; also watch Wally’s recent interview with Dan Hutcheson of VLSI Research.

Given the froth, it seems a good time for a back-to-basics survey of the key design and verification steps in automotive IC design, which, despite some nuances, mostly follows flows familiar to those of us who have spent careers around the industry. Of course, Mentor, a Siemens Business, has something to say about nearly all of these design steps and trends.

What’s more exciting to me is to consider this portfolio alongside the breadth of other Siemens offerings, which truly can be said to span chip-to-city considerations of autonomous vehicles. I talk through some of these issues on a recent episode of our Future Car podcast, which we just launched to feature conversations with people working across these domains, geographies and industry segments.

For now, here’s a quick overview of IC design for autonomous applications, at least as I see it. Am I right? What am I missing?

The big picture, and the importance of functional verification
For all the complexity, at a high level, the basics of the IC design and verification flow remain the same as always:

  • at the front-end of the flow, the main task is to specify the chip and generate the gates, ultimately made up of the tens of billions of transistors on today’s most advanced chips
  • at the back-end, it’s about designing for robust tests and volume manufacturing

This flow depends as never before on advanced software tools supporting model-based, abstracted design and integrated data flows.

Successful automotive IC projects are driven throughout the development cycle by requirements that define features and functionality. Those requirements translate into functional descriptions, typically using languages such as Verilog or C/C++. Virtual test benches are created to verify the functionality of the design. Test structures are automatically inserted at this early stage to enable testing after fabrication.
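To make that concrete, here is a toy sketch of what a C++ functional description plus a self-checking virtual test bench might look like. The block, values and checks are purely illustrative and not drawn from any real design:

```cpp
// Hypothetical C++ functional model of a small datapath block (an 8-bit
// saturating adder) paired with a minimal self-checking "virtual test bench".
#include <cstdint>
#include <cstdio>

// Untimed functional model derived from a requirement such as:
// "the output shall clamp to 255 instead of wrapping on overflow."
uint8_t saturating_add(uint8_t a, uint8_t b) {
    unsigned sum = static_cast<unsigned>(a) + b;
    return sum > 255u ? 255u : static_cast<uint8_t>(sum);
}

// Self-checking test bench: exercises corner cases and reports pass/fail.
int main() {
    struct { uint8_t a, b, expected; } tests[] = {
        {0, 0, 0}, {100, 100, 200}, {200, 100, 255}, {255, 255, 255},
    };
    int failures = 0;
    for (const auto& t : tests) {
        uint8_t got = saturating_add(t.a, t.b);
        if (got != t.expected) {
            std::printf("FAIL: %d + %d -> %d (expected %d)\n",
                        t.a, t.b, got, t.expected);
            ++failures;
        }
    }
    if (failures) std::printf("%d test(s) failed\n", failures);
    else          std::printf("all tests passed\n");
    return failures ? 1 : 0;
}
```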

After IC layout comes physical verification via design rule checks (DRCs), which ensure that the chip layout satisfies all the necessary parameters. Next, design-for-manufacturing (DFM) tools are used to manage yield, with techniques such as substituting higher-yield cells or fault-tolerant vias where possible. The IC is manufactured at the fab, the IC samples are tested, and yield analysis is performed to increase the number of functioning chips per wafer.

The keys to successful automotive IC creation are quick time-to-market, especially in the fast-moving self-driving car market, and a product that reliably meets specifications over its lifespan, which in the case of vehicles can be a decade or more. Chips are expected to function in a wide range of harsh environmental conditions over years of operation. And since IC performance specifications can degrade over time, IC vendors must verify that those specifications stay within an accepted tolerance for the life of the IC.

IC designers must also respond to consumers, who, thanks to smart phones and consumer electronics, now expect a steady drumbeat of new automotive features. (A few years back my colleague Sjon Moore wrote a good paper describing this trend.) And as the driving functions are gradually handed off from humans to automated systems, the specter of liability looms as perhaps the greatest challenge of all. Chip designers must not only ensure functional safety compliance of their ICs, but they also need to have an eye on how their chips will be integrated into automated driving systems.

High-level synthesis, designing high-compute, low-power hardware for autonomous vehicles
The automotive industry is pushing IC design as never before and is arguably one of the biggest drivers of chip innovation today. Tier 1 suppliers and even OEMs are exerting more influence on hardware and software design because existing off-the-shelf solutions fall far short of what’s required for autonomous driving. In 2017, Elon Musk said that Tesla was designing its own chips to run its Autopilot AI software.

Consider the increasingly complex combination of machine learning algorithms and sensors that comprise the autonomous vehicle nervous system, which all major car companies are seeking to create. This system is computationally expensive, requiring billions of operations per second for highly responsive, high-bandwidth real-time data processing. This system also must fit within a low-power envelope, possibly even under 100 watts. And these systems must have a certain degree of openness, too, since, on top of such table-stakes requirements, each supplier or carmaker also wants to add their “secret sauce” to have a differentiated solution.

Computer vision is a central design issue as image-processing data rates continue to explode. A typical autonomous system might include up to six HD cameras, each generating 30 frames/second. As a back-of-the-envelope calculation: one 1920×1080-pixel frame x 30 frames/second x 3 colors (RGB) works out to 186,624,000 color samples per second, or roughly 186 MB/s at 8 bits per channel, from a single camera. Now consider that more radar, LIDAR and cameras will inevitably be added in the coming years, and that camera resolutions and frame rates are expected to see a Moore’s Law-like doubling every few years.
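Here is that arithmetic written out, using only the assumptions above (six cameras, 8 bits per color channel):

```cpp
// Back-of-the-envelope camera bandwidth, based on the assumptions in the text:
// six 1920x1080 cameras, 30 frames/s, 3 color channels, 8 bits per channel.
#include <cstdio>

int main() {
    const double width = 1920, height = 1080;
    const double fps = 30, channels = 3, cameras = 6;

    double samples_per_cam = width * height * fps * channels;  // ~186.6 million/s
    double bytes_per_cam   = samples_per_cam;                  // 8 bits per channel
    double total_bytes     = bytes_per_cam * cameras;

    std::printf("per camera: %.1f M color samples/s (~%.1f MB/s)\n",
                samples_per_cam / 1e6, bytes_per_cam / 1e6);
    std::printf("six cameras: ~%.2f GB/s raw, before radar and LIDAR\n",
                total_bytes / 1e9);
    return 0;
}
```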

Meeting these sorts of design challenges requires nimble high-level synthesis (HLS) tools that allow designers to describe their ideas in high-level languages like C++ and SystemC and then automatically generate RTL code that can be consumed by downstream tools. Designers can thus quickly analyze many implementations and architectures, making tradeoffs to meet power, performance, and area constraints.
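As a sketch of the kind of synthesizable-style C++ an HLS flow might start from, consider a 3x3 image convolution written with fixed loop bounds and static arrays. The function is illustrative only, and the tool-specific directives that would control unrolling, pipelining and memory architecture are omitted:

```cpp
// Illustrative HLS-style C++: a 3x3 convolution over an 8-bit grayscale image.
// Fixed loop bounds and static array sizes let an HLS tool explore different
// unrolling/pipelining choices when generating RTL.
#include <cstdint>

constexpr int WIDTH  = 1920;
constexpr int HEIGHT = 1080;

void conv3x3(const uint8_t in[HEIGHT][WIDTH],
             uint8_t out[HEIGHT][WIDTH],
             const int16_t kernel[3][3],
             int shift)  // right-shift to renormalize the kernel sum
{
    for (int y = 1; y < HEIGHT - 1; ++y) {
        for (int x = 1; x < WIDTH - 1; ++x) {
            int32_t acc = 0;
            for (int ky = -1; ky <= 1; ++ky)
                for (int kx = -1; kx <= 1; ++kx)
                    acc += kernel[ky + 1][kx + 1] * in[y + ky][x + kx];
            acc >>= shift;
            // Saturate to the 8-bit output range.
            out[y][x] = acc < 0 ? 0 : (acc > 255 ? 255 : static_cast<uint8_t>(acc));
        }
    }
}
```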

Functional safety
ISO 26262 is the automotive adaptation of the more general IEC 61508 functional safety standard, covering electrical and electronic (E/E) systems in road vehicles. It is applied throughout the design, development and manufacturing cycles, and increasingly mediates relationships between automotive companies and their suppliers. ISO 26262 targets vehicles up to 3,500 kilograms and seeks to minimize the potential hazards caused by malfunction or failure of the embedded E/E system.

ISO 26262 assumes that there is a clear line from the technical safety requirement to be fulfilled by an item (that is, a system that implements a function at the vehicle level for which ISO 26262 applies), through the implementation, verification and validation of that item. Autonomous systems and cars that think and learn for themselves create a conundrum for ISO 26262 as currently written. That’s because such systems, which are based on neural networks or other deep learning techniques, break the fundamental linkage between requirement and implementation. The standards committee is responding with an update that will accommodate the so-called safety of the intended functionality (SOTIF) of these “smart” E/E systems; see ISO/PAS 21448 for more information. Mentor technical staff are on the committee updating the standard and are available to provide a briefing on what to expect.

One particular area of ISO 26262 that has lacked adequate solutions deserves a call-out here: how to deal with random hardware faults. The standard requires that users understand how a design could fail from a random hardware fault, mitigate those potential failures, and then prove through metrics that the design is sufficiently safe. This is all to ensure that if an IC fails, it does so safely.
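One of the metrics behind that proof is the standard’s single-point fault metric (SPFM), which compares the failure rate of unmitigated faults against the total safety-related failure rate (targets run from roughly 90% for ASIL B up to 99% for ASIL D). A minimal sketch of the bookkeeping, with FIT numbers invented purely for illustration:

```cpp
// Sketch of single-point fault metric (SPFM) bookkeeping:
// SPFM = 1 - (sum of uncovered safety-related failure rates)
//            / (sum of all safety-related failure rates).
// The failure modes and FIT values below are made up for illustration.
#include <cstdio>
#include <vector>

struct FailureMode {
    const char* name;
    double fit;            // failure rate in FIT (failures per 1e9 device-hours)
    bool   safety_related;
    bool   covered;        // detected/controlled by a safety mechanism
};

int main() {
    std::vector<FailureMode> modes = {
        {"ALU stuck-at",         20.0, true,  true },
        {"Config register flip",  5.0, true,  false},  // residual: not covered
        {"Debug logic fault",    10.0, false, false},  // not safety-related
        {"Bus parity failure",   15.0, true,  true },
    };

    double total_sr = 0.0, uncovered_sr = 0.0;
    for (const auto& m : modes) {
        if (!m.safety_related) continue;
        total_sr += m.fit;
        if (!m.covered) uncovered_sr += m.fit;
    }
    double spfm = 1.0 - uncovered_sr / total_sr;
    std::printf("SPFM = %.1f%%\n", spfm * 100.0);  // 87.5% for this toy data
    return 0;
}
```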

IC automotive test technology (the big picture)
Zero-defect, as any auto engineer will tell you, is an ongoing journey, not a final destination. Automotive IC companies have to show where they are on the path and what they are doing to continually improve quality and speed up failure analysis and diagnosis. And they must keep pace as acceptable failure rates drop from parts-per-million to parts-per-billion levels.

For automotive ICs, final test (excluding the wafer level probe test) accounts for a high percentage of overall product cost. Depending on the chip configuration and die size, this testing can be anywhere up to 30% of the total product cost.

ICs get more complex with every generation and technology node. These chips often control safety-critical functions, and test software generally evolves in one direction: more tests are added with every new product release, and none are ever taken out. So how do you make this cost competitive? One way is to use statistical data and correlation to remove tests, an approach generally seen as high-risk and low-reward.

Another option is to reduce fixed-cost test overhead, rather than just spreading it about. Doing this requires balancing IP tested on-chip using in-line self-test against what has to be run on the tester, burning expensive seconds of tester time. Getting the balance right means using a fault coverage simulator, which analyzes all the tests you are running, sees what kind of test coverage you have, and, importantly, identifies all the useless legacy test patterns.
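A simplified way to picture what that analysis enables: given which faults each pattern detects, drop any pattern whose faults are already caught by the patterns you are keeping. The greedy pass below is a toy illustration of the idea, not the algorithm any particular tool uses:

```cpp
// Illustrative pruning of redundant test patterns from fault-coverage data.
// Each test is modeled as the set of fault IDs it detects; a test is dropped
// if every fault it detects is already covered by the tests kept so far.
#include <cstdio>
#include <set>
#include <vector>

int main() {
    // Faults detected by each legacy test pattern (hypothetical data).
    std::vector<std::set<int>> tests = {
        {1, 2, 3}, {2, 3}, {4, 5}, {3, 4}, {6},
    };

    std::set<int> covered;
    std::vector<int> kept;
    for (size_t i = 0; i < tests.size(); ++i) {
        bool adds_coverage = false;
        for (int f : tests[i])
            if (!covered.count(f)) { adds_coverage = true; break; }
        if (adds_coverage) {
            covered.insert(tests[i].begin(), tests[i].end());
            kept.push_back(static_cast<int>(i));
        }
    }
    std::printf("kept %zu of %zu patterns, covering %zu faults\n",
                kept.size(), tests.size(), covered.size());
    return 0;
}
```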

This kind of simulation at an abstracted level can lead to significant reductions in test time (and reported cost savings in the tens of millions of dollars per year). A fault coverage simulator also identifies potential weaknesses in the test program, allowing even greater optimization of reliability metrics. And the benefits of this type of fault simulation are especially apparent in the mixed-signal domain; in automotive, analog is where the majority of field failures occur. And once the test software is optimized for coverage, it can then be digitally compressed by 10x or even 100x for further test time reduction.

Automotive-grade reliability and design rule checking
Electrical overstress (EOS) is one of the leading causes of IC failures across all semiconductor manufacturers, and is responsible for a large share of device failures and product returns. The use of multiple voltages increases the risk of EOS, so IC designers need extra diligence to ensure that thin-oxide digital transistors have no direct or indirect paths to high-voltage portions of the design. And with autonomous vehicles, EOS is especially relevant (within the scope of ISO 26262 and beyond), since there is simultaneously more semiconductor content in vehicles and more stringent safety and reliability requirements on that content.

What’s needed are sophisticated reliability checking techniques, a unified rule deck, and an integrated debugging environment to help designers eliminate the sources of EOS failures while also achieving the accurate, comprehensive verification necessary to ensure a repeatable and reliable design.
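To picture the kind of check involved, here is a much-simplified sketch: walk the netlist from a high-voltage supply through pass elements and flag any thin-oxide device terminal that becomes reachable. The netlist and rules here are hypothetical; production reliability checks are far richer:

```cpp
// Simplified EOS-style connectivity check: starting from a high-voltage net,
// follow pass-through elements (resistors, switches) and flag thin-oxide
// device terminals that become reachable. Purely illustrative.
#include <cstdio>
#include <map>
#include <queue>
#include <set>
#include <string>
#include <vector>

int main() {
    // net -> nets reached through pass elements (hypothetical netlist)
    std::map<std::string, std::vector<std::string>> passthrough = {
        {"VDD_HV", {"node_a"}},
        {"node_a", {"node_b"}},   // e.g., through a series resistor
        {"VDD_LV", {"core_in"}},
    };
    // nets tied to thin-oxide (core) device terminals
    std::set<std::string> thin_oxide_nets = {"node_b", "core_in"};

    // Breadth-first search from the high-voltage supply.
    std::queue<std::string> work;
    std::set<std::string> seen;
    work.push("VDD_HV");
    seen.insert("VDD_HV");
    while (!work.empty()) {
        std::string net = work.front(); work.pop();
        if (thin_oxide_nets.count(net))
            std::printf("EOS risk: thin-oxide terminal on %s reachable from VDD_HV\n",
                        net.c_str());
        for (const auto& next : passthrough[net])
            if (seen.insert(next).second) work.push(next);
    }
    return 0;
}
```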

Our amusing current moment
Consider three typical ICs in a sensor-fusion ECU, each with very different design requirements and different functions within the system. There is an astonishing amount of complexity, yet overall design cycles cannot stretch much beyond what they were a few years ago, given the competition and limits to what consumers are willing to pay, even for advanced autonomous features.

For example, the overall sensor-fusion system these chips will power can itself be exercised rigorously by feeding simulated, physics-based sensor data into a sensor-fusion platform to test and improve the sense-compute-actuate algorithms that will drive the car, even when faced with difficult or just plain odd driving scenarios.

And there are countless odd scenarios. When Volvo tested its self-driving technology in the Australian outback, its system was befuddled by kangaroos jumping across the road. To the sensors, which used the ground as a reference point, the animals appeared farther away in mid-flight, then closer when they landed. At first blush this sounds almost frivolous, but kangaroos cause more accidents than any other animal in Australia.

At each stage of this problem, from the design of chips and the overall sensor fusion systems to the modeling of full vehicle behavior and dynamics, and tweaking of the software algorithms, Siemens PLM and Mentor design tools can help. Few if any other companies in the world can make a similar claim.

For more information, check out the whitepaper Safety first – On meeting the only self-driving requirement that really matters.


