Traditional simulation is running out of steam with autonomous vehicles and other complex systems. Now what?
With system design complexity set on a steady upward trajectory, there are situations in which traditional simulation just can’t keep up.
The alternative—and one being used by Google, Uber, Ford, GM, Volvo, Audi and others with autonomous vehicles—is to test cars on the road and collect data for later analysis.
“They’re not simulating, they’re just doing it all in the real world because they can’t simulate those billions and billions of things going on,” said Simon Davidmann, CEO of Imperas Software. “There is absolutely a point where you can no longer simulate what’s going on because there’s just too much complexity and too much data. A car may have 50 processors running, each one running at a certain speed. You can do a simulation of 50 of those things, but it does get very complicated and complex. If you can build it, you can simulate it, but at some point you might as well use real world stuff. If you’ve got it going enough, turn to the real thing, go drive it and see what happens. But make sure you have a crash helmet on.”
As autonomous vehicles progress toward reality, the need to validate and test these complex systems must take place at multiple levels.
“If you want to test all of the software across all of the processors at the same time, to do it at 100% accuracy just isn’t possible,” said Neil Parris, senior product manager at ARM. “You can’t run a simulator fast enough. Maybe you can do some stuff on emulation, but speed is going to be a challenge there. So what a lot of engineering teams will do is run it on some kind of fast model like a programmer’s view model, so you’ve got a software view where you can test all the key software components running within that platform and check the interactions of the pieces of software for a high-level simulation view.”
SoC designs are highly scalable, with multiple cores and graphics processors. With this kind of heterogeneity, it is relatively easy to overload the simulator with simulations that take days, rather than hours, to run.
“Simulation time also depends on the description abstraction level,” said Zibi Zalewski, general manager of the Hardware Division at Aldec. “RTL-behavioral (synthesizable) will simulate faster than structural gate level. Functionally control logic/FSM will simulate faster than computation intensive data paths (DSPs).”
As such, simulators for big projects are commonly used for module- and IP-level verification, while hardware-based verification methodologies cover the SoC system level. But today even sub-systems are becoming advanced enough that simulation can take a day or longer, complicated by the fact that the scope of testing is increasing, too, he said. And while UVM verification, which has become a standard for big ASIC and FPGA projects, is a great tool in the verification engineer's hands, it requires significant simulation power.
Fortunately, there are multiple options available in case simulation becomes a bottleneck because of design and testing environment complexity, according to Zalewski. “When it comes to sub-system verification where a UVM testbench is testing the IP module, the natural step is to use simulation acceleration. With the latest emulation compilers supporting SCE-MI modes like function-based and pipes-based, it becomes almost seamless to move from UVM simulation to UVM acceleration. Such migration offloads the simulator and keeps the same testing environment. And thanks to shortening simulation time, it is possible to increase the scope and coverage of testing, so that problems not even visible during traditional simulation are now in range to detect and fix.”
At the SoC level, approaches such as hybrid emulation can be used, which allows the combination of a virtual platform or SystemC simulator with an emulator, he suggested. Such a verification methodology splits the SoC between those two tools and provides the verification environment even when there is only partial RTL code available.
Sizing up the problem
Still, defining what ‘too big’ means is not straightforward. It varies by design, by application, and by company.
“’Too big’ could be based on the gate count of the design, but in terms of simulation a better measure is probably the time required to run the whole test suite,” Zalewski noted. “There may be several, or even hundreds of, tests to execute, and when they take more than an overnight run the verification team is left waiting for results, delaying the whole project schedule.”
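Zalewski's overnight-run yardstick is easy to quantify with back-of-the-envelope arithmetic. The sketch below is purely illustrative; all of the test counts, runtimes, and license figures are invented numbers, not data from any real project.

```python
# Does the regression fit in an overnight window?
# All figures below are hypothetical.
tests = 400                  # tests in the regression suite
avg_minutes_per_test = 45    # average simulation time per test
parallel_licenses = 20       # simulator jobs that can run at once
window_hours = 12            # the "overnight" budget

total_hours = tests * avg_minutes_per_test / 60 / parallel_licenses
print(f"regression takes {total_hours:.1f} h, "
      f"fits overnight: {total_hours <= window_hours}")
```

With these made-up numbers the regression needs 15 hours against a 12-hour window, which is exactly the point where teams start looking at acceleration or emulation.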
Some designs require simulation of post-place-and-route netlists with timing annotation, which can make even medium-sized designs ‘too big’ in terms of gate count. In this case, it might be more useful to validate the design by replaying RTL simulation test vectors at speed in the target chip. That, in turn, shortens the testing time to seconds and provides a real system-like verification environment.
Frank Schirrmeister, senior group director, product management in the System & Verification Group at Cadence, suggested the approach comes down to the level of abstraction. At the level of complexity seen in automotive today, simulation is not done at the same level of accuracy as would be used for RTL execution.
He pointed to the whole-network simulator Boeing developed for one of its planes. That involves a very complex task in which different system components are abstracted away, and suddenly the server becomes the node in the system, Schirrmeister said. “You are now touching on the levels of simulation as you would simulate an IT network, such as between the buildings on a campus—what bandwidth is going from building to building, how much fiber needs to be put in the ground, how do the switches need to be configured, what data traffic do the switches need to be able to handle? Those are the type of IT questions asked, and now you are raising the level of abstraction.”
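At that level of abstraction, the simulation reduces to capacity questions over aggregate traffic rather than per-packet behavior. A minimal sketch of the idea follows; the building names, capacities, and demand figures are all hypothetical.

```python
# Abstracted network model: each link carries aggregate traffic
# between buildings, and per-packet detail is abstracted away.
# All capacities and demand figures are invented for illustration.
links = {
    ("bldg_A", "bldg_B"): {"capacity_gbps": 10, "demand_gbps": 7.5},
    ("bldg_B", "bldg_C"): {"capacity_gbps": 10, "demand_gbps": 12.0},
    ("bldg_A", "bldg_C"): {"capacity_gbps": 40, "demand_gbps": 18.0},
}

def utilization(link):
    return link["demand_gbps"] / link["capacity_gbps"]

# Flag links that need more fiber in the ground (utilization > 80%).
overloaded = [pair for pair, link in links.items()
              if utilization(link) > 0.8]
for src, dst in overloaded:
    print(f"{src} -> {dst}: needs more capacity")
```

The point of the abstraction is that a model like this answers the sizing question in microseconds, whereas a packet-accurate simulation of the same network could run for hours.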
Perhaps the most complex engineering challenge of the past couple of years involves vision processing. Because it is so pervasive in the automotive market, as well as other markets such as virtual reality and gaming, its growth has a direct bearing on simulation.
“We all take [vision] so much for granted because we do it all the time, but the reality is that it is excruciatingly complex,” said Mike Thompson, senior manager of product marketing for the DesignWare ARC Processors at Synopsys. “The vision processors that we are building are far and away the most complex I’ve been involved with in 35 years in this business, and it is just fascinating to watch the underlying capabilities develop.”
But vision is only one component of an autonomous vehicle. As the electronic complexity of a car increases, interactions become potentially more problematic.
“To some degree, like in a car, you don’t really necessarily have to look at the ABS braking system in relation to the throttle control system, even though there are obviously some links there,” said Thompson. “You don’t want to be flooring the car while you are applying the brakes. But there certainly is a desire to look at how the ECUs interact with the ABS system, and these are all becoming complex systems in their own right.”
From his vantage point, traditional simulation approaches haven’t quite run out of steam, especially because the software has been written in such a way that it can be spread over many servers. In fact, when Synopsys runs verification on the processors it is developing, it runs for months with literally trillions of instructions executed across thousands of servers on a big server farm, Thompson noted.
But it’s not always so straightforward. Anush Mohandass, vice president of marketing and business development at NetSpeed Systems, noted that RTL simulation for a complex smartphone or automotive chip on a single simulation engine is difficult and getting worse.
Like many others, Mohandass points to raising the abstraction level as an essential step. “If you want to go really detailed at a gate level, you simulate something small and then raise the abstraction level to what you are trying to simulate. If you are trying to simulate the entire chip, for example, you don’t send in all the bits and bytes, but rather simulate it with certain things in mind. What are the traces? How does this chip work at that abstracted level? You simulate that. You simulate, let’s say, a motherboard in a car, and then you run the infotainment system. The key is having control over the abstraction level, and having models that represent those different abstraction levels. You can still simulate a car, except you’d have very simplified models of what different things look like. The key is getting a good handle on whether you have enough abstraction, so it can be run in a meaningful amount of time—and having enough detail that I can get insights from it.”
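The "simplified models" Mohandass describes can be sketched as a transaction-level simulation: instead of modeling every bus cycle, each component is reduced to a latency number and a recorded trace of transactions is replayed against it. The component names, latencies, and trace below are invented for illustration, and a real flow would use far richer models (queuing, arbitration, multiple outstanding requests).

```python
# Transaction-level sketch: each target is abstracted to a fixed
# latency, and transactions are replayed one at a time.
# All names, latencies, and trace entries are hypothetical.
LATENCY_NS = {"dram": 60, "gpu": 25, "display": 10}

def run(trace):
    """Replay (issue_time_ns, target) transactions in order and
    return the completion time of the last one. A transaction cannot
    start before it is issued, nor before the previous one finishes
    (single outstanding request, for simplicity)."""
    t_done = 0
    for t_issue, target in trace:
        start = max(t_issue, t_done)
        t_done = start + LATENCY_NS[target]
    return t_done

trace = [(0, "dram"), (10, "gpu"), (10, "display"), (200, "dram")]
print(run(trace))  # → 260
```

Swapping in a more detailed model of one component while keeping the others abstract is exactly the kind of control over abstraction level the quote argues for.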
Still, this raises the argument that abstracting too much kills the accuracy that can be necessary for various kinds of system analysis. And this is where deep learning and machine learning can come into play.
“When trying to model something as complex as a car, or something that goes into the car like an SoC, machine learning could be used to say there are a gazillion parameters,” he said. “That’s instead of humans trying to figure out the important parameters from the ones that aren’t. Let’s try to use something that’s out there like a machine learning-based algorithm to figure out what that is.”
At the end of the day, it seems unlikely that traditional simulation and related technologies will be thrown under the bus. But they will have to be augmented by newer technologies to deal with increasingly complex designs.