Where We Go From Here

Current technologies must evolve to meet the needs of sophisticated and complex embedded systems.


It is hard to argue with the evidence that the dynamics of modern embedded software are making it nearly impossible for traditional approaches of cycle-based simulation or emulation to survive unchanged while truly meeting the needs of hardware/software design teams.

While it is not a topic the EDA companies are entirely comfortable addressing directly, the fact is that some methods of analyzing and verifying designs are running out of steam. Some have argued that just because the tools exist does not mean they should be shoehorned in where they don’t fit. In some cases, when there is no other option, a particular tool may have to serve until the right one comes along.

Interestingly, work is happening within at least some of the industry’s smaller players, their customers, and academia to, for example, extract more accurate timing and power information than a particular tool was originally designed to provide, according to Simon Davidmann, CEO of Imperas Software.

He noted that customers are all extending the tools to meet their needs. For the tools to be truly useful, they must run fast enough, be accurate enough, and be able to run realistic scenarios.

Davidmann, among others in the industry, recognizes that the needs of design and verification engineers are changing, and doesn’t believe the existing solutions of cycle-accurate simulators and emulators give customers what they need.

“Of course the EDA vendors are going to say, ‘With our technology, this is the methodology you’ve got to use.’ There’s nothing else they can say. It doesn’t always solve the customer’s problem; it might move them a little further in the right direction. But before these very fast virtual platform simulators existed, that was all they could offer, because no one would have dreamt of simulating booting Android while doing performance/power analysis. You couldn’t do it on the Verilog HDL simulators, so you ended up building breadboards, using FPGAs, using last year’s chip and all this sort of breadboard technology. But when virtual platforms started being able to run hundreds of millions of instructions a second, suddenly users could boot Linux in a simulated embedded system on a host desktop, so they could really start to do more analysis of what’s going on.”
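
To make the speed argument concrete, here is a minimal sketch of the interpretive instruction-set-simulator loop at the heart of a virtual platform. The toy ISA, encoding, and names are invented purely for illustration; production simulators typically reach the speeds Davidmann describes with just-in-time binary translation rather than a plain interpreter like this one.

```c
/* Hypothetical interpretive ISS core loop -- illustrative only. */
#include <stdint.h>
#include <stdio.h>

enum { OP_HALT = 0, OP_ADDI = 1, OP_JMP = 2 };  /* toy ISA */

typedef struct {
    uint32_t pc;
    int32_t  acc;       /* single accumulator keeps the sketch small */
    uint64_t icount;    /* instructions retired: the basic performance metric */
} Cpu;

/* One instruction per word: opcode in the top byte, operand below it. */
static void run(Cpu *cpu, const uint32_t *mem, uint32_t words) {
    while (cpu->pc < words) {
        uint32_t insn = mem[cpu->pc];
        uint32_t op = insn >> 24, arg = insn & 0xFFFFFF;
        cpu->icount++;
        switch (op) {
        case OP_ADDI: cpu->acc += (int32_t)arg; cpu->pc++; break;
        case OP_JMP:  cpu->pc = arg;                       break;
        case OP_HALT: return;
        default:      return;   /* undefined opcode: stop */
        }
    }
}

int main(void) {
    const uint32_t prog[] = {   /* add 5 three times, then halt */
        (OP_ADDI << 24) | 5, (OP_ADDI << 24) | 5,
        (OP_ADDI << 24) | 5, (OP_HALT << 24),
    };
    Cpu cpu = {0};
    run(&cpu, prog, sizeof prog / sizeof prog[0]);
    printf("retired %llu instructions, acc=%d\n",
           (unsigned long long)cpu.icount, cpu.acc);
    return 0;
}
```

Because each simulated instruction costs only a handful of host instructions, even an interpreter like this runs orders of magnitude faster than cycle-accurate RTL simulation, which is what makes booting a full OS in simulation feasible at all.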

Engineering teams want to use their real software, do power and performance analysis on it, and run the scenarios their product provides, he stressed. “The current offerings from the industry say: you might have five billion instructions to boot Linux, but we can actually simulate five million, so find the ones you want and use our $10 million emulator on them. That’s not really what the customer wants, but he doesn’t have a choice today, and I think that’s the opportunity: to provide high-speed simulation with an appropriate amount of accuracy. That’s what users need for power and performance analysis. Then they will be able to analyze the scenarios of software running on their chips and make decisions about which chips to use, which architectures to use, how they should repartition their software, whether they want this type of core or that type of core, and other tradeoffs like that. Users are actively driving forward the next generation of software simulators that can accurately predict their power/performance results while running realistic software scenarios.”
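
One way fast simulators can produce the power side of that analysis is activity-based estimation: attribute a calibrated energy cost to each instruction class and accumulate it as instructions retire. The sketch below shows the idea in miniature; the instruction classes and per-class energy values are invented placeholders, and a real flow would calibrate them against silicon measurements or gate-level power analysis.

```c
/* Hypothetical activity-based energy model for an ISS -- illustrative only. */
#include <stdint.h>
#include <stdio.h>

enum { CLS_ALU, CLS_BRANCH, CLS_MEM, CLS_COUNT };   /* invented classes */

static const double energy_nj[CLS_COUNT] = {        /* placeholder costs, nJ */
    [CLS_ALU]    = 0.10,
    [CLS_BRANCH] = 0.15,
    [CLS_MEM]    = 0.40,
};

typedef struct { uint64_t count[CLS_COUNT]; } Profile;

/* The simulator's execute loop would call this once per retired instruction. */
static void account(Profile *p, int cls) { p->count[cls]++; }

static double total_energy_nj(const Profile *p) {
    double e = 0.0;
    for (int c = 0; c < CLS_COUNT; c++)
        e += energy_nj[c] * (double)p->count[c];
    return e;
}

int main(void) {
    Profile p = {0};
    /* Pretend a scenario retired this instruction mix. */
    for (int i = 0; i < 700; i++) account(&p, CLS_ALU);
    for (int i = 0; i < 200; i++) account(&p, CLS_BRANCH);
    for (int i = 0; i < 100; i++) account(&p, CLS_MEM);
    printf("estimated energy: %.1f nJ over 1000 instructions\n",
           total_energy_nj(&p));
    return 0;
}
```

The accuracy/speed tradeoff Davidmann describes lives in that table: coarser classes keep the simulator fast, while finer-grained, better-calibrated costs bring the estimate closer to what an emulator or gate-level tool would report.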

Are some of these approaches and technologies available today? Perhaps. Is work going on to reach these goals? Absolutely, and that makes this a very interesting and exciting time to be a part of this dynamic industry.


