How To Cut Verification Costs For IoT

The verification problem needs to be looked at differently for IoT devices, but not everyone agrees on where to start or how it should be done.


Cost is one of the main factors limiting proliferation of the Internet of Things (IoT), and when looking at the design and verification methodologies in place today, verification is a prime candidate for closer inspection. For today’s complex SoCs, the cost of verification has been rising faster than the cost of design, and it has been identified as one of the areas in which new methodologies may be appropriate for the types of designs found on the edge of the IoT (see IoT Demands Correct By Construction Assembly).

While many of the verification challenges for edge IoT devices are similar to those of the complex SoC, others are significantly different. Both involve hardware and software, both have critical power requirements, and both are mixed-signal designs. But that is where the similarities end. IoT devices are much smaller, tend to use standardized IP components and interconnect structures, and many designs are likely to be small variants of highly leveraged platform architectures. Semiconductor Engineering asked the industry for ideas as to how the cost of verification for these devices could be significantly reduced.

There are several ways to start looking at the problem, according to Michael Sanie, senior director of verification marketing for Synopsys. “We could look at the methodology to make it more efficient. It is not really about EDA tool cost, it is really time spent by the engineers – manpower costs. The challenge is how we make the tools or methodologies more productive. More specifically, can I make a methodology that is specific to low power, which also has increased analog content?”

Several people in the industry believe that formal verification is where we should be looking for the bulk of the answers. “The IoT platform will become the poster child of the transition to formal methods as a primary verification solution,” says Dave Kelf, director of marketing for OneSpin Solutions. He highlights the example of Renesas, which replaced simulation with a methodology in which pre-verified IP blocks contain assertions that monitor critical functions and integration connections. “A simulation- or emulation-based verification methodology requires a comprehensive testbench to be written for every platform configuration. A formal solution relies on the assertions within each IP block. As such, a reusable, configurable platform verification methodology may be constructed that requires no testbenches and will execute almost an order of magnitude faster.”
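
As a rough illustration of the concept rather than OneSpin’s or Renesas’ actual flow, the Python sketch below bundles an assertion with a toy IP block and checks it exhaustively over short stimulus sequences; no platform testbench is written. The block, its depth, and the sequence length are all invented for the example.

```python
# A sketch only: a toy IP block that ships with its own assertion, checked
# exhaustively over short push/pop sequences with no platform testbench.
# FifoIP, its depth, and the sequence length are invented for illustration.
from itertools import product

class FifoIP:
    """Toy pre-verified IP: a 2-deep FIFO bundled with its own assertion."""
    DEPTH = 2

    def __init__(self):
        self.count = 0

    def step(self, push, pop):
        if push and self.count < self.DEPTH:
            self.count += 1
        if pop and self.count > 0:
            self.count -= 1

    def assertion(self):
        # Shipped with the IP: occupancy never leaves the declared range.
        return 0 <= self.count <= self.DEPTH

def check_platform(cycles=6):
    """Walk every push/pop sequence up to `cycles` long, checking the IP's own assertion."""
    for stimulus in product([(p, q) for p in (0, 1) for q in (0, 1)], repeat=cycles):
        dut = FifoIP()
        for push, pop in stimulus:
            dut.step(push, pop)
            assert dut.assertion(), f"assertion failed for {stimulus}"
    print("all", 4 ** cycles, "sequences pass with no testbench written")

check_platform()
```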

Others are looking at formal as a way to modify the design process. “Formal technology will become the designer’s best friend,” asserts Jin Zhang, senior director of marketing for Oski Technology. “It will help them understand the impact of certain lines of code in the RTL, catching bugs early, verifying interface assumptions between blocks and ensuring there are no corner case bugs left when the design is finished.”

Sanie looks in multiple directions to see where savings can be made. “The real challenge is how we find bugs earlier. Can we start looking at fresh code and finding bugs using more static and formal technology? Today, 35% to 50% of the time spent on verification is spent on debug. The time spent on creating testbenches should also be examined. If I look at transactions rather than pin wiggling, I get more bang for the buck on simulation time.”
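
The transaction point can be pictured with a minimal sketch, assuming an invented two-phase bus: the test is written as whole transactions, and a small driver expands each one into per-cycle pin values, so simulation effort goes into behavior rather than hand-written pin wiggling.

```python
# A sketch only: the test is written as whole transactions and a small driver
# expands each one into per-cycle pin values. WriteTxn and the two-phase bus
# protocol are invented for illustration, not any real interface.
from dataclasses import dataclass

@dataclass
class WriteTxn:
    addr: int
    data: int

def drive(txn):
    """Expand one transaction into the cycle-by-cycle pin values it implies."""
    yield {"valid": 1, "phase": "addr", "bus": txn.addr}
    yield {"valid": 1, "phase": "data", "bus": txn.data}
    yield {"valid": 0, "phase": "idle", "bus": 0}

# The test reads at the transaction level; the pin wiggling is generated, not written.
test = [WriteTxn(0x10, 0xAB), WriteTxn(0x14, 0xCD)]
for txn in test:
    for cycle, pins in enumerate(drive(txn)):
        print(f"{txn} cycle {cycle}: {pins}")
```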

Pranav Ashar, chief technology officer for Real Intent, sees an analogy with Internet capabilities. “The probability of unforeseen mashups is high enough that rigorous verification of use cases is essential. One approach is to model acceptable input behaviors of potential mashups as automata and verify that the ‘thing’ being designed operates correctly in that domain. This is a classic formal verification problem and is used widely in hardware verification.”
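
A minimal sketch of that approach, assuming an invented input rule and a toy device: the environment automaton below accepts only stimuli that obey the rule (no back-to-back requests), and the design is checked exhaustively over exactly that input domain.

```python
# A sketch only: acceptable input behavior is modeled as an automaton ("no
# back-to-back requests"), and the toy device is checked over every stimulus
# that automaton accepts. Both the rule and the device are invented.
from itertools import product

def env_accepts(seq):
    """Automaton for the assumed use-case rule: requests never arrive back to back."""
    state = "ready"
    for req in seq:
        if req and state == "cooldown":
            return False            # stimulus falls outside the modeled use cases
        state = "cooldown" if req else "ready"
    return True

def device_ok(seq):
    """Toy device: one-entry buffer that drains every other cycle."""
    pending, drain = 0, False
    for req in seq:
        pending += req
        if pending > 1:             # safety property: the single buffer never overflows
            return False
        if drain:
            pending = max(pending - 1, 0)
        drain = not drain
    return True

# Exhaustive check over all 8-cycle stimuli that the environment automaton allows.
for seq in product([0, 1], repeat=8):
    if env_accepts(seq):
        assert device_ok(seq), f"property fails under legal stimulus {seq}"
print("property holds across the modeled input domain")
```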

Many in the industry see existing tools still having a significant role to play, along with some extensions. “The verification tools for hardware/software co-verification are limited to emulators,” says Lauro Rizzatti, a verification expert, “and that’s not such a bad thing when streamlining and cost effectiveness are considerations. I’m certain that the verification and debug of any chip destined for an IoT application was done with hardware emulation, especially when time to market is a factor.”

But we need to dig a little deeper to see their real needs. “They do not have the same kind of need for things like emulation,” says Frank Schirrmeister, senior director at Cadence. “They have additional challenges in analog/mixed-signal and we need to look at connecting analog components into emulation. You don’t want to be using SPICE coupled to cycle-based emulation. You also have to make sure that the sensors, coupled to a microprocessor, can deliver all of the right data in the right format.”

Tom Anderson, vice president of marketing at Breker Verification Systems, sees another problem with the traditional verification flows. “One of the reasons that there are so many verification steps today is that there is limited opportunity for verification reuse, both ‘vertically’ from block to subsystem to full chip, and ‘horizontally’ from ESL and RTL simulation to hardware platforms to silicon.”

Stephen Bailey, director of emerging technologies within the design verification and test division of Mentor Graphics, lays out the foundation for the verification problems and opportunities. “On the design side, IP reuse has gone a long way to drive down the cost of design. We also have verification reuse, but if this is done at the same level that people do design reuse, you are not going to get a big enough savings because it does not take into account the interaction between IPs. So we have to go beyond VIP notions of Ethernet, PCIe, etc.”

Bailey points out that this will force the industry to look for more standardization across the entire platform that will be used for whole classes of IoT devices. “Then we can focus on providing a much higher level of verification IP – entire testbenches and verification environments. 90% to 95% of the device is standardized, so we can concentrate on the value-added functionality and its interactions with the other pieces. You do not have to verify how the others interact with each other.”
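
What such platform-level verification IP might look like can be sketched roughly, with every class and check invented for illustration: the standardized checks ship with the environment, and the device team registers only the checker for its value-added block.

```python
# A sketch only: a platform testbench delivered as verification IP, where the
# standard checks come pre-built and only the custom block's checker is new.
# PlatformTestbench and every check here are hypothetical.
class PlatformTestbench:
    def __init__(self):
        # Standardized checks delivered with the platform VIP.
        self.checkers = [self.check_bus_protocol, self.check_boot_sequence]

    def add_custom_checker(self, checker):
        self.checkers.append(checker)          # the 5% to 10% that is new

    def check_bus_protocol(self, trace):
        return all(beat["valid"] in (0, 1) for beat in trace)

    def check_boot_sequence(self, trace):
        return bool(trace) and trace[0]["state"] == "reset"

    def run(self, trace):
        return all(check(trace) for check in self.checkers)

# The device team supplies only what is unique to this IoT variant.
def check_sensor_block(trace):
    return all(beat["sample"] >= 0 for beat in trace)

tb = PlatformTestbench()
tb.add_custom_checker(check_sensor_block)
trace = [{"state": "reset", "valid": 1, "sample": 7},
         {"state": "run",   "valid": 1, "sample": 12}]
print("platform checks pass" if tb.run(trace) else "platform checks fail")
```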

Others see opportunities associated with the notion of platforms and variants. “For devices that are variants of a previous, similar design, the issue is, ‘Can I isolate that base model from the incremental changes and can I guarantee that the changes do not mess with the remainder,’” says Schirrmeister. “If there are lots of variations, even on a small scale, automation could fit in nicely.”
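
One way to picture that isolation, using two invented stand-in models, is to replay identical stimulus through the base and the variant and require identical behavior on every interface outside the declared change list, so only the delta needs fresh verification.

```python
# A sketch only: the same stimulus is run through a base model and a variant,
# and equivalence is demanded on everything outside the declared change.
# base_model and variant_model are invented stand-ins, not real designs.
import random

def base_model(x):
    return {"status": x & 0xF, "debug": 0}

def variant_model(x):
    out = base_model(x)
    out["debug"] = x >> 4        # the only intentional change: a new debug field
    return out

random.seed(0)
for _ in range(1000):
    x = random.randrange(256)
    base, variant = base_model(x), variant_model(x)
    # Equivalence is required on every field not in the change list.
    assert base["status"] == variant["status"], f"variant disturbed base behavior at {x}"
print("variant leaves the base platform untouched on shared interfaces")
```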

Anderson sees an opportunity for graph-based verification when you have a series of closely related chips using many of the same blocks and subsystems. “These can be plugged together as needed for the various IoT applications, with full verification reuse from block to complete chip. There is no need to develop a new testbench at each level or to manually write C tests for the embedded processors.”
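
A rough sketch of the idea, not any vendor’s actual tooling: verification intent is captured once as a graph of scenario steps, and each walk from start to end becomes a generated test, with the same graph reusable against a block or the assembled chip. The graph contents here are invented.

```python
# A sketch only: a graph of scenario steps, where each walk from start to end
# is one generated test. The scenario names are invented for illustration.
import random

SCENARIOS = {
    "start":        ["config_dma", "config_uart"],
    "config_dma":   ["dma_transfer"],
    "config_uart":  ["uart_tx"],
    "dma_transfer": ["check_irq"],
    "uart_tx":      ["check_irq"],
    "check_irq":    ["end"],
}

def generate_test(graph, node="start"):
    """Pick one path through the scenario graph; the path is the test."""
    steps = []
    while node != "end":
        node = random.choice(graph[node])
        if node != "end":
            steps.append(node)
    return steps

random.seed(1)
for i in range(3):
    print(f"test {i}:", " -> ".join(generate_test(SCENARIOS)))
```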

Schirrmeister raises the possibility of restricting the design to help verification so that it becomes a dataflow problem, something similar to the tools of a bygone era such as Bones (Block Oriented NEtwork Simulator). “This was a transaction-level queueing simulator. There were other tools for network analysis used to configure the network infrastructure.”

But do all IoT functions fit into a dataflow model? “There may be significant cases where they can,” says Chris Rowen, a Cadence fellow. “And where they can, there would be advantages. Dataflow models are cropping up in many places. An example is in the vision world, where the OpenVX standard provides for a dataflow description of vision algorithms. This allows a lot more correct-by-construction design that enables you to decompose a big problem into smaller problems and to distribute that amongst a number of parallel functional units.”
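
The dataflow view can be sketched minimally, with the stages invented rather than taken from any standard: blocks communicate only through queues, so each stage can be verified in isolation and the composition behaves largely by construction.

```python
# A sketch only: processing stages that communicate purely through queues.
# The sample/gain/threshold stages are invented, not drawn from any standard.
from collections import deque

def run_stage(func, inq, outq):
    """Drain the input queue through a pure per-token function into the output queue."""
    while inq:
        outq.append(func(inq.popleft()))

samples  = deque([3, 18, 7, 42, 5])   # raw sensor tokens
filtered = deque()
events   = deque()

run_stage(lambda x: x * 2,  samples,  filtered)   # gain stage
run_stage(lambda x: x > 20, filtered, events)     # threshold stage
print(list(events))   # -> [False, True, False, True, False]
```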

Sanie reminds us that even the best ideas may not succeed. “I see verification engineers as being the most paranoid people in the world. They will run RTL regardless. They cannot sleep at night unless they do. Even if they do more in terms of abstraction, they will be concerned about the inaccuracies and errors in the ways that we model things, or that we are not capturing some of the things happening inter-chip.”

Jeff Berkman, senior director of IC development at Echelon, takes the discussion down to an even lower-level concern. “IIoT is a very hostile environment, especially with regard to communications, so issues such as SNR and dynamic range are big issues that we have to resolve. It is difficult to resolve all of these issues without test silicon because EDA tools do not tell us everything we need to know.”

What is clear is that there is a world of opportunity for verification in this space, one that “will force innovation within the EDA industry for a new level of system verification and validation,” concludes Bailey.


