Inconsistent results, integration issues, and a lack of financial incentives to solve these problems point to continued trouble for chipmakers.
Embedded software is becoming more critical in managing the power and performance of complex designs, but so far there is no consensus about the best way to approach it—and that’s creating problems.
Even with safety-critical standards such as DO-178C for aerospace and ISO 26262 for automotive, different groups of tool providers approach software from different vantage points. This produces inconsistent results and frequent incompatibilities that can make integration nightmarish, and it makes it hard to migrate technology from one project to the next, or even from one design to a derivative design.
Rising complexity in heterogeneous SoC architectures only compounds the problem.
“In these new heterogeneous SoCs, you commonly have a mix of low-power ARM Cortex-M cores integrated with high-performance ARM Cortex-A cores,” said Kathy Tufto, product manager in Mentor Graphics’ Embedded Systems Division. “To efficiently validate the performance of a heterogeneous system, developers need a single set of tools that allow for debugging of the entire consolidated system, and then validating the performance of that system. Traditionally, a lot of developers have used an RTOS or bare metal on these low-power cores, and may be running Linux or QNX on the high-performance cores. It can be quite challenging to integrate these different operating systems across a single set of tools, and to find a single set of tools that lets you validate the overall performance.”
Tufto noted that to verify software in a heterogeneous system, engineering teams need to be able to debug it using traditional methods, such as stepping through code in a piece of the operating system, and also to visualize what’s happening across the different cores. “If an OS on this low-power device is doing something that’s blocking your high-performance user interface on the Cortex-A processor from updating, you need some way to see that. But you can’t see that just from stepping through code.”
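To make that failure mode concrete, here is a minimal sketch of the kind of cross-core interaction Tufto describes. It is purely illustrative: the two host threads stand in for the two classes of cores, and the names and timing are invented rather than drawn from any specific SoC or toolchain.

```c
/*
 * Illustrative only: two host threads stand in for the two sides of a
 * heterogeneous SoC, a "Cortex-M" sensor task and a "Cortex-A" UI thread,
 * sharing one resource. Stepping through either thread in isolation looks
 * fine; only a system-wide view shows why the UI stalls.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t sensor_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stands in for an RTOS task on the low-power core. */
static void *sensor_task(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&sensor_lock);
    sleep(2);                         /* holds the shared resource far too long */
    pthread_mutex_unlock(&sensor_lock);
    return NULL;
}

/* Stands in for the UI thread on the high-performance core. */
static void *ui_thread(void *arg)
{
    (void)arg;
    usleep(100 * 1000);               /* let the sensor task grab the lock first */
    pthread_mutex_lock(&sensor_lock); /* the UI update blocks right here */
    puts("UI finally updated");
    pthread_mutex_unlock(&sensor_lock);
    return NULL;
}

int main(void)
{
    pthread_t m_core, a_core;
    pthread_create(&m_core, NULL, sensor_task, NULL);
    pthread_create(&a_core, NULL, ui_thread, NULL);
    pthread_join(m_core, NULL);
    pthread_join(a_core, NULL);
    return 0;
}
```

Stepping through either thread on its own shows nothing wrong. Only a system-wide view that correlates activity on both sides reveals why the user interface stalls.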
One of the big disconnects is that tool providers approach embedded software from different vantage points, which in turn dictates how they approach everything from design and characterization to verification.
For example, Mentor Graphics has a number of tools that run on top of the company’s Vista toolset, such as the Sourcery CodeBench development environment, which then integrate with static code analysis tools from Synopsys’ Coverity and others.
That approach is quite different from the one taken by LDRA, which makes software verification tools. LDRA comes at the issue from the standpoint of analyzing software across the development lifecycle and ensuring compliance with standards.
“Visibility and understanding of what code has and has not been executed, as well as which variables and control paths have been executed, provides the detailed insight and transparency often required by regulatory authorities when that application must meet a level of safety or security qualification or certification,” said Jim McElroy, vice president of marketing at LDRA. “Coverage analysis is a significant measure of how well the application has been tested. Certification/qualification efforts typically require that the code itself adheres to industry best practices in coding standards compliance for safety and security.”
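As a small illustration of what coverage analysis measures (the function and the limit value are invented for this example), consider a fault-handling branch that the test suite never drives. A branch-coverage report makes that gap visible, which is precisely the kind of evidence certification authorities look for. Compiling with gcc’s --coverage option and running gcov over the test binary is one common way to produce such a report; commercial tools layer standards-compliance checking on top.

```c
#include <stdbool.h>
#include <stdio.h>

#define TEMP_LIMIT_C 125   /* invented limit for this example */

/* Function under test. */
static bool check_temperature(int temp_c)
{
    if (temp_c > TEMP_LIMIT_C) {
        return false;      /* error branch */
    }
    return true;           /* happy path */
}

int main(void)
{
    /*
     * The "test suite": only in-range values, so a branch-coverage report
     * (for example, gcc --coverage plus gcov) shows the error branch as
     * never executed, exactly the gap a certification audit would flag.
     */
    printf("25C ok: %d\n", check_temperature(25));
    printf("85C ok: %d\n", check_temperature(85));
    return 0;
}
```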
But embedded software also can be viewed from the architectural level, and that offers an entirely different perspective.
“As soon as you start talking about embedded software, you’re talking about hardware-dependent software, and this is all about trying to get the design of the hardware and software as close together as you can,” said Drew Wingard, CTO of Sonics. “You want to get them close in time, and you’d like to get them close organizationally. The software that controls the hardware should be written together, and dividing up the chip into subsystems is a great way to facilitate that so that when you think of the subsystem, you think about it as part hardware and part software.”
Wingard noted that some of the companies with the best practices in this area are systems companies, which have the luxury of building a chip for a specific purpose. As such, they don’t need as many power modes or package options. They don’t have as many combinations of ‘this UART bonded out together with that USB port,’ and they have the opportunity to build simpler chips because they know what they are targeting. At the same time, they can limit themselves to hardware features they are committed to writing the software to use, whereas a standard-product chip company is under pressure to find multiple customers for its chip and therefore must build something more complex.
“Then, if you jump into the microcontroller space — which is where a lot of the edge devices for IoT come from now — you find that the silicon company is responsible for writing an enormous amount of software,” he said. “These are multi-market products, and there is a question as to whether all the different modes of operation of the hardware are going to be supported in the software that’s being given away for free with the chip. In a lot of cases, the answer is no. You end up with, ‘Well, we have the data book for the chip, and if the customer wants to turn on that other mode of the hardware, they are free to do it themselves.’ But now, [the customer] is far away from the hardware developer, and it’s difficult for them to figure out the side effects of using the new mode. Is it always risk-free to use this mode?”
The answer is usually yes—but not always.
Avoiding that uncertainty is one of the big advantages of the subsystem model, which can encapsulate the right amount of software with the hardware.
“Typically, the processor that’s inside the subsystem isn’t expected to be programmed by anybody other than the subsystem provider,” Wingard said. “You get that same kind of tradeoff seen in the vertically integrated system company world, where you can build simpler hardware, you can build complete software, and you get to make sure they work inside a relatively closed system. That tends to generate higher-quality things that do their job better.”
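A sketch of the encapsulation Wingard describes might look like the following. The register map, names, and API are invented for illustration; the point is that software outside the subsystem calls the provider’s small API and never programs the underlying registers directly, so unsupported hardware modes are simply not reachable.

```c
/*
 * Hypothetical subsystem driver: the register map below is private to the
 * subsystem provider. Software outside the subsystem sees only the two
 * functions, so unsupported hardware modes are never exposed.
 */
#include <stdint.h>

#define AUDIO_SS_BASE  0x40021000u                                  /* invented address */
#define AUDIO_CTRL     (*(volatile uint32_t *)(AUDIO_SS_BASE + 0x0))
#define AUDIO_STATUS   (*(volatile uint32_t *)(AUDIO_SS_BASE + 0x4))

#define CTRL_ENABLE    (1u << 0)
#define STATUS_READY   (1u << 0)

/* Public subsystem API: the only supported way in. */
void audio_subsys_enable(void)
{
    AUDIO_CTRL |= CTRL_ENABLE;
    while ((AUDIO_STATUS & STATUS_READY) == 0) {
        /* wait for the subsystem's internal firmware to come up */
    }
}

void audio_subsys_disable(void)
{
    AUDIO_CTRL &= ~CTRL_ENABLE;
}
```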
For low-power designs, when the hardware and software people are close organizationally, it’s a lot easier to catch decisions that the software developer might not think of as having power implications but that really do. The question is when in the software design process you figure that out. Power modeling may answer some of it, and simulation, virtual platforms, and emulation are also in use today for this purpose, all with different initial investment levels.
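A classic example of such a decision (the code is illustrative, not taken from any particular design) is how a driver waits for hardware. Busy-polling a flag keeps the core and its clocks fully active, while sleeping on an interrupt lets the hardware drop into a low-power state, and the difference only shows up when the system is measured or modeled as a whole.

```c
/*
 * Illustrative only: two functionally equivalent ways to wait for a
 * DMA-complete flag. The first burns power in a busy loop; the second lets
 * the core sleep until the DMA interrupt fires. Cross-compile for an ARM
 * target; CMSIS normally wraps the wfi instruction as __WFI().
 */
#include <stdbool.h>

extern volatile bool dma_done;   /* set by the DMA interrupt handler */

/* Looks harmless in a code review, but keeps the core at full power. */
void wait_for_dma_polling(void)
{
    while (!dma_done) {
        /* spin */
    }
}

/* Same behavior, but the core idles between interrupts. */
void wait_for_dma_sleeping(void)
{
    while (!dma_done) {
        __asm volatile ("wfi");  /* ARM wait-for-interrupt instruction */
    }
}
```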
Other approaches, such as static code analysis tools from Coverity, Klocwork, Parasoft, Cast, and others, also have been growing in popularity.
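For a sense of what these tools catch without ever executing the code, here is an invented example of a common defect class: a pointer dereferenced before the null check that is supposed to guard it.

```c
/*
 * Invented example of a defect class static analyzers report: 'msg' is
 * dereferenced before the NULL check that is supposed to guard it, so the
 * check can never protect the dereference.
 */
#include <stddef.h>

typedef struct {
    size_t len;
    char   data[64];
} message_t;

size_t message_length(const message_t *msg)
{
    size_t len = msg->len;   /* dereference happens here...              */
    if (msg == NULL) {       /* ...so the analyzer reports this check as */
        return 0;            /* coming too late to be of any use         */
    }
    return len;
}
```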
Verification laggard
Compared with other areas of verification, embedded software verification is a bit behind. Simon Davidmann, CEO of Imperas Software, pointed out that much of what was learned about verification in the hardware world happened years ago. The chip industry worked for years to move designers up in abstraction. A decade was spent developing simulation technology, and the last decade has been focused on verification technologies. “In the desktop software and the application software spaces, there’s also been evolution like that over many years. But the embedded software world has been a laggard when it comes to these verification technologies.”
Complicating matters is the fact that in certain markets, verification and the quality of software are taken very seriously. There are even certification requirements. “A lot of companies are getting involved in embedded software certification/verification because of that requirement. But the problem with certification is that it requires quite a lot of extra effort in terms of the methodologies and the practices to even get in the door [with a customer],” he said.
Yet if the end product needs to be certified, does the simulator require certification as well, even though a simulator never ends up in an end product? “It’s not like a compiler, where you convert your C code into assembly and that code goes into the product, so you need to know the C compiler is certified. Simulators, like a prototype you evolve [a design] on and develop it with, never become part of the real product. They’re used in development but not in the final product. There is still some confusion in [certain] industries,” Davidmann noted.
And then there is the issue of resources. Embedded software verification teams don’t always have the access they need to emulators and hardware prototypes. They may want to do more automated testing, regression testing, and verification, but over the two or three years they spend writing their software they keep adding more and more tests, which only makes that job harder.
“The embedded software world has the same verification challenges as the desktop, but it has the added dimension of complexity, which is that it needs a hardware type of prototype of some form because it’s not targeting x86 normally,” he said. “It’s targeting ARM or MIPS or Renesas or ARC or Tensilica, or whatever it is. You can’t just run that on your desktop. You need real hardware or you need a simulator of that hardware. The embedded software world has all the methodology challenges of developing the software, but in the verification space it has all of the challenges the desktop guys have got, along with this other dimension that it’s cross-compiled and it needs a model of a system, and inputs and outputs, which are related to the embedded world.”
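Davidmann’s point about needing a model of the system can be illustrated with a simple self-checking test. The example below is hypothetical: the same source can be built natively for a quick host run, or cross-compiled and executed on an instruction-set simulator or the real board as part of nightly regression, with the exit code telling the regression harness whether it passed.

```c
/*
 * Illustrative self-checking test: the same source builds natively (e.g.
 * gcc test_checksum.c) for a fast host run, or cross-compiled (e.g. with
 * arm-none-eabi-gcc) to run on an instruction-set simulator or the target
 * board in nightly regression. The exit code tells the harness the result.
 */
#include <stdint.h>
#include <stdio.h>

/* Function under test: simple 8-bit checksum, invented for this example. */
static uint8_t checksum8(const uint8_t *buf, uint32_t n)
{
    uint8_t sum = 0;
    while (n--) {
        sum = (uint8_t)(sum + *buf++);
    }
    return sum;
}

int main(void)
{
    static const uint8_t frame[] = { 0x01, 0x02, 0x03, 0xFA };

    if (checksum8(frame, sizeof frame) != 0x00) {
        puts("FAIL");
        return 1;   /* non-zero exit flags a regression failure */
    }
    puts("PASS");
    return 0;
}
```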
But how closely should all of this be tied to the EDA world? And how much effort should be expended here? Answers vary.
Frank Schirrmeister, senior group director for product management in the System & Verification Group at Cadence, explained that the company currently is pursuing the portions of the embedded software verification market that are natural extensions to simulation, emulation, and prototyping, such as coverage of embedded software running on processors executing in its verification tools. He said that while embedded software verification “has been, is, and always will be” an important topic of discussion from a technical perspective, it’s important to keep in mind that a design seat for embedded software is worth approximately $10,000, so the tool provider may never recoup its development investment. Especially in light of all of the open-source software available in this area, it’s quite a gamble for a commercial tool provider.
And while some of the software methods look familiar to the hardware side, tool providers have been hard-pressed to determine how they fit together with the EDA world, and whether they even should, Schirrmeister said. Other areas, like compliance, are clearer. “For us, it’s a check mark to have tool compliance levels, so we have to invest the effort to get the certification. If I want to play in automotive or mil-aero, it’s a prerequisite. I need to make that investment. Otherwise, I’m not even invited to the party.”
Ultimately, that may be the determining factor in what drives better verification methodologies and tooling in the embedded software market—at least for applications where it is required. When customers demand a solution, the solution will come, as long as there is an acceptable business case for developing it. But at least in the short term, the motivation and the payback remain murky, and there isn’t agreement yet on when that will change.