Is Software Necessary?

Hardware has a love-hate relationship with software, especially when it comes to system-level verification. When is software required, and when does it get in the way?

Hardware must be capable of running any software. While that might have been a good mantra when chips were relatively simple, it becomes an impossible verification task when dealing with SoCs that contain dozens of deeply embedded processors. When does it become necessary to use production software and what problems can that get you into?

When verification targets such as power are added, it becomes a lot more complicated as one phone modem vendor found out a few years back. Its chip experienced overheating problems in one phone but operated quite satisfactorily in a rival’s phone. Was this a flaw in the verification strategy and avoidable, or is this kind of event likely to become more common?

A key question here is how much software needs to be executed pre-silicon? “Software has a defined effect on hardware,” points out Russell Klein, technical director for Mentor, a Siemens Business. “Specifically, software drives a sequence of transactions into the hardware. In any given verification, some of those transactions are going to be important, but many of them are superfluous.”

The trick is working out what can be ignored.
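To make that concrete, here is a minimal sketch (not drawn from any of the vendors quoted here; all register names and behavior are hypothetical) of the kind of traffic Klein is describing. The single configuration write is the transaction that matters; the polling loop produces the superfluous bulk that a verification run could collapse or skip.

```cpp
// Hypothetical driver fragment: one essential write, many superfluous reads.
#include <cstdint>
#include <cstdio>

static uint32_t regs[2] = {0, 0};          // stand-in for the device under test
enum { CTRL_REG = 0, STATUS_REG = 1 };
static int poll_count = 0;

static uint32_t reg_read(int r) {
    // Pretend the device finishes its operation after being polled 100 times.
    if (r == STATUS_REG && ++poll_count >= 100) regs[STATUS_REG] = 1;
    return regs[r];
}
static void reg_write(int r, uint32_t v) { regs[r] = v; }

int main() {
    reg_write(CTRL_REG, 0x1);                   // the interesting transaction
    int reads = 0;
    while ((reg_read(STATUS_REG) & 0x1) == 0)   // superfluous polling traffic
        ++reads;
    std::printf("essential writes: 1, polling reads before completion: %d\n", reads);
    return 0;
}
```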

Types of software
Not all software is created equal. “Everything relies on software, firmware or middleware,” says Rupert Baines, CEO for UltraSoC. “Almost always software will be running on multiple devices, quite often devices of different architectures or interacting in very difficult, complex, unpredictable ways.”

But there are other ways to look at the functionality provided by software. “Application-level software directs traffic around a system at a high level, such as RTOS-based application software,” says Gordon Allan, product manager at Mentor. “At the lowest level, such as a DSP codec way down in the pipeline, there is really nothing going on without this software. Then there is an intermediate level that I call structural software where it is intrinsic to the operation. Nothing happens without that software and it is hard to replace with a different representation because it is not algorithmic in nature. It is more like logic that happens to be implemented in software.”

Klein adds some examples of this intermediate level. “There are commonly power management processors, security processors, and a variety of other deeply embedded processors in an SoC. These processors are not accessible to the end user of the device. Any software running on them is written by the developer of the SoC and needs to be included in any significant system-level simulation.”

System-level simulation with at least some software is a necessity. “To facilitate hardware/software co-validation, two important ingredients are necessary,” says Roland Jancke, head of the department for design methodology at Fraunhofer EAS. “First, we need an abstract representation of the system and its components that is fast enough for the complexity. And then we need a language that supports the description of software as well as hardware components, to cover both views in one simulation model.”

How much software?
The amount of software required depends on the verification task. “Are we looking for bugs in the RTL or are we verifying the firmware?” asks Allan. “Are we verifying the integration of two reused blocks? Are we verifying some aspects of power? So long as we ask those questions and know the possibilities in terms of constructing a verification solution, we can create an optimized system. As soon as we blur the edges, we are suddenly verifying a full stack of application software and we need to buy more emulators.”

And you can’t afford to avoid firmware verification. “We have very good tools for working on the hardware, and it’s getting to be quite unusual to have a chip fail because of a wiring or logic problem,” says Baines. “However, increasingly we hear about products that are three, six or nine months late because of firmware issues.”

But firmware can change. Does that mean all of the associated verification has to be redone? Perhaps not, but it does require confidence in the lower levels, and that confidence rests on a solid hierarchical verification foundation.

“We can start with simulation and move to emulation and end up with a lot of confidence in the lower layers,” says Baines. “Then we can move up a level of abstraction and stop thinking about absolute timing and signals and race conditions, and start thinking about transactions and system level while moving into emulation and then prototyping. It does become viable to run a complete software stack—not at real speed, but at some fraction of real speed. But it is real software, and that gives you a huge head start on the verification and debugging process. Because it is not in real time, it means there will be a lot of issues that you do not see, but you do get a lot of confidence.”

And there are times when you can get rid of the software and the processor entirely. “We have to ask the question, ‘How can I verify that as cheaply as possible?'” says Allan. “If that means swapping out the processor for this piece of the verification, and replacing it with a BFM—or replacing the UVM testbench with a datafile that was captured from a high-level synthesis (HLS) flow—then that is what should be done. By cheaply, I mean get to the best quality in the shortest time with the least resources and the least chance of bugs escaping, while hitting the market window.”

Algorithmic verification
Following Allan’s classification, there are three types of software—algorithmic, structural and application. The strategy for each is different. “The algorithmic part, such as software for the DSP codec, is an interesting category because there are flows and tools associated with it,” says Allan. “These allow you to develop high-level algorithms, validate their behavior, and then map that into the RTL.”

But first you have to verify the infrastructure on which that software will execute. “At the unit test level, it is probably undesirable to bring in the actual processor, memory and programs to the mix,” says Klein. “It is very hard to execute a stream of instructions on a processor to drive a precisely timed set of transactions on a bus. This is needed for a lot of verification at the block level, so a bus transactor is usually going to be a more appropriate tool early in the verification cycle.”
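As a rough illustration of what a bus transactor buys you, the hedged sketch below drives an exact, precisely timed sequence of transactions directly, with no processor or instruction stream involved. The timing, addresses and the printout standing in for a real bus interface are all invented; an actual flow would use a UVM or TLM transactor.

```cpp
// A BFM-style transactor sketch: the test specifies (time, address, data)
// tuples explicitly, something that is hard to guarantee when the same
// accesses come from compiled code running on a CPU model.
#include <cstdint>
#include <cstdio>
#include <vector>

struct BusTxn {
    uint64_t time_ns;   // when the transaction should appear on the bus
    uint64_t addr;
    uint32_t data;
    bool     write;
};

// Stand-in for the interface a real BFM would drive (e.g. a bus VIP).
static void drive(const BusTxn& t) {
    std::printf("%6llu ns  %s addr=0x%04llx data=0x%08x\n",
                (unsigned long long)t.time_ns, t.write ? "WR" : "RD",
                (unsigned long long)t.addr, (unsigned)t.data);
}

int main() {
    // Back-to-back accesses with exact spacing, chosen by the test writer.
    std::vector<BusTxn> seq = {
        {100, 0x1000, 0xDEADBEEF, true},
        {110, 0x1004, 0x00000001, true},
        {120, 0x1000, 0x00000000, false},
    };
    for (const auto& t : seq) drive(t);
    return 0;
}
```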

This enables a more exhaustive style of verification to be performed. “With that said, software always exercises verification corners that are missed by even the most rigorous verification methodologies,” warns Klein. “So as the project gets past the unit and subsystem level, more and more software should be brought into the mix and realistic use cases should be exercised.”

Another way to perform verification at this level is with the newly released Portable Stimulus Standard (PSS). Instead of running production software, it can generate arbitrary software to perform the verification. “A codec is going to be able to do a decode,” says Larry Melling, director for product management and marketing at Cadence. “That decode operation will require these inputs and produce this output and it needs to utilize these resources. You describe that behavior in PSS. When you go to generate the test, the execution of that action can be done in a coreless environment, which is a sequence of transactions that would be put onto that bus to represent the codec transaction, or it could be an actual set of software that is going to be loaded onto the processor. Being able to have multiple representations of that abstract behavior allows you to generate a test that can perform the same functionality with different levels of abstraction (coreless, or with core).”
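The following is not PSS syntax, just a hedged C++ analog of the idea Melling describes: one abstract “decode” action with two interchangeable realizations, a coreless one that emits the equivalent transaction sequence and one that loads real software onto the processor. All names are hypothetical.

```cpp
// One abstract behavior, two realizations: coreless transactions or real
// software on the CPU. PSS itself expresses this in its own language; this is
// only a C++ illustration of the concept.
#include <cstdio>

struct DecodeAction {            // abstract behavior: inputs, outputs, resources
    const char* input_buffer;
    const char* output_buffer;
};

// Realization 1: coreless, emit the bus transactions the codec would see.
static void run_coreless(const DecodeAction& a) {
    std::printf("BFM: write cfg regs, DMA %s -> codec, collect %s\n",
                a.input_buffer, a.output_buffer);
}

// Realization 2: core-based, load a cross-compiled test onto the processor.
static void run_on_core(const DecodeAction& a) {
    std::printf("load decode_test.elf onto CPU, point it at %s and %s\n",
                a.input_buffer, a.output_buffer);
}

int main() {
    DecodeAction decode{"in_frame", "out_frame"};
    run_coreless(decode);   // early, block/subsystem level
    run_on_core(decode);    // later, full-chip with the processor in the loop
    return 0;
}
```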

Another example of this type of verification happens when embedded FPGA resources are present. First, the fabric has to be verified. Then the application being loaded into the fabric has to be verified. “Once we have verified the fabric, we almost want to see past the fabric as if it didn’t exist anymore,” says Allan. “The only thing that matters is the intended logic so that we can verify real concerns.”

Yoan Dupret, managing director of Menta, agrees. “Verification of the RTL application is the same as their standard flow for RTL verification. When they want to verify the application running on the eFPGA, they can run their preferred formal verification tool on the eFPGA with its bitstream configured and compare it against the original RTL application. These application tests do not formally ensure that any application would work, but they are a mandatory step, and we can lower the risk as much as possible by increasing the number of tests.”

Fig. 1 – Removing the need to verify the application on the core. Source: Menta.

The same applies when using high-level synthesis (HLS). If the output of the tool has been formally verified to be equivalent to the input, it can be used for most verification.

SystemC is a language often used to drive HLS, and it has other benefits when it comes to system verification. “SystemC is specifically suited for these requirements because it is capable of being close enough to hardware for specific parts that need bit-true modeling, or may even account for analog transmission losses or impedance mismatch with the AMS extension,” explains Fraunhofer’s Jancke. “At the same time, it is abstract enough to cover bus communication between processors, memories and interfaces using the TLM (transaction-level modeling) abstraction. In addition, SystemC can seamlessly integrate firmware that has been written in C or C++.”
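As a hedged illustration of that combination, the sketch below expresses a fragment of “firmware” as plain C++ running in a SystemC thread and driving blocking TLM-2.0 transactions into a small memory-mapped target. It is deliberately simplified far beyond any production model, and the module and register names are invented.

```cpp
// Minimal SystemC/TLM-2.0 sketch: C++ "firmware" and a hardware target model
// in one simulation. Requires a standard SystemC installation to build.
#include <cstdint>
#include <cstring>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

using namespace sc_core;

// A tiny memory-mapped target standing in for a peripheral's register block.
struct MemTarget : sc_module {
    tlm_utils::simple_target_socket<MemTarget> socket;
    unsigned char mem[256] = {0};

    SC_CTOR(MemTarget) : socket("socket") {
        socket.register_b_transport(this, &MemTarget::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_time& delay) {
        uint64_t addr = trans.get_address();
        if (trans.is_write())
            std::memcpy(&mem[addr], trans.get_data_ptr(), trans.get_data_length());
        else
            std::memcpy(trans.get_data_ptr(), &mem[addr], trans.get_data_length());
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

// "Firmware" written directly in C++ and run as a SystemC thread.
struct Firmware : sc_module {
    tlm_utils::simple_initiator_socket<Firmware> socket;

    SC_CTOR(Firmware) : socket("socket") { SC_THREAD(run); }

    void write32(uint64_t addr, uint32_t value) {
        tlm::tlm_generic_payload trans;
        sc_time delay = SC_ZERO_TIME;
        trans.set_command(tlm::TLM_WRITE_COMMAND);
        trans.set_address(addr);
        trans.set_data_ptr(reinterpret_cast<unsigned char*>(&value));
        trans.set_data_length(4);
        trans.set_streaming_width(4);
        socket->b_transport(trans, delay);
        wait(delay);
    }

    void run() {
        write32(0x10, 0x1);   // e.g. enable a hypothetical codec block
    }
};

int sc_main(int, char*[]) {
    Firmware fw("fw");
    MemTarget mem("mem");
    fw.socket.bind(mem.socket);
    sc_start();
    return 0;
}
```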

However, there is one aspect of a codec or eFPGA that does require careful verification. “Its content can be changed at any arbitrary point, and from a functional point of view that will be a mode change,” warns Baines. “You have to worry about timing – about when you reprogram it. It could have a transition period, and you have to ensure things switch over nicely.”

Done wrong, this can create problems. “There will be some mechanism to change the behavior of those algorithms according to some configurations, or even to swap in and out different pieces of firmware that run on those codecs,” says Allan. “Someone has to verify the breadth of the configuration space and the mechanism by which configuration is changed. PSS can have a role to play here.”

You might be inclined to think that neural networks would fall into this category, as well. “FPGAs are well behaved, well understood, and the programming and development paradigm is nice,” says Baines. “But when you get into CNN, deep learning, weights, all bets are off. Nobody quite understands how they behave, how to debug them, how to model them.”

Structural and integration verification
When multiple blocks are integrated together, the notions of verification change. “Somewhere in between block verification and full SoC verification, you need to ensure the SoC is connected correctly and that basic functionality spread between blocks operates correctly—essentially making sure the system infrastructure behaves as expected,” says Dave Kelf, CMO for Breker Verification Systems. “Examples include cache coherency, power domain switching, and processor interaction with dedicated blocks.”

The verification focus changes with that. “You are less interested in the execution cycles of the codec and more interested to see if the codec is blasting data onto this bus, while you are doing other things,” says Melling. “Is my subsystem going to continue working? It is a vertical reuse problem, where at this stage of your integration you are less interested in consuming a bunch of simulation cycles to execute software on a processor, and more interested in executing traffic that represents those behaviors.”

This means that the processor may be dropped from the testbench and the software run through host code execution instead.
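A hedged sketch of how host code execution is often arranged: the same driver source compiles either for the target, where register accesses hit real memory-mapped I/O, or natively on the workstation, where the same calls land in a register model (or, in a real flow, cross a DPI/TLM bridge into the simulator). The addresses, the TARGET_BUILD switch and the map-based model are all illustrative.

```cpp
// Shared driver code, two builds: target build hits real registers, host
// build routes the same calls into a simple model.
#include <cstdint>
#include <cstdio>
#include <map>

#ifdef TARGET_BUILD
// On silicon or an emulator, accesses go to real memory-mapped registers.
static inline void reg_write(uintptr_t a, uint32_t v) { *(volatile uint32_t*)a = v; }
static inline uint32_t reg_read(uintptr_t a)          { return *(volatile uint32_t*)a; }
#else
// On the host, the same calls land in a register model instead.
static std::map<uintptr_t, uint32_t> model;
static inline void reg_write(uintptr_t a, uint32_t v) { model[a] = v; }
static inline uint32_t reg_read(uintptr_t a)          { return model[a]; }
#endif

// Unmodified driver code, shared between host and target builds.
static void codec_start(uintptr_t base) {
    reg_write(base + 0x0, 0x1);
    std::printf("ctrl readback: 0x%x\n", (unsigned)reg_read(base + 0x0));
}

int main() { codec_start(0x40000000); return 0; }
```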

“Once you get into that level of abstraction, the reuse premise works,” continues Melling. “That functionality maintains itself as you move up through the levels of integration. It only breaks when you shift the verification goal.”

Application verification
Integration continues until the entire SoC is assembled. Most devices will have one or more applications processors, which is where the end-user software is likely to be loaded. Now the verification strategy changes because the SoC developer may not have access to the real application software.

This may mean that they have to guess what the software may look like. “Apple and Samsung can verify their cell phone application processors without ever having to run ‘Angry Birds’ on it,” says Klein. “This happy circumstance comes about from the fact that application software is isolated from the underlying hardware by operating systems and drivers.”

But as mentioned above, cell phone modem chips have been dropped from products because of thermal issues. “Today there is a growing push for power analysis using real workloads,” says Preeti Gupta, director of product management at ANSYS. “It appears that the modem vendor did a fine job looking at contrived workloads with some real use cases, but maybe the one phone vendor put the chip in a particular traffic pattern and that created a thermal issue, whereas the other phone did not expose that. From a technology standpoint, the benefits of doing real use-case scenarios are obvious. They expose real data traffic.”

Others agree. “Whenever there is an interaction between hardware and software, you really need to understand how they interact in real silicon, and you need to have monitoring systems built in to do that,” says Baines. “It must be able to look in two dimensions. First, it needs to be able to look across the chip at all of the different parts, because the subsystems could be interacting. Then it needs to be able to look from top to bottom and to have one vision that understands everything from the hardware transactions and latency, delays, bandwidth, throughput, right up to what is happening in the application layer software. When the application that is running on the chip is updated, how much does it affect things? And how will I monitor that and test that it is working correctly?”

The debate between real and contrived use cases is ongoing. “There is still a need for dedicated traffic patterns,” says Frank Schirrmeister, senior group director for product management and marketing at Cadence. “You can just execute the software as is, but will that represent any of the corner cases that you are interested in? Probably not. It may provide the typical case, but not the corner cases. You try to capture atypical traffic, as well. But you don’t always have to execute the actual software, which also may not be reproducible. Getting a system back into the same state is not always easy. As an alternative you can capture the characteristics of traffic patterns and reapply them for verification.”
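One way to picture the capture-and-reapply approach Schirrmeister mentions, as a hedged sketch: summarize the traffic observed while real software ran, here reduced to a single mean inter-arrival time, and regenerate synthetic stimulus with the same statistics. The numbers and the choice of statistic are purely illustrative.

```cpp
// Characterize observed traffic, then regenerate stimulus with the same
// statistics rather than replaying a hard-to-reproduce software run.
#include <cstdio>
#include <random>
#include <vector>

int main() {
    // Inter-arrival gaps (ns) observed while real software ran: stand-in data.
    std::vector<double> observed_gaps = {10, 12, 250, 9, 11, 240, 10, 13};

    double mean = 0;
    for (double g : observed_gaps) mean += g;
    mean /= observed_gaps.size();

    // Regenerate synthetic traffic with the same mean inter-arrival time.
    std::mt19937 rng(42);
    std::exponential_distribution<double> gap(1.0 / mean);
    double t = 0;
    for (int i = 0; i < 5; ++i) {
        t += gap(rng);
        std::printf("synthetic transaction at %.1f ns\n", t);
    }
    return 0;
}
```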

Even with the real software, scenarios have to be generated. “The challenges are how do you generate real use-case scenarios and how many do you generate?” asks Gupta. “When have you done enough? They tend to take a long time to generate, they eat up a lot of disk space and what about the downstream tools that will consume them?”

It requires planning, too. “Verifying with real software means that you have real-world project planning happening, and a team may be reluctant to develop software that is highly flexible and morphable,” says Allan. “They want to get their job done and deliver a project. If you introduce the element of configurable or programmable designs into the equation, either for this release of the product or for a future family, then you have to think ahead. PSS helps because it can think beyond the serialization of one implementation of software and we can think in parallel. It allows us to think of all of the things that the software could do to the hardware and enumerate those using the PSS methodology. Then you let the tool go wild creating stimulus, finding the corners, cross covering that with other aspects and other interfaces.”

PSS also may lend itself to an intermediate solution. “One example of this is the Hardware Software Interface (HSI) that provides a range of system services to help with early SoC platform validation,” says Kelf. “Using HSI to drive memory allocation in a similar fashion to the final operating system allows tests to be written that exercise the codec reading and writing into a processor subsystem without the entire operating system. Once this function is tested, together with other aspects of the infrastructure, a level of confidence is achieved that the SoC will operate correctly with the full system software. At this level, a range of scenarios may be validated effectively.”
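A hedged sketch of the kind of service Kelf describes: a thin allocation layer stands in for what the operating system would eventually provide, so a codec test can move data through buffers without booting an OS. All names (hsi_alloc, codec_dma_in, codec_dma_out) are hypothetical and not the actual HSI API.

```cpp
// Minimal OS-less test scaffold: a bump allocator plays the role the final
// operating system's memory services would later take over.
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <cstring>

static uint8_t heap[4096];
static size_t  heap_top = 0;

// Hypothetical allocation service the final OS would later provide.
static void* hsi_alloc(size_t bytes) {
    void* p = &heap[heap_top];
    heap_top += bytes;
    return p;
}

// Stand-ins for driving the codec's DMA engines.
static void codec_dma_in(const void* src, size_t n)  { (void)src; std::printf("codec reads %zu bytes\n", n); }
static void codec_dma_out(void* dst, size_t n)       { std::memset(dst, 0xAB, n); std::printf("codec writes %zu bytes\n", n); }

int main() {
    uint8_t* in  = static_cast<uint8_t*>(hsi_alloc(256));  // source frame
    uint8_t* out = static_cast<uint8_t*>(hsi_alloc(256));  // decoded output
    std::memset(in, 0x55, 256);
    codec_dma_in(in, 256);
    codec_dma_out(out, 256);
    std::printf("first output byte: 0x%02x\n", out[0]);
    return 0;
}
```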

Conclusion
The creation of larger and more complex systems is bringing a new focus to system-level verification and that is intimately entwined with software. “Can we verify a system without software?” asks Allan. “Absolutely, and in fact we must. In some cases, we must remove software from the equation because we are not verifying the software, we are verifying something else. In other cases, we must have the software because we are verifying the software or the integration between a firmware API and the hardware. It is more a question of MUST rather than CAN.”


