Hardware Models For Software

Where the challenges are in early software development.


Shift left, while a relatively new term, has become important in all parts of the SoC design flow, but its impacts are wide-ranging and many are still ill-defined. It basically means that tasks have to be started earlier than in the past, because more accuracy is required from tasks further down in the flow in order to make better predictions. It also implies that more steps are performed concurrently.

By far the biggest impact of shift left is the ability to get software involved earlier than in the past. Previously, the industry came up with solutions that included emulation and FPGA prototyping. These have served a very useful purpose, and continue to do so today, but a newer one could have the biggest impact. That is the development of the virtual prototype. This enables software to be run against an early model of the hardware and at the stage in design where software has the ability to impact the hardware in meaningful ways.

But there are challenges, too, some of which the industry has been working to overcome and others that create concern. In an age when productivity is everything, getting software involved early is paramount.

“All three technologies—virtual prototypes, emulation, and FPGA prototypes—claim to enable early software development and integration,” says Simon Davidmann, chief executive officer for Imperas. “Virtual prototypes, or virtual platforms, are the most useful in parallelizing hardware and software development because they are available at the earliest point in the development flow, and also are the fastest for pure software testing.”

It is important that expectations are set for the abstraction of the model that is to be used. “You can use traffic from an emulator to try and get an estimate for performance or power,” says Drew Wingard, chief technology officer at Sonics, “but by the time you are running on an emulator you are pretty far down the path of design. You have an architecture at that point and many decisions have been made.”

But the usage of emulation and FPGA prototypes remains important. “We do not have a choice when it comes to looking at software running real-world scenarios,” points out Krishna Balachandran, product management director at Cadence. “Without the implementation details being modeled, power estimation or performance estimation at the system level, for example, is likely to be inaccurate. Considering some of those effects up front is also important, so it cannot be an either/or. You have to do both.”

The industry is still coming to grips with new methodologies and flows that are enabled by the application of new abstractions. “Abstraction allows aspects of the design to be described in an executable form much earlier in the flow,” adds Tom DeSchutter, senior manager for product marketing at Synopsys. “This allows new kinds of analysis to be performed on it, such as performance evaluation, power analysis and much earlier development and integration of software. The abstraction comes from using transaction-level modeling and languages such as SystemC. Execution times are often orders of magnitude faster than RTL simulation and significantly faster and cheaper than emulation. What it lacks in fidelity is, most of the time, not important.”
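To make that abstraction concrete, here is a minimal sketch of a loosely timed transaction-level model in SystemC/TLM-2.0, in which a bus access is a single blocking function call carrying an approximate delay rather than cycles of signal activity. The module names, address map, and timing values are illustrative assumptions, not taken from any particular flow.

```cpp
// A minimal loosely timed TLM-2.0 sketch: one initiator, one memory target.
#include <cstdint>
#include <cstring>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

// A trivially simple memory target: a read or write is one function call.
struct SimpleMemory : sc_core::sc_module {
    tlm_utils::simple_target_socket<SimpleMemory> socket;
    unsigned char mem[256] = {};

    SC_CTOR(SimpleMemory) : socket("socket") {
        socket.register_b_transport(this, &SimpleMemory::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        uint64_t addr = trans.get_address();
        if (trans.is_read())
            std::memcpy(trans.get_data_ptr(), &mem[addr], trans.get_data_length());
        else
            std::memcpy(&mem[addr], trans.get_data_ptr(), trans.get_data_length());
        delay += sc_core::sc_time(10, sc_core::SC_NS);  // approximate timing, not cycles
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

// A stand-in for software issuing a store through the bus.
struct SwLoad : sc_core::sc_module {
    tlm_utils::simple_initiator_socket<SwLoad> socket;

    SC_CTOR(SwLoad) : socket("socket") { SC_THREAD(run); }

    void run() {
        tlm::tlm_generic_payload trans;
        sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
        uint32_t data = 0xCAFE;
        trans.set_command(tlm::TLM_WRITE_COMMAND);
        trans.set_address(0x10);
        trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
        trans.set_data_length(4);
        trans.set_streaming_width(4);
        socket->b_transport(trans, delay);  // one call instead of cycles of bus activity
        wait(delay);
    }
};

int sc_main(int, char*[]) {
    SwLoad sw("sw");
    SimpleMemory mem("mem");
    sw.socket.bind(mem.socket);
    sc_core::sc_start();
    return 0;
}
```

Because each access is one function call instead of per-cycle signal evaluation, models of this kind can run orders of magnitude faster than RTL simulation, which is what makes early software execution practical.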

Virtual prototypes are highly valuable in the system development process. “From initial IP configuration to pre-silicon software development, they provide a crucial mixture of availability and visibility,” says Bill Neifert, director of models technology in the Development Solutions Group of ARM. “In order to enable this, you need the right abstraction level for your models. Cycle-accurate models compiled directly from the implementation RTL enable system optimization and firmware development, whereas programmer’s view models enable software development to start well in advance of silicon. The key is having the technology to enable a seamless migration from one abstraction to the other, and even mixing abstractions in certain cases.”

And if you thought that only one abstraction can be used at a time, think again. “We see quite a bit of interest in mixing emulation with virtual prototypes,” says Jon McDonald, technical marketing engineer for the design and creation business at Mentor Graphics. “It is a continuum from using software stub code running on a native host with no model of the hardware, to having a virtual platform of the hardware that is a more accurate architectural representation of the hardware, to having the actual RTL that you are running in an emulator or FPGA prototype. They want an architectural model that represents the processor sub-system and then they want to be able to tie in their unique content, be it at the transaction level with an abstract model or at the RT level. Most customers look at it and realize that at different times in the design process they will use both the high-level model running at the transaction level and an emulation or hybrid mix to really verify that the software is doing what they expect and performing the way they expect it to.”

In a time when we are seeing software-driven techniques being considered for hardware verification, we have to be careful not to assume that this is the one and only right solution for all tasks. “It all comes down to choosing the right workloads, and users are often better advised to use abstracted, statistical representations of the workloads because the real workloads may take too long to run,” says Frank Schirrmeister, group director for product marketing of the System Development Suite at Cadence. “For power and performance analysis in the context of software, emulation offers the ability to take real software workloads and derive the activity information that can be fed into RTL power analysis tools. Identifying hotspots is a key issue, and users are using less accurate, more abstract models to identify the points of interest, then re-run more detailed analysis within those.”
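As a rough illustration of the abstracted, statistical workloads Schirrmeister mentions, the sketch below replaces real software with a traffic generator that issues randomized reads and writes through a TLM-2.0 socket (it would be bound to a target such as the memory model sketched earlier). The inter-arrival time, read/write mix, and address range are made-up parameters; in practice they would be calibrated against measured or estimated workload behavior.

```cpp
// A hypothetical statistical traffic generator standing in for a real workload.
#include <cstdint>
#include <random>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>

struct TrafficGen : sc_core::sc_module {
    tlm_utils::simple_initiator_socket<TrafficGen> socket;
    std::mt19937 rng{42};
    std::exponential_distribution<double> gap_ns{1.0 / 50.0};  // mean 50 ns between accesses
    std::bernoulli_distribution is_read{0.7};                  // 70% reads, 30% writes

    SC_CTOR(TrafficGen) : socket("socket") { SC_THREAD(run); }

    void run() {
        for (int i = 0; i < 1000; ++i) {
            wait(sc_core::sc_time(gap_ns(rng), sc_core::SC_NS));
            tlm::tlm_generic_payload trans;
            sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
            uint32_t data = 0;
            trans.set_command(is_read(rng) ? tlm::TLM_READ_COMMAND
                                           : tlm::TLM_WRITE_COMMAND);
            trans.set_address((rng() % 64) * 4);  // uniform over a small address window
            trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
            trans.set_data_length(4);
            trans.set_streaming_width(4);
            socket->b_transport(trans, delay);
            wait(delay);
        }
    }
};
```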

This has provided much of the impetus for hybrid emulation. “Hybrid emulation enables both software and hardware teams to work on the latest version of the project without actually having the whole SoC finished—or even in quite early stages of the project,” said Zibi Zalewski, general manager of the Hardware Division at Aldec. “While some portion of the design is still virtual and not implementable in an emulator (a high-level model in a virtual platform), combining it with the available part (RTL code in the emulator) provides the whole SoC for early and synchronized testing by both teams.”

He added that while this was possible in the past, it typically relied on one team waiting for the other to finish. “Separation was causing module-level testing, instead of the SoC-level testing that hybrid emulation enables. To actually do SoC-level testing in the emulator you need to have the whole design ready and hardware-implementable. To me this is the main benefit of going hybrid.”

Impact of software development
In the past, software teams had a few ways to perform early development. They would often create stub code that would mimic some aspect of the hardware, they could use a previous generation of product with a hardware mockup of the new capability, or they would wait for the new hardware to become available. None of these solutions is ideal.

Stub code is fast, but it does have limitations. “One of the problems with stub code is that software developers create it for themselves and it is decoupled from what is really being done in the hardware,” explains McDonald. “While the stub code can be useful in helping the software developers ensure that their software is internally consistent, it doesn’t do anything to verify the software’s accuracy against the actual hardware, and it doesn’t do anything in terms of performance characterization. These are significant limitations.”
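A small sketch makes the limitation concrete. The register names, addresses, and behavior below are hypothetical; the point is that the stub encodes the software team's own assumptions about the hardware and carries no timing, so it can only confirm that the software is internally consistent.

```cpp
// Hypothetical device driver code built two ways: against a host-side stub,
// or against real memory-mapped registers. Addresses and bit fields are invented.
#include <cstdint>
#include <cstdio>

#ifdef USE_STUB
// Host-side stub: returns canned values and never touches real registers,
// so it reflects the software team's assumptions, not the hardware.
uint32_t read_status_reg() { return 0x1; }             // always reports "ready"
void     write_ctrl_reg(uint32_t) { /* no effect */ }
#else
// On-target build: real memory-mapped register accesses (addresses illustrative).
static volatile uint32_t* const STATUS_REG = reinterpret_cast<volatile uint32_t*>(0x40000000u);
static volatile uint32_t* const CTRL_REG   = reinterpret_cast<volatile uint32_t*>(0x40000004u);
uint32_t read_status_reg() { return *STATUS_REG; }
void     write_ctrl_reg(uint32_t v) { *CTRL_REG = v; }
#endif

// Driver logic under development: identical in both builds.
bool start_device() {
    write_ctrl_reg(0x1);                     // set the enable bit
    return (read_status_reg() & 0x1) != 0;   // check the ready bit
}

int main() {
    std::printf("device %s\n", start_device() ? "started" : "failed to start");
    return 0;
}
```

If the real status register behaves differently than the stub assumes, for example if the device is not immediately ready, the software will pass against the stub and still fail on the actual hardware, and nothing about its performance will have been characterized.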

Few companies can afford to wait until first silicon before getting started on software bring-up and yet it remains an important vehicle. “Today, we still develop most software on actual hardware once it is available,” says Schirrmeister. “Real hardware executes at the intended speed, which is great, but debug is not so great because it is hard to look into the hardware. Most importantly, this type of software development becomes available at the latest possible point in time when changes in the hardware require a redesign. Now, it is a race to bring out early representations of the hardware on which software can be verified earlier.” And the implications also go back in the other direction. “Seeing the OS boot on the RTL has become a requirement for the tape out to be done. Continuous integration is the name of the game.”

Time to market is increasingly critical for many products, and companies try to pull everything earlier. First it was at the RT level using emulation and FPGA prototyping, but the drive for earlier availability continues. “We have seen times when architectural models are created before the architecture is finalized,” says McDonald. “These models get refined as the architecture is refined.” McDonald provides an example within Altera, where a virtual prototype was created for some of its FPGA parts. “An early model of the Stratix 10 was created before the architecture was finalized. The purpose was to allow software developers to start developing the software even though the architecture was not fully defined.”

But this is not the norm today. There are other situations where hardware and software are more pipelined in their development processes. “Here the software guys are not ready to start their development early because they are still working on the previous generation,” says McDonald. “They do not have the same kind of pressure to have the software team start so early, and they will wait until the hardware architecture has been locked down. The verification flows then keep the architecture and the implementation in sync.”

Who creates the models?
For a considerable time, virtual prototypes were held back by the lack of models. While models existed for the processor and were created for new blocks, the lack of models for re-used blocks prevented a complete prototype from being put together. Today, most of those issues are behind us, but prototype assembly still needs to be performed.

“We see software teams doing the model building for virtual platforms,” observes Davidmann. “This is due to their vested interest in the virtual platforms for their software engineering tasks, and also because hardware engineers tend to add hardware details to the models that have no impact on the software and yet slow down the performance of the virtual platform.”

But this is not the case with all teams and varies with different parts of the prototype. “For most embedded systems the developers are not creating the processor sub-system themselves,” says McDonald. “These models are being provided by the third-party IP vendors. But they cannot provide the models for the unique hardware in each design. For the unique hardware content we are seeing the hardware team stepping up and taking the responsibility for creating the models for software.”
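What the hardware team typically provides for that unique content is a programmer's-view model: it captures the register map and functional behavior that software can observe, without the implementation detail. The sketch below, with a hypothetical register map, shows the general shape of such a model in SystemC/TLM-2.0; it would be bound into a platform alongside processor and memory models.

```cpp
// A hypothetical "DMA-lite" block modeled at the programmer's view level:
// only the register map and observable behavior, no implementation detail.
#include <cstdint>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>

struct DmaLitePV : sc_core::sc_module {
    enum : uint64_t { REG_CTRL = 0x0, REG_STATUS = 0x4 };  // invented register map
    tlm_utils::simple_target_socket<DmaLitePV> socket;
    uint32_t status = 0;

    SC_CTOR(DmaLitePV) : socket("socket") {
        socket.register_b_transport(this, &DmaLitePV::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time&) {
        uint32_t* data = reinterpret_cast<uint32_t*>(trans.get_data_ptr());
        switch (trans.get_address()) {
        case REG_CTRL:
            // Writing the start bit completes the "transfer" instantly:
            // good enough for driver development, useless for cycle counting.
            if (trans.is_write() && (*data & 0x1))
                status |= 0x1;                              // set the done bit
            break;
        case REG_STATUS:
            if (trans.is_read())
                *data = status;
            break;
        }
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};
```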

Pre-built virtual prototypes are a great way to accelerate the availability of virtual prototype solutions. “Instead of requiring the end user to develop a virtual system from scratch, models such as ARM’s CPAKs or FVPs have a known good configuration and software to enable immediate productivity,” says Neifert. “We’ve found that this approach enables users to spend far less time developing virtual prototypes and more time actually getting value from them.”

It all comes down to the pressure on the stakeholders. “It is not expected that the creation of a virtual prototype will lead to faster hardware development,” says Randy Smith, vice president of marketing for Sonics. “The software development teams are in favor of it because the sooner they can run on something more closely matching the actual hardware, the more comfortable they feel. It helps because it keeps the software guys off your back about your SoC schedule.”

Another reason the hardware team has for creating the virtual prototype is the minimization of overdesign. “The virtual prototype will not eliminate overdesign,” points out DeSchutter, “but it may enable the team to have higher confidence in the worst case scenarios with minimum overdesign and hence a more cost effective product.”

Keeping everything synchronized
One of the biggest issues with providing prototypes to the software team comes down to synchronization. This exists at two levels. The first is physically synchronizing the prototype. The second is dealing with differences between various abstractions of a prototype. “We have seen logistical issues, which are driving virtual platform creation,” says McDonald. “How many FPGA prototypes can be made, how quickly can they be changed and updated, how do you distribute them to distributed development teams and keep them in sync? When you start talking about physical prototypes and distributing them around the world, it can be very difficult to keep the software developers current with what is going on in the hardware.”

With a virtual platform, it is much easier to ensure that software engineers will always be running on the latest model of the architecture. Now they face the problem that the different models may not offer the same functionality to their software. “It is important to bear in mind that the different engines are used to answer different questions,” points out Schirrmeister. “So the switch from engine to engine actually does not necessarily involve the same people, and it is not always a situation in which the same images are run on the next engine unless a different question is asked.”

But confidence in the models may remain a question. “That is a big question and is one reason why companies are trying to put in place verification flows that allow them to identify early on where their model representations diverge,” explains McDonald. “One thing that can be done is to verify the transaction-level model and architecture that the software developers are using to develop their code against the implementation models. This is done by linking the transaction models into a UVM environment, and this helps the software engineer gain confidence in the model they are running as being an accurate representation of what the hardware will be. They are different levels of abstraction and so there will be points at which they diverge, but the important thing is that you need to be able to identify the divergence early so that software developers can identify if that is a significant difference that needs to be accounted for, or if it will not affect the end system performance or results.”
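The flow McDonald describes links the transaction-level models into a UVM environment; the plain C++ sketch below only illustrates the underlying idea, which is to run the same stimulus through the abstract model and the implementation (or responses recorded from RTL simulation), compare at the transaction boundary, and report the first divergence so it can be judged significant or not. The transaction fields and example data are assumptions for illustration.

```cpp
// Hypothetical transaction record and a comparison of two captured streams:
// one from the transaction-level model, one derived from the implementation.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

struct Txn { uint64_t addr; uint32_t data; bool is_read; };

// Report the first transaction where the two streams diverge.
bool compare_streams(const std::vector<Txn>& tlm_side, const std::vector<Txn>& impl_side) {
    size_t n = std::min(tlm_side.size(), impl_side.size());
    for (size_t i = 0; i < n; ++i) {
        const Txn& a = tlm_side[i];
        const Txn& b = impl_side[i];
        if (a.addr != b.addr || a.data != b.data || a.is_read != b.is_read) {
            std::printf("divergence at transaction %zu: addr 0x%llx vs 0x%llx\n",
                        i, (unsigned long long)a.addr, (unsigned long long)b.addr);
            return false;
        }
    }
    return tlm_side.size() == impl_side.size();
}

int main() {
    // Example data only: the second transaction differs in address.
    std::vector<Txn> tlm_side  = {{0x10, 0xCAFE, false}, {0x10, 0xCAFE, true}};
    std::vector<Txn> impl_side = {{0x10, 0xCAFE, false}, {0x14, 0xCAFE, true}};
    std::printf("streams %s\n", compare_streams(tlm_side, impl_side) ? "match" : "diverge");
    return 0;
}
```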



Comments

garydpdx says:

One of the advantages of virtual prototypes is not just early access, as Simon Davidmann pointed out, but also that you can distribute as many of them to developers as needed (restricted only by license numbers, if any). The only limit is the speed of simulating the VP on a laptop or desktop computer, but at Space Codesign we have found that C/C++ models, even with loose or approximate timing, can run quite fast (we have videos up on our YouTube channel) for refining your code with a minimum of frustration.
