Blurring The Lines On Prototyping

More granular approaches are emerging for when to use physical prototyping, virtual prototyping, and frequently both.


Prototyping is an integral part of every SoC design today, with two main approaches in use: virtual, or software-based, prototyping, and physical prototyping, which includes FPGA-based boards as well as hardware emulation systems.

Virtual prototyping is typically used for software development in the early stages of SoC design, even before SoC RTL is available. This is achieved by integrating models, which can be either functional or cycle-accurate.
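
To make that concrete, the sketch below shows, in SystemC/TLM-2.0 (the modeling style many virtual platforms are built on), a purely functional, loosely timed peripheral model driven by a stand-in CPU initiator. The register map, module names and 10ns timing annotation are illustrative assumptions, not any particular vendor's platform.

```cpp
// Minimal sketch of a loosely timed virtual-prototype peripheral in SystemC/TLM-2.0.
// The register map (CTRL at 0x0, STATUS at 0x4), module names and 10ns timing
// annotation are illustrative assumptions, not any vendor's platform.
#include <cstdint>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

struct TimerModel : sc_core::sc_module {
    tlm_utils::simple_target_socket<TimerModel> socket;
    uint32_t ctrl = 0, status = 0;   // functional state only; no clocks, no pins

    SC_CTOR(TimerModel) : socket("socket") {
        socket.register_b_transport(this, &TimerModel::b_transport);
    }

    // Blocking transport: decode the address, service the access immediately,
    // and add only an approximate timing annotation.
    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        uint32_t* data = reinterpret_cast<uint32_t*>(trans.get_data_ptr());
        uint32_t* reg = (trans.get_address() == 0x0) ? &ctrl :
                        (trans.get_address() == 0x4) ? &status : nullptr;
        if (!reg) { trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE); return; }
        if (trans.is_write()) *reg = *data; else *data = *reg;
        delay += sc_core::sc_time(10, sc_core::SC_NS);   // loosely timed estimate
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

// Stand-in for an instruction-set simulator driving the bus.
struct CpuStub : sc_core::sc_module {
    tlm_utils::simple_initiator_socket<CpuStub> socket;
    SC_CTOR(CpuStub) : socket("socket") { SC_THREAD(run); }

    void run() {
        uint32_t value = 1;
        tlm::tlm_generic_payload trans;
        sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
        trans.set_command(tlm::TLM_WRITE_COMMAND);
        trans.set_address(0x0);                       // enable the timer via CTRL
        trans.set_data_ptr(reinterpret_cast<unsigned char*>(&value));
        trans.set_data_length(4);
        trans.set_streaming_width(4);
        socket->b_transport(trans, delay);
        wait(delay);                                  // consume the annotated time
    }
};

int sc_main(int, char*[]) {
    CpuStub cpu("cpu");
    TimerModel timer("timer");
    cpu.socket.bind(timer.socket);
    sc_core::sc_start();
    return 0;
}
```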

A major advantage of virtual prototyping is that multiple instances of a virtual platform can be made available to software teams in a short time and at very low cost. But not everything is perfect yet.

“One of the disadvantages we see with current virtual prototypes is the lack of support for peripheral interfaces,” said Sachin Ghadi, prototyping team lead at Open-Silicon. “There are semi-hosting options that utilize the interfaces of a PC, such as USB, but the options are limited, and at best they only emulate the actual interface.”
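
To illustrate what semi-hosting looks like in practice, here is a minimal sketch of a virtual UART whose transmit register is backed by the host PC's console rather than a real serial port. The register offset and module name are hypothetical, and the module would be bound to an initiator the same way as the timer in the earlier sketch.

```cpp
// Sketch of the semi-hosting idea: a virtual UART whose transmit register is
// backed by the host PC's console instead of a real serial port. The 0x0 TX
// register offset and module name are hypothetical.
#include <cstdio>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>

struct HostBackedUart : sc_core::sc_module {
    tlm_utils::simple_target_socket<HostBackedUart> socket;

    SC_CTOR(HostBackedUart) : socket("socket") {
        socket.register_b_transport(this, &HostBackedUart::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        if (trans.is_write() && trans.get_address() == 0x0) {
            std::putchar(*trans.get_data_ptr());   // forward the byte to the host console
            std::fflush(stdout);
        }
        delay += sc_core::sc_time(100, sc_core::SC_NS);
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};
```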

On the other side, hardware emulation allows SoC RTL that is still being developed to be brought into emulation very quickly, providing debug visibility similar to an RTL simulator while executing faster and enabling hardware/software co-development. With appropriate speed bridges, real devices can be connected to the design for testing system-level use cases. But the biggest hurdle is the cost of deployment. “FPGA prototyping has the advantage of cost and speed when compared to hardware emulation. Debugging is challenging, and so is logic fitment, which requires design partitioning. But the newer FPGA boards offer much more logic real estate, multi-board connectivity options, and better debugging features, which help circumvent these issues,” Ghadi explained.

Now, there are even hybrid approaches where the virtual and physical worlds can be combined to get the best of both.

For example, there is a virtual-plus-FPGA hybrid, where the CPU subsystem is in virtual form and the rest of the SoC is on FPGAs, which interface with actual devices. There is also a form of hybrid prototyping in which the RTL simulator is used along with the hardware prototype. Reasons for using hybrid approaches include the non-availability of an IP block or subsystem, faster execution of certain parts of the SoC, the need for more debug visibility, or the need to interface with real-world signals.
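
A hedged sketch of the virtual-plus-FPGA idea: the virtual side sees an ordinary TLM target, while the implementation forwards every bus transaction to RTL running on the FPGA board. The fpga_link_rpc transport function below is a hypothetical placeholder; real flows use a vendor transactor or co-emulation interface over PCIe or Ethernet.

```cpp
// Hedged sketch of the virtual-plus-FPGA hybrid: the virtual side sees an ordinary
// TLM target, but every bus transaction is forwarded to RTL running on an FPGA board.
// fpga_link_rpc is a hypothetical placeholder for a vendor transactor interface.
#include <cstdint>
#include <cstdio>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>

// Placeholder link layer: marshal {write, addr, data} to the board, return read data.
uint32_t fpga_link_rpc(bool is_write, uint64_t addr, uint32_t wdata) {
    std::printf("%s addr=0x%llx data=0x%x\n",
                is_write ? "WR" : "RD", (unsigned long long)addr, wdata);
    return 0;   // a real bridge would return the data read back from the FPGA
}

struct FpgaBridge : sc_core::sc_module {
    tlm_utils::simple_target_socket<FpgaBridge> socket;

    SC_CTOR(FpgaBridge) : socket("socket") {
        socket.register_b_transport(this, &FpgaBridge::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        uint32_t* data = reinterpret_cast<uint32_t*>(trans.get_data_ptr());
        uint32_t rdata = fpga_link_rpc(trans.is_write(), trans.get_address(),
                                       trans.is_write() ? *data : 0u);
        if (trans.is_read()) *data = rdata;
        delay += sc_core::sc_time(1, sc_core::SC_US);  // link latency dominates; roughly modeled
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};
```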

Ghadi noted that while virtual prototyping is the ‘younger kid,’ it already has claimed its place in the SoC design cycle, where it is used primarily for architecture analysis and early software development. “From a software development perspective, it can be seen as complementary to FPGA prototyping or hardware emulation, where firmware and parts of the software stack can be developed on the virtual platform well before a physical prototype is available. Once the physical prototype is ready, the software can be matured pretty quickly.”

Jon McDonald, technical marketing engineer for the design and creation business at Mentor Graphics, believes the number of prototype levels is driven by the relationship between the complexity of creating a prototype and the runtime execution performance it delivers.

“If we take the two endpoints, one being the final product, which is very high complexity and very high effort to create, and the other being the most abstract executable description possible, which is minimum critical function, minimal complexity and minimal effort to create, then we can start to fill in the prototype models we see today and see whether they cover all the design points,” he said.

McDonald sees the following, based on a recent blog he wrote:

[Figure: prototype levels mapped by complexity of creation versus runtime execution performance. Source: Mentor Graphics]

He noted that the levels of prototypes available cover the spectrum reasonably completely, from the very abstract to the very detailed. “At this point there is a complete set of prototype abstraction levels. I don’t see any drive for additional levels of abstraction. The one thing that would drive new levels of prototypes would be a new way of covering one of the levels with better performance or reduced complexity, such as we see with emulation and TLM AT, which are similar in performance but differ significantly in the complexity of creation.”

As for mixing levels of abstraction, McDonald added that most users employ more than one level and generally mix models from the different levels. Depending on the design point and the models available, it is fairly common to have models at different levels in a single virtual prototype.

Taking a slightly different tack, Mick Posner, director of product marketing for FPGA-based prototyping solutions at Synopsys, sees not two but three main prototyping segments—virtual prototyping for architecture design and analysis rather than static spreadsheets, as well as both virtual and physical (FPGA-based) prototyping for software development and system validation.

“Each has its usage model sweet spots and can be mixed with other prototyping technologies to extend the platform’s capabilities,” Posner said. “In a traditional flow, architecture design and analysis would start first, many months, sometimes years, before RTL coding is started. The results of the architecture design phase feed the virtual prototyping phase to kick-start software development. Continuing through the flow, the results of the virtual prototype kick-start the hardware (RTL)/software integration effort, which is executed on an FPGA-based physical prototype. While there is a usage flow among prototyping methods, there is no standard handoff point between using one tool over another, and typically there are overlapping use modes.”

Posner pointed out that today it is rare for a design to be developed completely from scratch, because models, blocks, IP and system components exist at the start of the project, which blurs the lines between the prototyping technologies even more. That blurring of use models has increased with the advent of hybrid prototyping (as mentioned above), which allows virtual prototyping and FPGA-based prototyping to be seamlessly connected.

Still, the main usage-model sweet spots for architecture design and analysis are multi-core SoC hardware/software partitioning and early analysis of SoC performance and power.

“This provides architects with a dynamic simulation alternative to static spreadsheet calculations long before RTL is available,” said Posner. “Virtual prototyping for the software team is tailored toward early software development and offers a pre-RTL development platform with deep software visibility, high-speed execution and modeled interfaces in an easy-to-deploy package, which is easy for large software development teams to consume. FPGA-based prototypes use the actual SoC’s RTL and offer the highest fidelity and accuracy compared to the final chip.”

FPGA-based prototypes run at speeds between 10 and 100 MHz on average, which allows design teams to confirm interoperability with other devices over real interfaces. “Hybrid prototyping allows users to mix the virtual and FPGA-based prototypes, meaning that a prototype incorporating RTL can be made available earlier in the design cycle by utilizing a higher-level representation of missing blocks or of a CPU subsystem,” said Posner. “Hybrid prototyping does not limit the performance of the FPGA-based prototyping platform, meaning physical interfaces can continue to be used to stimulate the design with real-world activity.”
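
Those speed differences are what make real-interface testing practical. A back-of-the-envelope comparison, assuming a hypothetical 10-billion-cycle OS boot workload and purely illustrative engine speeds, shows why:

```cpp
// Back-of-the-envelope wall-clock comparison for a hypothetical 10-billion-cycle
// OS boot workload. Engine speeds are illustrative orders of magnitude only,
// not benchmarks of any particular tool.
#include <cstdio>

int main() {
    const double cycles = 10e9;
    struct Engine { const char* name; double hz; };
    const Engine engines[] = {
        {"RTL simulation (~100 Hz)",    100.0},
        {"Hardware emulation (~1 MHz)", 1e6},
        {"FPGA prototype (~50 MHz)",    50e6},
        {"Silicon (~1 GHz)",            1e9},
    };
    for (const Engine& e : engines)
        std::printf("%-28s %12.1f seconds\n", e.name, cycles / e.hz);
    return 0;
}
```

Under those assumptions the boot takes years of wall-clock time in RTL simulation, hours in emulation, and minutes on an FPGA prototype, which is why interface-level interoperability testing is generally done on the physical engines.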

Frank Schirrmeister, group director for product management and marketing at Cadence, said the faces of prototyping are definitely mixed, and it all depends on the use cases the designer is applying to the prototype. He also suggested that virtual prototyping is taking over physical prototyping in many ways.

“More accuracy is offered today, and that has spawned the hybrid usage of TLM with RTL execution engines,” Schirrmeister said. “Software Development Kits (SDKs) and OS simulators have taken on much more of the ‘pure software development’ that was done in ‘full system virtual prototypes’ in the late ’90s and the first decade of the 21st century. The iOS and Android development kits, while not accurately reflecting the hardware in many ways, have taken on more and more effects, like power management, at higher levels. As a result, the era of the ‘full virtual prototypes’ — such as TI OMAP, Intel XSCALE and other public examples — seems to be coming to an end, and we hear much less about these examples publicly. Instead, pure software development that doesn’t require hardware accuracy has moved to the left into the SDKs category, and virtual prototyping platforms are much more connected to the RTL in simulation, emulation or FPGA to more accurately reflect the underlying hardware.”

Further, a lot of prototyping is becoming application-specific, too. In some cases, such as automotive, virtual prototyping is especially applicable for two reasons, Schirrmeister continued. “First, the ‘bench,’ the physical prototype, tends to be pretty expensive, so virtual prototypes scale better. Second, when it comes to running extreme cases, it may be hard to do so without jeopardizing the safety of the drivers. I have seen cases in which a test could only be done on a virtual prototype (like testing abnormally fast acceleration), because running it physically would put a test driver’s life at risk.”

Mentor Graphics’ McDonald agreed that virtual prototypes will replace physical prototypes, and pointed out that at some levels of abstraction this already is happening. “I have seen customers who build design spins into their process. The early spins are really intended to be physical prototypes, and in some cases they have been able to eliminate a spin, essentially eliminating a physical prototype, by successfully applying a virtual prototype to verify the implementation. From what I see today, virtual prototypes provide excellent visibility and control for analysis, as well as allowing earlier software development and hardware/software integration. The virtual prototype can provide a very high degree of confidence that the design is correct and complete prior to the availability of a physical prototype. If we can address all of the issues that would be addressed with a physical prototype on the virtual prototype, then there is no need for a physical prototype.”

To Kurt Shuler, vice president of marketing at Arteris, “It’s a tradeoff of speed, money and the pain of partitioning.”

He likewise sees virtual prototyping starting to overtake physical prototyping in some instances. “At Arteris, we use Carbonized models a lot — so we are taking RTL and we’re converting it into a cycle-accurate SystemC model and running that. We do that in addition to VCS-type RTL simulation. The Carbonized models are good from the software side of things — running software tests — and they run pretty fast. That’s one way to use less emulation. You’re always going to use emulation, but you try to only use it where you need it. The good thing about prototyping in software instead of hardware is the amount of visibility you get. People don’t realize how important that is until they roll up their sleeves and get into it.”
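
The style Shuler describes, cycle accuracy inside a software simulation, can be pictured with a generic sketch. Unlike the loosely timed model shown earlier, a cycle-accurate SystemC module advances state only on clock edges; the counter below is a hand-written illustration of that style, not the output or API of Carbon's RTL-to-SystemC tools.

```cpp
// Generic illustration of a cycle-accurate SystemC module: state advances only on
// clock edges, so software sees register timing that tracks the RTL. This is a
// hand-written sketch, not the output or API of any RTL-to-SystemC tool.
#include <systemc>

struct CycleAccurateCounter : sc_core::sc_module {
    sc_core::sc_in<bool>                 clk;
    sc_core::sc_in<bool>                 enable;
    sc_core::sc_out<sc_dt::sc_uint<32>>  count;

    SC_CTOR(CycleAccurateCounter) {
        SC_METHOD(step);
        sensitive << clk.pos();   // evaluated once per rising clock edge
        dont_initialize();
    }

    void step() {
        if (enable.read())
            count.write(count.read() + 1);
    }
};
```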

But when you look at virtual and physical prototyping, it is reasonable to conclude these technologies will coexist and be leveraged in the ways that make the most sense. “It’s like nails and screws. Sometimes it makes sense to use a hammer and a nail, and sometimes it makes sense to use a screw and a screwdriver. They both hold two things together but are used for different purposes. In addition, there are cost differences and tooling differences that must be considered,” Shuler said.

Posner maintains there are overlapping use modes for each of the prototyping methods, but no one method can entirely replace another.

“Virtual prototyping will never replace FPGA-based physical prototypes, as FPGA-based prototypes deliver RTL design fidelity and accuracy so that the software developed can be verified and validated against the actual SoC hardware representation,” he said. “There is no one-size-fits-all prototyping tool for all use cases, but rather sweet spots for each and plenty of opportunity for the various prototyping methods to work together to accelerate architecture design, software development, hardware/software integration and system validation.”


