Systems & Design
SPONSOR BLOG

Virtual Prototyping Rocks

How to win over the embedded software developer, their customer and their boss.


By Nithya Ruff
Achim Nohl was taking a well-deserved vacation last week and asked me to be his guest blogger. For those of you who are regular readers of Achim’s blog: I am new to Synopsys, having joined only a few weeks ago to manage the Virtual Prototyping product. I came from Wind River, where I managed Embedded Linux product marketing. With 20+ years of managing software of all kinds behind me, I saw the guest blog as a perfect way to share a software marketer’s view of virtual prototypes and embedded software development.

I still vividly remember the manual and tedious process of debugging the operating system I had written in my computer science OS 401 class. Not only was the data scattered, but the process involved reams of paper and manually tracing the calls. Frankly, it does not feel any different these days in the world of embedded software. Every element of the device stack is put together separately, and often there is no holistic view of the entire device. Debugging and testing are more of an art than a repeatable, systematic science. Let’s look at why this is so.

Embedded systems cover everything from very small systems inside watches, to sensors in a building, to complex multicore systems at the core of the network. They share the characteristic of being dedicated to a specific function rather than being general-purpose. These systems often require real-time performance and a small footprint. Everything is customized to fit the task—the operating system (OS), middleware and applications. Given the wide range of applications, the choice of hardware is specific to the device being created, and the software stack is shaped by the hardware’s characteristics and behaviors. Hence, there are all kinds of hardware architectures, and even many variations of custom hardware, used in the embedded device space. In fact, hardware is the first and a key component of the decision-making process for a project. The rest is built around the hardware selection.

Fig. 1: Embedded devices.

Moving up from the hardware comes the OS layer. In the early days, most people built their own OS and applications. Their device was unique, special and purpose-built with specific needs, so the OS was customized for it. Over the years, some good real-time operating systems (RTOSes) became accepted because of their deep understanding of the core needs in this space—footprint, boot-up time, real-time performance, etc. Developers then focused more on the middleware and application layers, providing their differentiation and value-add at those levels.

In the late 1990s and early 2000s the success of Linux in the server space began to bleed over into the embedded space, especially in networking. Linux was inexpensive to acquire and provided almost everything needed, such as tools, protocols and trained people. Most of all, it provided access to source code and could be modified as needed. Linux also was available on multiple architectures, such as ARM, MIPS, PPC and x86. This was important to embedded developers because it let them select the right hardware for the task at hand. Chip vendors started using Linux in their boot-up software and in SDKs due to its openness and broad appeal. They contributed changes upstream for their architectures and made sure architecture- and chip-specific drivers and code were in the mainline kernel branches, making it easier for the average embedded developer to use their chipsets. This adoption of Linux was just the beginning.

The next major development in the embedded space was the availability of software stacks such as Android. In 2007, when Android was released, the mobile handset software industry was fragmented, with dozens of flavors of OSes and stacks used to create mobile phones. Around that time, the launch of Apple’s iPhone had set the bar high for the smart phone market. Apple had tightly managed the selection and integration process—from hardware, OS, apps, user interface (UI) and manufacturing all the way to the service provider—to deliver the phone. When introduced, the Android stack was another disruptive force in the smart phone market, making it possible to use a fully integrated stack aimed at mobile devices. It contained all the layers that phone OEMs in the past had to painfully put together themselves. This enabled OEMs to focus on their differentiation at the UI and application layers and get to market faster. Following the Android example, other more complete stacks were created in the consumer space, such as the set-top box stack and the gateway stack.

The embedded software development process historically has been a waterfall process. The chip is designed first, with OS bring-up and software development happening only when first silicon is available. Next, the semiconductor enablement teams spend time ensuring that the tools, OSes and sample code actually work, and create an SDK to ship to their ecosystems and early customers. By the time the average systems company can start developing devices, it is a good six to nine months after the silicon has been released. There is also a good deal of duplicated software effort among the chip vendor, its ecosystem, tool companies and customers. The result is delay and duplication across the value chain, with no common tools, methodologies or processes used across the embedded commercial and open source ecosystems.

The increase in the complexity of SoCs and the software involved in embedded designs, along with growing pressure to get differentiated products to market faster, makes it uncompetitive to do things serially, without common practices, methodologies and an integrated hardware-software approach. More and more hardware companies are thinking of themselves as systems companies that need to take a user/application-driven approach to designing their SoCs, rather than letting the hardware dictate the software. These innovative companies are creating integrated hardware-software teams that work together from the start on a common design. They create a virtual prototype of the SoC as a model, which they share throughout the design process to collaborate and to ensure that not just the OS, but also reference and sample applications and application software, are tested against real application scenarios. Virtual prototypes, or software models of the hardware, have become the common language both teams can speak to develop and debug the platform.

Using such a common reference point avoids misunderstandings and creates a cohesive plan for the SoC. Being software, these virtual platforms are instrumented for optimal debugging and visibility into the execution flow, providing a bird’s-eye view of the commands and the sequence in which the code executes. A near-final prototype can be shared with lead customers months before the hardware is available, thereby winning sockets that would otherwise have been lost. The prototype also can be provided to the software enablement team and key field application engineers to get them ready for the chip launch. Often, the silicon enablement elements, such as tools and software, are key criteria for OEMs in their selection of silicon. By the time first silicon is available, the field is ready, lead customers have had a chance to evaluate the SoC, and OS vendors are ready with their tools and support for the product. The impact on socket wins and cost of enablement is significant.

A common myth is that once hardware is available, the virtual prototype is no longer needed. Because the software-based prototype is easy to share and provides more controlled debugging and wider visibility, there are numerous other ways to leverage these prototypes. OEMs and systems companies can use them to extend and deepen their development, debug and test processes. Just as the semiconductor tools teams did, OEMs and systems companies can use these prototypes as a common medium for debugging and communicating across distributed development teams.

Physical hardware is often hard to ship; it can be delayed or damaged in delivery. Virtual prototypes that complement the hardware extend and enhance the software development process significantly. Software integration complexity is rising for OEMs because software comes from so many different sources, which makes regression and system testing very challenging. Virtual prototypes, and the analysis tools that come with them, provide the ability to test the whole stack with a holistic view of the flow, from the user interface down to the hardware. Imagine being able to see the impact on power utilization when a browser on a phone is opened—or the performance hit that results from uploading a video to your Facebook account. All of this can help optimize applications and devices for power and performance. I wish I had these tools when I was debugging my OS in class.

So for a software geek like me, having the target hardware available as a software layer is very attractive and makes it a key element of the development process. It reduces the wait, provides more control and empowers me to create a more integrated and complete device. Looking at these problems with new eyes, I can see the possibilities for transforming embedded software development. I am won over by the benefits of virtual prototyping for embedded software development and expect more of us to use it to build smarter and cooler devices. I look forward to sharing more of my embedded software perspectives in the coming months as Achim’s guest blogger.

