The Seven Layers Of Hardware-Software Debug

Dependencies and the blurring of the lines make this a work in progress, but at least we’re heading in the right direction.

By Frank Schirrmeister

Seven Layers of Hardware/Software Debug

Of course I will be in trouble once this blog is posted. This post is about hardware/software debug, and I tried to arrange the scope and applicability of debug into a set of layers. I counted seven, but I am sure one could arrive at a different number of layers depending on how one counts.

The layers I counted are illustrated in the graphic associated with this post. It shows a multi-core chip as it could be used as an application processor for phones or tablets. The chip features multiple processor cores – including ARM big.LITTLE – as well as customer-specific blocks for graphics and application acceleration, plus high-speed, low-speed and general-purpose peripherals.

All of the blocks are connected through a hierarchy of SoC interconnect fabrics. The chip itself is then connected on a printed circuit board to real-world interfaces – cellular modems, audio, video and others. Complex software runs on several of the components. Of course this includes the processor sub-systems, but complex software is likely also running on the graphics accelerators (think CUDA programming models) and on application accelerators for multimedia, not to mention the digital signal processors in the system.

So how does one go about debug, verification and validation of such a system?

At the top of the graphic I illustrated some of the engines used for verification, validation and the actual hardware/software debug. Designers will use transaction-level simulation in Virtual Prototypes and RTL simulation for verification. Acceleration and emulation also operate at the RT-level, as does FPGA-based prototyping. All three technologies offer different advantages regarding bring-up time and bug turnaround, as well as the actual execution speed and debug insight into the hardware.

In the end, the actual chip will be used on development boards to allow final validation of the software and of operation in the system environment. Now the user operates with “the real deal” at the actual target speed, but only a limited set of debug capabilities is typically embedded in silicon and exposed to the user. All these engines, from TLM to RTL execution and the actual chip, have one unifying property – they generate the information necessary to enable debug, verification and validation.

So what do the layers of debug look like?

At the lowest level the actual hardware itself needs to be verified and debugged. This can happen at the scope of individual IP blocks, sub-systems, and the actual system-on-chip (SoC) as well as at the scope of the SoC within its system context.
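As a simple illustration of block-level verification, a self-checking test can compare a block’s behavior against a golden reference model. The sketch below uses a hypothetical saturating 8-bit adder as the IP block; in a real flow the “DUT” call would drive the RTL through a simulator or emulator interface rather than a C stand-in.

```c
#include <assert.h>
#include <stdint.h>

/* Golden C reference for a hypothetical IP block: a saturating 8-bit adder. */
static uint8_t ref_sat_add(uint8_t a, uint8_t b)
{
    unsigned sum = (unsigned)a + (unsigned)b;
    return (uint8_t)(sum > 0xFFu ? 0xFFu : sum);
}

/* Stand-in for the device under test; in a real flow this call would drive
 * the RTL through a simulator or emulator and return what the hardware did. */
static uint8_t dut_sat_add(uint8_t a, uint8_t b)
{
    return ref_sat_add(a, b);  /* placeholder so the sketch is self-contained */
}

int main(void)
{
    /* Exhaustively compare DUT behavior against the reference model. */
    for (unsigned a = 0; a <= 0xFFu; a++)
        for (unsigned b = 0; b <= 0xFFu; b++)
            assert(dut_sat_add((uint8_t)a, (uint8_t)b) ==
                   ref_sat_add((uint8_t)a, (uint8_t)b));
    return 0;
}
```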

Similar to pure hardware debug, the bare-metal software executing at the lowest level of abstraction needs to be functionally verified. It needs the hardware it executes on to be modeled in sufficient detail, but the main intent at that level is to validate the software functionality and its interaction with the hardware.
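A minimal sketch of what such bare-metal hardware interaction looks like is shown below; the UART register addresses and bit layout are purely hypothetical, not taken from any specific device.

```c
#include <stdint.h>

/* Hypothetical memory-mapped UART registers (addresses are illustrative only). */
#define UART_BASE      0x40001000u
#define UART_STATUS    (*(volatile uint32_t *)(UART_BASE + 0x00))
#define UART_DATA      (*(volatile uint32_t *)(UART_BASE + 0x04))
#define STATUS_RX_RDY  (1u << 0)

/* Block until the peripheral signals that a byte is available, then read it.
 * At this layer, debug means checking that the software's view of these
 * registers matches what the RTL (or its model) actually implements. */
uint8_t uart_read_byte(void)
{
    while ((UART_STATUS & STATUS_RX_RDY) == 0u) {
        /* spin: on a virtual prototype or emulator this loop is where
         * hardware/software interaction bugs typically surface */
    }
    return (uint8_t)(UART_DATA & 0xFFu);
}
```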

While debug at the lowest levels focuses on hardware and software functionality, respectively, the more complex the device under verification becomes, the more important software becomes as part of debug, because it increasingly influences the hardware/software interaction.

When operating systems (OSes) like Android, Linux or Windows Mobile come into play, the scope of debug and validation changes again. Now the debug has to happen in an OS-aware way, i.e., the software will use specific functions and APIs provided by the OS.
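For contrast with the bare-metal example above, here is a minimal Linux-flavored sketch in which the software reaches the hardware only through OS APIs; the device node name is a hypothetical placeholder for whatever the driver exposes.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* OS-aware access: instead of touching registers, the application goes
 * through the kernel driver via standard file APIs. Debug now has to be
 * aware of the OS layer (driver state, scheduling, system calls). */
int main(void)
{
    /* "/dev/sensor0" is a hypothetical character device exposed by a driver. */
    int fd = open("/dev/sensor0", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    unsigned char buf[16];
    ssize_t n = read(fd, buf, sizeof buf);
    if (n < 0)
        perror("read");
    else
        printf("read %zd bytes from the driver\n", n);

    close(fd);
    return 0;
}
```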

Given the complexity of designs, integration itself becomes an item to be debugged and validated. Are the hundreds of IP blocks connected correctly? The dependencies and relationships between functions – the sequence of execution – become important and need to be validated. Has the initialization been done in the right order? Is the bring-up of one peripheral dependent on another one being initialized first?
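One way to make such ordering dependencies explicit, and therefore debuggable, is to encode them directly in the bring-up code. The sketch below uses hypothetical init routines and simple assertions to flag sequencing mistakes.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical bring-up sequence: each step records that it ran, and each
 * dependent step asserts that its prerequisites were initialized first.
 * Sequencing bugs of this kind are exactly what integration debug hunts. */
static bool clocks_ready, ddr_ready;

static void init_clocks(void)
{
    /* program PLLs and clock gates ... */
    clocks_ready = true;
}

static void init_ddr(void)
{
    assert(clocks_ready && "DDR controller needs clocks up first");
    /* train the DDR interface ... */
    ddr_ready = true;
}

static void init_dma(void)
{
    assert(clocks_ready && ddr_ready && "DMA needs clocks and DDR first");
    /* configure DMA channels ... */
}

int main(void)
{
    init_clocks();
    init_ddr();
    init_dma();
    return 0;
}
```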

Again, validation of key parameters like performance and power needs to move one level upwards. Are portions of the design switched off at the right time to optimize power consumption? Is there enough memory bandwidth?
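As a toy example of the kind of check this implies, the sketch below verifies that measured memory bandwidth stays above an assumed requirement; the trace values and the 25 Gb/s target are invented, and in practice the data would come from interconnect or memory-controller performance counters.

```c
#include <stdio.h>

/* Toy bandwidth check over a hypothetical transaction trace: bytes moved
 * per 1 ms window, checked against an assumed bandwidth requirement. */
int main(void)
{
    const double window_s = 0.001;                 /* 1 ms windows */
    const double required_gbps = 25.0;             /* assumed requirement */
    const double bytes_per_window[] = { 4.1e6, 3.9e6, 2.7e6, 4.0e6 };
    const int n = sizeof bytes_per_window / sizeof bytes_per_window[0];

    for (int i = 0; i < n; i++) {
        double gbps = bytes_per_window[i] * 8.0 / window_s / 1e9;
        if (gbps < required_gbps)
            printf("window %d: %.1f Gb/s is below the %.1f Gb/s target\n",
                   i, gbps, required_gbps);
    }
    return 0;
}
```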

Finally, at the highest level, the application scenarios become important. These are the use cases we users see and expect to work. My daughter’s use of her iPod touch may consist, over the course of a day, of 70% standby, 15% videos on YouTube, 10% games and 5% texting with her friends. My use of my iPhone will be very different, and both her usage and mine need to be translated into application scenarios that need to be validated.
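Translated into numbers, such a usage mix becomes a weighted power or battery-life estimate. The scenario shares below follow the example in the text, while the per-scenario power figures and battery capacity are purely illustrative.

```c
#include <stdio.h>

/* Weighted average power for a hypothetical daily usage mix. The scenario
 * shares follow the example in the text; the mW figures are invented. */
int main(void)
{
    struct { const char *name; double share; double mw; } mix[] = {
        { "standby", 0.70,   15.0 },
        { "video",   0.15,  900.0 },
        { "games",   0.10, 1500.0 },
        { "texting", 0.05,  300.0 },
    };

    double avg_mw = 0.0;
    for (unsigned i = 0; i < sizeof mix / sizeof mix[0]; i++)
        avg_mw += mix[i].share * mix[i].mw;

    /* e.g. with an assumed 10 Wh battery: hours = 10000 mWh / avg_mw */
    printf("average power: %.1f mW, estimated battery life: %.1f h\n",
           avg_mw, 10000.0 / avg_mw);
    return 0;
}
```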

Why is all this important, and why am I writing about it? Each layer has its own target user group, and the dependencies of hardware on software and vice versa change significantly from layer to layer. However, with increasing complexity the lines become blurry, and given the overall unbounded nature of verification, decisions on where and how much to verify, debug and validate have a significant influence on when a design team is “done.” In an ideal world all of this would work under one Integrated Development Environment (IDE) to allow efficient exchange of information between the different user groups.

Are we there yet? Far from it … Users face a large number of different debug and verification environments, which are often disconnected. There is huge room for innovation here!

—Frank Schirrmeister is group director for product marketing of the System Development Suite at Cadence.


