The Complexity Of System Development And Verification

Seven major trends, four levels of scope, five main verification use models and seven execution engines: design is getting very complicated.


By Frank Schirrmeister
The electronics industry is undergoing a fast transition towards new paradigms for system development and verification as traditional development methods reach their breaking points. Developing a system development and verification environment can become a costly undertaking, and can involve many direct and sometimes even more hidden cost. To understand the cost aspects, one needs first to understand the underlying market drivers as well the complexity of a system development verification environment.

While I started touching on some of the cost-of-ownership aspects around bring-up time in a recent post, it turns out that by the time I finish summarizing the complexity I will have run out of blog space for today, so this is really part 1 of a series of posts discussing the real cost of verification. Here we go:

With semiconductor design at its core, seven major trends are driving the development and verification requirements.

  • Further miniaturization towards smaller technology nodes leads to constant growth in design complexity. There is simply more and more content to verify.
  • While individual designs are growing in complexity, their rising development cost and the decrease in overall design starts lead to increased pressure to reach proper returns on investment and to require first-pass success to ensure proper monetization.
  • The need for flexibility drives on-chip programmability with a strong trend towards multicore architectures combined with a rapid increase of embedded software content.
  • IP reuse in both software and hardware is exceeding 60% of the semiconductor content and is growing fast.
  • Low power requirements drive design teams to provide more functionality within constantly shrinking power envelopes.
  • An increase in the analog/mixed signal portion of semiconductor and system designs drives the need for mixed level simulation.
  • Application specificity drives the need toward basic hardware/software platforms for application domains like wireless, networking, and automotive.

In order to develop and verify the complex hardware/software systems that development teams face today, the hardware has to be executed – prototyped – to allow verification and software execution. Most customers I talk to have at this point decided that tape out without prototyping using hardware-assisted verification is too risky. There are, however, several different types of prototypes that allow hardware/software integration and debugging. In addition, different users within a design team have potentially different needs for prototype capabilities. The choice of the right prototyping technique is driven by which scope each technique covers best and which use models it supports.

The four different scopes for verification are hardware blocks, hardware/software sub-systems, systems on chips (SoCs) containing hardware and software, and finally SoCs with their software running within the system environment in which they reside.

The five main verification use models are:

  • Hardware verification – requiring deep insight into the hardware
  • Software verification – requiring access to the hardware interface but being OK with black-boxed hardware functionality
  • Hardware/Software verification – requiring access to both hardware and software details
  • Performance and Power Verification – confirming that architecture partitioning decisions were correct
  • Verification of system integration – making sure that given the various levels of scope, the integration of blocks into sub-systems and sub-systems into SoCs is done correctly

The challenge of end-to-end system development and verification.

Now, the seven main prototyping engines for verification of hardware and software are:

  • Software Development Kits (SDKs), like the Android and Apple iOS development kits, allowing programming against more abstract APIs like Java. Main advantages are speed and usability. The hardware is abstracted away so much that detailed hardware effects may get lost.
  • Virtual Prototypes allowing execution of the actual software binaries against defined register interfaces. Main advantages are execution speed and software debug/analysis. The hardware is functionally correct and can be annotated with performance information if needed, but is not as accurately represented as some low-level analyses require.
  • RTL Simulation executing the actual hardware in pure simulation. Main advantages are hardware debug with constraints, assertions, and metric-driven verification, as well as short turnaround times when RTL changes, making it a great vehicle for verification of hardware blocks and sub-systems, and for when the RTL is not yet very mature. Speed degrades very fast with growing scope and complexity, and while software can be executed on RTL representations, it is so slow that software development and debug efficiency is very limited.
  • Acceleration combining RTL simulation and hardware-assisted verification. The test bench executes in simulation on the workstation and the design under test executes in the hardware. While speed increases substantially – factors of 100x to 1000x over pure simulation can be reached – the main advantage remains the set of advanced debug capabilities. Speed is still not sufficient for efficient hardware/software development and debug. At this point in the project flow, the RTL is typically not yet mature enough to make mapping into FPGA-based hardware a reasonable investment. Bring-up and debug turnaround time are still key criteria for the right engine here.
  • Emulation executing the full design within hardware, either using synthesizable test benches or connected to the real world using rate adapters (SpeedBridges). The main advantage over simulation and acceleration here is speed – emulators reach into the MHz range, which enables software development including OS bring-up. From emulation, users still expect a couple of different RTL spins to be brought in per day, as the RTL has not yet matured far enough. Debug insight into the hardware is key, as is the ability to connect to the system environment using rate adapters – think of connecting a networking chip to Ethernet to connect directly to the internet.
  • FPGA-Based Prototypes executing the hardware in an array of FPGAs offer even higher speeds, reaching the range of tens of MHz. The higher speed allows software development with software debuggers attached using JTAG, as well as hardware regression runs. The connection to the system environment can be at speed if the FPGA-based prototype is fast enough. The downside of FPGA-based engines lies in the slower bring-up of the design and much more limited hardware debug, including limitations to execution control – i.e., starting and stopping the design.
  • Finally, once engineering samples are available, prototypes based on the actual silicon can be used. Speed is as the developers intended, but debug insight into the hardware is limited to what has been made available using on-chip instrumentation like ARM's System Trace Macrocell (STM). Software debuggers are attached using JTAG and allow software development. As with FPGA-based prototypes, control of execution is very limited.
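To make the virtual prototype point above concrete, here is a minimal sketch of what "programming against a defined register interface" looks like. The UART device, its register map, and all names are invented purely for illustration and do not correspond to any particular product or model library:

```python
# Sketch of the register-interface abstraction a virtual prototype exposes:
# software reads and writes register offsets; the model implements functional
# behavior without cycle accuracy. Hypothetical device, for illustration only.

class UartModel:
    """Functional model of a hypothetical UART with two registers."""
    TX_DATA = 0x00   # write: byte to transmit
    STATUS  = 0x04   # read: bit 0 = transmitter ready

    def __init__(self):
        self.transmitted = []

    def write(self, offset, value):
        if offset == self.TX_DATA:
            self.transmitted.append(value & 0xFF)

    def read(self, offset):
        if offset == self.STATUS:
            return 0x1  # always ready in this purely functional model
        return 0

# The "software" side: poll status, then write data, exactly as a driver would
# against the real memory-mapped registers.
uart = UartModel()
for ch in b"hi":
    while not (uart.read(UartModel.STATUS) & 1):
        pass
    uart.write(UartModel.TX_DATA, ch)
print(bytes(uart.transmitted))  # b'hi'
```

Because the model is functionally correct but not cycle-accurate, driver code like this runs at near-native speed, which is exactly the trade-off described above.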

Four levels of scope, five main use models and seven execution engines – sounds complex, doesn't it? The figure associated with this post shows the main use models I indicated, overlaid on the seven engines.
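The speed differences between the engines are easier to appreciate with some back-of-envelope arithmetic. The sketch below assumes illustrative throughput figures consistent with the ranges quoted above (roughly 100 cycles/s for full-SoC RTL simulation, about 1000x that for acceleration, ~1 MHz for emulation, tens of MHz for FPGA prototypes) and estimates the wall-clock time to execute ten billion target cycles, on the order of an OS boot. The numbers are assumptions for illustration, not measured data:

```python
# Back-of-envelope comparison: wall-clock time to run ~10 billion target
# cycles (roughly an OS boot on a 1 GHz target) on each execution engine.
# All rates are illustrative assumptions, not benchmark results.

CYCLES_OS_BOOT = 10_000_000_000

engines_hz = {
    "RTL simulation": 100,            # ~100 cycles/s for a full SoC
    "acceleration":   100 * 1000,     # ~1000x over pure simulation
    "emulation":      1_000_000,      # ~1 MHz
    "FPGA prototype": 20_000_000,     # tens of MHz
    "silicon":        1_000_000_000,  # at-speed engineering samples
}

for name, hz in engines_hz.items():
    hours = CYCLES_OS_BOOT / hz / 3600
    print(f"{name:15s}: {hours:10.2f} hours")
```

Under these assumptions an OS boot that takes years of simulation time finishes in hours on an emulator and minutes on an FPGA prototype, which is why the engines with higher speed become the natural home for software bring-up.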

Well, now that we have defined the complexity of a system-level verification environment, we can discuss the cost – but as indicated, I am out of blog space for today. I started touching on some of the specific cost aspects in my previous post … look for the next post, in which I will discuss the cost-of-ownership aspects in more detail.

—Frank Schirrmeister is group director for product marketing of the System Development Suite at Cadence.