
Education And Communication

Enabling the use of connected verification engines.


With the System Development Suite introduced back in 2011, it is worthwhile to review how adoption of its connected verification engines has progressed. It turns out that only some of the issues to be solved are purely technical. Communication across different technology areas is key, and with that, education of a new breed of engineer may become a key issue going forward.

As the son of a teacher, and as a parent, education has been one of my key concerns for a long time. My 11-year-old daughter has already attended Minecraft-based programming classes, and besides science club she is now also attending classes to learn Scratch as a programming language. The geek in me applauds this, but is also thinking about what engineers need to learn in order to verify semiconductor devices more efficiently. Can everything an engineer needs for verification actually be learned? Given that abstraction and modeling are not easily learned, what role does talent play?

As the picture below shows, the stack of items to be developed spans from hardware to software. Within the hardware stack, one finds IP blocks, sub-systems, fully integrated chips, and chips integrated on boards. Within the software stack, users have to develop software drivers, operating systems, middleware, and applications. Creating abstractions for software development happens at several levels, as indicated on the right: drivers are associated with IP, operating systems and middleware are associated with sub-systems, while the end application accesses the full chip. For example, IP blocks like USB and PCIe will have drivers that mainly depend on the IP block itself, but that need to be ported to different operating systems like Linux, Android, and RTOSes.

Android will mainly depend on the compute sub-system used for execution. Graphics middleware will depend on the GPU sub-system on which the graphics are computed. For communications middleware, the OSI layers of a communication stack will run on an RTOS on the modem sub-system. Finally, end user-visible applications like a traffic guidance system will put it all together, computing, displaying graphics, and receiving and sending GPS data as well as the latest traffic information via the mobile network.

[Figure: the hardware/software stack (left) and the verification engines used along the development timeline (right)]
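
To make the driver layering concrete, here is a minimal sketch in C++. All names, register offsets, and the OsPort interface are hypothetical, invented purely for illustration; this is not actual vendor driver code. The core driver depends only on the IP block's register map, while a thin porting layer carries the OS-specific services:

```cpp
#include <cstdint>

// Hypothetical porting layer: each OS (Linux, Android, an RTOS)
// supplies its own implementation of these services.
struct OsPort {
    virtual uint32_t read_reg(uintptr_t addr) = 0;
    virtual void     write_reg(uintptr_t addr, uint32_t value) = 0;
    virtual void     register_irq(int irq, void (*handler)(void*), void* ctx) = 0;
    virtual ~OsPort() = default;
};

// Core driver for an invented USB controller IP block. It depends
// only on the IP's register map, not on any operating system.
class UsbIpDriver {
public:
    UsbIpDriver(OsPort& os, uintptr_t base) : os_(os), base_(base) {}

    void init() {
        os_.write_reg(base_ + CTRL_REG, CTRL_ENABLE);           // enable the controller
        os_.register_irq(USB_IRQ, &UsbIpDriver::on_irq, this);  // hook the interrupt
    }

private:
    static void on_irq(void* ctx) {
        auto* self = static_cast<UsbIpDriver*>(ctx);
        uint32_t status = self->os_.read_reg(self->base_ + STATUS_REG);
        (void)status; // here: handle transfer completion, errors, etc.
    }

    static constexpr uintptr_t CTRL_REG    = 0x00; // invented offsets
    static constexpr uintptr_t STATUS_REG  = 0x04;
    static constexpr uint32_t  CTRL_ENABLE = 0x1;
    static constexpr int       USB_IRQ     = 42;

    OsPort&   os_;
    uintptr_t base_;
};
```

Porting such a driver to Linux, Android, or an RTOS then means reimplementing only the OsPort services, which is exactly what keeps the driver tied to the IP block rather than to the operating system.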

Even within software and the hardware-software interface alone, this requires three different types of knowledge: IP-specific knowledge across various IP blocks, sub-system-specific knowledge spanning several types of sub-systems like compute, graphics, and modem, and then, of course, the system-on-chip integration. On top of that, one can layer specific clusters of expertise like low power, performance, and cache coherency.

When it comes to verification, the right-hand side of the graphic shows the timeline of a development flow and the various development engines used in its context. The graphic does not show that the scope of the different engines may differ, depending on whether IP, sub-systems, or the full chip is being verified. It focuses on whether the object of verification is hardware or hardware/software, as well as which scope of software is verified.

Again, different clusters of expertise emerge here very fast:

  • Virtual prototyping requires modeling, which is more of a talent than a skill that is easily taught. The key here is to know which information to abstract, combined with clear articulation of which design characteristics cannot be addressed with an abstracted model. For instance, fast processor models may well be applicable for higher-level software development, but with pipelines and other details not modeled accurately, they may not lend themselves well to performance analysis (see the sketch after this list).
  • Engineers who know RTL simulation are typically very focused on hardware verification and are experts in that field. They may be given a piece of software that executes on a processor, but often lack the expertise to understand what is going on inside that software.
  • With formal apps, formal verification has entered an era of “democratization,” making formal accessible to more users on the design team. There are still experts who write specific property checks, but with apps for X-propagation, connectivity, and others, formal verification is becoming mainstream. Still, it is often applied by expert teams within a development organization.
  • Emulation is often executed by special teams within a company that set up the emulation workloads and distribute the results. The challenge is how to handle situations in which a bug is found. Given that emulation often includes the software, we have seen some finger-pointing between software and hardware teams. On top of that, each discipline is typically not familiar with even the other's visualization: a software developer may not understand hardware waveforms, and vice versa.
  • FPGA-based prototyping used to be a discipline in which a specialized team would take the RTL, freeze it, and re-model it to map it into FPGA technology. This is often still the case when the FPGA mapping is to be optimized for speed to enable software development. That requires detailed knowledge of the FPGA, which the rest of the verification engineers often do not possess. With the introduction of our multi-fabric compiler, which allows mapping both into our processor-based Palladium emulation fabric and into our Protium FPGA fabric, users are given a choice to trade off bring-up time against execution speed, making FPGA-based prototyping accessible outside of specialized FPGA teams.
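
Coming back to the virtual prototyping bullet above, the abstraction trade-off is easiest to see in code. The sketch below, with a made-up ISA and a deliberately crude timing model, is instruction-accurate but not cycle-accurate: software executes correctly on it, but the single line that charges one cycle per instruction is exactly where pipeline and cache effects are abstracted away.

```cpp
#include <cstdint>

// Sketch of a "fast" processor model of the kind used in virtual
// prototyping: instruction-accurate (software behaves correctly),
// not cycle-accurate (no pipeline, caches, or branch prediction).
// The ISA here is invented for illustration.
class FastCpuModel {
public:
    void step(uint32_t instruction) {
        execute(instruction); // functional behavior only
        cycles_ += 1;         // crude fixed cost: this is where the
                              // microarchitectural detail is lost
    }

    // Fine for ordering software events; misleading for performance.
    uint64_t approximate_cycles() const { return cycles_; }

private:
    void execute(uint32_t instruction) {
        // Update architectural state (registers, memory) only;
        // microarchitectural state is deliberately not modeled.
        regs_[instruction & 0xF] += (instruction >> 4) & 0xFF;
    }

    uint64_t cycles_ = 0;
    uint32_t regs_[16] = {};
};
```

The modeling judgment lies precisely in the `cycles_ += 1` line: deciding that pipeline timing may be dropped for software development, and articulating that limitation clearly, is the skill (or talent) the bullet above refers to.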

As with hardware/software mentioned above, orthogonal to the different clusters of expertise, you can find specific expertise for low power, performance, cache coherency, and so forth. With low power, for example, experts need to understand both the dynamic effects observed in emulation and simulation, and the implementation effects of the silicon technology. This is where, for instance, our tool linkages come in, with Palladium Dynamic Power Analysis providing activity data to Joules power estimation, from RTL through implementation.
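
For intuition on why activity data from a dynamic engine matters for power, recall the standard dynamic power relation P ≈ α·C·V²·f: the switching activity α is exactly what an emulation or simulation run can supply. The sketch below illustrates only that underlying arithmetic, not the actual Palladium/Joules flow, and every value in it is illustrative.

```cpp
#include <cstdint>

// Back-of-the-envelope dynamic power, P ~= alpha * C * V^2 * f,
// where the switching activity alpha is what a dynamic engine such
// as emulation can observe. All values here are illustrative.
double dynamic_power_watts(uint64_t toggles,    // toggles seen in the run
                           uint64_t cycles,     // cycles in the run
                           double   cap_farads, // effective switched capacitance
                           double   vdd_volts,  // supply voltage
                           double   freq_hz) {  // clock frequency
    double alpha = static_cast<double>(toggles) / static_cast<double>(cycles);
    return alpha * cap_farads * vdd_volts * vdd_volts * freq_hz;
}

// Example: 2e8 toggles over 1e9 cycles (alpha = 0.2), 1 nF effective
// capacitance, 0.8 V, 1 GHz -> 0.2 * 1e-9 * 0.64 * 1e9 = 0.128 W.
```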

Bottom line: no single person is able to understand all the effects I described above equally well. To make things worse, each of the individual areas is growing in complexity. As a result, communication between the teams becomes key. Just as a house has several windows providing different perspectives into it, we as EDA tool providers can provide different perspectives into the same design for software developers, hardware developers, and experts on low power and performance. A good example is our Indago Debug Platform, in which users can look at cycle-accurate, synchronized hardware and software data. This way, both types of developers can interact efficiently in the same environment.
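
As a sketch of what "cycle-accurate, synchronized hardware and software data" means mechanically, consider the following. The data structures and function are hypothetical, invented here to show the idea of keying both traces by a shared cycle counter; they are not the Indago API or any tool's actual trace format.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical data structures: a software instruction trace and a
// hardware signal trace keyed by the same cycle counter. This is the
// basic idea behind synchronized HW/SW debug, not any tool's format.
struct SwEvent { uint64_t cycle; uint64_t pc; std::string function; };
struct HwEvent { uint64_t cycle; std::string signal; uint32_t value; };

// Collect the hardware events in a window around a software event,
// so both developers can inspect the same moment in time.
std::vector<HwEvent> hw_context(const std::vector<HwEvent>& hw,
                                const SwEvent& sw, uint64_t window) {
    std::vector<HwEvent> out;
    for (const auto& e : hw) {
        if (e.cycle + window >= sw.cycle && e.cycle <= sw.cycle + window)
            out.push_back(e);
    }
    return out;
}
```

Because both sides share the cycle counter, a software developer stepping through a driver and a hardware developer inspecting waveforms are looking at the same moment in the design's execution.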

Back to education: how do I guide my daughter in terms of what to focus on? Detailed expertise in one area (like software, hardware, or low power) is absolutely required, but it is equally important to understand the systemic aspects of what is being worked on. Finding the right balance between depth and breadth of understanding will remain the hard part.


