Enabling the use of connected verification engines.
With the System Development Suite introduced back in 2011, it is worthwhile to review how the adoption of connected verification engines has progressed. It turns out that only some of the issues to be solved are purely technical. Communication across different technology areas is critical, and with that, education of a new breed of engineer may become a key issue going forward.
As the son of a teacher, and as a parent, education has been one of my key concerns for a long time. My 11-year-old daughter has already attended Minecraft-based programming classes, and besides science club she is now also attending classes to learn Scratch as a programming language. The geek in me applauds this, but is also thinking about what engineers need to learn in order to verify semiconductor devices more efficiently. Can everything an engineer needs for verification actually be learned? Given that abstraction and modeling are not easily learned, what role does talent play?
As the picture below shows, the stack of items to be developed spans from hardware to software. The hardware stack comprises IP blocks, sub-systems, fully integrated chips, and chips within boards. Within the software stack, users have to develop software drivers, operating systems, middleware, and applications. Creating abstractions for software development happens at several levels, as indicated on the right: drivers are associated with IP, operating systems and middleware with sub-systems, while the end application accesses the full chip. For example, IP blocks like USB and PCIe will have drivers that mainly depend on the IP block itself but need to be ported to different operating systems like Linux, Android, and RTOSes.
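To illustrate why the driver effort splits this way, here is a minimal C sketch of an IP-block driver divided into an IP-specific core and a thin OS-specific layer. All register names, offsets, and the mapping helper are invented for illustration only; they do not correspond to any real IP block or kernel API.

```c
/* Hypothetical sketch: the IP-specific part depends only on the block's
 * register map, while only the small OS-specific glue changes per OS. */

#include <stdint.h>

/* --- IP-specific part: tracks the IP block's (invented) register map --- */
#define USB_CTRL_OFFSET   0x00u          /* hypothetical control register */
#define USB_CTRL_ENABLE   (1u << 0)

static inline void usb_ip_enable(volatile uint32_t *base)
{
    base[USB_CTRL_OFFSET / 4] |= USB_CTRL_ENABLE;  /* read-modify-write */
}

/* --- OS-specific part: the only code that changes when porting --- */
#ifdef BUILD_FOR_LINUX
/* A Linux kernel driver would map the device through the kernel's own
 * mechanisms; this declaration is just a placeholder for that step. */
void *os_map_device(unsigned long phys_addr, unsigned long size);
#else
/* A simple RTOS might use the physical address directly. */
static void *os_map_device(unsigned long phys_addr, unsigned long size)
{
    (void)size;
    return (void *)phys_addr;
}
#endif

void usb_driver_init(unsigned long phys_base)
{
    volatile uint32_t *regs = os_map_device(phys_base, 0x1000);
    usb_ip_enable(regs);
}
```

The point of the split is that the top half moves with the IP block across projects, while the bottom half is what porting to Linux, Android, or an RTOS actually touches.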
Android will mainly depend on the compute sub-system used for execution. Graphics middleware will depend on the GPU sub-system on which the graphics are computed. For communications middleware, the OSI layers of a communication stack will run on an RTOS on the modem sub-system. Finally, end-user-visible applications like a traffic guidance system put it all together: computing, displaying graphics, and receiving and sending GPS data as well as the latest traffic information via the mobile network.
Even within the software and the hardware/software interface alone, this requires three different types of knowledge: IP-specific knowledge across various IP blocks, sub-system-specific knowledge spanning several types of sub-systems like compute, graphics, and modem, and, of course, the system-on-chip integration. On top of that, one can layer specific clusters of expertise like low power, performance, and cache coherency.
When it comes to verification, the right-hand side of the graphic shows the timeline of a development flow and the various development engines used in its context. The graphic does not reflect that the scope of the different engines may differ depending on whether IP, sub-systems, or the full chip is verified. Instead, it focuses on whether the object of verification is hardware or hardware/software, as well as on which scope of software is verified.
Again, different clusters of expertise emerge here very quickly.
As with the hardware/software split mentioned above, orthogonal to these clusters of expertise you can find specific expertise for low power, performance, cache coherency, and so forth. With low power, for example, experts need to understand both the dynamic effects coming from emulation and simulation and the implementation effects of silicon technology. This is where our tool linkages come in, for instance Palladium Dynamic Power Analysis providing activity data to Joules for power estimation from RTL through implementation.
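To make concrete what such an activity-driven power flow fundamentally relies on, here is a minimal C sketch of the basic dynamic-power relationship (activity × capacitance × V² × f): switching activity comes from the dynamic side (emulation or simulation), while capacitance comes from the implementation side. The net data and numbers are made up for illustration and do not represent any tool's output.

```c
/* Sketch: combine per-net switching activity (dynamic analysis) with
 * effective capacitance (implementation) into a rough dynamic-power figure. */

#include <stdio.h>

struct net {
    double activity;     /* toggles per clock cycle (from emulation/simulation) */
    double capacitance;  /* effective switched capacitance in farads            */
};

static double dynamic_power(const struct net *nets, int count,
                            double vdd, double freq_hz)
{
    double total = 0.0;
    for (int i = 0; i < count; ++i)
        total += nets[i].activity * nets[i].capacitance * vdd * vdd * freq_hz;
    return total;  /* watts */
}

int main(void)
{
    /* Two made-up nets: a busy data net and a mostly idle control net. */
    struct net nets[] = {
        { 0.25, 5e-15 },
        { 0.02, 2e-15 },
    };
    double p = dynamic_power(nets, 2, 0.8 /* V */, 1e9 /* 1 GHz */);
    printf("estimated dynamic power: %.3g W\n", p);
    return 0;
}
```

Neither input alone is enough, which is exactly why the low-power expert needs to understand both sides of the linkage.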
Bottom line: no single person is able to understand all the effects I described above equally well. To make things worse, each of the individual areas is growing in complexity. As a result, communication between the teams becomes key. Just as a house has several windows providing different perspectives into it, we as EDA tool providers can provide different perspectives into the same design for software developers, hardware developers, and experts on low power and performance. A good example is our Indago Debug Platform, in which users can look at cycle-accurate synchronized hardware and software data. This way, both types of developers can interact efficiently in the same environment.
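To illustrate what synchronized hardware and software data means in practice, here is a small C sketch in which both kinds of trace records carry the same cycle timestamp so they can be lined up for debug. The record layout, signal names, and addresses are invented for illustration and do not reflect Indago's actual data formats.

```c
/* Sketch: correlate hardware samples and software events that share a
 * cycle timestamp, so both views of the same moment appear side by side. */

#include <stdint.h>
#include <stdio.h>

struct hw_sample { uint64_t cycle; const char *signal; uint32_t value; };
struct sw_event  { uint64_t cycle; uint64_t pc; const char *message; };

/* Print every software event next to the hardware sample captured in the
 * same cycle (linear scan kept deliberately simple). */
static void correlate(const struct hw_sample *hw, int nhw,
                      const struct sw_event *sw, int nsw)
{
    for (int i = 0; i < nsw; ++i)
        for (int j = 0; j < nhw; ++j)
            if (sw[i].cycle == hw[j].cycle)
                printf("cycle %llu: pc=0x%llx %-18s | %s=0x%x\n",
                       (unsigned long long)sw[i].cycle,
                       (unsigned long long)sw[i].pc, sw[i].message,
                       hw[j].signal, (unsigned)hw[j].value);
}

int main(void)
{
    struct hw_sample hw[] = { { 1200, "usb_irq", 1 }, { 1350, "usb_irq", 0 } };
    struct sw_event  sw[] = { { 1200, 0x8000214c, "enter usb_isr()" } };
    correlate(hw, 2, sw, 1);
    return 0;
}
```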
Back to education: how do I guide my daughter in terms of what to focus on? Deep expertise in a specific area (like software, hardware, or low power) is absolutely required, but it is equally important to understand the systemic aspects of what is being worked on. Finding the right balance between depth and breadth of understanding will be hard.