Do we really need a single executable specification? Maybe not.
Recently there have been a lot of discussions again about the next level of design abstraction for chip design. Are we there yet? Will we ever get there? Is it SystemC? UML/SysML perhaps? I am taking the approach of simply claiming victory: over the last 20 years we have moved up beyond RTL in various areas, just in a fragmented way. However, the limits of the human capacity for processing information may be holding us back from getting to a fully unified, single design entry that could become the "Universal Executable Specification" from which everything is derived automatically. But then again, help may be on the way from the verification side, where we are working at a fast pace to find new ways to express system scenarios, which may well become a design entry point over time.
Let’s roll back a little bit. This is actually a pretty personal story, as I have to admit a lot of wrong assumptions I made 20 years ago. At that time, right about 1995, I had led several chip designs and was sort of running out of abstractions. Originally I had started with full custom layout and gate-level design, and the advent of logic synthesis had promised to move us up to the Register Transfer Level (RTL). Synopsys Behavioral Compiler had been introduced in 1994, and I believe Aart de Geus had optimistically predicted a significant number of tape-outs before we would reach the year 2000. Gary Smith created the term ESL in 1996, the same year that the Virtual Socket Interface Alliance was founded. In 1997 Cadence had announced the Felix Initiative, which promised to make function-architecture co-design a reality and had tempted me to move to California to run its product management. We were well on track to simply move upwards in abstraction, lightning fast.
So today, 20 years later, have we moved up in abstraction? Yes! Have we moved up in the way we thought we would when predicting the future back in 1995? Hell no!
The fundamental flaw of the assumptions back then was, at least in the Felix Initiative, the idea that there would be a single executable specification from which everything could be derived and automated. What happened instead is that almost all development aspects moved upwards in abstraction, but in a fragmented way, not necessarily leading to one single description from which everything can be derived. Beyond the separation of hardware and software, with the advent of IP reuse one can now really separate the creation of new functions from their assembly.
For each function, design teams have eight basic ways to integrate it into their development:
1. Re-use a hardware block if readily available as hardware IP.
2. Manually implement the function in hardware as a new block. It’s the good old way.
3. Use high-level synthesis to create hardware as a new block.
4. Use an extensible and configurable processor core to create a hardware/software implementation.
5. Use tools to create an Application Specific Instruction Processor (ASIP) with its associated software.
6. Use software automation to create software from a system model (like UML or SysML) and run it on a standard processor.
7. Manually implement software and run it on a standard processor.
8. Re-use a software block if readily available as software IP.
Cases 3 through 6 all use higher-level descriptions as their entry point, but each one is different. High-level synthesis is driven by transaction-level descriptions in SystemC or C; ASIPs, whether licensed as IP or tool-generated, use specific description languages like NML, LISA, or the Tensilica Xtensa TIE language for offload. Software can be auto-generated from UML descriptions. The closest to a higher-level unifying description is actually SysML or UML, as well as proprietary offerings like MathWorks Simulink, from which both hardware blocks and software blocks can be generated automatically.
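To make the high-level synthesis entry point a little more concrete, here is a minimal sketch, not tied to any particular tool, of the kind of plain C/C++ a design team might hand to an HLS flow: fixed loop bounds, static arrays, and no dynamic memory, so the tool can unroll loops into parallel hardware and map arrays to registers. The FIR filter and all names are illustrative, not taken from any vendor's examples.

```cpp
#include <array>

// Fixed-size FIR filter in an HLS-friendly style: constant loop bounds,
// statically sized storage, no pointers or dynamic allocation.
constexpr int kTaps = 4;

int fir(const std::array<int, kTaps>& coeff,
        std::array<int, kTaps>& shift_reg,  // models the hardware shift register
        int sample) {
    // Shift in the newest sample.
    for (int i = kTaps - 1; i > 0; --i)
        shift_reg[i] = shift_reg[i - 1];
    shift_reg[0] = sample;

    // Accumulate the dot product of coefficients and history.
    int acc = 0;
    for (int i = 0; i < kTaps; ++i)
        acc += coeff[i] * shift_reg[i];
    return acc;
}
```

An HLS tool would typically let the user trade area for throughput here, for example by unrolling the accumulation loop into parallel multipliers or pipelining it, all without touching the source.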
(The graphic in Ann Steffora Mutschler’s story illustrates the different options, with the yellow boxes indicating higher-level descriptions.)
Now, when it comes to connecting all the hardware blocks together, regardless of whether they were re-used or built with one of the options above, the user has four different options (also shown in the bottom portion of the graph in Ann’s story).
1. Connect blocks manually (good luck).
2. Automatically assemble the blocks using interconnects auto-generated by ARM AMBA Designer, Sonics or Arteris.
3. Synthesize protocols for interconnect from a higher level protocol description.
4. Use a fully programmable NoC that determines connections completely at run-time.
With the exception of the first (manual) and last (run-time) ways to create the on-chip interconnect, the other options raise the level of abstraction. For example, the ARM Socrates and AMBA Designer environments feed information into Cadence tools like Interconnect Workbench to set up the scenarios for which performance analysis is needed, and there are specific tools to auto-create configurations of different interconnect topologies from higher-level descriptions as well.
So we have moved up in abstraction in many ways. Victory!
Good, but do we have the universal executable specification in UML, SysML or some proprietary description from which everything can be derived? No. And at first sight, it’s unclear how we would get there. From what I can tell, no single human being I’ve met so far can comprehend all aspects of the hardware/software mix well enough to express a unified description. Complexity has grown too much and will probably continue to do so for high-end designs. So it really may have been a human limitation in grasping the growing complexity that has so far prevented us from effectively getting to a single higher-level executable description. However, on second look, we may get there from a different angle, one that is currently under development in the Accellera Portable Stimulus Working Group, defining verification scenarios.
We are seeing even more specialization happening when it comes to verification, as currently witnessed in the Accellera Portable Stimulus Working Group (PSWG). When defining scenarios, three items are important.
• First, users are looking for vertical re-use of their verification environment from IP to systems.
• Second, they are looking for horizontal re-use from simulation through emulation and FPGA prototyping to the actual chip.
• The third item is scenario re-use across disciplines: will a cache coherency test, by itself clearly defined, still work when the design is powered down and comes back up? The exchange here is between two different specialists, each knowing little about the other's domain, but now able to merge their scenarios thanks to UML-like descriptions. This is shown in the graph below: two users with different expertise, in cache coherency and in power shutdown, have defined their scenarios using UML-like descriptions. A third user can intuitively understand them at that level and merge them into a combined use-case scenario. Voilà!
Flipping this back around from verification to design, it would be very hard to express all of this in a single executable specification. However, putting together a set of scenarios defining what the system specification should be, driven by the different expert stakeholders, becomes possible with technologies like Perspec System Verifier, our commercial tool in the Accellera PSWG space.
So while we have moved up in abstraction as illustrated above for blocks and their assembly, we are still quite a bit away from one unified description from which everything in a complex design can be derived. But scenario definitions as illustrated above may eventually bring us to that point of higher-level design entry. Until then, the separation of IP creation and IP reuse from IP assembly is actually working pretty well, so the timeline for moving up further is not clear yet.