The scope of system-level design has changed, but it still holds great promise for designers.
At DAC 1997 – 17 years ago – Gary Smith coined the term “Electronic System-Level” (ESL) design. Around the same time I entered EDA, joining Cadence and becoming deeply involved in ESL. Things have changed quite a bit over the last 17 years. While some of the predictions did not come true, others definitely did. Over the last couple of years the set of tools counted as part of system-level design has changed significantly – emulation and FPGA-based prototyping are now central parts of system-level design flows, together with technologies that raise the abstraction above the Register-Transfer Level (RTL).
Incidentally, the same year Gary coined the term ESL, my life changed quite a bit: I changed continents by moving from Europe to the United States, changed jobs from leading engineering teams for chip development to the “dark side” of technical product management, and moved in with my girlfriend, who had claimed me as her souvenir from her time working in Germany.
Up to that time I had developed software – an audio recorder for the Atari ST and Atari Falcon called 1stTrack helped finance my university years – as well as hardware; the last chip development effort I led was the Fujitsu MB87L2250 MPEG-2 Video/Audio Decoder. Suddenly I found myself in the world of EDA and soon became the product manager for the “Felix Initiative,” a very ambitious foray Cadence took with key partners into system-level design.
I have been an ESL addict – an ESL-aholic – ever since. The opportunity to be part of actually moving hardware and software design to higher levels of abstraction was simply too intriguing. In 1999 – 15 years ago – I drew for Alberto Sangiovanni-Vincentelli and Rahul Razdan the first version of what Grant Martin later called an “Original Schirrmeister.” My graph of how the industry moved from layout to transistors to gates to RTL seemed to imply the inevitability of the next step upwards in abstraction, above RTL.
Well, as my team and I prepare for the Design Automation Conference 2014, I am looking back and have to admit two things to myself. First, the time to the inevitable may be longer than one can plan for. It has been 15 years, and we certainly have not fully moved above RTL as the design entry point, at least not at the full system-on-chip (SoC) level. Second, even if a step to a higher level of abstraction is available, it is only worth taking if it makes commercial sense. And it only makes commercial sense if the additional step saves more effort in the overall development flow than it costs to take, as I outlined in an article appropriately called “When One Plus One Has to be Less Than One.”
Drumroll… Let’s go back 13 years to DAC 2001. From a presentation I gave for Cadence at that time, I put together the diagram below, showing the flow I then thought was old and broken, together with the flow I thought would take over in the future.
So how correct were this assessment from 2001 and the predictions made at that time?
There is a distinct separation between “IP Block Authoring” and “IP Block System Integration.” This in itself was right on, especially considering the 60%+ reuse of hardware blocks we see today in 2014. Arguably, we may have to add another intermediate unit of measurement between block and system: the pre-defined HW/SW subsystem, like the application-specific sub-systems from Cadence’s Tensilica product line. These sub-systems become blocks to be integrated into the SoC.
Quite correctly, I separated software and hardware in the “block authoring” area. But the software I took into account in the 2001 diagram was limited to “bare-metal” software. We simply did not represent the stacked structure of operating systems, drivers, and middleware at the time; it did not seem relevant to hardware. Software has become much more complex in the last decade, and somewhat better understood by EDA and the hardware world.
In the upper portion of the illustration I pointed out that first implementing all hardware blocks down to RTL and all software blocks down to assembler, and then attempting an integration at that level in a second step, would not work. The associated text on my slide actually said that “yesterday” the integration of implementation-level models happened after module implementation and after partitioning had been decided. I argued that going forward this approach would fail, especially if users did not assess implementation aspects earlier. In the bottom portion of the illustration, in the “new flow,” I called for performance characterizations to be annotated to higher-level models. I then suggested to “simply” elevate the system integration to a higher level of abstraction, with partitioning decisions guided by the characterized performance models.
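To make the “characterized performance” idea concrete, here is a minimal C++ sketch of what such a flow boils down to: high-level task models carry annotated costs for each candidate mapping, and partitioning is decided by comparing those annotations long before any RTL or assembler exists. All names, numbers, and the threshold are invented for illustration; this is not any actual tool’s method.

```cpp
// Minimal sketch: partitioning guided by performance annotations on
// high-level models. Tasks, cycle counts, and threshold are illustrative.
#include <cstdio>
#include <string>
#include <vector>

struct Task {
    std::string name;
    double sw_cycles;  // annotated cost when mapped to the processor
    double hw_cycles;  // annotated cost when mapped to a hardware block
};

int main() {
    // Hypothetical tasks of a small media pipeline with annotated costs.
    std::vector<Task> tasks = {
        {"parse",  12000,  9000},
        {"idct",   90000,  4000},
        {"filter", 70000,  6000},
    };
    // Greedy partitioning driven purely by the characterized models:
    // map a task to hardware only when the annotated saving is large.
    const double kHwWorthIt = 5.0;  // illustrative threshold
    for (const Task& t : tasks) {
        bool to_hw = t.sw_cycles / t.hw_cycles > kHwWorthIt;
        std::printf("%-7s -> %s (sw=%.0f, hw=%.0f cycles)\n",
                    t.name.c_str(), to_hw ? "HW" : "SW",
                    t.sw_cycles, t.hw_cycles);
    }
    return 0;
}
```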
So where are we now, more than a decade later? A lot of the integration still happens at the RT-level, partly because EDA has provided engines like our Palladium XP series emulation platform that “bend” what a system is: at up to 2B gates of capacity, most chips can be executed at MHz speeds within an emulator. That’s why Gary Smith put emulation at the heart of ESL flows last year. For pure software development, we have taken part of the step upwards suggested in 2001, with virtual platforms. It has been quite an uphill battle, though, because of model availability and the accuracy-versus-speed trade-offs in transaction-level models. And it is important to note that the abstraction happens on the hardware side only: the software runs as “actual” software at the assembler level, loaded into processor models.
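That asymmetry – real software, abstracted hardware – is the essence of a virtual platform, and a toy sketch may help. In the C++ snippet below, a “binary” is executed instruction by instruction while the device it touches is an untimed functional model. The 3-instruction ISA and the “UART” register are invented for illustration; real virtual platforms use full processor models (such as ARM Fast Models) instead.

```cpp
// Minimal sketch of the virtual-platform idea: the target program runs
// as-is, while the hardware it touches is an abstract, untimed model.
#include <cstdint>
#include <cstdio>
#include <map>

int main() {
    // Toy "binary": opcode/operand pairs. 1=load imm, 2=store to MMIO, 0=halt
    const uint8_t program[] = {1, 'H', 2, 0, 1, 'i', 2, 0, 0, 0};
    std::map<uint32_t, uint8_t> mmio;   // abstract, untimed device model
    const uint32_t kUartTx = 0x0;       // invented UART transmit register

    uint8_t acc = 0;
    for (size_t pc = 0; program[pc] != 0; pc += 2) {
        switch (program[pc]) {
            case 1: acc = program[pc + 1]; break;   // load immediate
            case 2:                                 // MMIO store
                mmio[kUartTx] = acc;
                std::putchar(acc);  // device side effect, no cycle timing
                break;
        }
    }
    std::putchar('\n');
    return 0;
}
```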
Especially over the last couple of years, we have seen more and more “hybrids,” with portions of the design, like a GPU, executed at full hardware accuracy while other parts, like the processor sub-system, run in a virtual platform. Just this week we announced how CSR experienced up to 200X speedup over pure emulation using a Palladium XP – VSP hybrid.
The item I was most wrong about is probably the “characterized performance.” In most areas outside of virtual platforms for software development, users simply do not use high-level models with characterizations to make decisions. The two most prominent examples are processor models and complex chip interconnects.
Looking at the processor models one can get from ARM, the only ones relevant to this discussion are the cycle-accurate models available through the Carbon portal and the Fast Models without detailed timing, which are available from ARM and integrated into offerings from partners like us. There are no “in-between” models. For most architecture decisions, architects rely on the cycle-accurate models that have been automatically derived from the RTL, or simply use the RTL directly in a fast engine like the Palladium XP emulation platform.
For chip interconnects, the complexity has become so great that users not only need tools like ARM AMBA Designer to create the interconnect RTL, but also make their decisions at the RT-level, using tools like the Cadence Interconnect Workbench. The alternative, using a more abstract model with annotations, has proven to be inaccurate for complex interconnects.
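A small worked example suggests why static annotations fall short here. In the C++ sketch below, a fixed per-transaction latency annotation is compared against a trivial contention model in which two masters share one bus; the shared-bus model and all numbers are invented for illustration, not taken from any real interconnect.

```cpp
// Minimal sketch: a fixed latency annotation ignores contention, which
// only shows up once concurrent masters are actually modeled.
#include <algorithm>
#include <cstdio>

int main() {
    const int kAnnotatedLatency = 10;  // cycles, as a static annotation
    const int kTransactions = 100;     // per master

    // Annotation-based estimate: each master sees only the fixed latency.
    int estimate = kTransactions * kAnnotatedLatency;

    // Contention-aware model: two masters issue back to back on one shared
    // bus, so each transaction also waits for the other master's transfer.
    int bus_free = 0, finish_b = 0;
    for (int i = 0; i < kTransactions; ++i) {
        int finish_a = std::max(bus_free, i * kAnnotatedLatency) + kAnnotatedLatency;
        bus_free = finish_a;
        finish_b = std::max(bus_free, i * kAnnotatedLatency) + kAnnotatedLatency;
        bus_free = finish_b;
    }
    std::printf("annotated estimate per master: %d cycles\n", estimate);
    std::printf("with contention, master B finishes at: %d cycles\n", finish_b);
    return 0;
}
```

With these (invented) numbers the contention model finishes at roughly twice the annotated estimate, which is the kind of gap that pushes architects back to RTL-based analysis.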
So it has taken us 15 years to get closer to the seemingly inevitable higher level of abstraction implied in the graph I drew in 1999. While for IP development with high-level synthesis we are arguably getting closer to SystemC-based IP development, we are nowhere near completing the full step upwards at the chip level. And given the practical issues – for example, some questions, like the configuration of the interconnect, simply cannot be answered at higher levels – it is questionable whether the full step to higher levels of abstraction will happen soon, or at all.
Am I “clean” and “off ESL”? No way! I am still an ESL addict. But the scope has simply shifted. As mentioned above, tools like emulation have helped “bend” the definition of the systems that can be executed at the RT-level and, with that, the definition of the tools considered part of an ESL flow.