ESL concepts have been talked about for years, but the tools now exist to turn those concepts into reality.
By Jon McDonald
Just because a problem can be solved doesn’t mean it has been solved.
Last week I was on a panel at the ISLPED conference in San Francisco. The conference focuses on low power, and the panel addressed both what is being done today and what is still needed for low-power analysis, exploration, and trade-offs. The discussion was lively, but one question stood out. Toward the end of the session a professor stood up and asked the panel what all the fuss was about.
According to him, all of the problems and solutions we had been discussing were solved more than 10 years ago. Why were we treating our ideas and issues as something new when there was already significant published work addressing them? His comments resonated with a perception I often encounter when people talk about what is being done in system-level design, and I think the crux of his point applies well beyond low power.
Many people consider ESL to be a new area, a missionary market for tools and design methodologies, but how can this be true? Many large companies have been doing system-level work for decades; I would hazard a guess that no large, complex electronic system designed in the last decade has skipped some form of system-level analysis before detailed design began. So why the big deal now, with ESL as the vogue new design area? I think this comes down to the question the professor was asking, and it is the issue we need to understand.
Theoretically and academically, we understand the system-level design problem. We understand what kinds of things can be modeled, and it is not difficult to create a high-level model in the language of your choice for some system-level trade-off; whether it is power or performance related doesn't really matter. People have been doing exactly this for a long time. There have been challenges with this process, though. It has been largely a roll-your-own endeavor: you weren't going to get much help in the form of tools, models, or building blocks to put your system together quickly.
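As a rough illustration of that roll-your-own style, here is a hypothetical back-of-the-envelope model, written in plain C++, that sweeps bus width and estimates latency and energy for a single transfer. Every parameter name and number in it is an assumption chosen for the example, not data from any real design.

```cpp
#include <cstdio>

int main() {
    // Illustrative assumptions only: clock, per-beat energy, and transfer size.
    const double clock_ns           = 2.0;   // assumed 500 MHz bus clock
    const double energy_per_beat_pj = 15.0;  // assumed energy per bus beat
    const unsigned bytes            = 4096;  // size of one transfer

    // Sweep the bus width and estimate how the trade-off moves.
    for (unsigned bus_bytes = 4; bus_bytes <= 16; bus_bytes *= 2) {
        unsigned beats    = bytes / bus_bytes;
        double latency_ns = beats * clock_ns;
        double energy_nj  = beats * energy_per_beat_pj / 1000.0;
        std::printf("bus width %2u B: %6.0f ns, %5.1f nJ\n",
                    bus_bytes, latency_ns, energy_nj);
    }
    return 0;
}
```

A model like this answers one narrow question quickly, but everything in it, from the parameters to the structure, has to be invented and maintained by the team that writes it.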
For a large system, the effort of building a high-level analysis model from scratch, while less than that of implementing the design, could still be very significant. So the system-analysis problem could be solved in principle, but in practice the effort required sharply limited how widely system-level design and analysis were applied and what they could deliver.
The big buzz today is that we now have standards for defining our high-level models, along with tools and methodologies for the design and analysis of these systems.
A standard language such as SystemC, together with a standard such as TLM 2.0 for defining and interfacing models at the transaction level, gives us the ability to build tools and methodologies that enable system-level analysis while minimizing the effort required to put together a useful exploration model.
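To make that concrete, here is a minimal, loosely timed sketch of what such a standards-based model can look like: a SystemC/TLM 2.0 initiator reading from a memory target through the standard generic payload and blocking transport interface. The module names, the 10 ns access latency, and the transfer details are illustrative assumptions, not taken from any particular design.

```cpp
#include <iostream>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

// Target: a memory modeled only by an assumed fixed access latency.
struct Memory : sc_core::sc_module {
    tlm_utils::simple_target_socket<Memory> socket;

    SC_CTOR(Memory) : socket("socket") {
        socket.register_b_transport(this, &Memory::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        delay += sc_core::sc_time(10, sc_core::SC_NS);  // timing annotation, not cycle accuracy
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

// Initiator: issues one read through the standard blocking transport call.
struct Cpu : sc_core::sc_module {
    tlm_utils::simple_initiator_socket<Cpu> socket;

    SC_CTOR(Cpu) : socket("socket") {
        SC_THREAD(run);
    }

    void run() {
        unsigned char data[4] = {0};
        tlm::tlm_generic_payload trans;
        sc_core::sc_time delay = sc_core::SC_ZERO_TIME;

        trans.set_command(tlm::TLM_READ_COMMAND);
        trans.set_address(0x100);                 // illustrative address
        trans.set_data_ptr(data);
        trans.set_data_length(4);
        trans.set_streaming_width(4);
        trans.set_byte_enable_ptr(nullptr);
        trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);

        socket->b_transport(trans, delay);        // standard TLM 2.0 blocking call
        std::cout << "read done, annotated delay = " << delay << std::endl;
    }
};

int sc_main(int, char*[]) {
    Cpu cpu("cpu");
    Memory mem("mem");
    cpu.socket.bind(mem.socket);   // initiator-to-target binding via standard sockets
    sc_core::sc_start();
    return 0;
}
```

The point is not the particular numbers; it is that the sockets, payload, and transport interface are standard, so models written by different teams, or supplied by vendors, can be connected without inventing the plumbing each time.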
I believe the buzz today comes from the realization and acceptance that the capabilities are available and economically viable, in fact economically necessary, to perform useful system-level analysis while focusing on the system intent. We don't need to create all of the ESL building blocks from scratch. We can leverage the tools and models that already exist, and more are becoming available every day.
The fuss the professor asked about exists because we have reached a transition point. We've moved from a theoretical understanding of what can be done in system-level analysis to a point where tools and models make it practical to achieve dramatic improvements in the design process by understanding and analyzing the ESL representation of a system before building the system.
Jon McDonald is the technical marketing engineer in Mentor Graphics’ design creation business unit.