ESL: 20 Years Old, 10 To Go

We may have to wait another decade before electronic system level (ESL) design becomes a significant methodology, and even that depends on some of the decisions being made today.

Popularity

It is a common perception that the rate of technology adoption keeps accelerating. The telephone, invented in 1873, took 46 years to be adopted by one-quarter of the U.S. population. Television, invented in 1926, took 26 years. The PC, introduced in 1975, took just 16 years. The Internet, introduced in 1991, took only 7 years to reach significant levels of adoption.

So why has it taken 20 years for ESL to gain any form of acceptance? And why must we wait another 10 years or more before it becomes a common methodology? Each of those other technologies only took off once it was ready for commercialization, so it is not fair to say that ESL was invented 20 years ago. It is not entirely clear that ESL has even been invented yet, or at least that it is ready for full commercialization. The difficulty lies in the sheer scale of the problem and in the fact that the requirements for ESL have changed several times.

The initial concept for ESL was purely top-down: a single executable specification would be created, and from it both hardware and software would be derived. This led to the development of high-level synthesis (HLS), but instead of synthesizing the complete system, HLS has been relegated to a tool for block creation, a role in which it is doing very well.
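To make the block-creation role concrete, here is a minimal sketch of the kind of C++ kernel an HLS tool might turn into an RTL block. It is deliberately generic and not tied to any particular tool; real flows add vendor-specific pragmas and interface directives, which are only hinted at in the comments.

```cpp
// fir.cpp - a generic sketch of a C++ kernel of the sort an HLS tool
// could synthesize into an RTL block. Pragma syntax is vendor-specific,
// so pipelining/unrolling hints appear only as comments.
#include <array>
#include <cstdint>
#include <iostream>

constexpr int TAPS = 8;

// One streaming sample in, one filtered sample out. An HLS tool would map
// the shift register onto flip-flops and the multiply-accumulate loop onto
// DSP resources, typically pipelining it for one sample per clock.
int32_t fir(int32_t sample, const std::array<int32_t, TAPS>& coeff) {
    static std::array<int32_t, TAPS> shift_reg{};  // becomes a register chain

    for (int i = TAPS - 1; i > 0; --i) {           // shift in the new sample
        shift_reg[i] = shift_reg[i - 1];
    }
    shift_reg[0] = sample;

    int64_t acc = 0;                               // multiply-accumulate
    for (int i = 0; i < TAPS; ++i) {               // (candidate for unrolling)
        acc += static_cast<int64_t>(shift_reg[i]) * coeff[i];
    }
    return static_cast<int32_t>(acc >> 15);        // Q15 fixed-point scaling
}

int main() {
    std::array<int32_t, TAPS> coeff;
    coeff.fill(4096);                              // 8 x 0.125 in Q15 = moving average
    for (int32_t x : {32768, 0, 0, 0}) {
        std::cout << fir(x, coeff) << "\n";
    }
}
```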

What happened instead was the emergence of reuse and the establishment of the third-party IP market. Design could no longer be purely top-down; a meet-in-the-middle approach became more likely. This led to the notions of platform-based design and of mapping software onto programmable hardware resources. That vision never became a reality, in part because the interfaces were never well enough defined.

It can be argued that this did not fully solve the problem, because verification costs have kept escalating. No viable methodology exists that encompasses system-level verification, block verification, performance verification, power verification and so on. Verification methodologies have never fully addressed the needs of IP integration, and no ESL verification methodology exists.

We are beginning to see the first hints that these problems are being tackled, and with that may come the possibility of commercializing ESL. Assuming it takes another couple of years before such standards are in place and have been proven adequate for the task, then even at the Internet's adoption rate it will be 10 years before we can declare ESL a success.

IP integration
Where are we today? “We actually have silently moved to a higher level of abstraction,” says Frank Schirrmeister, senior group director for product management and marketing at Cadence, “but there is fragmentation of representations. There is also a pretty strict separation of design concerns between IP creation, IP reuse and system integration, and of course the impact of software. Will we get to the universal executable specification in UML, SysML or some proprietary description from which everything can be derived? Unlikely.”

There are also companies that have attempted to define aspects of ESL, including Sonics. “Gary Smith labeled Sonics as an ESL vendor the first time he met us in the late ’90s,” says Drew Wingard, the company’s CTO. “The whole idea was trying to attack the integration challenge by using an integrated network instead of a bunch of components. The separating of those components from each other was done using the concept of decoupling. Those are all about abstraction. In order to effectively configure the network they need to understand characteristics of the chip being built and the environment in which it lives. This meant that we had to create our own view of things like transaction-level modeling before that term was commonly used.”
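The transaction-level modeling Wingard refers to can be sketched in plain C++: initiators issue abstract read/write transactions to an interconnect that routes them by address, rather than wiggling pin-level signals. This is purely illustrative; a production flow would use the SystemC TLM-2.0 API rather than the hand-rolled classes below.

```cpp
// A minimal plain-C++ illustration of the transaction-level idea: an
// interconnect decouples initiators from targets and routes abstract
// transactions by address. Not the SystemC TLM-2.0 API.
#include <cstdint>
#include <iostream>
#include <map>
#include <tuple>
#include <vector>

struct Transaction {
    enum class Cmd { Read, Write } cmd;
    uint64_t address;
    uint32_t data;   // payload for writes, result for reads
};

// Anything that can accept a transaction: a memory, a peripheral, a bridge.
struct Target {
    virtual void transport(Transaction& t) = 0;
    virtual ~Target() = default;
};

struct Memory : Target {
    std::map<uint64_t, uint32_t> contents;
    void transport(Transaction& t) override {
        if (t.cmd == Transaction::Cmd::Write) contents[t.address] = t.data;
        else                                  t.data = contents[t.address];
    }
};

// The "integrated network": routes by address window, so initiators never
// see the targets behind it.
struct Interconnect : Target {
    std::vector<std::tuple<uint64_t, uint64_t, Target*>> routes;  // base, limit, target
    void add_route(uint64_t base, uint64_t limit, Target* t) {
        routes.emplace_back(base, limit, t);
    }
    void transport(Transaction& t) override {
        for (auto& [base, limit, tgt] : routes) {
            if (t.address >= base && t.address < limit) { tgt->transport(t); return; }
        }
        std::cerr << "decode error at 0x" << std::hex << t.address << "\n";
    }
};

int main() {
    Memory ddr, sram;
    Interconnect noc;
    noc.add_route(0x00000000, 0x40000000, &ddr);   // DDR window
    noc.add_route(0x40000000, 0x40010000, &sram);  // on-chip SRAM

    Transaction wr{Transaction::Cmd::Write, 0x1000, 42};
    noc.transport(wr);
    Transaction rd{Transaction::Cmd::Read, 0x1000, 0};
    noc.transport(rd);
    std::cout << "read back " << rd.data << "\n";  // prints 42
}
```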

Both of these suggest that a significant part of the hardware design flow will remain bottom-up, and thus tools will have to be created for IP integration and for verification before ESL is ready for commercialization.

There are significant developments supporting a top-down flow as well. “It all starts with a virtual prototype for architecture exploration,” says Tom De Schutter, senior product marketing manager for virtual prototyping at Synopsys. “This is where power and performance tradeoffs can be determined early. Task-driven traffic generators are used for this by deriving traffic from software applications running on virtual prototypes of the previous generation.”
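A task-driven traffic generator of the kind De Schutter describes can be approximated by replaying per-task traffic profiles against a shared-resource model, rather than running the full software stack. The trace format and numbers below are hypothetical; the point is only that bandwidth demand can be explored this early.

```cpp
// Sketch of a task-driven traffic generator: replay traffic profiles
// (hypothetically captured from software running on a previous-generation
// virtual prototype) and report the bandwidth they demand per time window.
#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct TrafficBurst {
    std::string task;    // e.g. "video_decode", "wifi_rx"
    uint64_t start_ns;   // when the burst begins
    uint64_t bytes;      // how much data it moves
    std::string target;  // which shared resource it hits, e.g. "DDR"
};

int main() {
    const std::vector<TrafficBurst> trace = {
        {"video_decode",         0, 2'000'000, "DDR"},
        {"wifi_rx",        100'000,   300'000, "DDR"},
        {"video_decode", 1'000'000, 2'000'000, "DDR"},
    };

    // Aggregate demand per 1 ms window per target - a crude early metric
    // for power/performance trade-off exploration.
    constexpr uint64_t window_ns = 1'000'000;
    std::map<std::pair<std::string, uint64_t>, uint64_t> demand;
    for (const auto& b : trace) {
        demand[{b.target, b.start_ns / window_ns}] += b.bytes;
    }

    for (const auto& [key, bytes] : demand) {
        const double gbps = bytes / (window_ns / 1e9) / 1e9;
        std::cout << key.first << " window " << key.second << ": "
                  << bytes / 1000.0 << " KB (" << gbps << " GB/s average)\n";
    }
}
```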

Verification
A large challenge in hardware design today is removing the need to re-verify an IP block when it is added into a system. “One of the biggest barriers to dealing with the cost of systems is the continuous re-verification process that goes on,” says Wingard. “Very few design teams appear to really trust that the components plugged together in their designs don’t need to be re-verified in context of how they are being used. There is very little effective verification reuse. As we go from the level of an IP component to a sub-system to an SoC, to a chip on a board, there is far too much that ends up being re-verified at every level.”

“The first step in preventing the need to revalidate IP is to ensure some level of signoff quality has been established for the IP being considered,” says Bernie DeLay, group director for R&D within verification IP at Synopsys. “The IP provider needs to include the results of the advanced lint, clock domain crossing and low power analysis. The intent is not to provide environments to replicate what has been done, but rather to act as collateral that should be included with the IP package received by the IP consumer to eliminate potential boundary and RTL-assembly issues as you move to SoC integration. Since a large number of IP blocks are configurable, other keys are the configuration(s) that were utilized for this testing.”
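One way to picture that collateral is as a machine-readable manifest that travels with the IP package. The structure below is a hypothetical sketch, not any standard format; the field names are purely illustrative.

```cpp
// Hypothetical sketch of signoff collateral shipped with a configurable IP
// block: which checks were run, where the reports live, and which
// configurations they covered. Not a standard format.
#include <string>
#include <vector>

struct SignoffCollateral {
    struct Config {
        std::string name;                     // e.g. "axi4_64bit_2port"
        std::vector<std::string> parameters;  // the knob settings that were tested
    };
    bool lint_clean = false;             // advanced lint run and clean
    std::string cdc_report;              // clock-domain-crossing analysis results
    std::string low_power_report;        // low-power/UPF analysis results
    std::vector<Config> tested_configs;  // configurations covered at signoff
};

int main() {
    SignoffCollateral pkg;
    pkg.lint_clean = true;
    pkg.cdc_report = "reports/cdc.rpt";
    pkg.low_power_report = "reports/low_power.rpt";
    pkg.tested_configs.push_back({"axi4_64bit_2port", {"DATA_WIDTH=64", "PORTS=2"}});
}
```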

“UML-like descriptions and languages that describe scenarios are necessary,” says Schirrmeister. “This is the next level above SystemVerilog for verification, and will be a hallmark for verification in the next decade. This is when verification shifts to the system-level and we have to rely on IP being largely bug-free.”

Accellera is currently working on the definition of these scenarios under the purview of its Portable Stimulus Working Group. If defined correctly, they could become the driver for an ESL methodology. At the moment, however, the focus is purely on the definition of stimulus patterns that can be run on multiple representations of a design, from virtual prototype, through emulation and simulation, to the real chip. This is an important part of the problem, but it suggests the effort is not yet ready to drive the full ESL flow.
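The separation between a scenario and its realization can be illustrated without the Accellera language itself: the scenario is written once in terms of abstract actions, and a platform-specific binding maps those actions onto a virtual prototype, an emulator, or silicon. The class and method names below are invented purely for the sketch.

```cpp
// Illustrative sketch of the portable-stimulus idea: one platform-neutral
// scenario, multiple platform-specific realizations. This is a concept
// sketch, not the Accellera Portable Stimulus language.
#include <cstdint>
#include <iostream>
#include <string>

// What a platform must provide to run a scenario.
struct Realization {
    virtual void load_firmware(const std::string& image) = 0;
    virtual void start_dma(uint64_t src, uint64_t dst, uint64_t bytes) = 0;
    virtual void check_result(uint64_t addr, uint32_t expected) = 0;
    virtual ~Realization() = default;
};

// One binding: drive the scenario through a (hypothetical) virtual-prototype API.
struct VirtualPrototype : Realization {
    void load_firmware(const std::string& image) override {
        std::cout << "[VP] load " << image << "\n";
    }
    void start_dma(uint64_t src, uint64_t dst, uint64_t bytes) override {
        std::cout << "[VP] DMA " << std::dec << bytes << " bytes: 0x" << std::hex
                  << src << " -> 0x" << dst << "\n";
    }
    void check_result(uint64_t addr, uint32_t expected) override {
        std::cout << "[VP] expect 0x" << std::hex << expected
                  << " at 0x" << addr << "\n";
    }
};

// The scenario itself never mentions the platform.
void dma_smoke_scenario(Realization& r) {
    r.load_firmware("boot.img");
    r.start_dma(/*src=*/0x1000, /*dst=*/0x8000, /*bytes=*/4096);
    r.check_result(0x8000, 0xDEADBEEF);
}

int main() {
    VirtualPrototype vp;
    dma_smoke_scenario(vp);  // the same scenario could be bound to an emulator
                             // or silicon realization without modification
}
```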

Restricting the flow
In the past, such developments often have required reining in the degrees of freedom in the design process, and while there is always resistance to this, the benefits usually end up outweighing the restrictions.

“We are getting good at interfacing so that we can limit the interactions between a sub-system and the rest of the system,” says Wingard. “Consider that many sub-systems have a good working collection of memory inside of them, but they all need access to global shared memory of some form. If this is not a protocol problem about how they communicate with that shared memory, it is a performance problem.”

This means new tools are required. In the RTL flow, gate-level simulation was used to verify that timing constraints were being met. “That got out of hand,” says Wingard. “So there was formal technology developed called static timing analysis—a tool that could figure out if the circuit works across all possible corners. We need the equivalent in performance. We need the ability to do static performance analysis where, effectively, we can write equations that describe the performance of our systems. Like power analysis, this will infer some extra work on the people who provide the components. This will enable them to describe the performance characteristics and constraints/requirements of the blocks that are being plugged together.”
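What such "performance equations" might look like can be sketched with a toy budget check: each block publishes the bandwidth it needs and the latency it can tolerate, and a simple contention model flags any block whose budget cannot be met. The numbers and the loading model below are illustrative assumptions, not a real analysis.

```cpp
// Back-of-the-envelope "static performance analysis": check whether the
// shared memory can satisfy every requester's published bandwidth need and
// latency budget. All numbers and the contention model are assumptions.
#include <iostream>
#include <string>
#include <vector>

struct Requester {
    std::string name;
    double bw_required_gbps;   // sustained bandwidth the block needs
    double latency_budget_ns;  // worst-case latency it can tolerate
};

int main() {
    const double ddr_bw_gbps = 12.8;      // assumed shared-memory bandwidth
    const double base_latency_ns = 80.0;  // assumed unloaded round-trip latency

    const std::vector<Requester> reqs = {
        {"cpu_cluster", 2.0, 150.0},
        {"gpu",         4.0, 400.0},
        {"display",     1.5, 120.0},  // hard real-time
        {"camera_isp",  1.0, 200.0},
    };

    double total_bw = 0.0;
    for (const auto& r : reqs) total_bw += r.bw_required_gbps;

    const double utilization = total_bw / ddr_bw_gbps;
    if (utilization >= 1.0) {
        std::cout << "shared memory is over-subscribed\n";
        return 1;
    }

    // Crude queueing-style model: loaded latency grows as utilization rises.
    const double loaded_latency_ns = base_latency_ns / (1.0 - utilization);

    std::cout << "utilization: " << utilization * 100.0 << "%\n";
    for (const auto& r : reqs) {
        const bool ok = loaded_latency_ns <= r.latency_budget_ns;
        std::cout << r.name << ": budget " << r.latency_budget_ns
                  << " ns, predicted " << loaded_latency_ns << " ns -> "
                  << (ok ? "OK" : "VIOLATION") << "\n";
    }
}
```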

Performance analysis also needs to be more intelligent than static timing analysis, because it cannot require that a design be built for worst-case conditions in which all sub-systems attempt to access shared resources at the same time. This, again, needs to be scenario-driven.

There could be problems with this approach. “Performance analysis for chip interconnect has dropped down to the RT-level simply because the transaction-level does not offer enough accuracy to make the right performance decisions,” says Schirrmeister. “The same is true for power. Abstracting power states to annotate power information to transaction-level models in virtual prototypes may give enough relative accuracy to allow development of the associated software drivers, but to get estimates accurate enough to make partitioning decisions one really needs to connect to implementation flows and consider dynamic power.”
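Annotating power onto a transaction-level model can be as simple as charging a per-state static cost plus a per-transaction dynamic cost, which is roughly where the relative accuracy Schirrmeister mentions comes from. The states and energy numbers below are placeholders, not characterized data.

```cpp
// Sketch of power-state annotation on a transaction-level model: the device
// accrues static energy per state over time plus dynamic energy per
// transaction. All energy numbers are placeholders.
#include <cstdint>
#include <iostream>
#include <map>

enum class PowerState { Off, Retention, Active };

class AnnotatedDevice {
public:
    void set_state(PowerState s, uint64_t now_ns) {
        accrue(now_ns);
        state_ = s;
    }
    void transaction(uint64_t now_ns) {   // dynamic energy on top of static burn
        accrue(now_ns);
        energy_pj_ += 350.0;              // placeholder: pJ per access
    }
    double energy_nj(uint64_t now_ns) {
        accrue(now_ns);
        return energy_pj_ / 1000.0;
    }

private:
    void accrue(uint64_t now_ns) {
        // Placeholder static power per state, in pJ/ns (equivalent to mW).
        static const std::map<PowerState, double> pj_per_ns = {
            {PowerState::Off, 0.0},
            {PowerState::Retention, 0.02},
            {PowerState::Active, 1.5}};
        energy_pj_ += pj_per_ns.at(state_) * static_cast<double>(now_ns - last_ns_);
        last_ns_ = now_ns;
    }
    PowerState state_ = PowerState::Off;
    uint64_t last_ns_ = 0;
    double energy_pj_ = 0.0;
};

int main() {
    AnnotatedDevice dev;
    dev.set_state(PowerState::Active, 1'000);             // wake at 1 us
    for (int i = 0; i < 100; ++i) dev.transaction(1'000 + i * 10);
    dev.set_state(PowerState::Retention, 5'000);          // back to retention
    std::cout << "estimated energy: " << dev.energy_nj(10'000) << " nJ\n";
}
```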

Perhaps the answer lies in selected focus and mixed abstractions. “As the end system is defined by the interaction of the many domains required to implement that system, the specification of the system will incorporate the descriptions of the system from the many domains designing aspects of the system,” says Jon McDonald, technical marketing manager for DVT Questa at Mentor Graphics. “A significant enabler will be in the growth of heterogeneous design capabilities. This means being able to leverage and link work being done at different levels and in different domains to analyze and optimize decisions. This will enable designers to make more accurate decisions with more quantified tradeoffs. A new language is not the answer; better co-existence and leveraging of existing capabilities will provide a tremendous improvement in the overall system design process.”

Adds McDonald: “Our existing verification flows will continue to be focused on the IP itself, while the system verification will focus on proper applicability and use of the IP in the system. By focusing on the use cases rather than the entire exploration space we will be able to have higher confidence that the design has been properly verified for the intended task.”

Conclusion
The industry is closer to the definition of a real ESL flow than it has ever been. Such a flow will treat verification as a primary need rather than an afterthought, and that is a significant advance over the RTL flow. If this can be accomplished, adoption should be faster than it was for RTL, but we will have to wait and see what gets defined first. Even then, it may be optimistic to conclude that ESL will have seen significant adoption in another 10 years.



2 comments

Richard Soenneker says:

If you’re going to write an article about a topic using an acronym, it’s a great idea to define that acronym (ESL) at least once in the article, hopefully near the top. That way folks who aren’t in that subfield can understand things a bit quicker.

the dictionary link helps, but just spelling it out once is also a useful thing.

Brian Bailey says:

You are right and I apologize. Let me correct that now – Electronic System Level. Now if you are asking me to define what ESL is, well, that's another article.
