The ESL Conundrum

All the pieces aren’t there yet, but some of the pieces have been in use for years.


As Moore’s Law continues its relentless march, the “electronic system level” (ESL), the next level of abstraction above the register transfer level (RTL), is increasingly being adopted as an answer to the ever-increasing complexity of designing semiconductors.

Although ESL emerged about five years ago, the term itself can still confound the very community that seeks to embrace its benefits, and for good reason: its use is evolving over time. ESL can be implemented either by modeling manually in SystemC or by auto-generating those models, but the two approaches have taken different routes.

The SystemC language emerged as a way to model products for architecture definition and software development, but adoption stalled because it was extremely difficult to manually model all the different components in a system or a design in SystemC, says Simon Bloch, VP and general manager for the design, synthesis and FPGA divisions at Mentor Graphics Corp.

“The intent of SystemC was to introduce transaction level modeling and it did,” he says. SystemC grew into the transaction-level modeling (TLM) standard. “TLM is the key because it’s not about the language in general – many languages can do the same job – it is about the level of abstraction, which is what TLM describes.”

Given that the task of manually modeling the representation of the system was very difficult, tool adoption was slow. Still, TLM provided the move up to the next level of abstraction from RTL, and the next step was to automate the generation of TLM wherever possible, Bloch notes. More and more tools are entering the market that create or generate SystemC/TLM models, using different means to represent specific areas of functionality.
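To make that level of abstraction concrete, here is a minimal, self-contained sketch of a TLM-2.0-style model in SystemC. The module names, address map and data values are invented for illustration; the point is simply that an entire bus read is a single blocking b_transport call carrying a generic payload, with no clocks, pins or cycle-by-cycle protocol anywhere in the model, which is what separates TLM from RTL.

```cpp
#include <cstdint>
#include <cstring>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

// Hypothetical memory target: one function call services a whole read or write.
struct MemoryModel : sc_core::sc_module {
    tlm_utils::simple_target_socket<MemoryModel> socket;
    uint8_t storage[4096];

    SC_CTOR(MemoryModel) : socket("socket") {
        std::memset(storage, 0, sizeof(storage));
        socket.register_b_transport(this, &MemoryModel::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& /*delay*/) {
        uint64_t addr = trans.get_address();
        unsigned len  = trans.get_data_length();
        if (trans.is_read())
            std::memcpy(trans.get_data_ptr(), &storage[addr], len);
        else
            std::memcpy(&storage[addr], trans.get_data_ptr(), len);
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

// Hypothetical initiator: a bus read is one transaction, not a pin-level protocol.
struct CpuModel : sc_core::sc_module {
    tlm_utils::simple_initiator_socket<CpuModel> socket;

    SC_CTOR(CpuModel) : socket("socket") { SC_THREAD(run); }

    void run() {
        uint32_t data = 0;
        tlm::tlm_generic_payload trans;
        sc_core::sc_time delay = sc_core::SC_ZERO_TIME;

        trans.set_command(tlm::TLM_READ_COMMAND);               // what
        trans.set_address(0x100);                               // where
        trans.set_data_ptr(reinterpret_cast<uint8_t*>(&data));  // destination buffer
        trans.set_data_length(sizeof(data));
        trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);

        socket->b_transport(trans, delay);  // the entire read, at transaction level
    }
};

int sc_main(int, char*[]) {
    CpuModel cpu("cpu");
    MemoryModel ram("ram");
    cpu.socket.bind(ram.socket);  // connect initiator to target
    sc_core::sc_start();
    return 0;
}
```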


Bloch: TLM is key

When SystemC is generated automatically, it has a much better chance of interoperating between different blocks than if it were written manually, because there is no misinterpretation of the language. When a machine generates the model, there is a single interpretation, as opposed to a number of people possibly misinterpreting how the language is being used.

Chris Rowen, founder and CTO of Tensilica Inc., says part of the reason the language issue is difficult to nail down and reach consensus on is that what the industry has long called ESL, an advanced, higher-level form of development, has in many practical ways become extremely mainstream.

“Just the march of Moore’s Law dictates that when you get to 65nm and 45nm kinds of designs at the chip level, it is inevitable that you are dealing with multiple cores. It is inevitable that you are having to do much more complete and sophisticated modeling of the interaction of a lot of different subsystems because you have a lot more subsystems per chip,” he says.

“I think part of the reason that there’s some confusion about the labeling is that a lot of what’s going on is an incremental, evolutionary development from the kinds of standalone processor development, standalone RTL simulation and modeling into something that’s just programmatically necessary to get a chip pulled together. There is also the more principled, high-minded, advanced thinking about what should in theory happen when we have hundreds of cores, or when we have to find the next factor of 10 in designer productivity. So there is a bit of this top-down thinking about the fundamentals that are happening at the same time,” Rowen notes.

While he believes the majority of engineers go at any task in an evolutionary way, he says the larger principles of ESL are influencing what’s going on in a meaningful way. It is extremely helpful for people to understand that systematic mechanisms are needed to deal with the modeling of multiple processors and other blocks working together.


Rowen: Meaningful changes needed

“It is people who have come from an ESL mindset with a capital E, a capital S and a capital L, who have worked through a lot of the questions of how to bring together diverse models with things like SystemC or SystemVerilog. ‘How am I going to think about programming a collection of heterogeneous cores in a system? How am I going to think about the connection between truly high level modeling environments like Matlab, and the practical, on-chip implementation necessary to do low level embedded programming and low level RTL development?’ Those are very much issues that have been driven by, and where standards have been set by, people from the formal ESL community, whereas the practitioners almost all come from a community of people who have to get the next chip out the door and they want to reuse as many of their existing blocks, languages and methods as they can in getting the job done. So they are going to pick and choose in terms of what they do,” he notes.

Convergence of prevailing ESL approaches

While ESL can be implemented by either modeling using SystemC or by auto-generation of models, which can seem like rather disparate approaches, these paths may be heading towards convergence.

Frank Schirrmeister, Synopsys’ director of product marketing for system-level solutions, says that “ultimately those paths will converge and we’re definitely working toward being able to have one model that you can use for a virtual platform and for implementation.”


Schirrmeister: Goal is one model.

However, he says these different paths have very different requirements in terms of what you enter into the language. “For a model used for software development, you just need the register definitions and some coarse definition of the function behind it. But you don’t need to put any implementation information in there. To automatically get to the implementation from a higher-level model you need to give the tool some more hints as to how you want things implemented. So there are different aspects at play and while we are trying, we are not there yet in terms of having one model to drive all.”
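As a rough sketch of what such a software-development model can look like, the hypothetical SystemC peripheral below (its registers, offsets and behavior are all invented for illustration) exposes only a programmer-visible register map and coarse behavior. There is no timing, pipeline or bus-protocol detail, which is exactly why it is useful to firmware developers but cannot by itself drive implementation.

```cpp
#include <cstdint>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>

// Hypothetical timer peripheral modeled only as far as software can see it:
// a register map plus coarse behavior. No clocking, bus protocol or
// microarchitecture, so it supports firmware development but contains none of
// the hints an implementation flow would need.
struct TimerModel : sc_core::sc_module {
    tlm_utils::simple_target_socket<TimerModel> socket;

    // Programmer-visible registers (offsets invented for this example).
    enum { REG_CTRL = 0x0, REG_COUNT = 0x4 };
    uint32_t ctrl  = 0;
    uint32_t count = 0;

    SC_CTOR(TimerModel) : socket("socket") {
        socket.register_b_transport(this, &TimerModel::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& /*delay*/) {
        uint32_t* data = reinterpret_cast<uint32_t*>(trans.get_data_ptr());
        switch (trans.get_address()) {
        case REG_CTRL:
            if (trans.is_write()) ctrl = *data; else *data = ctrl;
            break;
        case REG_COUNT:
            // Coarse behavior: the count advances on each read while enabled.
            if (trans.is_read()) *data = (ctrl & 1) ? ++count : count;
            break;
        }
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};
```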

There also are a few roadblocks to address along the way.

“These requirements are somewhat opposite. To get a model for software development, you mostly care about speed. To create a model that can be implemented, you care about the implementation aspects and full expression of the functionality together with some constraints. So you can combine these two, but you will make sacrifices on both. It may not run as fast as you like if you would have optimized it just for simulation or it may not be implementable as well. Potentially, users may have to make some sacrifices there,” Schirrmeister explains.

There are two aspects at play: the implementation aspect, which is working for IP blocks, and the integration aspect where the system is integrated and deployed to the software developers and the verification engineers. “It’s a very similar picture as it is in IP reuse at the RT level. You need libraries and so forth,” he says.

These dual aspects at play are the reason Synopsys has started to separate out its design and system-level libraries (the transaction-level libraries) from the implementation tools, Schirrmeister says.

From this perspective, he believes that questioning convergence may be a bit misdirected because “for integration you are talking about another level than when you are talking about implementation. Nobody would try to automatically generate from a high level description the full chip containing three ARM processors and DSPs and peripherals. Hierarchy plays a key role there. That’s why the notion of converging those models is a little bit of a sketchy one – you would never do it at a chip level anyway, and for the blocks and subsystems the question is whether you can have the same model to drive the implementation and the fast model you use for software development.”

Next steps for ESL?

Although ESL convergence can’t yet be counted on, the growing use of processors can.

“People will go on using more and more processors, partly because processors are easier to model and they take some of the heat off verification in the sense that every function that is moved to a processor becomes programmable and therefore the sting of a bug in software is widely understood to be less than the sting of a bug in hardware. People really do want to reduce their bugs from hardware and software. Part of how you get there is by making a system more programmable,” Rowen says.

Programmability is one of the biggest uses of Moore’s Law silicon scaling because it addresses what to do with the transistors. “You add that level of abstraction that is implied by a processor to be able to make the behavior of the system softer, to make it more tolerant of changes,” he notes.

“Going forward,” Rowen continues, “there will also be a need to deal with description, modeling and development more in terms of application-domain-directed languages and modeling, since more and more tools will be available that are oriented toward wireless communications or video processing or network protocol processing and storage management.”

This is so because of the difficulty of describing the behavior of a system without some domain context or some tools that allow for rapid description of the expected behavior and of the algorithms being used. “Certainly this is a place where tools like Matlab have gotten established. It’s actually quite striking, though, how few effective links there are in place today between high level description in Matlab and low level verification and implementation at the software code and RTL hardware level,” he says.

“Certainly people have talked about automatic links for years but when we go out and talk to people about how they actually use the tools, there’s usually one set of engineers who sit in Matlab and develop something and then they write a document which they hand to the implementers and say, ‘this is what you are supposed to do.’ There may be some very sketchy passing of the algorithms or the expected behavior back and forth, but it’s actually still pretty primitive,” Rowen notes. Tensilica recently has worked with customers to develop more substantial automated links between its modeling tools and the Matlab environment, and Rowen says he has been quite surprised at how excited customers were to be able to actually pave that stretch of road.

ESL objectives

The objectives of ESL can be vast and lofty, which has led some things to be labeled “ESL” just because they are above RTL, but that can be too much of a stretch, Mentor’s Bloch points out. “If a tool does not create TLM, i.e., it does not represent a hardware architectural system with software that can be implemented, we don’t call it ESL. A tool that elevates the design abstraction above RTL, which is the TLM level, and can either operate at the TLM level or create TLM as its output, is an ESL tool.”

Along these lines, Bloch sees four primary design objectives for ESL: to improve the productivity of creating algorithmic systems through synthesis; to improve the productivity of system hardware verification; to enable hardware-dependent firmware and software development through virtual prototypes; and to enable architectural system exploration for performance, power and area optimization.

Tools are gradually coming online that are beginning to automate some of the tasks that were manual in the past and to provide better modeling, analysis and verification capabilities. In terms of tool adoption, Bloch believes the amount of pain customers feel will determine how quickly they adopt these technologies.

Part of that pain is the issue of power, although a new TLM power modeling solution is expected to automate much of that work while retaining accuracy very close to gate-level analysis.

Another advancement expected to help drive tool adoption is the recently approved TLM 2.0 standard, which many say is a big step forward in enabling ESL methodologies to progress, thanks to the separation of the function from the timing definition of the model. This is significant because in the previous TLM 1.0 definition, timing and function were so tightly tied together that if somebody wanted to move to a different refinement of timing, which TLM is supposed to allow, they had to rewrite the whole model.

Now that timing is separate from function, automated tools can incrementally back-annotate timing into the model, making it more accurate as the design progresses through the flow without touching the function. That ultimately enables automated tools to do their jobs well using standard modeling techniques.
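To illustrate what that separation enables, the hypothetical sketch below (the block, its register and the latency values are invented) shows how a TLM-2.0 model carries timing as a separate sc_time annotation on the transaction call, so a later, more accurate stage of the flow can back-annotate a latency into the model while the functional body of b_transport stays untouched.

```cpp
#include <cstdint>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>

// Hypothetical status-register block: the functional body of b_transport never
// changes; only the timing annotation layered on top of it does.
struct StatusRegModel : sc_core::sc_module {
    tlm_utils::simple_target_socket<StatusRegModel> socket;
    uint32_t status = 0;

    // Starts untimed; a later, more accurate stage of the flow (or a tool) can
    // install a refined number here, which is the back-annotation step.
    sc_core::sc_time access_latency = sc_core::SC_ZERO_TIME;
    void annotate_latency(sc_core::sc_time t) { access_latency = t; }

    SC_CTOR(StatusRegModel) : socket("socket") {
        socket.register_b_transport(this, &StatusRegModel::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        // Function: identical whether the model is untimed or back-annotated.
        uint32_t* data = reinterpret_cast<uint32_t*>(trans.get_data_ptr());
        if (trans.is_read()) *data = status; else status = *data;
        trans.set_response_status(tlm::TLM_OK_RESPONSE);

        // Timing: a separate annotation carried alongside the transaction.
        delay += access_latency;
    }
};

// Usage sketch: refine timing later without editing the function above, e.g.
//   block.annotate_latency(sc_core::sc_time(12, sc_core::SC_NS));
```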


Smith: Pieces still missing.

Still, there are holes in the ESL design flow, asserts Gary Smith, founder and chief analyst at Gary Smith EDA. At this point, he says, ESL is still manual, tedious and very error-prone. By his reckoning, the industry still needs to deliver the following technologies to complete the ESL flow:

  1. Standards: Behavioral level standards. The architectural level standard was fulfilled in TLM 2.0.
  2. Intelligent test benches: To cut down verification costs to 35% of the cost of the design, no matter the size.
  3. Control logic tools.
  4. Mature power analysis tools.
  5. The silicon virtual prototype: Otherwise there isn’t a handoff point.

What is left of the ESL revolution?

Looking ahead to what remains of what had been called the ESL revolution, Smith says, “We haven’t even really touched the software part of it, and that’s really where the money is. We really need concurrent software tools in order to do multicore and are just starting to get tools out.”

From Rowen’s perspective, the most important work going forward is scaling the scope of modeling to keep up with the hardware and software complexity of the system.

“The high level connection to domain-specific modeling environments is a piece of it, but it is not the central piece at all. It’s just the fact that we’re dealing with chips that are routinely tens of millions of gates whereas when the ESL hype machine started we were dealing with big chips of a million gates of logic,” he says.

Rowen wouldn’t describe ESL as a revolution any more than silicon scaling is a revolution. “It is a major, ongoing, exponential change, but it’s a predictable exponential change and it is likely to continue for some time. I don’t think there is some watershed event where everybody wakes up one day and suddenly the world is different around them. It is a world of incremental improvement, of a gradual but inexorable shift toward more programming, toward more multicore, toward more focus on integration by the SoC developer and less on the individual subsystem components.”

“[ESL] really goes hand in hand with a clear megatrend that says semiconductor companies every year look more like systems companies. They spend a larger fraction of their engineering on software development and are working through the system issues, so they are integrators of components, and they are purchasing more of those components or using automated tools to generate more of those building-block components,” he adds. “The focus is much less on, ‘Am I designing the perfect processor for this, or the perfect RTL block for that?’ It’s how do I acquire it, reuse it, or automatically generate it, so that I can focus more time on the interactions among those pieces or the programming of those pieces.”


