Cross-Talking with TLM 2.0

The new standard isn’t perfect, but it’s a big first step toward raising the level of abstraction in system-level design.

By Ed Sperling

It’s almost like flying over the Great Plains of the United States. On the ground it’s hard to see above the corn stalks, but in an airplane you can see the entire horizon even if you can’t see those stalks anymore.

The analogy captures where most of the major players in chip design say the engineering of systems on chips needs to go. With millions more gates available at each new process node, compounded by multiple power domains and incredibly complex timing issues, scrutinizing detail at the RTL or pin level is becoming less important than seeing the big picture and drilling down from there. The design is a lot easier to plan, follow and verify at a higher altitude, even if the details are a little blurry.

This top-down approach is the basis of the new Transaction-Level Modeling (TLM) 2.0 standard created by a working group of the Open SystemC Initiative. It’s still not perfect—in fact, some engineers say it’s a long way from that—but it’s a lot better than what was there before. And it opens the door to more concurrent design, so that increasingly complex SoCs can be developed at least as quickly as previous generations of chips, and pieces can be re-used much more easily.

What’s new?

The first attempt at raising the level of abstraction into what became known and overhyped as electronic system-level design, or ESL, was the TLM 1.0 standard, introduced in June 2005. But TLM 1.0 gave engineers no way to bridge different tools, so verification engineers had no easy way of linking back to the design engineers or the chip architects. Some chipmakers built their own proprietary bridges between those areas, with inconsistent success.

TLM 2.0 adds structure to this confusion, allowing users of tools that comply with the standard to build models that can be tested and verified across an entire system on a chip.

“The biggest challenge today is models,” said Glenn Perry, general manager of ESL/HDL Design Creation at Mentor Graphics. “If you get some from your IP vendors and some from your internal modeling group, there is no guarantee that these models work together. Historically there have been different protocols for the way these models interface and communicate with each other. TLM 2.0 provides a broad communication standard that makes it easy for these models to connect. As for the tools themselves, only recently have a few EDA vendors taken design analysis, synthesis and verification into the mainstream.”
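
What that common communication standard looks like in code is fairly compact. Below is a minimal, hypothetical sketch of two independently written SystemC models connecting through TLM 2.0’s convenience sockets and generic payload; the module names and the 10 ns latency are illustrative assumptions, not taken from any vendor’s kit.

```cpp
// Minimal sketch (illustrative names): an initiator and a target written
// separately still connect, because both speak the standard generic payload
// over standard TLM-2.0 sockets.
#include <cstring>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

struct SimpleMemory : sc_core::sc_module {
  tlm_utils::simple_target_socket<SimpleMemory> socket;
  unsigned char mem[256];

  SC_CTOR(SimpleMemory) : socket("socket") {
    std::memset(mem, 0, sizeof(mem));
    // Behavior is exposed through the standard blocking transport interface.
    socket.register_b_transport(this, &SimpleMemory::b_transport);
  }

  void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
    sc_dt::uint64 addr = trans.get_address();
    unsigned char* ptr = trans.get_data_ptr();
    if (trans.is_read())
      std::memcpy(ptr, &mem[addr], trans.get_data_length());
    else if (trans.is_write())
      std::memcpy(&mem[addr], ptr, trans.get_data_length());
    delay += sc_core::sc_time(10, sc_core::SC_NS);  // assumed access latency
    trans.set_response_status(tlm::TLM_OK_RESPONSE);
  }
};

struct SimpleInitiator : sc_core::sc_module {
  tlm_utils::simple_initiator_socket<SimpleInitiator> socket;

  SC_CTOR(SimpleInitiator) : socket("socket") { SC_THREAD(run); }

  void run() {
    unsigned int data = 0xCAFEF00D;
    tlm::tlm_generic_payload trans;
    sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
    trans.set_command(tlm::TLM_WRITE_COMMAND);
    trans.set_address(0x10);
    trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
    trans.set_data_length(4);
    trans.set_streaming_width(4);
    trans.set_byte_enable_ptr(nullptr);
    trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);
    socket->b_transport(trans, delay);  // any compliant target will accept this
  }
};

int sc_main(int, char*[]) {
  SimpleInitiator initiator("initiator");
  SimpleMemory    memory("memory");
  initiator.socket.bind(memory.socket);  // no custom glue between the two models
  sc_core::sc_start();
  return 0;
}
```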

How chip developers define those models determines the speed and granularity with which they can develop design components and generate test results. A TLM is an abstraction of a design: it can be analyzed and simulated, and engineers can interact with it in various ways. Loosely timed abstractions, for example, give more general results more quickly, while more exactly timed models generate more detailed results—although more slowly. These are the kinds of tradeoffs designers will need to weigh in the future, as laid out by the Open SystemC Initiative, which developed TLM 2.0.
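
That timing tradeoff shows up directly in how a model treats the delay argument of the standard’s blocking transport call. The sketch below, again with assumed names and latencies, shows the same memory target written two ways: a loosely timed version that merely annotates a delay and returns at once, and a more accurate version that synchronizes with the SystemC scheduler on every access, trading simulation speed for fidelity. It would be bound to an initiator like the one in the previous sketch.

```cpp
// Sketch of the timing tradeoff (assumed 10 ns latency, illustrative name):
// the same target can be loosely timed, annotating a delay for the caller to
// accumulate, or more accurate, synchronizing with the scheduler per access.
#include <cstring>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>

struct TimedMemory : sc_core::sc_module {
  tlm_utils::simple_target_socket<TimedMemory> socket;
  bool accurate;            // false = loosely timed, true = synchronize per access
  unsigned char mem[256];

  TimedMemory(sc_core::sc_module_name name, bool accurate_mode)
      : sc_core::sc_module(name), socket("socket"), accurate(accurate_mode) {
    std::memset(mem, 0, sizeof(mem));
    socket.register_b_transport(this, &TimedMemory::b_transport);
  }

  void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
    const sc_core::sc_time LATENCY(10, sc_core::SC_NS);  // assumed
    sc_dt::uint64 addr = trans.get_address();
    if (trans.is_read())
      std::memcpy(trans.get_data_ptr(), &mem[addr], trans.get_data_length());
    else if (trans.is_write())
      std::memcpy(&mem[addr], trans.get_data_ptr(), trans.get_data_length());

    if (accurate)
      sc_core::wait(LATENCY);  // detailed: a scheduler context switch per access
                               // (legal because the call arrives on the
                               // initiator's SC_THREAD)
    else
      delay += LATENCY;        // loosely timed: fast, timing is only an annotation
    trans.set_response_status(tlm::TLM_OK_RESPONSE);
  }
};
```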

Regardless of which approach is taken, however, all the tools have to work in the same sandbox. “There are two kinds of design flows,” said Jakob Engblom, technical marketing manager at Virtutech in Sweden. “One is from the system guys, who build a box, not an SoC, and then they make the hardware work. The second is from the software developers, who have to make the software work with the hardware. Clearly, there is more and more need for concurrent design, and it needs to be done on the system level, not the component level. An SoC is more than a hardware flow. There is only value at the hardware level if you can add a higher level of abstraction to run software, too.”

Engblom said that simulation at the pin level is no longer viable because it takes too long. There are simply too many parts to simulate—millions of gates, multiple power domains, complex memory and logic structures and shared busses. That job becomes even more complicated when you factor in multiple cores and the software that needs to be developed to take advantage of those cores, and in future iterations of chip development, stacked dies.

The tradeoff in TLM 2.0 comes from using a loosely timed abstraction: at every step up you gain performance and lose detail, but you can connect to more detailed models where you need them. This is a standard, not a device library. You use it to build models and test at the level of abstraction you need. There’s still no getting around creating a system map and a lot of hard work, but the choice is simulating small pieces with a great level of detail or simulating the big picture with a lower level of detail.
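
Much of the speed of the loosely timed style comes from temporal decoupling: an initiator runs ahead of SystemC simulation time and yields to the scheduler only when its accumulated local offset exceeds a global quantum. A sketch of that pattern using the standard’s quantum keeper utility follows; the 1 microsecond quantum and the transaction loop are arbitrary assumptions, and the initiator would be bound to a memory target such as the one shown earlier.

```cpp
// Sketch of temporal decoupling with the standard quantum keeper utility.
// The 1 us quantum and the 1000-transaction loop are arbitrary assumptions.
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/tlm_quantumkeeper.h>

struct DecoupledInitiator : sc_core::sc_module {
  tlm_utils::simple_initiator_socket<DecoupledInitiator> socket;
  tlm_utils::tlm_quantumkeeper qk;  // tracks local time ahead of simulation time

  SC_CTOR(DecoupledInitiator) : socket("socket") {
    // Yield to the scheduler once per quantum, not once per transaction.
    tlm_utils::tlm_quantumkeeper::set_global_quantum(
        sc_core::sc_time(1, sc_core::SC_US));
    qk.reset();
    SC_THREAD(run);
  }

  void run() {
    unsigned int data = 0;
    for (unsigned i = 0; i < 1000; ++i) {
      tlm::tlm_generic_payload trans;
      sc_core::sc_time delay = qk.get_local_time();  // start from local offset
      trans.set_command(tlm::TLM_WRITE_COMMAND);
      trans.set_address(i % 256);
      trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
      trans.set_data_length(4);
      trans.set_streaming_width(4);
      trans.set_byte_enable_ptr(nullptr);
      trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);

      socket->b_transport(trans, delay);  // target annotates its latency

      qk.set(delay);                      // remember how far ahead we have run
      if (qk.need_sync())
        qk.sync();                        // detail is lost between sync points
    }
  }
};
```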

Debugging

That’s particularly useful in the verification world, which accounts for about 70 percent of the time it takes to develop a chip. While models have to be relatively accurate in the design phase, they have to be 100 percent accurate in the verification phase. There is little future for companies that develop chips that do not comply with the original design, or which fail unexpectedly.

What has been frustrating in this area, however, is that verification engineers work with massive amounts of data and have no effective way to pinpoint where bugs are, so they have to pore through all of it to find them. Functional verification is also moving up a level of abstraction, but the models still need to be integrated into the overall system. TLM 2.0 supports those kinds of models, which ultimately may reduce the time it takes to debug complex chips.

“What’s changed is that now you can build all the models in a way that’s useful for them,” said Mike Meredith, president of the Open SystemC Initiative, which created the TLM 2.0 standard as part of a working group involving all the major EDA vendors as well as companies such as STMicroelectronics, Broadcom, Texas Instruments, Infineon, and an array of ESL startups such as Virtutech and Meredith’s own Forte Design Systems. “The standard is agnostic about the processor and the busses.”

TLM also allows engineers to describe a test bench in transaction terms, which means data can be examined functionally rather than bit by bit. But you can have a TLM-based test bench and still be relatively lost if you haven’t evolved the debug platform, said Perry. “One nice addition is the debug transaction interface. You can do really intelligent things with this.”
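
What the standard calls the debug transport interface lets a test bench or debugger read and write a model’s state without consuming simulated time or causing side effects. A minimal sketch, reusing the kind of hypothetical memory target shown earlier:

```cpp
// Sketch of the debug transport interface: the memory model also answers
// zero-time debug accesses, so a debugger or test bench can inspect state
// without advancing simulation time. Names are illustrative.
#include <cstring>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>

struct DebuggableMemory : sc_core::sc_module {
  tlm_utils::simple_target_socket<DebuggableMemory> socket;
  unsigned char mem[256];

  SC_CTOR(DebuggableMemory) : socket("socket") {
    std::memset(mem, 0, sizeof(mem));
    socket.register_b_transport(this, &DebuggableMemory::b_transport);
    socket.register_transport_dbg(this, &DebuggableMemory::transport_dbg);
  }

  void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
    // Normal timed access path, as in the earlier sketches (omitted here).
    trans.set_response_status(tlm::TLM_OK_RESPONSE);
  }

  // Debug access: copy the requested bytes, charge no time, cause no side effects.
  unsigned int transport_dbg(tlm::tlm_generic_payload& trans) {
    sc_dt::uint64 addr = trans.get_address();
    unsigned int len = trans.get_data_length();
    if (addr + len > sizeof(mem)) return 0;        // out of range: nothing moved
    if (trans.is_read())
      std::memcpy(trans.get_data_ptr(), &mem[addr], len);
    else if (trans.is_write())
      std::memcpy(&mem[addr], trans.get_data_ptr(), len);
    return len;                                    // bytes actually transferred
  }
};

// An initiator-side debugger would call: socket->transport_dbg(trans);
```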

The future—more speed, more tweaks

But that intelligence may be limited to certain areas of chip design. Said one engineer, who asked not to be named: “My basic complaint regarding TLM 2.0 is the TLM working group focused too much on memory-mapped buses and the SOC or single ASIC as a focus. In my opinion they left the system out of ESL. In our systems the processor is important, but the processor and its surrounding registers and such are only 10% or less of a single ASIC, and frankly are not really a challenge in ASIC design.”

The engineer said it’s more important to model a host operating system such as Windows or Linux, running specific drivers and customer applications, talking to a virtual storage host bus adapter, and running actual firmware, which is in turn talking to a virtual storage area network with hundreds or even thousands of attached devices. “TLM 2.0 helps us with a tiny, tiny sliver of that rather large task,” he said. “We will use TLM 2.0 when picking up models from vendors where it makes sense, but we will not be rewriting any code to use TLM 2.0. Nor will new code development use TLM 2.0 directly. I think the rush to standardize TLM 2.0 is premature since it really has not yet proven itself. TLM 2.0 has some good features. Defining the possible modeling levels and defining how they interact is good. The idea of sockets to bundle ports together is good, though it can add a large coding burden for someone trying to implement a module to conform to an interface. So in a nutshell, TLM 2.0 is okay for a vendor writing a handful of modules connected to an AXI bus, but it is less suited to modeling large systems with many custom or specialized interfaces. Sure TLM 2.0 can be forced to work in these situations, but in the end it is no better than what I have now.”

Already, changes are afoot to rectify some of these problems. Sources involved in the TLM 2.0 discussions say the next steps include improving performance in areas such as the direct memory interface, which covers how quickly a processor model can reach memory. In plain C++, performance reportedly is almost double what it is in SystemC, the basis of TLM 2.0.
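
The direct memory interface already in the standard shows where that performance work is aimed: a target can grant the initiator a raw pointer into its storage, so a processor model can skip the transport call entirely on the common path. The sketch below shows both sides of that handshake; module names and latencies are assumptions.

```cpp
// Sketch of the TLM-2.0 direct memory interface (DMI): the target grants a raw
// pointer to its backing store, and the initiator bypasses b_transport for
// ordinary accesses. Module names and latencies are illustrative assumptions.
#include <cstring>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

struct DmiMemory : sc_core::sc_module {
  tlm_utils::simple_target_socket<DmiMemory> socket;
  unsigned char mem[256];

  SC_CTOR(DmiMemory) : socket("socket") {
    std::memset(mem, 0, sizeof(mem));
    socket.register_b_transport(this, &DmiMemory::b_transport);
    socket.register_get_direct_mem_ptr(this, &DmiMemory::get_direct_mem_ptr);
  }

  void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
    trans.set_dmi_allowed(true);   // advertise that DMI is available
    trans.set_response_status(tlm::TLM_OK_RESPONSE);
  }

  bool get_direct_mem_ptr(tlm::tlm_generic_payload& trans, tlm::tlm_dmi& dmi) {
    dmi.set_dmi_ptr(mem);          // raw pointer to the whole array
    dmi.set_start_address(0);
    dmi.set_end_address(sizeof(mem) - 1);
    dmi.allow_read_write();
    dmi.set_read_latency(sc_core::sc_time(10, sc_core::SC_NS));   // assumed
    dmi.set_write_latency(sc_core::sc_time(10, sc_core::SC_NS));
    return true;
  }
};

struct DmiInitiator : sc_core::sc_module {
  tlm_utils::simple_initiator_socket<DmiInitiator> socket;

  SC_CTOR(DmiInitiator) : socket("socket") { SC_THREAD(run); }

  void run() {
    tlm::tlm_generic_payload trans;
    tlm::tlm_dmi dmi;
    trans.set_address(0);
    if (socket->get_direct_mem_ptr(trans, dmi) && dmi.is_write_allowed()) {
      // Fast path: plain memory writes, no transport call per access.
      unsigned char* ptr = dmi.get_dmi_ptr();  // points at dmi.get_start_address()
      for (unsigned i = 0; i < 16; ++i)
        ptr[i] = static_cast<unsigned char>(i);
    }
    // Otherwise fall back to b_transport as in the earlier sketches.
  }
};

int sc_main(int, char*[]) {
  DmiInitiator initiator("initiator");
  DmiMemory    memory("memory");
  initiator.socket.bind(memory.socket);
  sc_core::sc_start();
  return 0;
}
```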

There also is room for improvement in network-on-chip designs, where multiple bus architectures have to be bridged together. Some of that capability is included in TLM 2.0, but expect enhancements in future updates of the standard.
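
Today, bridging two memory-mapped domains typically means a small adapter module with a target socket on one side and an initiator socket on the other, forwarding transactions and adding the crossing latency, with any protocol-specific conversion written by hand. A minimal pass-through sketch, with an assumed 5 ns penalty:

```cpp
// Sketch of a simple bus bridge: a target socket faces one interconnect, an
// initiator socket faces the other, and transactions are forwarded with an
// added crossing penalty. Names and the 5 ns penalty are assumptions.
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

struct BusBridge : sc_core::sc_module {
  tlm_utils::simple_target_socket<BusBridge>    upstream;    // bus A side
  tlm_utils::simple_initiator_socket<BusBridge> downstream;  // bus B side

  SC_CTOR(BusBridge) : upstream("upstream"), downstream("downstream") {
    upstream.register_b_transport(this, &BusBridge::b_transport);
  }

  void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
    delay += sc_core::sc_time(5, sc_core::SC_NS);  // cost of crossing the bridge
    // Address translation or protocol conversion would go here.
    downstream->b_transport(trans, delay);         // forward to the other domain
  }
};
```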

In addition, some companies are using tricks that are not yet part of the standard. One is building more optimized allocators, which can speed up performance by three to five times. The trick is learning the methodology in the standard and understanding it well enough to use it effectively. For many companies working with TLM 2.0, the standard is just a starting point. It’s also a bridge between disciplines that have never worked seamlessly together—people with different areas of expertise who now must work on projects concurrently instead of in series.
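
One place the standard itself leaves room for that kind of optimization is the generic payload’s memory-management hook: rather than constructing and destroying payload objects for every transaction, a model can recycle them through a pool. The sketch below shows one such allocator; the pooling policy is an assumption, since the standard only defines the acquire/release/free contract.

```cpp
// Sketch of a pooled payload allocator built on the standard's
// tlm_mm_interface hook. The pooling policy itself is an assumption;
// the standard only defines the acquire/release/free contract.
#include <vector>
#include <tlm>

class PayloadPool : public tlm::tlm_mm_interface {
public:
  ~PayloadPool() {
    for (tlm::tlm_generic_payload* p : pool_) delete p;
  }

  // Hand out a recycled payload if one is available, otherwise create one
  // that knows to come back to this pool when its reference count hits zero.
  tlm::tlm_generic_payload* allocate() {
    tlm::tlm_generic_payload* trans;
    if (!pool_.empty()) {
      trans = pool_.back();
      pool_.pop_back();
    } else {
      trans = new tlm::tlm_generic_payload(*this);
    }
    trans->acquire();   // caller owns a reference
    return trans;
  }

  // Called automatically by release() when the last reference is dropped.
  void free(tlm::tlm_generic_payload* trans) override {
    trans->reset();     // drop any extensions before reuse
    pool_.push_back(trans);
  }

private:
  std::vector<tlm::tlm_generic_payload*> pool_;
};

// Usage sketch:
//   PayloadPool pool;
//   tlm::tlm_generic_payload* trans = pool.allocate();
//   ... set up and send the transaction ...
//   trans->release();  // returns it to the pool instead of deleting it
```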

“What you’re going to see is synergy in design teams as they begin talking to each other and as people learn new skills,” said OSCI’s Meredith. “As development becomes a critical part of design, you’re going to see entirely new job categories emerge.”


