Searching For A System Abstraction

Hardware design has become stuck at the RTL abstraction, but system-level tasks need more views of hardware than are currently available.

Without abstraction, advances in semiconductor design would have stalled decades ago and circuits would remain about the same size as analog blocks. No new abstractions have emerged since the 1990s that have found widespread adoption. The slack was taken up by IP and reuse, but IP blocks are becoming larger and more complex. Verification by isolation is no longer a viable strategy at the system level. Without abstraction, system-level verification becomes an increasingly daunting task.

As systems become larger, they also are becoming more diverse. Verification of the digital logic for an automotive controller is necessary but insufficient. It has to be verified with software, with communication to other controllers, and possibly with sensors, mechatronics and other components that cannot be modeled in the discrete time domain.

To make matters worse, existing abstractions are breaking down for the newer technology nodes, meaning that additional details must be included that were previously ignored. This is adding increasing pressure on verification tools at all levels in the design flow.

The EDA industry successfully introduced the gate level and then the Register Transfer Level (RTL) of abstraction in the ’80s, and RTL became universally accepted in the ’90s. Today it remains the only level of abstraction used consistently all the way from block design to system verification. Higher levels of abstraction have been defined, standardized and used for certain tasks, but there are costs associated with developing those models, the analysis that can be done with them is limited, and most of the time they remain isolated within a development flow.

So how can the industry create a model that provides sufficient detail and accuracy for reliable predictions and design decisions, while being fast enough to perform system-level verification?

What the industry is doing
Abstraction is necessary to deal with complexity and the desire to be able to handle bigger simulations. The price is that some detail must be given up. “But you must still be able to do what you need to do,” asserts Magdy Abadir, vice president of corporate marketing for Helic. “You want to do bigger things without losing so much accuracy that your results become useless. Be it simulation, analysis or tradeoffs, picking an architecture – abstraction is a useful concept, but these conditions have to be met for an abstract model to be useful.”

Some of the time, design teams are not willing to take that risk. “I don’t always think it is abstraction which happens,” says Frank Schirrmeister, senior group director of product management for the System & Verification Group of Cadence. “There is a lot of brute force integration going on. A practical issue is that for systems of systems you need all of the models, all of the RTL, all of the sources, available at the same level, to feed the tools for a system. Some of it may be a legacy chip, so you will have to connect a board. Some of it won’t be available as RTL yet, so you have a model and run that on a virtual node somewhere. Some may be advanced enough in the chip design process that you can throw it into an emulator, and some of it is at some other level – so there is multi-fidelity execution and you have to integrate them.”

Jean-Marie Brunet, director of marketing for the Emulation Division of Mentor, a Siemens Business, suggests a similar strategy. “More than ever, digital logic must interact with real-world systems that are analog in nature. And many of those systems can be modeled – just like digital systems. The difference is that these models often leverage a continuous-time implementation instead of discrete. They’re variously referred to as mathematical, continuous-time, or mechatronic models. In addition, there are discrete-time applications – digital signal processing (DSP) – that may also have mathematical models that are more sophisticated or abstract than what is possible to express in RTL.”

Those models have to be hooked together. “Connecting these types of models up to an emulator is fairly straightforward, and it can provide a much more thorough verification exercise than the more typical use of select scenarios expressed in SystemVerilog,” adds Brunet. “Exactly how you include such models in your emulation plan depends on the origin of the models.”
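
As a rough illustration of the coupling Brunet describes, the sketch below (in Python, with an invented first-order DC-motor plant and a trivial bang-bang controller standing in for the digital side) steps a continuous-time equation with a fixed-step solver while the discrete controller only updates once per control period. Real mechatronic models and emulator hookups are far more elaborate; this shows only the discrete/continuous handshake.

    # Hypothetical sketch: a continuous-time plant (first-order DC-motor speed model)
    # co-simulated with a discrete-time controller. All names and constants are invented.

    def plant_derivative(speed, voltage, k=2.0, tau=0.5):
        # d(speed)/dt for a first-order motor model: tau * dw/dt = k*V - w
        return (k * voltage - speed) / tau

    def run_cosim(t_end=1.0, h=0.0001, control_period=0.001, target=20.0):
        speed, voltage, t = 0.0, 0.0, 0.0
        next_control = 0.0
        while t < t_end:
            if t >= next_control:                          # discrete controller step
                voltage = 12.0 if speed < target else 0.0  # bang-bang decision
                next_control += control_period
            speed += h * plant_derivative(speed, voltage)  # forward-Euler solver step
            t += h
        return speed

    print("final speed:", run_cosim())

The point of the split is that the solver step and the controller step can run at different rates, and even in different engines, which is exactly the multi-fidelity integration problem described above.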

Success with software
The one area in which abstraction has been universally accepted is with processor models and within the software domain. “We can model the processor fairly accurately and it is responsive in time,” says Kevin McDermott, vice president of marketing for Imperas. “If someone has a development board available, they can quickly compare what is happening on the board with the software virtual platform. As they start to probe and debug problems, they begin to see that the observability and controllability of the virtual world provides an extra level of insight, which helps them debug their problem. That has built up a strong confidence. When people feel that it is attractive and approachable, are familiar with it, and see that it solves real problems, it leads to acceptance.”

And abstraction goes further within software. “If you focus on a particular area, which may be modeled in a more detailed way, you can usually afford to abstract the rest of the system,” points out Kevin Brand, manager of application engineering for the Verification Group in Synopsys. “The importance of abstraction is that it allows you to focus so you don’t have a huge detailed model to verify and also allows the technology to perform in a better way. For the verification of digital systems, we try and isolate it down to interfaces. If we can have an abstraction level that still makes the DUT think that it is in the real system, then that is all that we need. Typically, we abstract all of the behavior and concentrate on interfaces.”
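
A toy example of that interface-centric abstraction (all names invented): instead of modeling a full memory subsystem, a small transaction-level stub answers read and write requests so the block under test sees a plausible environment at its interface, while everything behind the interface is reduced to a dictionary and a fixed latency.

    # Illustrative only: a transaction-level memory stub. The DUT (not shown) issues
    # read/write transactions at its interface; the rest of the system is abstracted away.

    class MemoryStub:
        def __init__(self, latency_cycles=4):
            self.storage = {}              # abstracted memory contents
            self.latency = latency_cycles  # constant latency stands in for the real fabric

        def write(self, addr, data):
            self.storage[addr] = data
            return self.latency            # cycles until the write response

        def read(self, addr):
            data = self.storage.get(addr, 0)
            return data, self.latency      # data plus cycles until it is valid

    # Usage: the testbench routes the DUT's bus transactions to the stub.
    mem = MemoryStub()
    mem.write(0x1000, 0xDEADBEEF)
    data, delay = mem.read(0x1000)
    print(hex(data), "after", delay, "cycles")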

Software has also accepted abstraction in the development flow. “In the software world, abstraction has worked brilliantly,” says Rupert Baines, CEO for UltraSoC. “We have gone from coding in binary, to machine coding, to high-level languages, and today we see them do amazing things with Haskell or Erlang or Hadoop or Cuda. These are astonishingly high levels of abstraction, and we have done nothing similar for hardware. You have processor models and people do processor simulations and processor abstractions. They do some hardware abstraction. But we have not been able to generalize it in any sensible way.”

Software has been going through a complexity explosion. “The huge shift in automotive from periodic, task-based programming using static scheduling to multi-threaded C++ applications using a mix of static and dynamic scheduling creates a whole set of unprecedented challenges,” says Maximillian Odendahl, CEO of Silexica. “New tooling needs to be created to analyze systems with a large number of scenarios to get a clear understanding of performance margins and design stability for multiple applications running on the same compute cluster. Fast and cycle-approximate simulation techniques are needed, in combination with application models that are representative of such new applications for both communication and computation.”

In fact, software has essentially divorced itself from the hardware. “Software has to do its complex task and has to worry about a lot of its own internal details, and they are happy to let the hardware be as good as it can be,” says McDermott. “It is assumed that these systems are well designed. You may want to manage the number of transactions that go out to main memory to save power, but that level of modeling is done at the detailed hardware level.”

Digital twins
One development pushing system modeling is the notion of the digital twin. “This is where you have a simulation model of your system and you apply the same data as you would to a real system,” explains Schirrmeister. “You do this for bug tracking and analysis. It is a virtual representation of the real thing, such as an airplane. You take the sensor data from the airplane and you apply it to the digital twin, and you can analyze what should have happened and find out where the real system behaves differently.”

While you can simulate or emulate, both still execute at a fraction of real-time speed. “Another solution is to observe at a higher level of abstraction,” says Baines. “This uses the actual silicon, so you are running at full speed. That matters because system-level problems only show up occasionally, and even at 1/10th real speed that would only deliver a result after a three-month wait. You can also use real data and interactions with the real world. That is critical for complex systems, where test cases can never really reflect the real world.”

This is accomplished through embedded instrumentation. “Instrumentation is normally added to each processor, each of the interconnects, and then key peripherals such as the memory controller and accelerators that are performance-critical,” explains Baines. “That gives you 90% of what you need to know and normally takes less than 1% of die area. This is not about bits and bytes, and it is not about JTAG messaging. It provides a system-level view. We report protocol performance, transaction-level statistical performance, and help people identify anomalies. Once you have located the area of the problem, you can zoom into lower levels of detail.”
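
The statistical, system-level view Baines describes can be imagined as something like the following sketch (the data, field layout and thresholds are hypothetical, not any vendor's API): transaction records from on-chip monitors are grouped into time windows, per-window mean latency is computed, and windows that deviate sharply from the overall behavior are flagged for closer inspection.

    # Hypothetical sketch of transaction-level anomaly spotting.
    from statistics import mean, pstdev

    def flag_anomalies(transactions, window_ns=1000, threshold_sigma=3.0):
        # transactions: list of (timestamp_ns, latency_ns) tuples from on-chip monitors
        windows = {}
        for ts, latency in transactions:
            windows.setdefault(ts // window_ns, []).append(latency)
        means = {w: mean(lat) for w, lat in windows.items()}
        overall, spread = mean(means.values()), pstdev(means.values()) or 1.0
        return [w for w, m in means.items() if abs(m - overall) > threshold_sigma * spread]

    # Example: steady ~50 ns latency with one congested window around t = 5,000 ns.
    trace = [(t, 50 + (200 if 5000 <= t < 6000 else 0)) for t in range(0, 20000, 100)]
    print("suspicious windows:", flag_anomalies(trace))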

Selective focus
Almost all recommendations incorporate some notion of mixed abstraction and selective focus. “In automotive, we make sure we have good links within the ecosystem,” says Synopsys’ Brand. “There are so many tools used within automotive. For consumer and wireless there is some hardware that you connect to, which may be internally developed or a debugger with a board, but with automotive there are so many solutions for modeling that take you through the V cycle. The infrastructure cannot be a standalone system for simulation. It is a cockpit for an ecosystem.”

“The higher the abstraction level, the more important the ecosystem becomes to enable interoperability between vendors, especially EDA tool vendors,” adds Sergio Marchese, technical marketing manager for OneSpin Solutions. “It’s the only way to address all the technical challenges of such a potentially powerful methodology.”

It can get complicated. “Closed systems such as automotive and, to a lesser extent, industrial IoT, provide an answer to the value chain coordination, but there are no easy answers for the scaling problem,” says Srikanth Rengarajan, vice president of products and business development at Austemper Design Systems. “Automotive chips have significant mixed-signal functionality, and EDA vendors provide a variety of fast analog simulators that get the user 80% of the way there for 20% of the cost. While insufficient for standard design problems such as noise analysis, tolerance and verification, that is adequate at the system level.”

Some solutions that used to be adequate are not anymore. “We have been using RC extraction to perform things like timing analysis, power estimation, etc.,” says Helic’s Abadir. “The problem is that there are additional phenomena happening, such as the emergence of inductance and mutual inductance, which add an extra dimension to the problem. This is because frequencies are going higher and the physics makes it necessary.”

In this case, the selective focus has to come from analysis. “You can analyze the design quickly and find the hotspots,” continues Abadir. “By combining that knowledge with abstracted models, I can manage that dilemma. I can focus where I need to do detailed extraction, and in other areas it is sufficient to use abstracted models, be they RC-only or even C-only, or higher-level things that can very quickly provide estimates of timing or power.”
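
Conceptually, that selective focus can be reduced to a triage step like the sketch below (the scoring formula, thresholds and field names are all invented): a quick screening metric decides which nets warrant full, inductance-aware extraction, which get RC, and which can live with a lumped-capacitance estimate.

    # Invented illustration of selective focus: triage nets by a crude risk score so
    # that expensive, inductance-aware extraction is only run where it matters.

    def extraction_mode(net):
        risk = net["freq_ghz"] * net["length_um"]   # crude stand-in for a hotspot score
        if risk > 5000:
            return "full RLC"      # detailed extraction, mutual inductance included
        elif risk > 500:
            return "RC"            # resistance/capacitance only
        return "C estimate"        # lumped capacitance is good enough here

    nets = [
        {"name": "clk_serdes", "freq_ghz": 16.0, "length_um": 800.0},
        {"name": "bus_data3",  "freq_ghz": 2.0,  "length_um": 400.0},
        {"name": "cfg_static", "freq_ghz": 0.1,  "length_um": 100.0},
    ]
    for n in nets:
        print(n["name"], "->", extraction_mode(n))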

Linking models
Every abstraction boundary can lead to an integration challenge. “We link with MathWorks Simulink, which is heavily used within automotive, Saber, Vector CANoe, and others,” says Brand. “We can export our virtual prototypes as Functional Mockup Unit (FMU) blocks so they can be connected to a bigger system via the Functional Mockup Interface (FMI) standard that many third-party simulators support. It is about being part of an ecosystem. We focus on the software verification, but we are surrounded by different levels of abstracted models.”

Mentor’s Brunet explains FMU further. “There are two parts to these models. First, there is the notion of the Functional Mockup Unit, or FMU. You can think of this as a black-box model of whatever function is needed. It consists of C code and an XML file. Access to the FMU comes via the Functional Mockup Interface, or FMI. FMI is an API that encapsulates or wraps the FMU. It’s based on an open standard, meaning that FMUs can then be incorporated regardless of the tool used to generate them. As long as they abide by FMI, then any simulation or emulation environment that speaks FMI can access the model. FMI is also device-independent, making the models portable across verification systems and hosts.”

There are two types of FMU:

• Model-exchange version. The intent is to provide the mathematical definition portion of the model only. “If you’re using this model, it’s assumed that you will have your own solver that’s able to interrogate and exercise the model,” explains Brunet. “If you’re simply trying to provide the model to teams that already have their own tools for executing the models, this is the best format for you.”
• Co-simulation version. This version is better suited to verification and co-simulation. “It includes the solver along with the mathematical model, making it self-sufficient as an executable unit that can be integrated into the testbench,” says Brunet.
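
For the co-simulation flavor, pulling an FMU into a Python-side script might look roughly like the sketch below. It assumes the open-source FMPy package and an invented motor_model.fmu with outputs named 'speed' and 'torque'; real emulator integrations go through the vendor's FMI bridge rather than a script like this.

    # Rough sketch using the open-source FMPy package (pip install fmpy).
    # The FMU file name and its variable names ('speed', 'torque') are invented.
    from fmpy import read_model_description, simulate_fmu

    fmu = "motor_model.fmu"

    # Inspect the model description (the XML file packaged inside the FMU).
    md = read_model_description(fmu)
    print("Model:", md.modelName, "FMI version:", md.fmiVersion)

    # Run the co-simulation FMU with its bundled solver for 2 seconds of model time.
    result = simulate_fmu(fmu, start_time=0.0, stop_time=2.0, output=["speed", "torque"])

    # 'result' is a structured array; each row holds time plus the requested outputs.
    for row in result[:5]:
        print(row)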

Ensuring relevance
In the end, abstractions and models only will be adopted if they provide sufficient value. “Tell me the list of problems that you can actually address with such a solution?” questions Schirrmeister. “Many of them could probably be addressed using divide and conquer, so do you actually have enough problems that you can only find by simulating these pieces together?”

Brand believes that those issues are growing. “You may have a clear boundary on an ECU and it is easier to verify that, but the problems occur when you start putting them together. The interfaces, how they talk, and how they fit together — that is where we see the most bugs. Those parts come from different teams and they are relying on specifications, but it is not until you bring them together that you find the issues. Interface visualization is a big one. Ensuring that the systems, at the communication layer, are operating correctly.”

In the past, EDA has struggled with system-level abstractions and models because the number of people who could extract value from them was small. That number is growing, and the need is becoming greater, but EDA companies can be expected to wait until they feel that demand before investing heavily again.



3 comments

Karl Stevens says:

Yes indeed, interfaces are what it is all about. Unfortunately, RTL is all about connecting registers together to do computation in a clocked system, so no wonder there are problems.

Interfaces are asynchronous and can best be defined by Boolean expressions and propagation delays.

I have my own simulator and syntax so I can see how the interfaces work.

Visual Studio CSharp has everything I needed for free.

Karl Stevens says:

Apparently no one in the hardware world can think of anything beyond RTL.
The tools (compiler/debugger) for software are also usable for hardware, since software classes and objects correspond to hardware modules.
CSharp has bool types, so Boolean expressions can be used for the condition in if statements or conditional assignments. Further, there are no restrictions such as only allowing if statements in always blocks.
This is part of what is behind my previous comment. And I have running code to prove it. The following quote is very relevant:
Software has also accepted abstraction in the development flow. “In the software world, abstraction has worked brilliantly,” says Rupert Baines, CEO for UltraSoC. “We have gone from coding in binary, to machine coding, to high-level languages, and today we see them do amazing things with Haskell or Erlang or Hadoop or Cuda. These are astonishingly high levels of abstraction, and we have done nothing similar for hardware. You have processor models and people do processor simulations and processor abstractions. They do some hardware abstraction. But we have not been able to generalize it in any sensible way.”

Kevin Cameron says:

It’s easy to come up with better ways to do things –

http://parallel.cc (async. C++ extensions for HW)

However, persuading anyone in IC design to change methodology seems to be a lost cause – self-flagellation and moaning seem to be the preferred approach.
