The Trouble With Abstractions

With all of the data needed for system tradeoffs coming from downstream, how well is design abstraction really working?


Ask chip engineers about the value of abstractions and you’re likely to get a spectrum of answers. While abstractions help engineers see the big picture on complex designs, performance and power data must be annotated back from detailed information the engineering team obtains later in the design flow.

There is valuable information that can come from using abstractions correctly, and plenty of wrong information and wasted effort when they are used incorrectly. Engineering teams also need to understand what can be done with abstracted models versus what requires more detail, so they can weigh accuracy/speed/effort tradeoffs precisely.

“The larger the system, the more abstraction is required,” said Jon McDonald, technical marketing engineer for the design and creation business at Mentor Graphics. “At the same time, there will be pieces of the system where various types of analysis will require more detailed models. And the challenge is that this is still more of an art, because there are so many choices for the levels at which you can model all the different elements in the system. The level you model at needs to be driven by the kind of analysis you’re trying to do on the portion of the design you’re most concerned with.”

Still, no one wants to forgo abstractions in designs because the prospect of designing without them is much worse than the challenges of living with them.

“As human beings we can’t live without abstraction,” said Drew Wingard, CTO of Sonics. “This stuff is so complicated, where would we be if we couldn’t abstract it? Do we really want to go back to thinking about our chips at the level of the rectangles that are going to exist on every one of the mask layers? There’s no way. Abstraction is not dead. However, we need more effective ways of feeding information that would normally be learned later in the design process so we can make better decisions today.”

Frank Schirrmeister, senior group director for product management in the system and verification group at Cadence, agreed: “The simple answer is no, abstraction is not dead. But when using abstraction, users need to be really aware what type of decisions they can and cannot make at each step.”

He pointed out that abstraction means something different at every level of the design process. For layout/implementation engineers, abstraction is anything without gates and detailed layout information, anything not derived from a .lib file. For RTL engineers, abstraction is whatever happens higher up at the SystemC level, before there is RTL. And for SystemC engineers, abstraction is whatever was captured in the functional description.

But the understanding of abstraction really depends on the use case, and this is where people sometimes mix things up, because each use case requires a specific type of information.

“The notion that one style of model or one abstraction can bring you everything you need — that has turned out to be really hard,” said Tom De Schutter, senior product marketing manager for Virtualizer Solutions at Synopsys. “You cannot have models that are really fast, but that have all the information, to then also give you performance and power. That’s why we clearly separated, at a top level, the software development task and the more architectural exploration task, where power and performance come into play. Underneath those top levels there are still different use cases that, at a high level, really define a lot of the abstraction. If you look at the software development use case, that’s where functional models as defined by the standard, TLM 2.0 LT (Loosely Timed), have proven to be quite successful. IP vendors like ourselves and like ARM are bringing out models that focus on being functionally accurate and providing maximum simulation speed because of the software use case.”
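That split can be sketched in plain Python. This is not real SystemC/TLM-2.0 code; the class names and the 20 ns latency figure are invented for illustration. The point is that the loosely timed model returns correct data and tracks no timing (maximum simulation speed for software), while a timing-annotated variant of the same model accumulates latency for architectural questions.

```python
class LooselyTimedMemory:
    """LT-style functional model: correct data, no notion of time."""
    def __init__(self):
        self.mem = {}

    def read(self, addr):
        return self.mem.get(addr, 0)

    def write(self, addr, data):
        self.mem[addr] = data


class TimingAnnotatedMemory(LooselyTimedMemory):
    """Same function, plus a per-access latency annotation for
    architectural exploration (the latency value is an assumption)."""
    ACCESS_NS = 20  # assumed access latency per transaction

    def __init__(self):
        super().__init__()
        self.elapsed_ns = 0

    def read(self, addr):
        self.elapsed_ns += self.ACCESS_NS
        return super().read(addr)

    def write(self, addr, data):
        self.elapsed_ns += self.ACCESS_NS
        super().write(addr, data)
```

A software-bring-up flow would use the first model; a bandwidth or latency study would swap in the second, with the same functional behavior.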

De Schutter noted that the architectural exploration side is very different because it requires information related to what is trying to be done and to the tradeoffs being made.

“The annotation of abstracted models is of course how you do this stuff,” Wingard explained. “If we always started with a blank sheet of paper when we were doing a new design, the bar would be really high to do this, but we never start with a blank sheet of paper. We’re almost always starting with something that is based substantially on something we’ve done before, so we typically have more detail than what we need. We have more information than what we need about how that old thing worked.”

The challenge is in pulling the right sets of information out of that old thing to help make good decisions, he said. “Sometimes that data helps us focus on where we should spend time in this next design. A characterization of where the power went in the last design can help us highlight those areas where we might want to invest more time to save power in the next design. Or suppose we’re worried about the performance characteristics, and in this next design we expect to be able to bring in the next-generation DRAM technology or the next speed grade. Do I have enough DRAM bandwidth to satisfy that requirement? Of course we’re going to go look at our characterization from the most recent design, and say, ‘This part doesn’t change; this part does change, so I’m going to assume I just scale it.’ I’m not going to ignore the old dataset. Maybe there’s a new video processing block that’s being designed to do something new, but it’s probably a derivative of the one we had before. So from a memory system performance perspective, all we really need to do is estimate how much more traffic it needs from memory. We understand the shape of the traffic; it’s just the quantity that’s uncertain. We can take the information from the old chip and scale it, and that’s probably going to be good enough to get us to the decisions we need to make early in the process.”
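That scale-the-old-chip estimate boils down to simple arithmetic. A minimal sketch of the idea, in which every traffic figure, scaling factor, and DRAM number is an invented assumption rather than real product data:

```python
# Measured average traffic per block on the previous chip (GB/s, assumed).
prev_traffic_gbs = {
    "cpu_cluster": 6.0,
    "gpu": 10.0,
    "video": 4.0,      # the block being reworked as a derivative
    "display": 3.0,
}

# Per-block scaling guesses for the next design (assumed).
scale = {
    "cpu_cluster": 1.2,  # modest uplift
    "gpu": 1.5,          # larger GPU
    "video": 2.0,        # derivative block, ~2x the traffic
    "display": 1.0,      # unchanged
}

est_demand = sum(prev_traffic_gbs[b] * scale[b] for b in prev_traffic_gbs)

dram_peak_gbs = 51.2            # assumed peak for the next speed grade
usable = 0.7 * dram_peak_gbs    # derate peak for protocol/refresh overhead

print(f"estimated demand: {est_demand:.1f} GB/s, usable: {usable:.1f} GB/s")
print("OK" if est_demand <= usable else "bandwidth risk")
```

The 0.7 derating factor is itself the kind of number one would pull from the previous chip’s characterization rather than guess.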

The goal of this abstraction is to help the architect understand where to focus efforts and which choices make a difference that matters.

“The best system designers I’ve ever had the pleasure of meeting or working with were really good at abstraction,” Wingard said. “They did not believe in flat at all. They always thought in terms of hierarchies of stuff, and they were incredibly good at identifying which parts of the system were the parts where they should spend their time, and they were able to tunnel incredibly deeply through those hierarchies to get to the relevant information, to get to the relevant decisions, to evaluate the relevant tradeoffs associated with what they were trying to optimize. I don’t think that changes. They were practicing ESL without the tools, and we still talk to people every week whose favorite system design tool is the Excel spreadsheet. What are you analyzing at that level of the design? If it boils down to these calculations that look like algebra, Excel’s a pretty darn good way of running algebra.”

The big challenge is granularity. Do the components in a design show up in different abstraction views clearly enough to simulate, analyze or model?

“If I’m going to try to plug it into some tool environment that has a particular model by which it calculates the power, then I don’t just have to come up with a number for this thing. I have to come up with an equation for this thing that works with that model,” he said. “The question is, where is the return on the architect’s investment? Do they get a positive return by making that investment? It’s not about the price of the tool. It’s about how much effort it takes them to create these models.”
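To make the number-versus-equation distinction concrete: instead of handing a tool a single wattage, the architect supplies a parameterized function the tool can evaluate as operating conditions change. The sketch below uses the classic dynamic CMOS power form; the coefficients are illustrative assumptions, not data from any real tool or block.

```python
def dynamic_power_w(alpha, c_farads, v_volts, f_hz):
    """An equation the tool can evaluate, not just a number:
    dynamic CMOS power, P = alpha * C * V^2 * f."""
    return alpha * c_farads * v_volts**2 * f_hz

# Assumed operating point: 0.15 switching activity, 2 nF effective
# switched capacitance, 0.8 V supply, 1 GHz clock.
p = dynamic_power_w(0.15, 2e-9, 0.8, 1e9)
print(f"{p:.3f} W")  # 0.192 W
```

The same function then answers “what if we drop to 0.7 V?” without the architect re-characterizing anything, which is exactly the return on investment the equation buys.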

Toolmakers insist at least part of the problem—and one that has plagued verification for years—is fully understanding how to use tools effectively. “You need to force people to read the instruction manual,” said Schirrmeister. “You need to force them to read what the model can actually be used for, and what it cannot.”

The example here is the notion of virtual platform models at a higher level, he said. “There was always the discussion of, ‘I need it to be fast because I want it to execute software. I need it to be accurate.’ But why does it need to be accurate? It comes down to a CYA mentality, because if you talk to the engineer in the CAD team, they will tell you it needs to be 100% accurate with the implementation at the next level down. But then it is no longer an abstraction. The CAD team member will be very concerned that the product team will use a model in a way that is unintended, and derive a conclusion that is valid with the data they were given. But because they didn’t read the instruction manual for how to use this model, they will get into trouble, because the model didn’t have the right fidelity to give that power or performance information.”

So experts agree that abstraction is necessary, and definitely not dead, but they also concede it doesn’t always yield optimal results, for a variety of reasons. Moreover, the value of abstraction—if done right—increases with each new process node.

But abstractions do have limits. “You can’t say, ‘I need full detail on everything in my entire system,’” noted Mentor’s McDonald. “You’re never going to get it, the simulation would never complete, and it’s just unrealistic. But say I’ve got a critical braking module that needs very detailed latency analysis, while other portions of the system (the motion of the vehicle, the user inputs, the interaction of other ECUs on an automotive platform, for example) can be modeled very abstractly. If I can have very high-level models of the rest of those elements while that one element is very detailed, that is very valuable. I can do some very useful analysis there. I can model at various levels of abstraction and run the simulation, and it runs fairly quickly if I’m only modeling what I’m interested in, with detail on the things I’m interested in. Then we have a viable solution.”
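That mixed-fidelity setup can be sketched as a toy simulation: one detailed model (a braking controller with an explicit processing pipeline) driven by a one-line abstract model of the driver. All names, stage latencies, and the 1 ms time step are invented for illustration; a real flow would use detailed and abstract models in a common simulation framework.

```python
def driver_input(t_ms):
    """Abstract model of everything upstream: the driver brakes
    hard at t = 100 ms. One line is all the fidelity we need here."""
    return 1.0 if t_ms >= 100 else 0.0


class DetailedBrakeECU:
    """Detailed model of the one block we care about: a sense/
    compute/actuate pipeline, modeled as a fixed-depth delay line
    whose total depth equals the sum of the stage latencies (ms)."""
    PIPELINE_MS = [2, 5, 3]  # sense, compute, actuate (assumed)

    def __init__(self):
        self.delay = [0.0] * sum(self.PIPELINE_MS)

    def step(self, cmd):
        self.delay.append(cmd)
        return self.delay.pop(0)


ecu = DetailedBrakeECU()
applied_at = None
for t in range(200):  # 1 ms time steps
    force = ecu.step(driver_input(t))
    if force > 0 and applied_at is None:
        applied_at = t

print(f"brake command at 100 ms, force applied at t = {applied_at} ms")
```

Because only the braking path is modeled in detail, the loop runs in microseconds, yet it still answers the latency question the detailed model exists to answer (here, a 10 ms pipeline delay).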