Traversing The Abstraction Landscape

Tools and techniques mitigate the pain of moving between levels of abstraction, balancing the accuracy of lower levels against the efficiency that higher levels provide.


By Ann Steffora Mutschler

Back in the early days of semiconductor design, engineers could count the number of transistors on their chip with their own two eyes. They designed and performed timing analysis at the same level of abstraction. Tools were SPICE-like, perhaps abstracted with slightly simpler timing models than the SPICE-level transistor models.

Thanks to Moore’s Law, the number of transistors that can fit on a chip has grown into the billions, which obviously can’t be counted with the naked eye. Designs of that size also can no longer be simulated at the SPICE level. Abstraction has been the way out, providing a higher-level view of the design.

“Clearly, even when I’m at gate level, I know I’m not getting the same accuracy that I would be getting at SPICE level, but if my models are good enough and it is close enough, I’m willing to take that slight hit to be able to do bigger designs,” noted Barry Pangrle, a solutions architect for low-power design at Mentor Graphics. “That’s the progression that we’ve gone from—transistors to gates. Then people were doing schematic capture and everything was gate-level models. Then we went to RTL and we started moving to RTL models. Now we are moving on into system and bigger components and functional blocks. At each level, we’re giving up some measure of accuracy—it’s just not going to be as detailed. It’s not going to be as fine-grained and the hope is though that we have enough information that we can make the decisions at that level of abstraction.”

The abstraction levels in use today were developed over a long period of time. They are well defined because a huge amount of modeling work was done both to make sure designs can move between levels and to ensure each level carries the appropriate amount of detail for the work that happens there.

“Today, we’ve tuned it and created enough modeling around it so we can get the information that we need out,” said Cary Chin, director of technical marketing for low-power solutions at Synopsys. “But I would say that the model isn’t general enough if we thought of some new use of these connections and voltages and expected it to give us the data that we wanted. Whereas if you did that all in SPICE, it likely would [provide the right data] because that’s one indication of the maturity of the model—whether you can use it for things that weren’t anticipated originally when you built the model.”

At the RTL level, engineers synthesize down to a gate-level netlist so that they can bring in their gate-level models, Pangrle said, with the hope that, based on the information they get from those models, they can create something representative of what they need at the RTL level. “Now we’re looking at going one level beyond that and saying, ‘Okay, at the next level of abstraction what kind of information can we capture here?’ The tricky part is making sure that you still have the level of accuracy that you need to be able to make the types of design decisions that you’re going to rely on that information.”

But these levels of abstraction are not all fun and games. For engineering teams doing low-power designs, there are many challenges in moving between these different design abstraction views, the biggest one between RTL and gate level, because these two abstraction levels differ in so many ways, explained Qi Wang, technical marketing group director for low power and mixed signal at Cadence. “On top of that, there is a lot of handshaking between tools at those two levels.”

For example, he said, an important aspect of low-power design is gathering activity. RTL simulation is run to collect activity, so all of the signal activity is annotated along with all the signal names. The engineer hopes to re-use that activity at the gate level, but the problem is that the name seen at the RTL may not be the name seen at the gate level, because the synthesis tool renames the signals.

Power formats

In addition to this renaming, a lot of optimization can happen between the RTL and the gate level, which means that some signals may simply be optimized away. Another possibility is that the logic is not optimized away, but its representation is changed, Wang said. “On the activity side, this is a flow challenge. The activity file you get for the RTL you hope you can re-use for the gate level, but many times you will find it is very difficult.”

Another difficulty involves the power format, no matter which standard is used, Wang noted. “The whole idea is that you describe your power intent in another file… If you write a power format file for RTL, it will be used for the RTL, so all the names you refer to would be the RTL names. Now when you get to the gate level you hope you can use the same RTL-level power intent, because I want to keep my golden power intent through the design and verification flow.” But this runs into the same problem as the activity file.

To address this, formal verification techniques can be used to generate a name-mapping file that indicates which RTL register corresponds to which flip-flop in the netlist.
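The mechanics of applying such a name map can be sketched in a few lines. This is an illustrative sketch only: the dictionary-based activity and mapping structures are invented for the example (real flows exchange activity via formats such as SAIF or VCD, with tool-specific name maps), but the core operation is the one described above: translate each RTL signal name to its gate-level counterpart, and flag signals that no longer exist in the netlist.

```python
def remap_activity(rtl_activity, name_map):
    """Translate RTL signal names to gate-level names using a name map.

    rtl_activity: dict of RTL signal name -> toggle count (hypothetical format)
    name_map:     dict of RTL signal name -> gate-level name, as produced
                  by a formal name-mapping step

    Returns the remapped activity plus a list of RTL signals with no
    gate-level counterpart (e.g. optimized away during synthesis).
    """
    remapped = {}
    dropped = []
    for rtl_name, toggles in rtl_activity.items():
        gate_name = name_map.get(rtl_name)
        if gate_name is None:
            dropped.append(rtl_name)   # signal was optimized out or restructured
        else:
            remapped[gate_name] = toggles
    return remapped, dropped
```

For instance, if synthesis renamed `cpu/pc` to `cpu/pc_reg` and eliminated `cpu/tmp`, the function would carry the toggle count over to `cpu/pc_reg` and report `cpu/tmp` as unmatched, which is exactly the kind of mismatch engineers otherwise discover only when gate-level power numbers look wrong.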

Then on the power intent side, he suggested the easiest way to deal with the renaming issue is to have the synthesis tool write out a new power intent file, which will automatically reflect the name changes and the hierarchy ungrouping. To enable the flow, however, the power intent written out by synthesis must be equivalent to the original power intent, which is where power-aware equivalence checking tools come in: they prove that the new power intent and the old power intent are equivalent.
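The consistency check at the heart of that step can be illustrated with a toy model. The sketch below is an assumption-laden simplification: it represents power intent as a mapping from power-domain name to the set of instances it contains, whereas real tools consume UPF or CPF files and prove equivalence formally. The idea it demonstrates is the one in the text: translate the golden RTL intent through the synthesis name map, then compare it against the intent synthesis wrote out.

```python
def translate_intent(rtl_intent, name_map):
    """Map each power domain's RTL instance names to gate-level names.

    rtl_intent: dict of power-domain name -> set of RTL instance names
                (a hypothetical stand-in for a parsed UPF/CPF description)
    name_map:   dict of RTL name -> gate-level name; unmapped names pass through
    """
    return {
        domain: {name_map.get(inst, inst) for inst in instances}
        for domain, instances in rtl_intent.items()
    }


def intents_equivalent(rtl_intent, gate_intent, name_map):
    """True if both intents place the same (translated) instances in the
    same power domains -- a toy version of power-aware equivalence checking."""
    return translate_intent(rtl_intent, name_map) == gate_intent
```

A real equivalence checker must also reconcile isolation, retention, and level-shifter strategies, but the renaming problem it solves is the same: the golden intent and the post-synthesis intent should describe one design, modulo the name map.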

Twenty years of hard labor

Traditionally, traversing levels of abstraction has been relatively straightforward—it’s just a lot of work. “If you look at the library modeling process that has evolved to go from kind of transistor level to gate level, things are very well defined today,” Chin said. “Libraries are super solid and vendors know how to characterize things even as the technology changes. That’s an example of a level of abstraction that’s pretty mature because over the last three, four or five generations of technology, we haven’t had to make major changes. There have been many, many little extensions and timing models and functionality and things like that but basically since we haven’t changed the fundamental design flow, the models and libraries have stayed pretty much the same, which is great.”

There have been similar advances in synthesis. “If you look at this between RTL and gate level, synthesis has changed a lot over that time, as well, but in general if you couple synthesis with verification tools and formal verification tools, things have actually grown nicely so that we still have very dependable flows that most people are still pretty happy with. You can push the button and trust what comes out at the other end. And as you recall, it took us 20 years to develop that level of trust,” he concluded.

Once the engineering community moves en masse to the system level, that 20 years could easily be duplicated.