Could the adoption of aspect-oriented design practices lead us to better designs more quickly? Initial forays with these techniques in verification were not completely successful.
In 1992, Yoav Hollander had the idea to take a software programming discipline called aspect-oriented programming (AOP) and apply it to the verification of hardware. Those concepts were incorporated into the e language and Verisity was formed to commercialize it.
Hollander had seen that using object-oriented (OO) techniques would allow the abstraction of the testbench to be raised above the register transfer level (RTL). Adding aspect-oriented capabilities would provide even more flexibility in the testbenches and would allow RTL patches to be handled as modifications to the testbench rather than by rewriting it.
At the time, e was a significant improvement in the way testbenches were constructed, and while it is still supported by Cadence, its usage has been in decline since SystemVerilog and the Universal Verification Methodology (UVM) became available. Can some of the lessons learned from verification help improve the design flow?
Today, many designs are derivatives of a platform. That platform could have been verified and resulted in fully functional silicon. If a new peripheral block is added, how much of the code for the system has to change? How much has to be re-verified? Perhaps the design has stayed the same, but a lower power version is being created. It would be nice to not have to change any of the fully verified RTL.
“AOP is about being able to add functionality to existing code without reaching into the code,” explains Tom Fitzpatrick, verification technologist at Mentor Graphics. This is exactly where the industry would like to be. But does it make sense?
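Fitzpatrick's one-line definition can be illustrated outside of any hardware language. The sketch below is not e or SystemVerilog (neither is quoted in the article); it is a minimal Python analogy in which a logging "aspect" is woven around an existing method without touching the class's source. The `Driver` class and its `send` method are hypothetical stand-ins for verified testbench code.

```python
class Driver:
    """Stand-in for existing, verified testbench code."""
    def send(self, packet):
        return f"sent {packet}"

log = []  # records what the aspect observes

def add_logging_aspect(cls, method_name):
    """Weave logging 'advice' around an existing method, from outside the class."""
    original = getattr(cls, method_name)
    def advised(self, *args, **kwargs):
        log.append(f"before {method_name}")
        result = original(self, *args, **kwargs)
        log.append(f"after {method_name}")
        return result
    setattr(cls, method_name, advised)

# The aspect is applied without editing Driver's source file.
add_logging_aspect(Driver, "send")
result = Driver().send("pkt0")
```

The point, and the danger Bergeron raises later, are both visible here: `Driver` now behaves differently even though its source is unchanged.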
Drew Wingard, chief technology officer at Sonics, takes a step back to look at abstract modeling for the hardware. “Within the on-chip network (OCN) space the network starts by looking like an address map, which tells the abstract model which hardware resource it is talking to and what address within that resource is being accessed. The most important part is the performance aspect of those systems. A substantial part of the cost/performance benefits come from sharing expensive resources such as off-chip memory. The ability to share that has everything to do with making sure the components in the system can achieve the necessary performance levels.”
Wingard observed there was more synergy between the architects writing the performance models and the verification guys who were trying to abstract the underlying hardware than with the actual hardware designers themselves. This perhaps indicates that AOP may be more useful at the higher levels of abstraction than for any change in the existing flow associated with RTL design.
“When we look at performance analysis you need an idea of a use model and from that we can derive an operating scenario,” continues Wingard. “For a given SoC there tend to be the set of the most important use cases that the design team is considering for the device. This focuses you on making sure you can appropriately size and configure the memory and the network necessary to satisfy the throughput requirements of the cores.”
Is power similar? Could we use AOP to specify the power aspect of a design and thus be able to implement these features without modifying the functionality of the design? Wingard sees a similarity and further synergy. “We have a set of use cases from which we derive a set of scenarios and we want to make sure our power usage is appropriate across those. What is the synergy between these and the ones being used for performance? There is about an 80% overlap. There is a lot of synergy between the abstract description of these scenarios for optimizing network topology, memory system characteristics for performance and for optimizing the domain partitioning for power.”
So why is the industry not ready to quickly jump on AOP?
Power can have problems
“The problem with AOP is that while powerful for certain types of application, it is also very dangerous,” says Janick Bergeron, fellow at Synopsys and a significant user of the e language while he worked at Qualis, a company Synopsys acquired in 2003. “As an IP provider I want to know what is ultimately getting executed. Anyone could muck with my code, replace it, extend it, add to it in ways that I could not control. This makes support a nightmare. You need to know the exact code, the exact order in which it was loaded to reproduce a problem.”
Mentor’s Fitzpatrick agrees. “The main complaint about AOP is that you have to be very careful or you can get into trouble. The e Reuse Methodology (eRM) was a way to help users avoid those problems. Since then, similar capabilities have been brought into the UVM. UVM focuses on the OO part where you can extend a class to create a new class and swap them in using the factory. It is a lot easier to keep track of what you finally end up with. It may take a little more code, but the control that it gives you without sacrificing much flexibility is advantageous.”
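The extend-and-swap pattern Fitzpatrick describes is the UVM factory, which is SystemVerilog; the sketch below mirrors the idea in Python under assumed names (`Factory`, `Monitor`, `ErrorInjectingMonitor` are all illustrative, not UVM API). The environment asks the factory for a type, and a registered override substitutes a derived class, so the extension is explicit and traceable rather than woven in invisibly.

```python
class Factory:
    """Toy version of the factory pattern UVM uses: environment code requests
    a base type; a registered override swaps in a derived class."""
    _overrides = {}

    @classmethod
    def set_type_override(cls, base, replacement):
        cls._overrides[base] = replacement

    @classmethod
    def create(cls, base, *args):
        # Use the override if one was registered, else the base type.
        return cls._overrides.get(base, base)(*args)

class Monitor:
    def kind(self):
        return "base monitor"

class ErrorInjectingMonitor(Monitor):  # an extended class, not a patched one
    def kind(self):
        return "error-injecting monitor"

# The environment code that calls Factory.create never changes;
# only this one override registration does.
Factory.set_type_override(Monitor, ErrorInjectingMonitor)
m = Factory.create(Monitor)
```

Because every substitution goes through one registration point, it is "a lot easier to keep track of what you finally end up with" than with open-ended aspect weaving.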
Bergeron also sees problems with the scalability of AOP. “When the project was a small IP core, AOP worked okay, but as the design grew larger, it became a management nightmare. When you can keep it all in your head it is fine, but beyond that …”
Agile and AOP?
Agile is another technique receiving a fair amount of attention today. Agile replaces the established waterfall type of development methodology with an incremental approach. It would appear that AOP complements Agile.
“Aspects allow you to make the fixes and iterations that Agile calls for, and you can do it from the outside,” says Bergeron. “So do I change my code base or extend it? Do I incur the technical debt now or later?” Bergeron defines technical debt as a term used to describe the inertia or entropy that is accumulated by taking short cuts. “You have to pay up on this eventually.”
So while Bergeron sees a possible synergy, he is unconvinced about its applicability. He believes that AOP incurs a high debt but that “agile appears to be about incurring as little debt as possible. AOP is great for writing tests because this is the last thing that you write. Anything that is infrastructure or meant to be reusable, we would recommend that people stay away from it because you cannot control it or protect it.”
The question becomes how to gain the advantages without incurring the technical debt. “The fact that AOP is meant to group together all of the code related to a cross-cutting concern is interesting and you can do that with pure object-oriented programming by putting all of your class extensions and callbacks within a single file,” explains Bergeron. “It is a matter of organization and methodology.”
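Bergeron's pure-OO alternative relies on components exposing explicit hooks. The Python sketch below (assumed names throughout; `Scoreboard` is a hypothetical verified component) shows a callback hook: everything related to one cross-cutting concern, here functional coverage collection, can live together in a single file without the component's source changing.

```python
class Scoreboard:
    """Existing verified component that exposes an explicit callback hook."""
    def __init__(self):
        self.callbacks = []
        self.errors = 0

    def check(self, item):
        ok = item >= 0          # trivial stand-in for a real check
        if not ok:
            self.errors += 1
        for cb in self.callbacks:
            cb(item, ok)        # the hook: aspects plug in here, visibly
        return ok

# --- everything below could live in one 'coverage concern' file ---
coverage = []

def coverage_callback(item, ok):
    coverage.append((item, ok))

sb = Scoreboard()
sb.callbacks.append(coverage_callback)
sb.check(5)
sb.check(-1)
```

The difference from full AOP is that the component author chose where extension is allowed, which is exactly the control Bergeron wants an IP provider to retain.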
No argument there from Fitzpatrick: “We have done similar things within SystemVerilog and UVM with the factory.”
AOP in design
Bergeron provides a hypothesis for AOP in design. “AOP may be relevant for the design side because none of the design side has the capability to be object oriented. It is an inherently structural thing. If you take a module that implements a design and you want to add functionality to it, you can’t extend the module. There are no virtual things on the module that could be extended. I have to modify the file. By doing that I create a new module and now I have to modify the environment to verify that I have maintained the previous functionality and added the new functionality. What if I could define aspects on the module to add a new function, to add some ports, some new functionality? The original module is still there and the original functionality can still be verified but you only need to add the testbench for the new function.”
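Bergeron's point is precisely that Verilog modules offer no such extension mechanism, so his hypothesis cannot be written in RTL today. As a rough analogy only, the Python sketch below (hypothetical `Alu` block and `add_sat` function) shows the workflow he describes: the original module's source and its regression stay untouched, and only the added function needs new tests.

```python
class Alu:
    """Stand-in for a verified design block."""
    def add(self, a, b):
        return a + b

class AluWithSaturate(Alu):
    """New functionality layered on top; Alu's source file is untouched."""
    def add_sat(self, a, b, limit=255):
        return min(self.add(a, b), limit)

# The original regression runs unchanged against both versions...
def original_regression(dut):
    assert dut.add(2, 3) == 5

original_regression(Alu())
original_regression(AluWithSaturate())  # base behavior preserved

# ...and only the new function needs a new test.
result = AluWithSaturate().add_sat(200, 100)
```

In Bergeron's scenario, adding ports and logic to a module this way would let the previously verified functionality be re-checked as-is.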
Fitzpatrick is not in a hurry to follow down that path. “In the design community, hardware people don’t tend to think even in OO terms let alone AOP terms and are more likely to end up with spaghetti code. I would be hesitant to go to a design team and say we have the ability to use AOP for your RTL design.”
But it would appear as if the adoption of the Unified Power Format (UPF) is defining an aspect of the design – its power profile. “We do need to define whether AOP means everything is within a single programming language,” says Bergeron. “The execution semantics are defined by adding the aspects, but power does not affect execution semantics and thus is only a different view of the execution. It will affect the final implementation – so from an implementation point of view, they are aspects but unlike AOP, you need not be using the same language.”
“UPF is a kind of aspect-oriented design,” says Anand Iyer, director of product marketing for Calypto. However, he feels that we do not have the tools necessary to be able to use it in this way. “The challenge of putting various aspects together is in the hands of downstream tools. Many of these are late in the process, and determining how they interact requires performance analysis tools, which are only available late in the game.”
Fitzpatrick believes that it is something else. “People do seem to be comfortable with the idea that you have the ability to specify orthogonal functionality to the functional design – so having this separate thing that is placed on top of their RTL that can reach in and massage things a bit may be acceptable.”
Still, Fitzpatrick is concerned about the implementation of such things and would not apply the full AOP model. “From a coding standpoint, I prefer being able to see everything in one of two places – either in the module definition or the UPF specification for that module. You know where to look for any piece of functionality and don’t have to worry about which other pieces of code my compiler may happen to link in to modify stuff that I was happy with.”
Fitzpatrick can see advantages in this. “You really want to separate the logical functionality, which is what we have traditionally had to worry about with verification, from the additional functionality that adds more complexity to the verification problem. Also, the ability to verify the power aspects of a design as a standalone thing makes sense and you can make sure that it is coherent before you try and check it out on top of the functionality in the RTL. This divides the problem and allows you to consider them separately, but also to look at them together, which is very powerful.”
Not everyone sees UPF as part of a top-down process. “UPF is an abstraction and there is a benefit from doing the design of the power control hardware, taking the resulting partition of the design netlist that meets the physical requirements of the flow and then automatically creating a UPF representation that matches it,” says Wingard, who sees UPF as a way to document choices. “We all want IP to come with a constraint UPF that describes the underlying capabilities of the block for power control. At the integration level we need to know what am I going to pay for it, how many domains can I afford? Should I wrap a power isolation ring around each of the blocks? How can I reduce the total number of power domains and states to a manageable subset for implementation? That is communicated to the flow by some incremental UPF. As we develop power controllers for the domains we need to make sure they are consistent with the UPF that describes them.”
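To make the discussion concrete, UPF (IEEE 1801) is a Tcl-based format kept separate from the RTL it describes. The fragment below is a hypothetical, simplified sketch of the kind of thing Wingard describes, a power domain with an isolation strategy around a block; the instance and net names are illustrative, and a real, tool-validated UPF file would carry considerably more detail.

```tcl
# Hypothetical UPF fragment -- names are illustrative, not from any real design.
create_power_domain PD_PERIPH -elements {u_periph}

create_supply_net VDD_PERIPH -domain PD_PERIPH
create_supply_net VSS        -domain PD_PERIPH
set_domain_supply_net PD_PERIPH \
    -primary_power_net VDD_PERIPH -primary_ground_net VSS

# An isolation ring around the block's outputs, of the kind Wingard weighs
# the cost of adding per block.
set_isolation iso_periph -domain PD_PERIPH \
    -clamp_value 0 -applies_to outputs
```

Nothing in this file touches the RTL's functional description, which is why the article treats UPF as defining a separate aspect of the design.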
A path to the future
Many years ago, Gary Smith and I had a debate lasting several years about the adoption of ESL. While Smith concentrated on design tools, such as High Level Synthesis, I always maintained that none of them would be fully successful until the modeling and verification of the abstract design had been solved. There appeared to be too many open questions about what needed to be verified and how it should be verified. Without this, the value of ESL has always been questioned and the design flows uncertain.
“The challenge for ESL has been to get the ROI to a point where the chip architects are willing to invest in some tooling,” concurs Wingard. Today, most SoC architects use spreadsheets, but he sees light at the end of the tunnel. “If we have information that can be captured about scenarios that can be used in multiple ways – that can be used to drive a good model for the system performance, to drive power architecting and enabling tradeoffs between those areas, then there is more likelihood that they will get adopted.”
The Accellera Portable Stimulus Working Group efforts may result in better ways in which the scenarios can be defined and that in turn could drive the verification and design methodologies of the future.