Poised For Aspect-Oriented Design?

Could the adoption of aspect-oriented design practices lead us to better designs more quickly? Initial forays with these techniques in verification were not completely successful.


In 1992, Yoav Hollander had the idea to take a software programming discipline called aspect-oriented programming (AOP) and apply it to the verification of hardware. Those concepts were incorporated into the e language and Verisity was formed to commercialize it.

Hollander had seen that using object-oriented (OO) techniques would allow the abstraction of the testbench to be raised above the register transfer level (RTL). Adding aspect-oriented capabilities would provide even more flexibility in the testbenches and would allow RTL patches to be handled as modifications to the testbench rather than by rewriting it.

At the time, e was a significant improvement in the way testbenches were constructed, and while it is still supported by Cadence, its usage has been in decline since SystemVerilog and the Universal Verification Methodology (UVM) became available. Can some of the lessons learned from verification help improve the design flow?

Today, many designs are derivatives of a platform. That platform could have been verified and resulted in fully functional silicon. If a new peripheral block is added, how much of the code for the system has to change? How much has to be re-verified? Perhaps the design has stayed the same, but a lower power version is being created. It would be nice to not have to change any of the fully verified RTL.

“AOP is about being able to add functionality to existing code without reaching into the code,” explains Tom Fitzpatrick, verification technologist at Mentor Graphics. This is exactly where the industry would like to be. But does it make sense?
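Fitzpatrick's description can be sketched outside of e. The following Python fragment (all names hypothetical; e expresses this natively with its `extend` construct) shows an "aspect" adding behavior to an existing class from a separate file, without editing the original source:

```python
# original.py -- the shipped verification component, never edited
class Driver:
    def send(self, packet):
        return f"sent {packet}"

# aspect.py -- adds logging around send() without touching the original
def add_logging(cls):
    original_send = cls.send              # capture the original behavior
    def send_with_log(self, packet):
        print(f"[log] sending {packet}")  # the added cross-cutting concern
        return original_send(self, packet)
    cls.send = send_with_log              # weave the advice into the class
    return cls

add_logging(Driver)                       # applied from outside the code base
```

Every `Driver` in the environment now logs its traffic, yet the file that defines `Driver` is untouched. That reach is exactly what makes the technique both powerful and, as discussed below, controversial.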

Drew Wingard, chief technology officer at Sonics, takes a step back to look at abstract modeling for the hardware. “Within the on-chip network (OCN) space, the network starts by looking like an address map, which tells the abstract model which hardware resource it is talking to and what address within that resource is being accessed. The most important part is the performance aspect of those systems. A substantial part of the cost/performance benefit comes from sharing expensive resources such as off-chip memory. The ability to share has everything to do with making sure the components in the system can achieve the necessary performance levels.”

Wingard observed there was more synergy between the architects writing the performance models and the verification guys who were trying to abstract the underlying hardware than with the actual hardware designers themselves. This perhaps indicates that AOP may be more useful at the higher levels of abstraction than for any change in the existing flow associated with RTL design.

“When we look at performance analysis, you need an idea of a use model, and from that we can derive an operating scenario,” continues Wingard. “For a given SoC there tends to be a set of the most important use cases that the design team is considering for the device. This focuses you on making sure you can appropriately size and configure the memory and the network necessary to satisfy the throughput requirements of the cores.”

Is power similar? Could we use AOP to specify the power aspect of a design and thus be able to implement these features without modifying the functionality of the design? Wingard sees a similarity and further synergy. “We have a set of use cases from which we derive a set of scenarios and we want to make sure our power usage is appropriate across those. What is the synergy between these and the ones being used for performance? There is about an 80% overlap. There is a lot of synergy between the abstract description of these scenarios for optimizing network topology, memory system characteristics for performance and for optimizing the domain partitioning for power.”

So why is the industry not ready to quickly jump on AOP?

Power can have problems
“The problem with AOP is that while powerful for certain types of application, it is also very dangerous,” says Janick Bergeron, fellow at Synopsys and a significant user of the e language while he worked at Qualis, a company Synopsys acquired in 2003. “As an IP provider I want to know what is ultimately getting executed. Anyone could muck with my code, replace it, extend it, add to it in ways that I could not control. This makes support a nightmare. You need to know the exact code, the exact order in which it was loaded to reproduce a problem.”

Mentor’s Fitzpatrick agrees. “The main complaint about AOP is that you have to be very careful or you can get into trouble. The e Reuse Methodology (eRM) was a way to help users avoid those problems. Since then, similar capabilities have been brought into the UVM. UVM focuses on the OO part, where you can extend a class to create a new class and swap them in using the factory. It is a lot easier to keep track of what you finally end up with. It may take a little more code, but the control that it gives you without sacrificing much flexibility is advantageous.”
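The factory approach Fitzpatrick describes can be sketched in Python (hypothetical names; UVM itself is a SystemVerilog library with a richer API): a derived class is registered as an override, so the environment builds the new type while the request site stays unchanged.

```python
# A minimal UVM-style factory sketch (not the real UVM API)
class Factory:
    _overrides = {}

    @classmethod
    def set_type_override(cls, original, replacement):
        cls._overrides[original] = replacement    # the one place the swap happens

    @classmethod
    def create(cls, requested):
        # Return an instance of the override if registered, else the requested type
        return cls._overrides.get(requested, requested)()

class Monitor:
    def kind(self):
        return "base monitor"

class ErrorInjectingMonitor(Monitor):             # plain OO extension, not an aspect
    def kind(self):
        return "error-injecting monitor"

# The testbench always asks the factory for its components
Factory.set_type_override(Monitor, ErrorInjectingMonitor)
m = Factory.create(Monitor)                       # gets the derived class
```

The override is visible in one declared location, which is the traceability advantage Fitzpatrick cites over an aspect that could be woven in from anywhere.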

Bergeron also sees problems with the scalability of AOP. “When the project was a small IP core, AOP worked okay, but as the design grew larger, it became a management nightmare. When you can keep it all in your head it is fine, but beyond that …”

Agile and AOP?
Agile is another technique receiving a fair amount of attention today. Agile replaces the established waterfall type of development methodology with an incremental approach. It would appear that AOP complements Agile.

“Aspects allow you to make the fixes and iterations that Agile calls for, and you can do it from the outside,” says Bergeron. “So do I change my code base or extend it? Do I incur the technical debt now or later?” Bergeron defines technical debt as a term used to describe the inertia or entropy that is accumulated by taking short cuts. “You have to pay up on this eventually.”

So while Bergeron sees a possible synergy, he is unconvinced about its applicability. He believes that AOP incurs a high debt but that “agile appears to be about incurring as little debt as possible. AOP is great for writing tests because this is the last thing that you write. Anything that is infrastructure or meant to be reusable, we would recommend that people stay away from it because you cannot control it or protect it.”

Other Ways
The question becomes how to gain the advantages without incurring the technical debt. “The fact that AOP is meant to group together all of the code related to a cross-cutting concern is interesting, and you can do that with pure object-oriented programming by putting all of your class extensions and callbacks within a single file,” explains Bergeron. “It is a matter of organization and methodology.”
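Bergeron's alternative can be sketched as follows (Python, hypothetical names): the component author reserves explicit callback hooks, and all code for one cross-cutting concern lives together in one file.

```python
# The component author reserves explicit, documented extension points
class Scoreboard:
    def __init__(self):
        self.pre_check_callbacks = []     # hooks under the author's control

    def check(self, item):
        for cb in self.pre_check_callbacks:
            item = cb(item)               # each registered concern sees the item
        return f"checked {item}"

# coverage_concern.py -- everything for one cross-cutting concern, one file
seen = []
def record_coverage(item):
    seen.append(item)                     # collect coverage data as items pass by
    return item

sb = Scoreboard()
sb.pre_check_callbacks.append(record_coverage)
```

Unlike a woven aspect, the hook is part of the component's published contract, so the author controls exactly where outside code can intervene.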

No argument there from Fitzpatrick: “We have done similar things within SystemVerilog and UVM with the factory.”

AOP in design
Bergeron provides a hypothesis for AOP in design. “AOP may be relevant for the design side because nothing on the design side has the capability to be object oriented. It is an inherently structural thing. If you take a module that implements a design and you want to add functionality to it, you can’t extend the module. There are no virtual things on the module that could be extended. I have to modify the file. By doing that I create a new module, and now I have to modify the environment to verify that I have maintained the previous functionality and added the new functionality. What if I could define aspects on the module to add a new function, to add some ports, some new functionality? The original module is still there and the original functionality can still be verified, but you only need to add the testbench for the new function.”
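Bergeron's hypothesis can be illustrated by modeling a module as a Python class (all names hypothetical; today's HDL modules offer no such extension point):

```python
# The original, fully verified "module" -- its source is never edited
class Adder:
    ports = ["a", "b", "sum"]
    def eval(self, a, b):
        return a + b

# aspect file: add a port and a new function without touching Adder's source
def carry_aspect(cls):
    cls.ports = cls.ports + ["carry"]     # the added port
    def eval_carry(self, a, b):           # new functionality alongside the old
        return (a + b) >> 8               # carry out of an 8-bit result
    cls.eval_carry = eval_carry
    return cls

carry_aspect(Adder)
```

The original `eval` behavior is untouched and its existing verification still applies; only the new `eval_carry` function needs a new testbench, which is exactly the incremental re-verification Bergeron is after.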

Fitzpatrick is not in a hurry to follow down that path. “In the design community, hardware people don’t tend to think even in OO terms let alone AOP terms and are more likely to end up with spaghetti code. I would be hesitant to go to a design team and say we have the ability to use AOP for your RTL design.”

But it would appear as if the adoption of the Unified Power Format (UPF) is defining an aspect of the design – its power profile. “We do need to define whether AOP means everything is within a single programming language,” says Bergeron. “The execution semantics are defined by adding the aspects, but power does not affect execution semantics and thus is only a different view of the execution. It will affect the final implementation – so from an implementation point of view, they are aspects, but unlike AOP, you need not be using the same language.”

“UPF is a kind of aspect-oriented design,” says Anand Iyer, director of product marketing for Calypto. However, he feels that we do not have the tools necessary to use it in this way. “The challenge of putting various aspects together is in the hands of downstream tools. Many of these are late in the process, and determining how they interact means you need the necessary performance analysis tools, and those are only available late in the game.”

Fitzpatrick believes that it is something else. “People do seem to be comfortable with the idea that you have the ability to specify orthogonal functionality to the functional design – so having this separate thing that is placed on top of their RTL that can reach in and massage things a bit may be acceptable.”
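A small UPF fragment illustrates the “separate thing placed on top of the RTL”: the domain, net, and instance names below are made up for the example, but the power intent lives entirely outside the module it governs.

```tcl
## Power intent for a peripheral block -- lives beside, not inside, the RTL
create_power_domain PD_PERIPH -elements {u_periph}

create_supply_net VDD_PER -domain PD_PERIPH
set_domain_supply_net PD_PERIPH \
    -primary_power_net VDD_PER -primary_ground_net VSS

## Outputs of the domain are isolated and clamped low when powered down
set_isolation iso_periph -domain PD_PERIPH \
    -isolation_power_net VDD -clamp_value 0 -applies_to outputs
set_isolation_control iso_periph -domain PD_PERIPH \
    -isolation_signal iso_en -isolation_sense high
```

Nothing in the RTL for `u_periph` mentions isolation or power domains; the downstream tools weave this intent into the implementation, which is the aspect-like separation under discussion.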

Still, Fitzpatrick is concerned about the implementation of such things and would not apply the full AOP model. “From a coding standpoint, I prefer being able to see everything in one of two places – either in the module definition or the UPF specification for that module. You know where to look for any piece of functionality and don’t have to worry about which other pieces of code my compiler may happen to link in to modify stuff that I was happy with.”

Fitzpatrick can see advantages in this. “You really want to separate the logical functionality, which is what we have traditionally had to worry about with verification, from the additional functionality that adds more complexity to the verification problem. Also, the ability to verify the power aspects of a design as a standalone thing makes sense and you can make sure that it is coherent before you try and check it out on top of the functionality in the RTL. This divides the problem and allows you to consider them separately, but also to look at them together, which is very powerful.”

Not everyone sees UPF as part of a top-down process. “UPF is an abstraction and there is a benefit from doing the design of the power control hardware, taking the resulting partition of the design netlist that meets the physical requirements of the flow and then automatically creating a UPF representation that matches it,” says Wingard, who sees UPF as a way to document choices. “We all want IP to come with a constraint UPF that describes the underlying capabilities of the block for power control. At the integration level we need to know what am I going to pay for it, how many domains can I afford? Should I wrap a power isolation ring around each of the blocks? How can I reduce the total number of power domains and states to a manageable subset for implementation? That is communicated to the flow by some incremental UPF. As we develop power controllers for the domains we need to make sure they are consistent with the UPF that describes them.”

A path to the future
Many years ago, Gary Smith and I had a debate lasting several years about the adoption of ESL. While Smith concentrated on design tools, such as High Level Synthesis, I always maintained that none of them would be fully successful until the modeling and verification of the abstract design had been solved. There appeared to be too many open questions about what needed to be verified and how it should be verified. Without this, the value of ESL has always been questioned and the design flows uncertain.

“The challenge for ESL has been to get the ROI to a point where the chip architects are willing to invest in some tooling,” concurs Wingard. Today, most SoC architects use spreadsheets, but he sees light at the end of the tunnel. “If we have information that can be captured about scenarios that can be used in multiple ways – that can be used to drive a good model for the system performance, to drive power architecting and enabling tradeoffs between those areas, then there is more likelihood that they will get adopted.”

The Accellera Portable Stimulus Working Group efforts may result in better ways in which the scenarios can be defined and that in turn could drive the verification and design methodologies of the future.

  • Simon

A lot of the expert opinions in this article reflect a severe lack of recent experience with AOP – especially with verifying large modern SoCs. Personally, I would be embarrassed to discuss a technology in 2015 that I last used in 2003.

    I’d like to address the objections to AOP stated in the article:

    1) Encapsulation is fully supported (public, private, protected, etc…) so “mucking” with code isn’t a real issue. As an IP provider, you can also encrypt your code or define it as un-extendable.

2) Along with basic coding guidelines, UVM-e is significantly more robust than UVM-SV and provides all the direction required to not “get into trouble”. Additionally, no macros are required as every feature required by UVM is already built into the language. In my experience this results in almost half the code of UVM-SV. Every line of code is a chance to write a bug, so this is a major advantage.

    3) We have projects that very successfully scale UVM-e for full-chip verification. Similar projects that use UVM-SV fail to scale beyond the subsystem level. While our UVM-e projects are happy to use any available emulation, our SV projects absolutely require emulation to verify their system.

4) Our HLS designs can easily be verified by UVM-e. The methodology and feature set is agnostic of the underlying design language. I can use the exact same testbench for my SystemC design and the generated RTL. All I have to change is a single file declaring signal names and turn on my timing-specific checks. I couldn’t do this very easily with UVM-SV, and even if I did, the performance of the testbench would drastically reduce my productivity in SystemC.

    The real reason the industry has not jumped on AOP is because only Cadence supports it well. If either Synopsys or Mentor had a viable AOP solution, they would quickly promote it as a significant improvement over UVM-SV. Instead, all I see from them is highly ignorant FUD.

  • Blake

    Greetings Brian. I would like to offer a counter example to ->
    “When the project was a small IP core, AOP worked okay, but as
    the design grew larger, it became a management nightmare”

    We (the proud https://en.wikipedia.org/wiki/Xeon_Phi team)
have been quite successful in exploiting AOP (via UVM-e) for very large SoCs. We have enjoyed a continuously evolving/improving code base for almost 10 years.

    If AOP was available for design to the extent that it is available for validation, I think it would revolutionize the entire industry.

  • efrat

    Interesting notion – “AOP in the service of Agile”.

    Indeed one of the things people like about e is the simplicity of providing/getting patches. Usually, once you know where the bug is – you can easily create a patch to be loaded on top.

Same goes for adding capabilities. You think of a new idea, you write some code, load it on top of a compiled environment, play with it. You can ask others to try it out without asking them to spend too much time recompiling their environment – “just load it and let me know how it works”. The mere fact that you can load on top and do not have to recompile the environment makes you agile…

About AOP for design – I find this a cool idea for the future of design methodology. AOP saves lots of time; it would be great if designers could enjoy it as well.

    • Blake

Yep!! I would kill to be able to patch design code without modifying the code base (from a database management point of view alone). It is one thing to hack in a quick change, and quite another to manage many changes. This is often the case for large-scale integration efforts. The mods need to be maintained until subsequent revisions are provided. When revisions come, merge conflicts with local fixes will very likely occur. Add in the magical ‘e’ trick of loading an interpreted patch on top of a compiled binary (no rebuild required), and you really have a revolution. If the same could be done for RTL, patch build turnaround times would be effectively zero. This would then allow for multiple turns per day vs. one (for larger builds that take 8 hours or more).

      Back to theme of blog “Could the adoption of aspect-oriented design practices lead us to better designs more quickly?” -> yes, yes, yes, here is all my money, yes!

  • Yoav Hollander

    My intuition is that AOP could be quite helpful in design as well. While AOP can be used for adding quick patches (which should later, ideally, be merged into the original code), its main use is for maintaining separation of concerns (perhaps like the power example discussed in the original article).

In this latter case, using AOP does not incur technical debt. In fact, one could argue that without AOP, one needs to manually merge the different concerns into one piece of code, making the result less maintainable.

    If you agree that this style of separation of concerns is indeed helpful, then it is probably a good idea to use built-in AOP (as in e, supported by access control directives like “private” and “package”) rather than using the duller knives of callbacks and factory methods.

Finally, my intuition is that for fighting design complexity it is perhaps best to use a combination of AOP and a strong language-definition language (as in e). This way, one can (judiciously) define new domain-specific statements which, when instantiated, add aspect code to existing constructs.

    For a discussion of how AOP and language extensibility can be helpful for a different area (spec verification) see the end of my (long!) post: https://blog.foretellix.com/2015/07/28/its-the-spec-bugs-that-kill-you/

    • Blake

      Greetings Yoav!

Difficult for me to express just how much I agree that “this style of separation of concerns is indeed helpful” other than to say that it has enabled me to accomplish things that I simply could not have imagined without it. Having said that, not all are willing or able to make that leap. To make matters worse, negative rhetoric (like below) makes it even more difficult for this leap to be taken.

      “The problem with AOP is that while powerful for
      certain types of application, it is also very dangerous”
      “Anyone could muck with my code, replace it, extend
      it, add to it in ways that I could not control”
      “The main complaint about AOP is that you have to be
      very careful or you can get into trouble”
      “When the project was a small IP core, AOP worked
      okay, but as the design grew larger, it became a management nightmare”

Where I have really seen the light bulb go off in the eyes of a disbeliever is when they are shown what can be done with patching. After they pick themselves up off the floor, the discussion (and ultimate embrace) of managing cross-cutting concerns through AOP can begin.

      Somewhat related. Whenever I find myself thinking about you and the ‘e’ language, this funny Henry Ford quote pops into my head.

      “If I had asked people what they wanted, they would have said faster horses”

      • Yoav Hollander

        Thanks for the kind words 😉

BTW, while I feel fairly certain that AOP-for-design is a good idea, I am not completely sure about the specifics. For instance, Wolfgang Roesner of IBM and I have been discussing this for ages (Brian, perhaps you should try to interview him). When we last met (at HVC’14) he said they ended up creating their own aspect-oriented weavers (for reset, power, thermal, debug, recovery, DFT etc.).

        • Blake

          Understood. What I would really like to see is closer to an ‘e’ based variant of High-Level Synthesis from C/C++ to RTL.

What I really really want (and a little off topic) is ‘e’ for the masses. From my perspective, ‘e’ is the most natural language I have ever encountered. Once available as a general-purpose language, AOP would become a part of our collective mindset. “The Tyranny of the Dominant Model Decomposition” would be a thing of the past! All human beings on the planet could code in a natural and intuitive fashion. This would lead to ….. Cutting myself off. Maybe better discussed in your blog. Let me know if there is any interest in this line of thought, and we can pick it up there.