Behavioral modeling remains in limbo as confusion persists over how to make levels of abstraction interoperate, but there is still room for automation.
As engineering teams raise their designs to higher levels of abstraction, the use of behavioral modeling is growing. While not ubiquitous, the concepts are gelling, which at least is helping the industry discuss the technology more intelligently and determine where automation makes sense.
One of the biggest questions with behavioral modeling is what engineering teams want to do with the models and when they should use them.
“If we are modeling behavior that hasn’t been designed yet, which is an important class of behavioral modeling — it’s when we are developing algorithms and things like that — I don’t think we can automate the modeling directly,” said Drew Wingard, CTO of Sonics. “There are, of course, a lot of modeling frameworks that have existed for a long time. Some people like to use things like MATLAB and Mathematica, and there’s all kinds of other stuff that’s been around that’s very domain-specific for that kind of algorithmic modeling. You could call those automation tools. But what you’re doing in those environments is designing your algorithm.”
However, the bigger issue today in behavioral modeling has more to do with users saying they have already built something and they have a reason to do behavioral modeling for it. “So they would be looking at the automated extraction of a behavioral model,” Wingard explained. “There, the answer is that there are some techniques, but I don’t think they are very attractive. One approach is to take, for instance, the RTL of a hardware block and put it through a translator that turns it into C or C++. The speedups that are available from that are pretty boring — not fast enough to do very interesting things in software. So I don’t think automation is a likely approach. However, behavioral modeling is very important, even after you build something.”
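One way to see why such translations deliver only modest speedups: the generated C or C++ typically still evaluates the design state cycle by cycle. The sketch below is a hypothetical illustration of that style of output, not any particular tool’s, with an invented counter block standing in for real RTL.

```cpp
#include <cstdint>

// Hypothetical style of RTL-to-C translator output: the design's registers
// become a state struct, and one function call advances the model by exactly
// one clock cycle. The host CPU still walks through every hardware cycle,
// which is why the speedup over RTL simulation is limited.
struct CounterBlockState {
    uint32_t count;      // was a 32-bit register in the RTL
    bool     overflow;   // was a 1-bit status flag
};

// Evaluate one clock edge: combinational logic first, then register update.
inline void counter_block_cycle(CounterBlockState &s, bool enable, bool reset) {
    uint32_t next_count = s.count;
    bool     next_ovf   = s.overflow;

    if (reset) {
        next_count = 0;
        next_ovf   = false;
    } else if (enable) {
        next_count = s.count + 1;
        next_ovf   = (s.count == 0xFFFFFFFFu);
    }

    s.count    = next_count;   // registers latch on the clock edge
    s.overflow = next_ovf;
}
```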
Categories of modeling
Behavioral modeling falls into two general categories: functional and performance modeling. On the functional modeling side, the industry has been positioning the virtual platform as the be-all for quite some time now.
“Typically, the goal of the virtual platform model is to allow software developers to do hardware-dependent software design, and so it’s okay that those models run without a strong concept of time,” Wingard said. “It’s okay that those models really run at a level of abstraction that people often call the bit-accurate level. In other words, if I write to these bits, I expect those values to show up in these bits, but I don’t really know what the order is going to be, and I don’t know when it’s going to happen. That works well for a good class of systems. Those models tend to be used mostly by software people. The number of customers I’ve seen using that kind of modeling as part of their base design is not very high, because typically they don’t have questions from a design perspective that those models are designed to solve.”
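A minimal sketch of what such a bit-accurate, untimed register model can look like is shown below. The device, register map, and behavior are hypothetical, chosen only to show that writes land in the expected bits without any notion of when.

```cpp
#include <cstdint>
#include <map>

// Hypothetical bit-accurate, untimed device model for a virtual platform.
// Software sees the right bits change when it reads and writes registers,
// but the model carries no concept of clock cycles or latency.
class UartModel {
public:
    static constexpr uint32_t REG_CTRL   = 0x00;  // control register offset
    static constexpr uint32_t REG_STATUS = 0x04;  // status register offset

    void write(uint32_t offset, uint32_t value) {
        regs_[offset] = value;
        if (offset == REG_CTRL && (value & 0x1)) {
            // Enabling the device immediately sets the "ready" status bit;
            // no simulated time passes at this level of abstraction.
            regs_[REG_STATUS] |= 0x1;
        }
    }

    uint32_t read(uint32_t offset) const {
        auto it = regs_.find(offset);
        return it == regs_.end() ? 0 : it->second;
    }

private:
    std::map<uint32_t, uint32_t> regs_;
};
```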
On the performance modeling side, Wingard pointed out that one of the bigger challenges in most SoC designs is the sharing of critical resources—especially off-chip memory. For this, it is essential to understand whether the end goals of the chip being built can be met given a set of cores wrapped around a real memory system. Here, it is very desirable to do the modeling at a behavioral level, because the run times to figure this out would be prohibitive if it had to be done at a very detailed hardware level.
“It turns out that it’s relatively easy to abstract the behavior of some very complex subsystems as just a set of timed traffic streams headed toward memory. [This is] very valuable if those traffic streams have realistic address patterns, because the time-domain behavior of things like DRAM systems is very dependent on the address patterns. We’ve found that very abstract behavioral modeling of the IP subsystems makes sense except for the network and memory system. There, we don’t believe there is a good substitute for running at a cycle-accurate level of abstraction,” he explained.
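The abstraction being described amounts to replacing a whole IP subsystem with a stream of timestamped requests whose addresses follow a realistic pattern. The generator below is a purely illustrative sketch (the camera-like block, stride, and rates are invented), but it captures why each request carries both a time and an address.

```cpp
#include <cstdint>
#include <vector>

// One memory request in an abstract, timed traffic stream.
struct TrafficRequest {
    uint64_t time_ns;   // when the subsystem would issue the request
    uint64_t address;   // target address (patterns matter for DRAM timing)
    uint32_t bytes;     // burst length
    bool     is_write;
};

// Hypothetical generator standing in for an entire IP subsystem: a camera-like
// block writing frame lines with a fixed stride at a fixed line rate.
std::vector<TrafficRequest> camera_write_stream(uint64_t base, uint32_t lines,
                                                uint32_t line_bytes,
                                                uint64_t line_period_ns) {
    std::vector<TrafficRequest> stream;
    for (uint32_t line = 0; line < lines; ++line) {
        stream.push_back({ line * line_period_ns,                 // issue time
                           base + uint64_t(line) * line_bytes,    // strided address
                           line_bytes,
                           true });
    }
    return stream;
}
```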
This doesn’t mean it has to be very slow, because looking at a full SoC design and considering how much of it can be black-boxed with these behavioral models, the network and memory subsystem don’t constitute a very large piece of the design, Wingard explained. “Normally you could run that using the techniques where you take the RTL and translate it into a faster model. We also find it very practical in our system because, as part of our verification efforts, we’ve gone ahead and built the cycle-accurate SystemC reference model for our IP. We can model the network and memory system at a cycle-accurate level in a model that runs more quickly because it was built natively, in a cycle-accurate SystemC manner, using the OSCI TLM-2.0 standard. We find that the run times on that are quite attractive for doing this kind of performance modeling.”
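For a flavor of what a native SystemC/TLM-2.0 model interface looks like, here is a minimal initiator sketch using the standard’s blocking transport call. The module is hypothetical and loosely timed for brevity (a cycle-accurate reference model would carry much finer timing detail), and it is shown as the initiator side only, with the target binding and sc_main omitted.

```cpp
#include <systemc>
#include <tlm.h>
#include <tlm_utils/simple_initiator_socket.h>

// Hypothetical TLM-2.0 initiator that fires one timed write toward memory.
SC_MODULE(TrafficInitiator) {
    tlm_utils::simple_initiator_socket<TrafficInitiator> socket;

    SC_CTOR(TrafficInitiator) : socket("socket") {
        SC_THREAD(run);
    }

    void run() {
        unsigned char data[64] = {0};
        tlm::tlm_generic_payload trans;
        sc_core::sc_time delay = sc_core::SC_ZERO_TIME;

        trans.set_command(tlm::TLM_WRITE_COMMAND);
        trans.set_address(0x80000000);          // hypothetical DRAM address
        trans.set_data_ptr(data);
        trans.set_data_length(sizeof(data));
        trans.set_streaming_width(sizeof(data));
        trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);

        socket->b_transport(trans, delay);      // target annotates its timing
        wait(delay);                            // advance simulated time
    }
};
```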
Kurt Shuler, vice president of marketing at Arteris, described a similar approach in which the modeling is built in, because when something is highly configurable, modeling is very important. You don’t know what you need to do unless you know what the effects of it are going to be. “The only way you really know that with complicated technology is by modeling it. There’s no algorithmic way to say if I pull Lever X, Thing Y will happen. The only way you know it is by simulating it.”
And this is an iterative process, he said. “When you’re defining something — these interconnects might hook up 200 blocks of IP, so there’s a huge interconnect on a chip — and they say, ‘What if I do this type of topology, or what if I put some buffers here, or what if I change some of the quality of service stuff, what is the effect when I run a certain type of load?’ We have these models that are functionally accurate but partially timed so they’re not 100% accurate. They’re not cycle-accurate but they run fast enough that you could do that iteration without having to go and drink 10 cups of coffee while you’re waiting for your answers. And that gets you part of the way there and helps you converge. Toward the end of the process when you really need to see how something is going to work, that’s when you’re going to use cycle-accurate models or slap this thing onto an FPGA.”
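The iteration Shuler describes amounts to sweeping configuration knobs against a fast, approximately timed cost model. The sketch below is purely illustrative (the latency formula and every constant in it are invented), meant only to show the shape of such a what-if loop rather than any vendor’s model.

```cpp
#include <cstdio>

// Purely illustrative, approximately timed cost model for a what-if sweep:
// latency grows with hop count and shrinks with buffering, modulated by load
// and quality-of-service priority. All constants are invented.
struct InterconnectConfig {
    int hops;          // topology depth between initiator and memory
    int buffer_depth;  // buffers inserted along the path
    int qos_priority;  // 0 = lowest, 3 = highest
};

double estimate_latency_ns(const InterconnectConfig &cfg, double load) {
    double per_hop     = 2.5;                                    // ns per hop
    double queuing     = load * 40.0 / (1.0 + cfg.buffer_depth); // buffering helps
    double arbitration = (3 - cfg.qos_priority) * 5.0 * load;    // QoS helps
    return cfg.hops * per_hop + queuing + arbitration;
}

int main() {
    // Sweep a handful of candidate configurations at 70% load.
    for (int buffers = 0; buffers <= 4; ++buffers) {
        InterconnectConfig cfg{4, buffers, 2};
        std::printf("buffers=%d -> ~%.1f ns\n", buffers,
                    estimate_latency_ns(cfg, 0.7));
    }
    return 0;
}
```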
And this is totally automated, Shuler said. “When you’re creating an interconnect, under the hood what you’re also creating is ultimately the RTL. But you’re also creating the less-than-cycle-accurate model, you’re creating the cycle-accurate model, you’re creating the verification testbench, and the UVM/OVM/VMM stuff. All of that is being put in there, and it’s a pretty complicated thing under the hood. It’s pretty simple for the customer. It’s just done, but you’d be surprised that there are people who offer IP that is pretty configurable where you don’t have that. If you’re offering something configurable — and interconnect is probably the most configurable part of a chip — I don’t think there’s any way to know how to configure it unless you have that built-in, automated modeling. I just don’t see how it’s possible.”
Further, he observed, what’s really difficult is navigating between levels of abstraction in the specification. “There are cases where the levels of abstraction in a design are mismatched and someone has to figure out how to put them all together.”
To this point, while work is being done, Frank Schirrmeister, group director, product marketing for System Development Suite at Cadence, summed up the current state of behavioral modeling as, “It’s still disconnected.”
He pointed to a recent panel at an industry event that discussed whether behavioral modeling, and even the virtual platform models used with high-level synthesis, can get to using the same models. “The answer is we are still not there yet. Even the fairly detailed models, the virtual platforms, are still disconnected from the implementation. So if I go to pure functional and behavioral models above that, like the UML-ish type of thing, they are still not connected to the implementation flow. But what I would look out for goes to graph-based techniques. You’re using some of those modeling techniques to define those scenarios, to define those top-level tests, so the things you do at that level lend themselves very naturally to things like message sequence charts from UML, which basically say, in order for this to come up, Block A needs to wake up Blocks C and D, and Block B needs to wait until C and D have talked. So you define this with things like message sequence charts in UML. You also already have descriptions in UML, like use case descriptions and use case diagrams, defining the different blocks with their inputs and outputs, the actions they can take, how they interact and how they are used, and the constraints for those blocks.”
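The ordering Schirrmeister describes (Block A wakes up C and D, and B waits until C and D have talked) can be captured as a small set of happens-before constraints and checked against an observed event trace. The sketch below is a generic illustration of that idea, with invented event names, not the output of any UML or graph-based tool.

```cpp
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

// Scenario constraints as happens-before pairs: the first event must appear
// before the second in any valid bring-up trace.
using Event = std::string;
using HappensBefore = std::pair<Event, Event>;

bool trace_satisfies(const std::vector<Event> &trace,
                     const std::vector<HappensBefore> &constraints) {
    auto pos = [&](const Event &e) {
        // Missing events land at trace.size(), which fails the check below.
        return std::find(trace.begin(), trace.end(), e) - trace.begin();
    };
    for (const auto &c : constraints)
        if (pos(c.first) >= pos(c.second)) return false;
    return true;
}

int main() {
    // "A wakes up C and D; B waits until C and D have talked."
    std::vector<HappensBefore> scenario = {
        {"A.wake_C", "C.ready"}, {"A.wake_D", "D.ready"},
        {"C.ready",  "B.start"}, {"D.ready",  "B.start"},
    };
    std::vector<Event> observed = {"A.wake_C", "A.wake_D",
                                   "C.ready", "D.ready", "B.start"};
    return trace_satisfies(observed, scenario) ? 0 : 1;  // 0 = scenario met
}
```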
Schirrmeister believes there will be reuse of some of those behavioral modeling techniques for verification, beyond just becoming a test environment for the chip. The whole graph-based discussion goes in that direction as far as defining the tests and how use cases are looked at.