Taking Stock Of Models

SoC modeling is a multi-dimensional world. The industry is still ironing out the wrinkles.


By Ann Steffora Mutschler
The world of modeling in SoC design is multi-dimensional to say the least. One dimension contains the model creators and providers, while the other comprises the types of models that exist in the marketplace.

“What we’re seeing today is that we have basically models coming from either IP providers—the people that are actually producing those cores or STAR IPs are providing the models for them,” said Shabtay Matalon, ESL market development manager at Mentor Graphics. “We are seeing that EDA companies and tool vendors are providing models, and we’re also seeing various customers that are actually producing models. Sometimes they provide them either independently or as part of a platform to their own customers.”

In the other dimension are the types of models, of which the most widely accepted are loosely timed (LT) models, he said. Matalon believes that, in contrast to the situation four or five years ago, when models were a major obstacle to the use of ESL, today there is quite a wide supply of LT models coming from many sources.

One significant benefit of LT models is that they run fast. There are LT models used for processors and CPUs, and LT models used for other parts of the system such as buses, memories and peripherals. These are the models used to create a functional representation of a design in a virtual prototype, early in the design cycle, way ahead of silicon. “We are seeing sometimes that those platforms are created a year or year and a half before silicon is available. The largest benefit is they are implementing an executable specification, which had been talked about in the industry for 20 years. Those models—specifically the processor models—are capable of running the exact software that is running on the end product, which is a huge benefit. And it does not compromise the software,” he said.
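As a rough illustration of what a loosely timed model looks like at the code level, here is a minimal sketch of a TLM-2.0 LT memory target in SystemC. It is a generic example rather than any vendor’s model; the module name, memory size and 10ns latency annotation are arbitrary assumptions, and it would need an initiator and an sc_main to actually run.

    #include <cstdint>
    #include <cstring>
    #include <vector>
    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_target_socket.h>

    // Minimal loosely timed (LT) memory target: one blocking b_transport call
    // services an entire read or write, and timing is a coarse annotation rather
    // than a cycle-by-cycle account. That coarseness is what lets LT platforms
    // boot real software well ahead of silicon.
    struct LtMemory : sc_core::sc_module {
      tlm_utils::simple_target_socket<LtMemory> socket;
      std::vector<uint8_t> mem;

      SC_CTOR(LtMemory) : socket("socket"), mem(0x10000, 0) {
        socket.register_b_transport(this, &LtMemory::b_transport);
      }

      void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        const uint64_t addr = trans.get_address();
        const unsigned len  = trans.get_data_length();
        if (addr + len > mem.size()) {
          trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
          return;
        }
        if (trans.is_read())
          std::memcpy(trans.get_data_ptr(), &mem[addr], len);
        else if (trans.is_write())
          std::memcpy(&mem[addr], trans.get_data_ptr(), len);
        delay += sc_core::sc_time(10, sc_core::SC_NS);  // coarse access latency, not cycle accuracy
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
      }
    };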

There is also a verification use model: as more advanced engineering teams move to hardware implementation, they can re-use the LT models as reference models for verification with methodologies such as OVM and UVM.

Alongside LT models are approximately-timed (AT) models. “From a use-case perspective, what we are seeing is that maybe in some aspects our tools, which were created to address the problems of architectural design, were ahead of their time,” Matalon said. “There were very few that were using tools for architectural design. But now we are seeing that almost every customer that is using not only advanced cores like ARM’s Cortex A series but even microcontrollers—they have so much configurability in the cores and so many options from a performance perspective and a power perspective. They have so many options that trying to nail the architecture is becoming not just a problem that people can resolve on a spreadsheet. And that’s becoming a problem across many, many designs.”

The creation of these models in the domain of ESL today is driven by simulation requirements, noted Pranav Ashar, chief technology officer at Real Intent. “The simulation speed suffers as there is more detail in the model, so you need to abstract some of that out in order to get a simulation speed that is fast enough for the scale of design you’re doing. Also, because today’s designs are all system-level designs with processors, the simulation makes sense only if there is a corresponding simulation of the software that goes with it.”

The simulation therefore has to be fast enough to keep pace with the software running on it and to handle the complexity of the logic being modeled. That speed is needed for architectural exploration, for software development while the hardware is still being designed, and for finding software bugs.

“Starting with architectural exploration moving to software development, then onto supply chain enablement where you are actually giving it to ecosystem partners or customers to kick the tires or start their development early, [modeling] becomes a common language that unites the hardware/software teams together so they can actually—when you are early enough—make a difference to hardware design,” explained Nithya Ruff, director of product marketing for virtual prototyping solutions at Synopsys.

“And then you are actually testing real software stacks on the hardware design before it becomes hardened and before it goes into RTL. To me, the common language influence it can have on the hardware design, and the fact that you are working across the supply chain using a common set of models, is very, very powerful. It’s not meant to refine or validate hardware. It is really meant to provide a common language for the hardware/software teams to talk and work towards.”

But for others, modeling needs to be more than a common language. Jeff Scott, principal SoC architect at Open-Silicon—who has many years under his belt using ESL tools to develop system-level models of ARM-based SoCs—said models either are cycle-accurate or tend to be more abstracted and better suited to virtual prototyping than to architecture exploration and performance and power analysis, which is his focus.

This comes down to the accuracy of the model, which is a big issue, he acknowledged. “If you look at the Accellera standard—TLM-2.0 that talks about loosely timed and approximately-timed models—they don’t talk about cycle accuracy. Today it’s a big tradeoff between simulation speed versus accuracy. If you’re doing a cycle-approximate, or a loosely timed or even approximately timed model, you can do some general relative tradeoffs between one architecture and another from a performance standpoint, but you’re likely to miss something that will show up in the hardware and might give you a bit of a false sense of security if you count too much on that. If you’re looking at high-level issues from an architecture perspective, it might be okay, but if you are really concerned about the model in some measurable way being analogous to the real performance of the hardware, you need to look more towards cycle accuracy.”
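One place that speed-versus-accuracy tradeoff shows up directly in code is temporal decoupling in TLM-2.0 loosely timed initiators. The sketch below, a generic example rather than anything from the sources quoted here, uses the standard tlm_quantumkeeper utility: a larger global quantum means fewer synchronizations with the SystemC kernel and a faster simulation, but coarser timing that can hide exactly the kind of contention effects Scott warns about. The module name, address range and 1µs quantum are illustrative assumptions.

    #include <cstdint>
    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_initiator_socket.h>
    #include <tlm_utils/tlm_quantumkeeper.h>

    // Loosely timed initiator with temporal decoupling: it runs ahead of SystemC
    // time within a "quantum" and only yields to the kernel when the quantum is
    // exceeded. Shrinking the quantum buys timing fidelity at the cost of speed.
    struct LtInitiator : sc_core::sc_module {
      tlm_utils::simple_initiator_socket<LtInitiator> socket;
      tlm_utils::tlm_quantumkeeper qk;

      SC_CTOR(LtInitiator) : socket("socket") {
        tlm_utils::tlm_quantumkeeper::set_global_quantum(
            sc_core::sc_time(1, sc_core::SC_US));  // the speed/accuracy knob
        qk.reset();
        SC_THREAD(run);
      }

      void run() {
        uint32_t data = 0;
        for (uint64_t addr = 0; addr < 0x100; addr += 4) {
          tlm::tlm_generic_payload trans;
          trans.set_command(tlm::TLM_READ_COMMAND);
          trans.set_address(addr);
          trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
          trans.set_data_length(4);
          trans.set_streaming_width(4);
          trans.set_byte_enable_ptr(nullptr);
          trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);

          sc_core::sc_time delay = qk.get_local_time();
          socket->b_transport(trans, delay);   // blocking LT call into the target
          qk.set(delay);                       // accumulate the annotated delay locally
          if (qk.need_sync()) qk.sync();       // context switch only at quantum boundaries
        }
      }
    };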

This is particularly essential in the mobile market, where battery life and the performance of certain functions are critical. If booting up the display takes too long, or the battery lasts only six hours, the user will be frustrated.

“So you find yourself trying to architect an SoC a year before it gets developed into a product, trying to properly dimension a processor, the clock speed, the data width, the memory caches, etc., so that you get enough performance but don’t over-dimension and consume too much power so you’re competitive,” said Scott. “It’s really hard to do when you’re talking about having multicore processors, sharing memories with graphics processors and high speed serial devices and other things that exist in tablets and smartphones these days. A static spreadsheet analysis just doesn’t give you the interactive analysis between different data flows and different processor activities going through the shared resources to show you where your problems may be. So having a system model running a simulation of a real use case gives you a lot more insight as to what’s going on.”

Missing links
Any discussion of architectural analysis has to bring power to the table.

Modeling power with enough accuracy has always been a challenge, but power estimation is not necessarily for the purpose of getting an absolute measure of what the chip power is going to be, Ashar noted. “If the libraries you have, if the power models you have are good enough to give you a relative feel of one architecture choice versus another, then that’s good enough to get you to a chip plan. Basically you need to make tradeoffs between latency and throughput, and parallel versus sequential algorithms, and so on. Architectural simulation is probably good enough to give you a relative comparison in terms of power.”
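A back-of-the-envelope version of that relative comparison can be as simple as weighting activity counts from an architectural simulation by per-event energy numbers from a library, as in the sketch below. The event names, energy values and activity counts are invented purely to show the mechanics; only the ratio between the two candidate architectures would carry any meaning, which is the point Ashar is making.

    #include <cstdint>
    #include <cstdio>
    #include <map>
    #include <string>

    // Hypothetical relative power estimate: energy ~= sum(activity_count * energy_per_event).
    // Absolute numbers are not trustworthy at this abstraction level, but the ratio between
    // two candidate architectures can guide a tradeoff (e.g. wider bus vs. higher clock).
    double estimate_energy_nj(const std::map<std::string, uint64_t>& activity,
                              const std::map<std::string, double>& energy_per_event_nj) {
      double total = 0.0;
      for (const auto& [event, count] : activity) {
        auto it = energy_per_event_nj.find(event);
        if (it != energy_per_event_nj.end())
          total += static_cast<double>(count) * it->second;
      }
      return total;
    }

    int main() {
      // Illustrative per-event energies (nJ) -- stand-ins for library-derived numbers.
      std::map<std::string, double> lib = {
          {"cpu_cycle", 0.10}, {"dram_access", 5.0}, {"bus_txn", 0.5}};

      // Activity counts as an architectural simulation might report them for one use case.
      std::map<std::string, uint64_t> arch_a = {
          {"cpu_cycle", 2'000'000}, {"dram_access", 40'000}, {"bus_txn", 150'000}};
      std::map<std::string, uint64_t> arch_b = {
          {"cpu_cycle", 1'400'000}, {"dram_access", 70'000}, {"bus_txn", 120'000}};

      double ea = estimate_energy_nj(arch_a, lib);
      double eb = estimate_energy_nj(arch_b, lib);
      std::printf("A: %.0f nJ, B: %.0f nJ, ratio B/A = %.2f\n", ea, eb, eb / ea);
      return 0;
    }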

Power optimization, of course, is crucial. “When you’re optimizing for power, power intent verification becomes important because suddenly your control flow has become more complicated and you’ve got to manage these multiplicative effects of the various components being on/off, standby, and so on. A lot of these interblock verification requirements in terms of power intent are clearer at the architectural level. The translation of those verification requirements into the RTL is difficult, and basically there are no handles at the RT level to write those kinds of assertions,” he explained.

Ashar said that to create that kind of verification obligation at the architectural level and have the refinement process carry it forward, either in an automatic manner or at least by giving the verification engineer at the RT level some handles by means of which they can translate those obligations to the RT level, is a missing link. “Fixing that will be a significant step forward in terms of making architectural exploration and starting the design at the architectural level a lot more palatable.”

Open-Silicon’s Scott said the types of simulations he’s running right now primarily are for architectural exploration and analysis, and he is only implementing a portion of the SoC. The full complexity of the SoC is outstripping the capacity of the simulation platform, which includes the models, the simulator and the compute platform that it runs on. “I’m trying to do everything cycle-accurate, and it’s akin to bringing in the RTL and trying to simulate everything and trying to do a long enough use case to see something happen over frames of display updates over a long period of time.”

Trying to do it all is still a pretty big challenge, he stressed. “Some tools aren’t there yet to make it really easy to pull together an entire SoC and all the peripherals you need to initiate and terminate protocol transactions easily, and let the architect worry more about what’s the use case, how much data needs to come out, what does my CPU workload look like, how much traffic does it generate toward the memory subsystem, and what interconnect, cache system and memory subsystem make sense for my application—that’s really what I need to worry about.”

In addition, what also needs to be addressed is how to make virtual prototyping easier to adopt, rather than a big-bang effort in which the engineering team has to make a big investment in people and modeling and learn SystemC, noted Synopsys’ Ruff. “That, to me, has always been the big barrier to entry. Some of the more immediate things we are doing involve simplified model creation tools. That’s helped a lot. That’s helped productivity both for novices as well as experts because we have methodologies, we have input tools that can take in specifications in any form they come in. It takes them from what they are already doing rather than have them do one more step.”


