Being Different Is Bad

The challenges associated with implementing model-based methodologies can be daunting, especially for IP providers.


By Ann Steffora Mutschler
Today’s SoCs contain as much as 80% existing IP that has either been re-used from previous projects or obtained from a third party. Models are created of this hardware IP, as well as of the new portions of the design, to build a virtual prototype that allows the engineering team to see the complete system by running software and applications on it.

While this approach sounds straightforward, for the providers of the IP it’s not always that simple.

Leading semiconductor companies such as Intel, STMicroelectronics, Qualcomm, Texas Instruments, Freescale, Samsung and others were early adopters of system-level concepts and technologies as a way to keep their competitive edge and, as such, have their own internal flavors of TLM-2.0 and SystemC.

Kurt Shuler, vice president of marketing at Arteris, admitted this scenario has been stressful. “For the interconnect, it’s 100% configurable and it changes with every chip no matter what. So the only way [users] know if their changes will work is if they model it, and our tools do the automatic modeling. What’s killing us, for example, is that Qualcomm uses their own internal system-level environment, which according to them is SystemC compatible, but according to everybody else it isn’t. TI is in the same boat. Intel does the same kind of thing. So you’ve got all these big companies that are still doing their own SystemC environments—even though there is the standard, even though there is commercial IP that plugs into these things—they are still making the choice to do their own stuff. They’ve got these huge teams working on it, and it’s causing [stress to] anybody who sells IP to them.”

Why they won’t switch
“I think a lot of it is that you’ve got the people who do modeling and they’re used to the way they do modeling. Maybe they have to make some changes to get it to work in an industry-standard SystemC environment. The other thing is there’s a group of people who create the internal SystemC environments and I’m sure they tell their management, ‘No, ours is much better than what Synopsys or Carbon or Mentor offer.’… There’s a job security thing,” he said. “The dirty little secret is being different is bad from an economic standpoint. If you’ve got one or two environments that are different, fine. But if you’ve got 5, 10, 15—the permutations and combinations of what an IP has to support goes through the roof.”

The good news for startups is that this gets easier over time, as products are refined by working with those heavy hitters.

Bill Neifert, chief technology officer at Carbon Design Systems, sympathizes with startup woes: “When we first came out we were just a model-generation company. We compiled RTL into a model that you could then put into a virtual environment, but we didn’t sell a virtual environment, so we were at the mercy of whatever the customer had or whatever they were buying from someone else. We quickly saw that there were as many modeling styles as there were customers—and this was back in the day before TLM 2.0 existed, which made life even more difficult.”

Life has gotten a lot easier for Carbon now that it sells its own virtual prototype, but it still offers the modeling tool, and there are still customers who want to model in their own environments. “You find this especially with some of the guys that got into the system-level modeling stuff early—like ST, NXP, TI, Freescale, LSI—all of these guys have their own internal modeling environment and you have to get your stuff to work with theirs. If you don’t worry about accuracy, that’s not much of a problem.”

Engineering teams designing a processor, for example, don’t worry as much about bus accuracy as they do about the functional accuracy of their processor. “But if you’re worried a lot about bus accuracy, now you suddenly have to worry about it because TLM 2.0 did a great job of defining a mechanism to communicate, but it didn’t put that next set of rules in that basically say, if you’re talking using this protocol, this is how you do it. So what happened is that everyone went off and created their own way to do that, or they leveraged their existing way and cobbled it onto there. And a lot of times, even though they called it TLM 2.0, that doesn’t mean that their TLM 2.0 stuff works with anyone else’s TLM 2.0 stuff,” Neifert pointed out.
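
To make that interoperability gap concrete, here is a minimal sketch, not drawn from any of the companies quoted, of the standard TLM-2.0 extension mechanism. Protocol-specific details such as burst behavior travel only through user-defined extensions; BurstKindExt, annotate and understands_burst below are hypothetical names invented for illustration.

    #include <systemc>
    #include <tlm>

    // One vendor's private extension describing bus-protocol burst details.
    struct BurstKindExt : tlm::tlm_extension<BurstKindExt> {
        unsigned burst_len = 1;      // beats per burst
        bool     wrapping  = false;  // wrapping vs. incrementing burst

        tlm::tlm_extension_base* clone() const override {
            return new BurstKindExt(*this);
        }
        void copy_from(const tlm::tlm_extension_base& other) override {
            *this = static_cast<const BurstKindExt&>(other);
        }
    };

    // The initiator attaches the extension before sending the transaction.
    void annotate(tlm::tlm_generic_payload& trans, BurstKindExt& ext) {
        trans.set_extension(&ext);   // the target must know to look for this exact type
    }

    // A target written against a different vendor's extension type sees
    // nothing here and falls back to default (often wrong) protocol behavior.
    bool understands_burst(tlm::tlm_generic_payload& trans) {
        BurstKindExt* ext = nullptr;
        trans.get_extension(ext);
        return ext != nullptr;
    }

Two models built this way can both be called TLM 2.0-compliant, yet a target expecting a different extension type simply gets a null pointer back, which is exactly the incompatibility Neifert describes.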

Johannes Stahl, director of product marketing for system-level solutions at Synopsys, sees things from a bit further down the path. “For companies that have done their own modeling, I saw this happening much more five years ago than I see this happening today. There are maybe one or two companies that have a 10-year-old methodology that’s very mature. They have built their own tooling around it. And, of course, it’s their own flavor of the methodology. Typically it’s a combination of having some of their own IP models with maybe specific hooks that they need to look at this IP in a certain way. Then also the process is built around the way that they typically build their chips. They typically know where to start and organize themselves but it’s really the exception.”

In terms of challenges, finding the models is less of an issue today, he noted, thanks to IP vendors and repositories like TLM Central (http://www.tlmcentral.com/), which now has 970 models published. Creating individual models also is less of an issue as tooling has advanced.

“What remains is the real issue,” Stahl said. “Put the entire thing together and run a complex piece of software. These entire prototypes today are non-trivial pieces. The best observation I had on that was when I go back to the beginning of my career when I was doing RTL design. At that point in time, you would maybe have thousands of instances of RTL blocks at that level. Today, our customers have virtual prototypes that have thousands of instances of SystemC models at that abstract level. Once you start at that level, you have to set up your environment to find a problem that the software has with such a complex system, or, of course, you will also have problems in the prototype: The prototype would not be correct initially. So you actually have to work between the end users, the software team and the prototyping team to find those bugs that can be either in the software or on the board of the prototype.”

Neifert agreed the big challenge is how to bring all the pieces together. “How accurately do you want to do things? If you don’t care about getting down to cycles, you need to make sure that your extensions have some sort of a mapping into someone else’s extensions. If you’re just doing it at the approximately-timed level, that’s a pretty straightforward task because there are only so many ways to describe things and you’re not trying to be 100% accurate so it’s a matter of mapping a couple of things over, running a few tests and then saying you got it right. And if you didn’t get it right, well, it’s approximately timed, so that’s just an error—or that’s the area where it’s just a little more approximate than it would be otherwise.”

“As soon as you get into an area where you have to start worrying about being accurate, that’s when you’ve got to apply a lot more rigor to the solution and certainly that’s what we do here. We’ve had to put a lot of work into our adapters and such, which go from that 100% accurate environment up to less accurate environments,” he concluded.
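
As a rough illustration of what such an adapter involves, the sketch below bridges a cycle-accurate model into a loosely-timed TLM-2.0 prototype: one blocking transport call is converted into cycle-by-cycle stepping, and the measured cycle count is reported back as the timing annotation. This is a hedged sketch only; CycleAccurateBus, CaAdapter and their members are hypothetical stand-ins, not Carbon’s actual adapter code.

    #include <cstdint>
    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_target_socket.h>

    // Hypothetical stand-in for a model compiled from RTL; "one cycle per byte"
    // is only a placeholder for real cycle-accurate behavior.
    struct CycleAccurateBus {
        unsigned remaining = 0;
        void drive_request(std::uint64_t /*addr*/, unsigned char* /*data*/,
                           unsigned len, bool /*is_write*/) {
            remaining = len;
        }
        bool step_one_cycle() {                  // true once the transfer completes
            return remaining == 0 || --remaining == 0;
        }
    };

    // The adapter: a TLM-2.0 target socket on one side, cycle stepping on the other.
    struct CaAdapter : sc_core::sc_module {
        tlm_utils::simple_target_socket<CaAdapter> socket;
        CycleAccurateBus& ca_model;
        sc_core::sc_time  clk_period;

        CaAdapter(sc_core::sc_module_name name, CycleAccurateBus& m, sc_core::sc_time period)
            : sc_core::sc_module(name), socket("socket"), ca_model(m), clk_period(period) {
            socket.register_b_transport(this, &CaAdapter::b_transport);
        }

        // Convert one loosely-timed TLM call into N cycle-accurate steps, then
        // report the measured cycle count back as the TLM timing annotation.
        void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
            ca_model.drive_request(trans.get_address(), trans.get_data_ptr(),
                                   trans.get_data_length(), trans.is_write());
            unsigned cycles = 0;
            while (!ca_model.step_one_cycle())
                ++cycles;
            delay += clk_period * cycles;        // accurate timing behind an abstract interface
            trans.set_response_status(tlm::TLM_OK_RESPONSE);
        }
    };

The point of this structure is that the accuracy lives behind the socket: the rest of the virtual prototype keeps talking plain TLM-2.0, and the cycle-level cost is paid only when a transaction actually crosses the boundary.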

At the end of the day, despite the amount of upfront work, virtual prototyping based on system-level models is giving engineering teams the insight needed to create the most optimized design possible.


