High-Level Gaps Emerge

Experts at the table, part three: Defining and leveraging use models; the software connection; emulation; high-level modeling.


Semiconductor Engineering sat down to discuss the attributes of a high-level, front-end design flow with Bernard Murphy, CTO at Atrenta; Leah Clark, associate technical director for digital video technology at Broadcom; Phil Bishop, vice president of the system level design system & verification group at Cadence; and Jon McDonald, technical marketing engineer at Mentor Graphics. What follows are excerpts of that discussion. For part 1, click here. For part 2, click here.

SE: Where do we stand with use models today?

Bishop: The usage model of the design is critically important. We all mess around with clock gating, coarse-grained clock gating, finite state machine optimization to minimize glitches, and all this crazy stuff, but what matters is the software that's actually running on this device. What does it really look like?

McDonald: And that’s the key. The usage model is driven by software so you’ve got to take into account the software that’s running and that’s going to totally change what your device does.

Clark: And for us, we don't even control some of that software, so how do you optimize it? Another thing is scalability. We have some designs that can scale from one to four or eight channels, and that changes the architecture. It changes the whole power distribution, first of all, but it changes a lot more: different combinations of what can be on, what's off, and what you do with it…

Bishop: I think what's really changed for all of us is that a lot of people used to do a lot of unit-driven testing at a pretty low level, then slowly moved up to bigger vector suites, and then you get things like UVM and some of these techniques. Now we're looking at full SoCs, and the only way to test them with the right context and usage model is software-driven verification. You literally have to run drivers on the design to get a feel for whether the power-off mode really worked…

Clark: And then you're talking about emulation, and the power consumption in emulation is completely unrelated to the power consumption in an SoC. Well, not completely, but…

Bishop: …it’s a different animal.

Clark: So you can check that your wake-up works and your sleep works, and all that, but you can't get a measurement.

SE: I do hear rumblings about how emulation is being used earlier on in a more accurate fashion.

Bishop: Certainly. Because the emulator is what it is, you can run huge amounts of RTL and gate-level information in it. That is different from running it in a functional simulator of some kind, or a general software-based simulator, but it's not the same as the real chip. I think what's happened with all the emulation companies is we've made some assessments of power analysis based on bulk CMOS. I'm very interested to see what that means in these new technologies. I don't know what the correlation will be.

Murphy: It's also a practical limitation. To do a halfway reasonable power estimation on an emulator, you have to dump all or most of the nodes all of the time, which means you can only run a small slice of your software and still get a reasonable power estimate. There are tricks to extend that and simplify it a little bit.
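Node-activity dumps of this kind are typically reduced to a toggle-count-weighted dynamic power estimate. A minimal sketch of that idea, where every per-node capacitance and the supply voltage are illustrative assumptions, not figures from the discussion:

```python
# Sketch: dynamic power estimate from per-node toggle counts,
# P_dyn ~ sum over nodes of 0.5 * C * V^2 * toggle_rate.
# All capacitances and the supply voltage are made-up values.

def dynamic_power(toggles_per_node, cap_farads, vdd, window_seconds):
    """toggles_per_node: dict of node name -> toggle count in the window."""
    power = 0.0
    for node, toggles in toggles_per_node.items():
        toggle_rate = toggles / window_seconds          # toggles per second
        power += 0.5 * cap_farads[node] * vdd**2 * toggle_rate
    return power

# Hypothetical 1 ms activity dump for two nodes.
toggles = {"alu_out": 4_000_000, "fsm_state": 250_000}
caps = {"alu_out": 5e-15, "fsm_state": 2e-15}           # farads (assumed)
p = dynamic_power(toggles, caps, vdd=0.9, window_seconds=1e-3)
print(f"{p * 1e3:.4f} mW")
```

The practical limit Murphy describes falls out of the inputs: the estimate is only as good as the toggle data, and dumping every node for every cycle caps how much software you can realistically run.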

Clark: But aren’t your nodes different? What nodes are you talking about?

Murphy: I’m talking about purely RTL — I’m not even trying to correlate with silicon.

Bishop: Silicon nodes are definitely different. This is just different levels of abstraction and refinement. You’re getting closer to silicon but you’re not quite there. With the emulator, let’s say it’s one step away at least, or whatever as opposed to a typical simulator or going farther up the chain. As you get closer, yes you’re getting a little bit better estimate but it’s still not the exact same nodes, it’s still not the exact same power.

Clark: And there's a cost for all of these low-power structures you put in. You can measure the cost of the power versus the cost of testing it versus the cost of the silicon, and if you're not using it that often, you don't want to waste that money on the low-power implementation. If it only runs 1% of the time, you just want to run it and let it go. But again, the context…
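The trade-off Clark describes can be framed as simple break-even arithmetic: leakage saved while a block is gated off versus the standing overhead of the gating logic and the energy spent on wake-up transitions. A hypothetical sketch, with every figure invented for illustration:

```python
# Sketch: is power gating worth it for a block that is mostly idle?
# All power and energy figures below are hypothetical.

def gated_net_savings(p_leak_w, idle_fraction, gate_overhead_w,
                      wake_energy_j, wakeups_per_s):
    """Net watts saved by gating: leakage avoided while off, minus
    gating-logic overhead and per-wakeup transition energy."""
    saved = p_leak_w * idle_fraction
    cost = gate_overhead_w + wake_energy_j * wakeups_per_s
    return saved - cost          # > 0 means gating pays off

# Block idle 99% of the time: gating wins.
print(gated_net_savings(0.010, 0.99, 0.001, 1e-6, 100) > 0)
# Same block idle only 10% of the time: gating loses.
print(gated_net_savings(0.010, 0.10, 0.001, 1e-6, 100) > 0)
```

Which is exactly why the use case matters: the same structure is a win or a loss depending purely on how often the software leaves the block idle.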

McDonald: It goes back to understanding context and use case. If you don’t know how it’s going to be used, you don’t know what’s best.

Murphy: In fact, people have been saying for a long time that the semiconductor guys put all these knobs and switches into the silicon that let you tune the power to the Nth degree, and then it goes up to the software guys and they ignore it all… well, they don't ignore all of it, but they ignore a lot of it.

Clark: But it's spec'd out that it has to be there, so there's a disconnect between the spec writers and the software guys. If it's there, it should be there for a reason.

McDonald: Tying back to the high-level modeling, that's where we're seeing people trying to do more: creating a high-level model that matches the spec, so the software team can develop and tune the software against the same execution model the implementation teams are developing the implementation against. The problem comes in when software doesn't see what's been implemented until it's implemented. After the fact, they try to figure out what to do with it, and they ignore a lot because it's too complex. But if they started with that up-front model, they could contribute to what gets implemented. We've seen that a number of times, where software developers have said, 'We're doing this with the system and it's not meeting performance goals,' and the hardware guys and the architecture team can go in and say, 'We can fix that,' because they can use the use model to tune the architecture.

Murphy: I think the trick there is where do you intercept the software? You could intercept at the driver, you could intercept at the OS, you could intercept at the applications and probably a million places in between.

McDonald: What are you tuning your system to? It used to be systems were tuned to the OS. Do you tune the system to the OS or do you tune the system to the application? And I think for different use models, again, different markets, the answer is going to be different.

Clark: Those development processes aren't necessarily happening in parallel, because we sell to our customers. We develop a little bit of software, but they're doing most of it. So then, how do you feed that back?

Bishop: The norm has been working with companies such as Broadcom on the drivers, because that's part of the software that Broadcom does. What's very true is that you start to work with our customer's customer, and their only concern is the application layer. The problem is that they are so far away from the hardware in any context that they say, 'I don't know what an emulator is and I don't want to know.' That's what a customer of a customer told us. So you start working with them, and they really want to stay at more of a virtual platform level, maybe even higher to some extent, because they say they don't really want to know the underlying…

Murphy: They just want to know how long their battery is going to last.

McDonald: Is my battery life there? Does it turn on and come up in two seconds? Do I have a screen in two seconds? Is it going to last 12 hours? That's the level at which they think, but it has huge implications on everything downstream.
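Those user-level targets translate directly into an average-power budget for everything downstream. A back-of-the-envelope sketch, where the battery capacity and voltage are assumed, typical-smartphone numbers rather than anything from the discussion:

```python
# Sketch: turning a "lasts 12 hours" target into an average-power
# budget. Battery figures are assumed, typical-smartphone values.

def avg_power_budget_mw(capacity_mah, voltage_v, target_hours):
    """Average power (mW) the whole device may draw to hit the target."""
    energy_mwh = capacity_mah * voltage_v     # milliwatt-hours available
    return energy_mwh / target_hours

budget = avg_power_budget_mw(capacity_mah=3000, voltage_v=3.7, target_hours=12)
print(f"{budget:.0f} mW average")             # 3000 * 3.7 / 12 = 925 mW
```

Every block in the SoC then has to fit inside that single number, which is why a user's two-sentence expectation ripples all the way down to clock gating and power domains.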

Bishop: That application group, those software writers, we have found they'll work with you down to the level of the virtual platform; below that, no. There's just not enough speed. 2MHz, 4MHz, whatever an emulator is going to run at; they can't believe it. First of all, it's so foreign to them to head down that path, and then it runs so slowly. By the time you've booted the OS and tried to run the application, it's a real challenge. So they're trying to stay at the virtual platform level, because at least the speed is there to run some of the applications.

SE: What are some of the issues happening with standards in this area?

Murphy: Do you think there's going to be any kind of standard around this? The hardware guys can do a million things, but the software guys are doing something different coming down. Are they always going to adapt to the million things that could be done at the hardware level, or is somebody going to say, 'Time out; let's define a standard way of managing power at the bare-metal level'? Then the hardware guys can do whatever they want with that, the software guys are decoupled from it, and they manage top-down. Is that a realistic goal?

Bishop: We're seeing companies attempt to do this. It takes the form of a platform where, at the platform level, you define how you're going to manage the power and how you're going to do certain things. The decoupling you mentioned, I think, happens at the middleware level. If you look at the stack in most applications, the driver level is just too close; it's a hardware extension. Even the OS is a hardware extension to me. The middleware guys are the ones who try to create a common framework so the software guys can be decoupled. That's really the intent of the middleware level, and there are some groups doing that, but it's very much application-oriented. For a mobile application, the middleware has a certain look to it inside a smartphone. Each different device application has a middleware level of its own that's defining the standard, if you will.

Clark: Just to come back and refine that: if you look at the example of a RISC processor, where they identified the operations that were used the most and optimized those, what happens if six months down the line you realize you were wrong? The operation that's really expensive is getting used all the time. Do you come back and revisit that?

Bishop: Yes, but it's very slow. It's a big cycle, because the folks at the application level just do not have any concept of the hardware, nor do they care. Middleware has a little bit of knowledge. Most of the knowledge is in the drivers and the OS; that development team understands what's there. If they refine their area, then the middleware will take it into account, and then it starts to find its way to an impact of some sort at the application level. But generally, my answer to that would be, 'No.'

McDonald: And that's an area where modeling can help. One of the things we're seeing our customers trying to do, and where modeling helps, is this: if you can model something and get an application developer to use your virtual platform or model, then the characteristics of what the application developer did can be used to drive the architecture and the implementation. You could identify things like, 'The application guys are doing this over and over and over again; we've got to make that really fast.' That's where flow in both directions is important. The application developers are giving you the use case, and by having that in a model, the implementation team can use it to say, 'This is how this is going to be used. Now we know.'

Clark: In practical terms, though, I don’t even know who those people are.

McDonald: You don't, but if your marketing teams had a virtual model that they gave to their customers, and got feedback from their customers that they could give to you to say, 'This is how this thing is going to be used in a real application,' what could you do with that data?

Clark: I think they would feed that back to the architects but not as low down as the implementation.

McDonald: Part of that's because it's captured very ad hoc today. We don't have models that go all the way down.

SE: Do we have an agreed upon approach to high level design today?

Bishop: I think each company has an approach, and there are some emerging standards, but it's not complete yet.

SE: Is it necessary to have one?

Bishop: I think it’s helpful.

McDonald: I think what we really need is an approach that allows us to leverage the modeling, because modeling for high-level design, and especially virtual platform creation, is one of the big challenges that people have. What you do with the model, how you use it, and how you tie everything together can be a little more open to interpretation.

Murphy: I've had somebody ask me, 'Why don't we just use SystemC to do RTL implementation?' Just get rid of the RTL and go straight from a virtual model somehow into an implementable piece of logic. I think we are a long way away from that. There are so many messy implementation details in putting that whole thing together that you'd have to take the beauty out of SystemC and throw a bunch of ugliness on top of it to handle I/Os, a bunch of funky mixed-signal stuff, and power. These things may come in time, but then I worry about the IP-XACT effect: you can do it, you can make it work, but was it worth it?

Clark: One of the big issues we have with our modeling is that all the tools we use in implementation are digital, and they expect the world to be on or off, one or zero, and that's just not the way the world is. We're trying to shoehorn I/O cells into this digital modeling language, and I think that if you create a language that is rich enough to express everything we want to do, it's going to be intractable. Where do you draw that line? Ones and zeros is not it; it has to be a little bit bigger than that, but where, and how do you define it?

McDonald: It's different based on your problem. Based on what you're doing, your perspective is going to be different.

Clark: It's different based on your problem and your level of abstraction, but if you want to encompass these things that our circuits do, like you said, it would add a lot of ugliness to the clean lines of SystemC.

Bishop: I think you need RTL, and you'll continue to need RTL for integration; I think that's just the way it's going to be for quite some time. When you take into account things like finFETs and other new devices, it will be even more the case. I think, however, you need SystemC to bridge the gap between the software realm and hardware design. That's what it does at its best, whether it's TLM 2.0 at one level for virtual platforms, moving up to the application level eventually, or TLM 1.0, which can be used for synthesis purposes and getting out RTL. SystemC is a good bridge between application development and the software realm, all the way down to the hardware. But if you really consider the case of eliminating all RTL and saying it's all going to be SystemC, that's an amazingly difficult challenge because of the integration aspects and the PPA, all the way down to the physical domain. Gosh, there are so many tools and approaches.
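The bridging role Bishop describes is, in spirit, transaction-level modeling: software exercises register-level transactions against an abstract device model instead of RTL signals. A toy sketch of the idea in Python (real virtual platforms would typically use SystemC/TLM-2.0 in C++, and the register map here, CTRL/STATUS offsets and a READY bit, is entirely invented):

```python
# Toy transaction-level model of a hypothetical device: driver-style
# register reads/writes against an abstract model, no RTL signals.
# The register map (CTRL/STATUS offsets, READY bit) is invented.

CTRL, STATUS = 0x00, 0x04      # assumed register offsets
READY = 1 << 0                 # assumed status bit

class ToyDevice:
    def __init__(self):
        self.regs = {CTRL: 0, STATUS: 0}

    def write(self, addr, value):          # one "transaction"
        self.regs[addr] = value
        if addr == CTRL and value & 1:     # enable bit brings the block up
            self.regs[STATUS] |= READY

    def read(self, addr):
        return self.regs[addr]

# A "driver" written against the model, long before any RTL exists.
dev = ToyDevice()
dev.write(CTRL, 1)                         # enable the device
print(dev.read(STATUS) & READY)            # device reports ready
```

The point of the abstraction is speed and availability: software teams can write and tune drivers against a model like this while the implementation team independently refines the RTL to match the same spec.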

Clark: It's taking away some of the control, because sometimes you want this gate right here, and you still need to be able to say, 'I want this gate right here.'

Murphy: Try to imagine doing this stuff through a keyhole and still getting it right. It's not worth it; what does it buy you?

McDonald: I think the processes we have with RTL design are very good, and they're going to continue to be used. Higher levels of abstraction feed into those processes. Leveraging use models and understanding what you should target can be done from the SystemC world and the virtual platform world, but it doesn't mean you're going to design from there; you still need the RTL design.

SE: What would help you the most in getting your job done today in terms of EDA tools?

Clark: In terms of EDA tools, one of the issues I face with a lot of companies is inertia. If I'm working with a new product or a new company, they're really excited, and they're very amenable to adding knobs and features and supporting what I want to do. Then, with companies that have a lot of inertia and a huge customer base, I'm asking for things and they're giving me schedules 15 months out, and that's not going to help me; my product will be taped out, and we'll be on to the next problem by then. So some way to get a little more agility would be the biggest help, because you don't know what problem you're going to hit, and it takes six months to find the right person to talk to, then six months for them to tell you when they're going to do it, then a few months to implement it. It's really frustrating. I end up having to be mean and yell at people, and I hate it. I don't know what the answer is, but it's very frustrating to see such a simple problem. It's like, 'Can you make the error messages better?' 'Well, we'll put that on the schedule for the next major release.' Really? It's an error message. How hard is it? There's so much inertia that steering them is very difficult, and I think that's one of my biggest problems. Then the small companies typically get acquired. They keep their agility for a while, then they slow down and start lumbering along, and then someone new pops up and Mentor buys them or Cadence buys them. It's an ongoing cycle, but that inertia is really hard to overcome.

Murphy: I think the vendors would share the same frustration, but they understand where it's coming from, too. It's the process for a Cadence or a Mentor to put out software releases to thousands of customers and have them really crisp and polished and so on…

Clark: But some stuff affects the QoR, and other stuff doesn't. And it doesn't always seem like what I would consider intelligence is used in triaging the features. Why can't you give me more information? And the older the product, the harder it is.

Murphy: We share your frustration. It doesn’t necessarily mean we can change it.

McDonald: It’s kind of the same thing we were talking about earlier with the use model. The EDA tool vendors have developed their tools with a particular use model and now you’re coming up with a new use model or a slightly tweaked use model.
