Bridging The Gap

Tying together power-aware engineering and system-level design is still more a statement of direction than a reality. Some key pieces are missing.


We talk all the time about hardware/software co-design, co-verification and the like, but what is really interesting now is how vital the connection between power awareness and system-level design has become. Yes, it sounds obvious, but so far this is an untapped market with ideas still being batted around.

As discussed in Parts 1, 2 and 3 of my article series on energy and power, achieving the best power and energy efficiency requires optimizing both hardware and software at every level. This is an extremely challenging task.

Cary Chin of Synopsys pointed out that we have to understand everything from the transistor level all the way up to the software level. And, of course, there’s nothing today that really does that.

What that means is that many disciplines must be brought to bear. “There are modeling issues, there are the evolution-of-the-system-level-tools kinds of issues. How can you do it? How can you get everything put together? Different tools and vendors have different ways of doing things, so standardization is a question. But it also is an area that therefore is really, really important to be thinking about right now. Things are starting to be developed, and this is our opportunity to solve these problems in a way that will be as clean as it possibly can be—at least without anybody really being able to see the future,” Chin said.

“Wouldn’t it be great if we could just take a simulation of somebody writing an Android application and simulate that down to the transistor level?” he continued. “If we could do that through all of those levels of hierarchical decomposition down to the transistor level we would get an extremely accurate picture of what’s going on with power. But that simulation is going to take forever, and that’s exactly our problem. But that same hierarchy exists in terms of hardware design because we can’t run a SPICE simulation on a full chip and we’ve never been able to. And my guess is we will never be able to because chips will grow as fast as our capability to simulate at that level.”

To tame that complexity requires levels of abstraction. “If we want to extend levels of abstraction from transistor to gate to RTL to system, through the software layers and up to the application software level, that’s exactly what we have to do. It makes it not as clean-sounding a solution, but in engineering, practical is more important than clean,” Chin said. “The fact that we can actually do it and get some data helps us figure out how we can refine it. Accuracy of models follows that exact same development path, and always has. I think we’re just on the cusp of starting to look at RTL and higher levels and pushing up through the software stack.”
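To make that hierarchy concrete, here is a minimal sketch of a roll-up power model: leaf blocks carry per-event energy costs characterized at a lower level (say, from gate-level or SPICE runs), inner nodes simply sum their children, and event counts come from a faster, higher-level simulation. The block names, costs, and activity numbers are invented for illustration; no real tool or flow is implied.

```c
/* Hypothetical hierarchical power estimate: leaves hold characterized
 * per-event energies, inner nodes compose their children. All values
 * are illustrative assumptions. */
#include <stdio.h>

typedef struct Block Block;
struct Block {
    const char *name;
    double energy_per_event_nj; /* leaf cost, from lower-level characterization */
    double events;              /* activity count, from a higher-level trace */
    Block *children;
    int n_children;
};

/* Roll energy up the hierarchy. */
static double estimate_energy_nj(const Block *b)
{
    if (b->n_children == 0)
        return b->energy_per_event_nj * b->events;
    double total = 0.0;
    for (int i = 0; i < b->n_children; i++)
        total += estimate_energy_nj(&b->children[i]);
    return total;
}

int main(void)
{
    Block leaves[] = {
        { "alu",    0.8, 1.2e6, NULL, 0 },
        { "cache",  2.5, 4.0e5, NULL, 0 },
        { "radio", 45.0, 1.0e3, NULL, 0 },
    };
    Block soc = { "soc", 0.0, 0.0, leaves, 3 };
    printf("estimated energy: %.1f uJ\n", estimate_energy_nj(&soc) / 1000.0);
    return 0;
}
```

The structure captures exactly the trade Chin describes: each leaf hides a slow, accurate lower-level simulation behind one number, so the top-level estimate runs in an instant instead of forever, at the cost of whatever accuracy was lost in characterization.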

Historically, there hasn’t been a good way even to gather data on questions such as how the different pieces are correlated. But as the technology has progressed over the last couple of years, we’re starting to see the ability to gather data from RTL to the system, from the system to the basic software capabilities, and up through the operating system to the application side of the software.

“As we push up there, the ability to gather data is one big thing that enables us to create these levels of abstraction because the big test of a model is how it holds up under as close to real-world as we know it. And that’s what we’re in the process of doing. I think we’re going to see a lot of data in the next couple of years and the hardware manufacturers know that,” Chin continued.

He pointed to the controversy over Facebook gathering information about browsing habits through cookies as a good example of this. Another example is Apple’s collection of location information. “These companies that operate at very high levels see exactly the same problem in slightly different domains, and what they are doing is gathering data. That’s what we should be doing from the design standpoint with regard to hardware and software, and especially at that connection, because software going from the low-level routines through the middleware up to the application software side is a little bit easier to trace.”

The hardware from the transistors through the gates to the RTL is reasonable to trace as well. “There’s that little bit of a jump where we move forward on the software side. That really is the key today, and that’s the area where we are making a lot of improvements. So I’m optimistic about this idea of modeling across the levels in order to see the whole picture,” Chin said.

Still, he believes the industry hasn’t yet fully embraced the idea of merging these levels of abstraction from hardware to software. “The design communities on both sides don’t talk a lot. It’s through the context of power that we are talking about being able to do stuff across that boundary. We need to continue to push on that, and it’s a big deal because the technology is starting to allow us to gather this data. Before, it was basically do what you want, but it doesn’t really matter because you’re going to have to start again at the software level and create all new models. Today you can derive models from other models. There’s lots of work going on on the IP side, where we are starting to re-use estimates from the semiconductor vendors for certain IP. There’s a lot of stuff happening that at least indicates that we can start to bridge that gap.”
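The “derive models from other models” idea can be sketched in a few lines: fit a coarse coefficient, such as energy per instruction, to data points produced by a more detailed lower-level simulation, then reuse that coefficient as a fast software-level model. The sample data below is made up purely for illustration.

```c
/* Hypothetical derivation of a high-level model from low-level data:
 * a least-squares fit through the origin of measured energy versus
 * instruction count. The sample points are invented. */
#include <stdio.h>

int main(void)
{
    /* (instruction count, energy in nJ) pairs from detailed runs */
    double insns[]  = { 1e5, 2e5, 5e5, 1e6 };
    double energy[] = { 8.1e4, 1.58e5, 4.1e5, 7.9e5 };
    int n = 4;

    /* k = sum(x*y) / sum(x*x) minimizes squared error for y = k*x */
    double sxy = 0.0, sxx = 0.0;
    for (int i = 0; i < n; i++) {
        sxy += insns[i] * energy[i];
        sxx += insns[i] * insns[i];
    }
    double nj_per_insn = sxy / sxx;

    printf("derived model: %.3f nJ/instruction\n", nj_per_insn);
    printf("prediction for 3e6 instructions: %.0f nJ\n", nj_per_insn * 3e6);
    return 0;
}
```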

From the software perspective, Mark Mitchell, director of embedded tools at Mentor Graphics, observed that for a long time hardware designers have had the idea of design for test. It wasn’t good enough to just build hardware they thought was going to work; they had to be able to really poke at it.

“We are learning that we need to do the same thing on the software side, so some of the things we’ve been working on are: How do you create a piece of software that has a pretty low footprint and low impact on performance, but that is going to run on your deployed system so that even out there in the field you have a way of contacting it, communicating with it, trying to figure out why it is behaving the way it’s behaving? That’s pretty different, because the traditional software design methodology has been: apply a lot of process up front, try to design and build it so it works, ship it, and assume it is correct at that point. But have you ever seen a piece of software that worked perfectly the first time? No. So maybe we ought to get religion on the idea that we expect these things to be somewhat broken as they go out the door, and get the information back to fix them.”
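As a rough illustration of the low-footprint, always-on instrumentation Mitchell describes, here is a minimal ring-buffer tracer of the sort often compiled into shipping embedded builds: each probe costs a few stores, and a field debug channel (serial, JTAG, network) can dump the most recent events long after deployment. The event layout, buffer size, and names are assumptions for this sketch, not Mentor’s implementation.

```c
/* Minimal always-on trace ring buffer (illustrative sketch). */
#include <stdint.h>
#include <stdio.h>

#define TRACE_CAP 256  /* power of two, so wraparound is a cheap mask */

typedef struct {
    uint32_t timestamp; /* e.g. a cycle counter or tick count */
    uint16_t event_id;  /* which probe fired */
    uint16_t arg;       /* small payload */
} trace_event;

static trace_event trace_buf[TRACE_CAP];
static uint32_t trace_head; /* total events ever written */

/* Cheap enough to leave enabled in shipping builds. */
static void trace(uint16_t event_id, uint16_t arg, uint32_t now)
{
    trace_event *e = &trace_buf[trace_head & (TRACE_CAP - 1)];
    e->timestamp = now;
    e->event_id  = event_id;
    e->arg       = arg;
    trace_head++;
}

/* Called over a debug channel to inspect the device in the field. */
static void trace_dump(void)
{
    uint32_t n = trace_head < TRACE_CAP ? trace_head : TRACE_CAP;
    for (uint32_t i = trace_head - n; i < trace_head; i++) {
        const trace_event *e = &trace_buf[i & (TRACE_CAP - 1)];
        printf("%u: event %u arg %u\n", (unsigned)e->timestamp,
               (unsigned)e->event_id, (unsigned)e->arg);
    }
}

int main(void)
{
    for (uint32_t t = 0; t < 10; t++)
        trace(1, (uint16_t)t, t * 100);
    trace_dump();
    return 0;
}
```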

~Ann Steffora Mutschler


