Some tools already exist, but there is a widespread lack of understanding about how to use those tools effectively and what problems need to be solved.
Power optimization today is a painful issue for many chip architects, who are tasked with determining, meeting and holding to a tight power envelope. How well, and to what extent, power can truly be understood at the architectural level, let alone optimized, remains a subject of debate.
The ITRS’s most recent projection provides some insight into current market drivers. The following figure illustrates how the trend in power consumption versus power requirements is creating a “Power Gap” akin to the “Design Gap” the industry dealt with a decade ago, noted Vic Kulkarni, senior VP and general manager at Apache Design Solutions. “This gap is forcing people to think hard about how to manage power at all levels of abstraction.”
Among mainstream users, there is no controversy about whether the abstraction of power and performance needs to shift higher in the designer’s mindset. There is no choice. Designers must shift their thinking from high-accuracy power validation to relative-power exploration, but making that shift is easier said than done. Designers are not typically accustomed to thinking that way, barring a few architects at some of the largest and most advanced chip companies.
From a technical perspective, the role of power at different levels of abstraction, as well as its nuances and characteristics, is not always well understood.
“There is a need for a lot of education here,” said Shabtay Matalon, Mentor’s ESL market development manager. “I think that power is mostly understood at the transistor level because that’s where people can very tightly correlate power with transistor switching activity, with the threshold levels, with Vdd, and all of those basic equations that people can calculate the static power and dynamic power.”
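As a rough illustration of those first-order relationships, the sketch below computes dynamic and static power from the familiar textbook expressions P_dyn = α·C·Vdd²·f and P_stat = I_leak·Vdd. The numbers are hypothetical and not tied to any particular process or tool.

```python
# First-order CMOS power equations of the kind Matalon alludes to
# (textbook approximations, not any specific tool's model).

def dynamic_power(alpha, c_switched, vdd, f_clk):
    """Dynamic (switching) power: P_dyn = alpha * C * Vdd^2 * f."""
    return alpha * c_switched * vdd ** 2 * f_clk

def static_power(i_leak, vdd):
    """Static (leakage) power: P_stat = I_leak * Vdd."""
    return i_leak * vdd

# Example with invented numbers: 0.15 activity factor, 2 nF switched
# capacitance, 1.0 V supply, 1 GHz clock, 50 mA total leakage current.
p_dyn = dynamic_power(alpha=0.15, c_switched=2e-9, vdd=1.0, f_clk=1e9)  # 0.30 W
p_stat = static_power(i_leak=50e-3, vdd=1.0)                            # 0.05 W
print(f"dynamic: {p_dyn:.2f} W, static: {p_stat:.2f} W")
```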
At the gate level, design engineers still have a good understanding of the problems because it is a relatively low level of abstraction: they can change the states on those gates and still clock the flops. Move up in abstraction, however, and things become less clear.
“When you go to the RTL, things become very vague. Frankly, the challenge here is that there is a limit to the level of accuracy you get at RTL,” Matalon noted. Raising that to transaction-level modeling (TLM) will offer some relief, but not until there is more education about how to use these ESL approaches. “That’s the reality. We are dealing with this reality in the marketplace. However, what works well in favor of dealing with power at the TLM is that the payoff is huge, in terms of power optimization.”
Moving up in abstraction from the gate level to RTL, it is possible to achieve approximately 5% to 10% improvement in power. “Given that the payoff is so high at the architectural level (up to 80%), on one hand we are seeing that there is a lot of attention to it, but on the other hand, I can’t say that the knowledge is yet prolific,” Matalon added.
Where power optimization occurs
While power needs to be planned for at the architectural level, the real optimization of that power happens further down the flow.
“Power optimization really happens purely at the hardware level, from RTL on down, so you have all these cool techniques starting at RTL and below,” said Frank Schirrmeister, director of product marketing for system level solutions at Synopsys.
The following illustration presented by Synopsys at ARM’s TechCon shows the impact of power optimization techniques at different levels of abstraction and stages in the design.
“One triangle identifies the leverage you have, and the other identifies the time you need to implement, which is the cycle time. The earlier you start, obviously the more impact you have (shown by the wider part of the inverse triangle) and you need less time to do it because you have a shorter cycle time and you can still make changes,” Schirrmeister explained.
Today the majority of techniques are employed at RTL on the hardware side, with the software then trying to optimize things like cache utilization by itself on fixed hardware.
“So now the objective has to be, given the very intuitive notion that the earlier you start the more impact and leverage you have on power consumption, to move upwards,” Schirrmeister said. “At the architectural level, before you have even decided between hardware and software, you will try to make very early considerations about how to separate the design into hardware and software. That’s the architectural analysis part, and what people are doing there is really taking abstract descriptions of the software and the function and figuring out whether the architecture will actually support them. Once you have made the decisions between hardware and software you really have a couple of components running in parallel; in the wider sense it’s block design and how those blocks integrate.”
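To make the kind of architectural trade-off Schirrmeister describes more concrete, here is a toy sketch. Every figure in it is invented for illustration; in a real flow the energy and latency numbers would come from characterized models or early estimates.

```python
# Toy architectural trade-off: map a function to a CPU (software) or to a
# dedicated hardware block. All energy/latency figures are invented for
# illustration only.

candidates = {
    "software_on_cpu": {"energy_per_task_nj": 900.0, "latency_us": 40.0},
    "hardware_block":  {"energy_per_task_nj": 120.0, "latency_us": 5.0,
                        "extra_leakage_mw": 2.0},
}

tasks_per_second = 10_000
for name, c in candidates.items():
    # Average dynamic power in mW: (nJ/task) * (tasks/s) * 1e-9 J/nJ * 1e3 mW/W
    dynamic_mw = c["energy_per_task_nj"] * tasks_per_second * 1e-9 * 1e3
    total_mw = dynamic_mw + c.get("extra_leakage_mw", 0.0)
    print(f"{name}: ~{total_mw:.1f} mW average, {c['latency_us']} us per task")
```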
Bringing these concepts together, Apache believes that an ESL power design flow can be realized by leveraging ESL simulation, ESL synthesis to RTL, and RTL power analysis driven by ESL simulation results. Kulkarni stressed this has been demonstrated successfully by working closely with an ecosystem that includes an IP provider and a system company.
In addition, virtual prototyping will play a vital role in this emerging ESL power design flow. As just one example, Mentor’s Vista can be extended into the virtual prototyping space. Traditionally, virtual prototypes have been perceived as purely functional models that only allow people to validate, that is, to verify software against a hardware model. That landscape is changing because software has become such a dominant part of a modern design. So much design know-how and IP is implemented in software, and software plays such a major role in setting performance and power, that it is no longer sufficient to provide legacy virtual prototypes that are functional only. Software engineers need a more hardware-aware virtual prototype, in which power and timing are modeled, so that when they run their software they can evaluate it not only against the functional spec of the design but also against its performance and power specs, Mentor’s Matalon observed.
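To illustrate conceptually what a power- and timing-annotated virtual prototype might look like, here is a minimal Python sketch. The device states, power figures and API are assumptions made for the example, not Vista’s actual interface.

```python
# Minimal sketch of a "hardware-aware" virtual prototype: a functional
# device model annotated with per-state power, so that software activity
# can be translated into an energy estimate. States and numbers are
# illustrative only.

class PowerAnnotatedDevice:
    POWER_MW = {"off": 0.0, "idle": 1.5, "active": 45.0}  # assumed figures

    def __init__(self):
        self.state = "off"
        self.energy_mj = 0.0
        self.last_change_ms = 0.0

    def set_state(self, new_state, now_ms):
        # Accumulate energy spent in the previous state, then switch.
        dt_s = (now_ms - self.last_change_ms) / 1000.0
        self.energy_mj += self.POWER_MW[self.state] * dt_s
        self.state, self.last_change_ms = new_state, now_ms

# "Software" driving the model: wake the device, do work, go idle, power off.
dev = PowerAnnotatedDevice()
dev.set_state("active", now_ms=10)    # woken at t = 10 ms
dev.set_state("idle",   now_ms=60)    # 50 ms of activity
dev.set_state("off",    now_ms=200)   # 140 ms idle
print(f"estimated energy: {dev.energy_mj:.2f} mJ")
```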
Synopsys plays heavily in the virtual prototyping space, while Cadence is still mum on the topic, focusing more on its hardware emulation approach.
The power optimization challenge
Still, as designers move up in abstraction, they need to understand software issues and how the software relates to the hardware.
“As you get into advanced power management schemes in the hardware, a lot of times that’s controlled by the software, so how effective is the software at doing that?” asked Jack Erickson, product marketing director at Cadence. “Ideally, you’d like to be able to run your software on your hardware and have a better understanding of how much power is consumed by software. The more you can do in the software before you ship the product, the better. If you can get your chip into emulation and run your actual software in emulation and examine the power effects of your software, that’s late in the hardware design cycle, but you can have large effects on your software before you ship your system.”
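As a hedged illustration of the kind of software-controlled power management Erickson mentions, the sketch below shows a simple idle governor choosing a sleep state. The states, overheads and break-even rule are invented for the example and do not correspond to any particular operating system or SoC.

```python
# Illustrative idle governor: software picks a hardware power state based
# on how long the device is expected to stay idle. All values are invented.

SLEEP_STATES = [
    # (name, entry/exit overhead in us, residency power in mW)
    ("clock_gated", 10,   12.0),
    ("power_gated", 200,   1.0),
    ("off",        5000,   0.1),
]

def pick_sleep_state(predicted_idle_us):
    """Choose the deepest state whose wake-up overhead is affordable."""
    best = None
    for name, overhead_us, power_mw in SLEEP_STATES:
        if overhead_us * 2 <= predicted_idle_us:   # simple break-even rule
            best = name
    return best or "busy_wait"

print(pick_sleep_state(50))       # clock_gated
print(pick_sleep_state(2_000))    # power_gated
print(pick_sleep_state(60_000))   # off
```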
The biggest challenge is that it’s really a new space, he said. “Folks have started to worry about power only in the past few years and now we’re also talking about moving up to a higher level where they have even less experience typically. The combination of those two is very difficult,” Erickson concluded.