Is there really as much divisiveness as there appears when it comes to power?
The entire topic of power analysis, estimation, and exploration never ceases to amaze me, given how varied the opinions are on how to approach it. That diversity of opinion is exactly what makes the subject so intriguing.
I’ve been particularly struck by how regularly I now hear that users want to take activity data from simulation and emulation and use it to drive implementation and sign-off.
Development efforts are underway at a number of EDA vendors on dynamic power optimization, which requires simulation information. I’ve been told that as recently as a few years ago, users really didn’t know how to do this, but today they are actively feeding simulation activity into power optimization.
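For context, dynamic power estimation leans on the familiar CMOS relation P_dyn = α·C·V²·f, and the activity factor α is exactly what a simulation or emulation run supplies. The sketch below is illustrative only, assuming made-up net names, toggle counts, capacitances, and operating conditions; it is not any vendor's flow, just the arithmetic.

```python
# Illustrative sketch only: per-net dynamic power from simulated toggle counts
# using P_dyn = alpha * C * V^2 * f. All names and numbers are hypothetical.

V_DD = 0.8           # supply voltage (V), assumed
F_CLK = 1.0e9        # clock frequency (Hz), assumed
SIM_CYCLES = 10_000  # length of the simulated activity window in cycles, assumed

# Toggle counts per net, as a simulation or emulation run might report them.
toggles = {"bus_a[0]": 4200, "bus_a[1]": 3900, "ctrl_en": 150}

# Effective switched capacitance per net (F), e.g. from a parasitic estimate.
cap = {"bus_a[0]": 2.0e-15, "bus_a[1]": 2.1e-15, "ctrl_en": 1.5e-15}

def dynamic_power(net: str) -> float:
    """Average dynamic power of one net over the simulated window."""
    alpha = toggles[net] / SIM_CYCLES  # average toggles per clock cycle
    return alpha * cap[net] * V_DD**2 * F_CLK

total_w = sum(dynamic_power(n) for n in toggles)
print(f"estimated dynamic power: {total_w * 1e6:.2f} uW")
```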
The drive to obtain a decent power number early on is only increasing, to be sure. And that leads to a discussion of accuracy, where opinions can vary widely.
Most amusing to me is when someone asserts an opinion that turns out not to be as contrarian as they might have thought.
In fact, it turns out that most people agree that for high-level estimation or exploration, a ‘reasonable level of accuracy’ is sufficient. Some say that if the number is within 10% to 15% of what the power will be in silicon, that is enough.
In addition to accuracy concerns, when it comes to using power analysis with emulation, some users don’t want just a single power number for an entire OS boot-up sequence or an entire video frame. Rather, they want to see a power waveform, i.e., what the design is doing as a function of time. Here, giving up a little accuracy buys faster turnaround and the ability to get the shape of the power waveform right, even if the magnitude is off.
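As a rough illustration of what that waveform view involves, the sketch below bins a stream of hypothetical activity events from an emulation run into fixed time windows and reports average power per window. The event data, energy-per-event figures, and window size are all assumptions; the point is that the peaks and valleys of the waveform emerge even when the absolute numbers are approximate.

```python
# Illustrative sketch only: turning (time, energy-per-event) samples into a
# coarse power waveform by binning into fixed windows. Data is hypothetical.

from collections import defaultdict

WINDOW_NS = 100.0  # waveform resolution: one power sample per 100 ns, assumed

# (timestamp_ns, energy_pJ) pairs, e.g. toggle events weighted by an
# average energy-per-toggle figure.
events = [(12.0, 3.1), (48.0, 2.7), (130.0, 4.0), (155.0, 3.3), (420.0, 5.2)]

energy_per_window = defaultdict(float)
for t_ns, e_pj in events:
    energy_per_window[int(t_ns // WINDOW_NS)] += e_pj

# Power in each window = energy / window length; the magnitude may be off,
# but the shape tracks what the design is doing over time.
for w in sorted(energy_per_window):
    power_mw = energy_per_window[w] / WINDOW_NS  # pJ / ns == mW
    print(f"{w * WINDOW_NS:8.1f} ns  {power_mw:.3f} mW")
```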
At the same time, I would venture that most people would agree that correlation through the design flow is also very important, such as carrying the stimulus and simulation information from early RTL analysis directly forward to implementation. All of that information helps ensure good correlation, and good accuracy, during implementation.
So really, when it comes down to it, there is likely more agreement than not, which is not such a bad thing given everyone is trying to reach the same goals.