Experts at the Table: Black Belt Power Management

Last of three parts: How black boxes communicate; hard IP versus soft makes a difference; tips for lowest-power designs.


By Ann Steffora Mutschler
Low-Power/High-Performance Engineering sat down to discuss rising integration challenges caused by an increasing amount of black-box IP with Qi Wang, technical marketing group director, solutions marketing, for the low-power and mixed-signal group at Cadence; J. Bhasker, architect at eSilicon Corp.; Navraj Nandra, senior director of product marketing for analog and mixed signal IP in the solutions group at Synopsys; and Kiran Vittal, senior director of product marketing at Atrenta. What follows are excerpts of that discussion.

LPHP: How do these black boxes talk to each other?
Wang: What I hear, and what Gary Smith is saying, is that going forward, because of cost and time to market, you use the black box more and more, and the architectural innovation comes at the star-IP level. The full platform-based IPs are already there. People pick up a standard like Snapdragon or OMAP and put it at the base of the system. But for the differentiating IPs, that's where you need to create some new, more aggressive architecture. There are always checks and balances, though. If my whole system is okay, but one of the features uses star IP with very advanced features, how do you put them together? One example is supply voltage. If you have IP designed for an ultra-low-power supply (0.50 or 0.75 volts), but the rest of the IPs operate at 1 volt, then you have to make a choice. Either you pay the additional cost of providing two different supplies, or you give up the benefit of that star IP. It has become a much bigger system-level problem. Another aspect involves communication. So far we've talked about functional communication between the black boxes. I would also add an implementation perspective: when my IP developer creates this IP, there are constraints I want the SoC guys to carry over. That level of communication is slightly different from the functional communication; it's at the implementation level. These constraints include functional and physical constraints, which for mixed-signal can mean a lot of special routing, high-speed routing and shielding, as well as electrical constraints, and they all somehow need to be embedded in the IP and sent over to the SoC guys. They need to first understand the constraints, and then make sure the SoC implementation honors them as well.
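Wang's supply-rail choice can be illustrated with a back-of-the-envelope dynamic-power estimate, P ≈ α·C·V²·f. This is only a sketch of the tradeoff he describes; the activity factor, capacitance and frequency values below are illustrative assumptions, not figures from the discussion.

```python
# Back-of-the-envelope dynamic power: P = alpha * C * V^2 * f
# All parameter values below are illustrative assumptions.

def dynamic_power(alpha, c_farads, v_volts, f_hertz):
    """Switching power of a CMOS block, in watts."""
    return alpha * c_farads * v_volts ** 2 * f_hertz

# A hypothetical 'star IP' block: 1 nF of switched capacitance at 500 MHz.
alpha, cap, freq = 0.15, 1e-9, 500e6

p_shared = dynamic_power(alpha, cap, 1.0, freq)      # run on the shared 1.0 V rail
p_dedicated = dynamic_power(alpha, cap, 0.75, freq)  # dedicated 0.75 V rail

savings = 1 - p_dedicated / p_shared
print(f"shared 1.0 V rail:     {p_shared * 1e3:.1f} mW")
print(f"dedicated 0.75 V rail: {p_dedicated * 1e3:.1f} mW")
print(f"savings:               {savings:.0%}")
```

Because power scales with V², the lower rail cuts the block's dynamic power by roughly 44% in this sketch; whether that justifies the cost of a second supply and the level shifters between domains is exactly the system-level decision Wang describes.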

LPHP: Are we doing all of that today?
Wang: Some, but not all. One good example is OpenAccess. OpenAccess is the database bridging the analog and digital worlds. It has a class of APIs for specifying constraints, so there are some constraints you can capture today, and more you can capture at advanced nodes. It's a good way to keep job security. New problems always come up; we invent the problem and create the solution. That's a good platform to build on, but as for capturing all the constraints, we are not there yet. We are moving in that direction. At advanced nodes, with mixed-signal/analog design and so much process variability, people have to adopt a new methodology: constraint-driven design. People are putting more and more constraints into analog/mixed-signal design.
Bhasker: A different aspect of this black-box IP is whether the IP is in soft form or hard form. The bigger challenge is with hard IP for these big IP blocks, because not everyone is going to provide, let's say, a CPU in all different flavors for all the technologies. That causes a problem for the SoC implementer, because now if he gets one CPU in one technology and a GPU in another technology, he has to sit and scratch his head and figure out what to do. I think we are going to get into an area where people building these big SoCs find the blocks are only available in certain flavors, and they have to work around it because there's no other way for them. They could go for the soft IP, which may cost millions of dollars they cannot afford, but if they are restricted to hard IPs they have to solve a very difficult problem.
Vittal: That is where you make most of the mistakes, because you don't know the internals and you are forced to write the interface yourself, and when you write the interface you're obviously not exercising those internals. I know of a particular design that failed because everything was perfect except for a bug in the interface they wrote to match the specifications on both sides.

LPHP: What do we need to make the issues with hard IP more visible?
Bhasker: Clearly verification is an important aspect of it.

LPHP: What does the IP need to provide to ease the pain of design teams?
Vittal: There are standards that describe the interfaces, and once you have an executable standard you can verify and make sure things are correct. I don't know how extensively they're being used today, or what information those standards provide, but it's certainly a direction worth going in.
Wang: Let's even assume the IP provider can provide 100% of the information needed by the SoC guys. You still need to verify at the system level and make sure the IP works. You just cannot escape that. You cannot assume the IPs work. Even if my understanding of how the IP works is exactly the same as the IP designer's, you still cannot guarantee it will work at the system level. That's why we have seen an explosion in system-level validation techniques in recent years. One example is hardware acceleration and simulation/emulation technology. People just don't have confidence in their systems anymore, because before, they could visually analyze a system to see how it works. Now they say, 'I have no idea what is going to work, so let's push it onto the hardware box and run simulation for billions of cycles.' Better to annoy one customer than millions of customers.
Bhasker: This is a very valid point. The problem is that the black-box IP also has bugs in it. In addition, you could be using it the wrong way. The challenge is trying to find the problem. You're busy verifying your system, and in turn you find the IP is not functioning. You know what the IP provider will tell you: 'It's not my problem.' And you end up isolating the IP, isolating the test case, and expending a lot of effort.
Vittal: Even if you provide core RTL, that’s very difficult to understand. It’s not like C. The problem is you can miss something very easily. It’s not like software. There’s a big difference.
Nandra: Actually our IP, in the truest sense, is the testbenches. When I'm presenting at a conference I'll show schematics and block diagrams. What I don't show are the testbenches. That's where we've really put our intellectual capability.
Vittal: Another important issue we have to think about is when the testbenches become available during the design cycle. In many cases they are not available until very late. Let's say you are designing a module for low power. At that point, people don't even know what testbenches or test vectors are going to be run through it.

LPHP: What overall concepts are crucial for low-power design?
Bhasker: A lot of people don't understand the basics of how power is calculated, where it comes from, what the tradeoffs are, how to decide between technologies, and what kind of architecture you need for more or less power – it's a tradeoff. You cannot optimize for power, area, speed and schedule all at the same time. People want all four, but you cannot have them. They want everything in six months; it's not possible.
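The basics Bhasker alludes to reduce to a simple model: total power is dynamic switching power (α·C·V²·f) plus leakage (roughly I_leak·V), and process choice trades one against the other. The sketch below illustrates that tradeoff with two made-up "technology" rows; none of the numbers come from the discussion.

```python
# Simple total-power model: P_total = dynamic + leakage,
# where dynamic = alpha * C * V^2 * f and leakage ~= I_leak * V.
# The two technology rows are illustrative assumptions, not real process data.

def total_power(alpha, cap, v, f, i_leak):
    """Return (dynamic, leakage) power in watts for one operating point."""
    dynamic = alpha * cap * v ** 2 * f
    leakage = i_leak * v
    return dynamic, leakage

# (name, supply V, clock Hz, leakage current A) -- a fast, leaky process
# versus a slower low-power one.
technologies = [
    ("high-performance", 1.0, 1.0e9, 50e-3),
    ("low-power",        0.9, 6.0e8,  2e-3),
]

for name, v, f, i_leak in technologies:
    dyn, leak = total_power(alpha=0.15, cap=1e-9, v=v, f=f, i_leak=i_leak)
    print(f"{name:>16}: dynamic {dyn * 1e3:6.1f} mW, "
          f"leakage {leak * 1e3:5.1f} mW at {f / 1e6:.0f} MHz")
```

Under these assumed numbers, the faster process buys 1 GHz at the cost of an order of magnitude more leakage, which is the kind of architecture-versus-technology tradeoff that cannot be optimized away along with area and schedule at the same time.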
Vittal: If you take any design, there's a lot of redundancy, a lot of stuff we do that we don't have to do. If you take a register that is being clocked when its data isn't needed, that activity is unnecessary in the design. So there's a lot of stuff we can get rid of. There's a lot of work that can be done to compute only what you need.
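Vittal's register-and-clock example is the classic case for clock gating: gating drops a register bank's clock activity from every cycle to only the cycles when its enable is high. The sketch below quantifies the idea; the register count, clock-pin capacitance, supply and enable duty cycle are all illustrative assumptions.

```python
# Power burned by the clock pins of a register bank:
#   P = N * C_clk * V^2 * f * activity
# 'activity' is the fraction of cycles the clock actually toggles the bank.
# All parameter values below are illustrative assumptions.

def clock_power(n_regs, c_clk_per_reg, v, f, activity):
    """Clock-pin switching power of a register bank, in watts."""
    return n_regs * c_clk_per_reg * v ** 2 * f * activity

n_regs, c_clk, v, f = 10_000, 2e-15, 0.9, 1e9

ungated = clock_power(n_regs, c_clk, v, f, activity=1.0)   # clocked every cycle
gated = clock_power(n_regs, c_clk, v, f, activity=0.10)    # enable high 10% of cycles

print(f"ungated: {ungated * 1e3:.2f} mW, gated: {gated * 1e3:.2f} mW "
      f"({1 - gated / ungated:.0%} of clock power removed)")
```

With a 10% enable duty cycle, 90% of the bank's clock power is unnecessary work, which is the "compute only what you need" point in concrete terms.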
Wang: There may be very good technological reasons, but they may not be good economic or business reasons. People are always trying to create a new problem and find a new market, so sometimes it's not really about the technology. ARM calls this 'dark silicon.' In the future, most of the chip will look like a dark sky, with stars lighting up left and right as you are computing. That's maybe the future, because everybody wants to sell more silicon.
Vittal: Power is a big issue. If you take today’s designs, 80% is stuff you don’t have to do.
Nandra: This is what we've learned on the IP side, because we're not building SoCs. We're building blocks that we repeat many times over in different technology nodes. Our latest 28nm designs have less stuff in them than what we did at 130nm, because we've seen them in production over 10 years and we've realized that a particular level shifter, register or clock was superfluous. At the time we did the 130nm design, the engineer thought we absolutely needed it as some kind of redundancy or fail-safe mechanism, but we realized over time that you can actually simplify the circuitry. That works at the IP level. It doesn't work at the SoC level, because at the SoC level you want to add stuff. The way to get power consumption down is really simple: you remove stuff. You keep removing stuff until the circuit stops working, and then you add something back to make it work. That's your lowest power.

Additional resources:
Part one of this series.
Part two of this series.
