Know What To Look For

It’s not possible to verify all of the operating scenarios within different power domains in complex SoCs. So what should you be looking for?


With the number of power domains exploding in today's ICs, it's extremely difficult to cover all of the different operating modes in verification.

“The problem was already challenging enough,” observed Mark Baker, director of product marketing at Atrenta. “Just looking at where SoC design was going was a collection of various IPs, the different communication protocols, the bus fabric complexity — and then here comes the mobile market on top of it with all the demands on battery life and the need to sustain a long-term battery charge on your cell phone and any portable devices, and then access to all these cool and wonderful applications. What it has forced are significant new challenges on top of the already existing challenges for verification. Specifically, the power verification piece is really divided.”

To some, power verification is primarily a static, structural check. But the functional piece is also a significant problem.

“Power itself brings in a new element,” Baker said. “You’re looking at the challenges of all of these shutdown states and power up sequences and the idea of trying to save battery life by managing through a number of different voltage domains. All of these provide a new set of challenges to the already high complexity. So can we [verify all of the operating scenarios]? We have to. We have to be able to simulate or verify across all these different operating modes or we’re going to go down into some pretty significant black holes in terms of where the device is going to end up.”

The challenge for the EDA industry is how to manage that with abstraction and reduction techniques that keep verification turnaround time reasonable, Baker continued. "What the industry will be looking at continually is this full chip versus top-down/bottom-up strategy and abstraction. It will all be about abstraction. If you look at IP verification and building blocks on top of that SoC level, and being smarter about your modeling and being able to understand the context of the IP within the SoC context. That's where the focus will continue to be."

From a high-level perspective, one way to make the battery last longer is to reduce the power consumed on the die.

“Traditional techniques of clock gating don’t help to save on power because as you go deeper into the technology – 28nm and below – leakage is the dominant factor,” noted Girish Gaikwad, director of SoC engineering at design house Open-Silicon. “There is no way to deal with that but to shut down your power domains. Power domains can be controlled dynamically based on certain functional conditions, such as switching off or on certain power domains in the chip. Another way is to adaptively scale up and down the voltage and frequencies by looking to the operating conditions of the chip (temperature, etc.) so that you are still able to increase the battery life. For both of these scenarios, dynamic or adaptive power techniques are still not addressed by any of the FPGA prototypes.”
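As a rough illustration of the two techniques Gaikwad describes, dynamic power gating and adaptive voltage/frequency scaling, consider the minimal Python sketch below. The domain names, temperature thresholds and operating points are all invented for the example and do not come from any real flow:

```python
# Illustrative model of the two techniques Gaikwad describes: dynamic power
# gating and adaptive voltage/frequency scaling. All names, thresholds and
# operating points are invented for this sketch, not taken from a real flow.

from dataclasses import dataclass

@dataclass
class PowerDomain:
    name: str
    is_on: bool = True

def dynamic_gating(domain: PowerDomain, workload_active: bool) -> None:
    """Power gating: switch a domain off or on from a functional condition.
    Shutting a domain down is the main lever against leakage at 28nm and below."""
    domain.is_on = workload_active

@dataclass
class OperatingPoint:
    voltage_v: float
    freq_mhz: int

def adaptive_scaling(temp_c: float) -> OperatingPoint:
    """Adaptive scaling: pick a voltage/frequency pair from operating
    conditions such as temperature. Thresholds here are invented."""
    if temp_c > 85.0:                    # hot: back off to cut power
        return OperatingPoint(0.80, 400)
    if temp_c > 60.0:
        return OperatingPoint(0.90, 800)
    return OperatingPoint(1.00, 1200)    # cool: full speed

gpu = PowerDomain("gpu")
dynamic_gating(gpu, workload_active=False)   # idle, so the domain is gated off
print(gpu, adaptive_scaling(temp_c=72.0))
```

The point of the sketch is that both decisions happen at run time, which is exactly why a static FPGA prototype cannot exercise them.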

He noted that emulation vendors are pushing low-power verification as a solution, but he said the best solution is still simulators using power formats (UPF and CPF). “That’s a very time consuming process,” said Gaikwad. “With isolation cells, level shifters and power gates coming in the netlist, gate-level simulations are a nightmare.”

He acknowledged that the verification flow is there – the tools are available – but they need to do this verification at the RTL and netlist phases. “Of course in RTL there is no sense of power and voltage and current, but there are tools that allow you to mimic those. But when you go to the gate-level simulations the same tools will allow you to get the real feel of it. Having said this, I still don’t see that RTL is going to some silicon like for an FPGA, and there I can see some voltage shut off and bring up – that is still not possible when given a real scenario. It’s good enough when there are one, two or three power domains, but now designs regularly contain 15 or 20 power domains. It’s not only dynamic, it’s adaptive. It’s a challenge.”

This raises the question of whether it is possible today to verify all of the operating scenarios across different power domains in an SoC without it taking weeks or longer, reducing simulation time without compromising accuracy.

“The answer is no,” according to Koorosh Nazifi, engineering group director for low power and mixed signal initiatives at Cadence, “if I look at the problem statement in a narrowed scope, which has only to do with complexity introduced given different modes of operations, given the number of power domains and potential large number of different modes that could be theoretically generated.”

Nazifi noted the cases in which there were potential scenarios for up to 80,000 or more power modes generated based on the number of power domains. “It can be a pretty complex situation, but the reality is that what we recommend to our customers in this particular space is very similar to our recommendation for functional verification methodologies.”
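The scale of that number is easy to reconstruct with a back-of-the-envelope calculation. Assuming $N$ power domains that can each be switched among $k$ states independently (the domain counts below are assumptions for illustration, not Cadence's figures), the theoretical mode count is $k^N$:

\[
\text{modes} = k^{N}, \qquad 2^{16} = 65{,}536 \;<\; 80{,}000 \;<\; 2^{17} = 131{,}072
\]

So roughly 17 independently switchable on/off domains already push the theoretical space past 80,000 modes, which is why the space has to be pruned before any verification plan can be written.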

One approach to functional verification methodology is what Cadence refers to as metric-driven verification. The first step is to create a top-level plan, which could be the functional specification for the design that needs to be verified. From that top-level specification, the metrics and coverage that need to be monitored during the verification cycle are extracted, so the appropriate reports can be generated on how well the testbenches are exercising the circuit against the plan. The engineering team can then see where the gaps are, and where they need to increase coverage and add more test vectors, he explained.

This has been extended to low power: the power specification as defined in CPF (and soon UPF) can be used to extract the power intent and generate the appropriate metrics, coverage points and assertions.
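The mechanics can be pictured with a small sketch. The power-intent format below is invented for illustration (real flows read CPF or UPF), but it shows the basic move: enumerate the legal states and transitions as coverage points, then report which ones the testbench has actually hit:

```python
# A toy illustration of extracting coverage points from a power specification.
# The dictionary format is invented for this sketch; real flows read CPF or UPF.

power_intent = {
    "domains": ["cpu", "gpu", "modem"],
    "legal_states": [            # architect-defined chip-level power states
        {"cpu": "on",  "gpu": "on",  "modem": "on"},   # fully active
        {"cpu": "on",  "gpu": "off", "modem": "on"},   # graphics idle
        {"cpu": "on",  "gpu": "off", "modem": "off"},  # airplane mode
    ],
}

def coverage_points(intent):
    """Enumerate what the testbench must exercise: every legal power state,
    plus every state-to-state transition (a naive all-pairs assumption)."""
    n = len(intent["legal_states"])
    points = [("state", i) for i in range(n)]
    points += [("transition", i, j) for i in range(n) for j in range(n) if i != j]
    return points

# Pretend the simulator's coverage collector reported these hits.
hit = {("state", 0), ("state", 1), ("transition", 0, 1)}

def report(intent):
    total = coverage_points(intent)
    covered = [p for p in total if p in hit]
    print(f"power coverage: {len(covered)}/{len(total)} points hit")
    for p in total:
        if p not in hit:
            print("  gap:", p)

report(power_intent)
```

The gap report is what closes the loop: it tells the team which power states and transitions still need test vectors.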

In addition to functional verification, physical and electrical design verification is growing in importance for SoCs today, asserted Arvind Shanmugavel, director of application engineering at ANSYS-Apache. “There are a whole lot of things going on with the functional side where you need to verify the power intent. However, once you implement the power intent correctly you have to verify the physical and electrical aspect of implementation. Ensuring that you are performing noise coupling simulations between multiple power domains, off-state leakage simulations, power-up simulations are all part of physical and electrical validation. All of these are a big part of power-aware verification.”

He noted that power-aware design on the physical side comes first, of course, starting with the selection of the power grid architecture, the package and the number of power domains. All of these are part of power-aware design, and the designer needs a very clear idea of how the IC will be designed with power in mind.

Still, whatever method is chosen, there is logic behind why 15 and 20 domains are even possible, offered Lawrence Loh, vice president of engineering at Jasper Design Automation. “Let’s say you have 20 power domains. Prior to the power domain stage, system-level verification was already a challenge by itself — how to get enough coverage. At the system level with one domain it’s not possible to try all possibilities, not to mention when you take that to 20. There is a reason why 20 power domains can be at least pseudo-independently operated, and that is because the architect defines which of the possible domain combinations is legal — we call it power states. When you factor that in, there are a large handful of power states that are legal. So we narrow it down and then it depends on the verification methodology you use.”
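A short sketch makes Loh's point concrete. With five hypothetical on/off domains there are 32 raw combinations, but a few architect-defined rules (invented here for illustration) collapse that to a much smaller legal set:

```python
# Sketch of Loh's point: an architect-defined legal-state table collapses the
# exponential space. Domain names and rules are invented for illustration.

from itertools import product

domains = ["cpu", "gpu", "dsp", "modem", "camera"]   # 2**5 = 32 raw combinations

def is_legal(state):
    """Architect's rules: gpu/camera need the cpu on; the dsp needs the modem."""
    on = dict(zip(domains, state))
    if (on["gpu"] or on["camera"]) and not on["cpu"]:
        return False
    if on["dsp"] and not on["modem"]:
        return False
    return True

legal = [s for s in product([False, True], repeat=len(domains)) if is_legal(s)]
print(f"{len(legal)} legal power states out of {2 ** len(domains)} combinations")
# -> 15 legal power states out of 32 combinations
```

At 20 domains the same filtering matters far more, since the raw space is over a million combinations while the legal table may hold only dozens of states.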

The most common approach today is somewhat brute-force simulation, Loh said, along with power-aware simulation. Both have problems with exhaustiveness, and both require a lot of judgment about which power states are possible and which situations make sense. Even with many power domains, constrained-random techniques are used, just as with one domain, to determine the important things to cover. Alternatively, there are quite a few new capabilities in different tools that can help address different aspects of the verification.

When engineering teams start splitting power domains into the teens and twenties, it's not because they like to do that. It's because the power requirements are so aggressive, Loh said.

The first step is to calculate how many power states there are. The next is to figure out how these power states are supposed to work together, and why. The tricky part typically comes from how the different power domains are turned on and off, and how that affects functionality. It boils down to the engineering team knowing what to look for, he concluded.
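That last step, checking how domains turn on and off relative to one another, can be pictured as a check over an on/off event trace. A minimal sketch, with invented domain names and dependencies:

```python
# Toy check for the sequencing problem Loh describes: a domain must never be
# on while a domain it depends on is off. Names and dependencies are invented.

depends_on = {"gpu": "cpu", "camera": "cpu", "dsp": "modem"}

def check_sequence(trace):
    """trace: ordered list of ('on'|'off', domain) power-switch events."""
    powered = set()
    for step, (event, dom) in enumerate(trace):
        if event == "on":
            powered.add(dom)
        else:
            powered.discard(dom)
        for child, parent in depends_on.items():
            if child in powered and parent not in powered:
                return f"step {step}: {child} is on while {parent} is off"
    return "sequence OK"

# A buggy power-up order: the gpu is brought up before the cpu.
print(check_sequence([("on", "gpu"), ("on", "cpu")]))
# -> step 0: gpu is on while cpu is off
```

In a real flow this kind of rule would be expressed as assertions derived from the power intent, but the underlying question is the same: which orderings of domain switches are legal, and which break the design.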


