Experts at the table, part 3: EDA vendors respond to the challenges posed by the chip companies.
Semiconductor Engineering sat down to discuss mixed-signal/low-power IC design with Phil Matthews, director of engineering at Silicon Labs; Yanning Lu, director of analog IC design at Ambiq Micro; Krishna Balachandran, director of low power solutions marketing at Cadence Design Systems; Geoffrey Ying, director of product marketing, AMS Group, Synopsys; and Mick Tegethoff, director of AMS marketing, Mentor Graphics. What follows are excerpts of that conversation. To view part one, click here; part two is here.
SE: Is there a problem with mixed-signal power estimation?
Balachandran: If your power connections are all driven by a CPF or a UPF, and it understands both the analog and the digital connections, with a single driver you have a shot at being able to verify it as you go into implementation. If you follow a mixed-mode flow between CPF and UPF, then you run into all kinds of issues in this mixed-signal, low-power design. So some of it might be that some designers in the companies are using CPF in one part and the UPF IEEE 1801 standard for the other part, and that might cause problems. For example, you need logic equivalence checks not only to make sure that level shifters are inserted and are not redundant, but also that the inserted level shifters support the voltage levels of the domains they connect. Or did the synthesis tool insert the wrong ones? Those kinds of checks are also needed. That’s very critical in a mixed-signal, low-power design—even more so than in a digital design, because you have so many voltages hanging around. In regard to power estimation, tools do exist that have some of the features that are needed—maybe not every one of them, but I know of at least one tool that can calculate the power estimate based on the power mode or power domain in the power format. This helps to evaluate the impact of power in a particular mode of operation, in a sleep mode or in a wake-up mode. There are tools that already do this in the industry. So you don’t have to use spreadsheets unless you want to. But in terms of modeling all the analog effects, we don’t have a standard for that. It’s always been that the EDA companies have been a step ahead of the formats. So the formats and the standardization come later. Before the standardization, there are usually switches in the tools or some other techniques. That’s been kind of the modus operandi for all these customers in the past.
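For readers unfamiliar with power intent formats, the checks Balachandran describes are driven by declarations like the ones below. This is an illustrative IEEE 1801 (UPF) fragment, not taken from any design discussed here; all domain, supply, and instance names are hypothetical.

```tcl
# Illustrative UPF (IEEE 1801) power-intent sketch. Domain, net, and
# instance names (PD_AON, PD_DIG, u_afe, u_ctrl, VDD_A, VDD_D) are
# hypothetical.

# Two power domains: an always-on analog domain and a switchable
# digital domain, each with its own primary supply net.
create_power_domain PD_AON -elements {u_afe}
create_power_domain PD_DIG -elements {u_ctrl}

create_supply_net VDD_A -domain PD_AON
create_supply_net VDD_D -domain PD_DIG
create_supply_net VSS   -domain PD_AON
create_supply_net VSS   -domain PD_DIG -reuse

set_domain_supply_net PD_AON -primary_power_net VDD_A -primary_ground_net VSS
set_domain_supply_net PD_DIG -primary_power_net VDD_D -primary_ground_net VSS

# Level-shifter strategy on signals entering the digital domain.
# Verification has to confirm that the inserted shifters match this
# rule: right direction, right voltage range, no redundant instances.
set_level_shifter LS_IN -domain PD_DIG -applies_to inputs \
    -rule low_to_high -location self
```

With a single power intent file like this driving both the analog and digital sides, implementation and verification tools can at least check the same set of rules; the mixed CPF/UPF situation he describes has no such single source of truth.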
SE: How about with UPF 1801?
Balachandran: That will take time for users to move to that. Even now, the tools are ahead of the format. In some cases, the format is still playing catch-up. It takes a long time from the day the standard is proposed. It takes two to three years for the format to get ratified, and then it takes another two years for the customers to start adopting it. They have to understand the syntax and the semantics. There’s usually confusion about the semantics of the format when it gets released. And then it has to go back again to the standardization body for clarifications, and so it’s another two years. All told, it’s about a five-year process for the formats to settle down and for customers to be able to use them effectively in a design flow. Customers cannot wait. They have a chip now. They have to get it done now. They have to squeeze the power now. EDA companies are responding to that by saying this is how you do it in the tools. There are some special features in the tools that handle it. Many of the tools do it today. They do different things, of course, and they don’t do it the same way. The problem is more acutely perceived in a mixed-vendor flow. That’s what the customers are facing today, in the absence of completely well-defined standards for modeling. That’s my take on it.
Ying: I agree. A lot of times standardization is a very, very challenging process because you’re dealing with different scenarios. 1801 is definitely the right thing to do, moving in that direction, but it’s still not enough. I don’t have a silver bullet to say here’s how we can address this. We need to work on it harder.
Tegethoff: I’m going to take a little different tack on it here, because we talked a lot about these different power modes, shutting something down, bringing it back up, inrush currents, parasitics, V(t) effects, and all of these things. When I think of the verification problem, I see it as a coverage problem, a test problem. How many things do I need to do to make sure everything is going to run? With the analog portion, I’ve got to run sweeps, I’ve got to run corners. I want to understand where all my worst-case corners are. I’ve got to run Monte Carlo, but how much Monte Carlo do I do? How many iterations do I have to do? Now I’ve got to include the random device noise. Now I’ve got to do post-layout simulation. When I think of power modes, it’s almost like you add a new vector that complicates this. Now I’ve got to do all this again on each power mode. Now I think about layout proximity. You can see the explosion of complexity every new variable can add. As you’re pushing yourself tighter and tighter on voltage margins, tighter and tighter on power budget, the cost of a mistake is very high. It’s really very complex. Beyond all of these things, even if we had all of the right standards, how many of these simulations am I going to run? How many servers do I need? How many licenses do I need before I can take this thing out and not lose my market? Power is just another vector that you’ve got to deal with.
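The multiplication Tegethoff describes is easy to see with a back-of-the-envelope count. The numbers below are illustrative assumptions, not figures from the discussion: modest corner, sweep, and Monte Carlo counts already reach tens of thousands of simulations once each power mode has to be re-verified.

```python
# Back-of-the-envelope sketch of how power modes multiply analog
# verification effort. All counts are illustrative assumptions.

corners       = 5    # e.g. process/voltage/temperature corners
sweep_points  = 10   # parametric sweep points per corner
mc_iterations = 200  # Monte Carlo samples for mismatch and noise
power_modes   = 4    # e.g. active, sleep, deep sleep, wake-up

# Every Monte Carlo sample is swept across every corner...
runs_per_mode = corners * sweep_points * mc_iterations

# ...and the power modes act as one more multiplying vector.
total_runs = runs_per_mode * power_modes

print(runs_per_mode)  # 10000 simulations to cover a single mode
print(total_runs)     # 40000 once every power mode is included
```

Each added variable (device noise models, post-layout parasitics, layout proximity effects) is another factor in the same product, which is why the cost in servers, licenses, and schedule grows so quickly.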
SE: Indeed, but a very important one.
Lu: Traditional I/Os are for regular, normal-power chips. In their standby mode, they burn tens of microamps, hundreds of microamps, right? Our chips, in the low-power standby mode, burn hundreds of nanoamps. If you look at the spec, especially for the lower-geometry processes, those I/Os alone leak tens of nanoamps. How do you make a chip that consumes only hundreds of nanoamps? Definitely, there’s a challenge there. Because of the lack of IP, we have to do some internal, in-house development to drastically reduce the power. Generally, it’s a structural thing. We need better tools and better IP to bring us to the next level.
Balachandran: That’s a very good summary. It’s kind of like a perfect storm. You need support from the process technology side, which is why the foundries started introducing low-power processes in mature nodes. That was to help the customers. They did their part. EDA companies are trying to put tool flows together. Early visibility for power is very important because that’s where we make a lot of the very crucial architecture decisions. Having that piece is also very important. There have been a lot of problems with verification today. Assembling the chip is not a trivial task. Making sure that everything is connected properly and assembled correctly in place-and-route, the physical part of it, is very, very important. A lot of things can go wrong in a mixed-signal design, and a lot more things can go wrong in a mixed-signal, low-power design, as we’re seeing with customers. That requires a very rigorous enforcement of the policies that are present in the power format, and for the tools to follow them diligently. There are two styles of assembling a mixed-signal, low-power design that tools need to support. There’s analog on top, where the assembly is mostly done in a custom analog environment, and there’s digital on top, where it’s done mostly in a digital environment. Tools need to support both flows in both environments with a total understanding of all the connectivity information, including all the voltage levels. If that’s not handled properly, you aren’t going to get a working chip.