Mixed-Signal/Low-Power Design

Experts at the table, part 1: Adding ultra-low-power requirements to a device design is complicating the traditional process of mixed-signal IC design.


Semiconductor Engineering sat down to discuss mixed-signal/low-power IC design with Phil Matthews, director of engineering at Silicon Labs; Yanning Lu, director of analog IC design at Ambiq Micro; Krishna Balachandran, director of low power solutions marketing at Cadence; Geoffrey Ying, director of product marketing, AMS Group, Synopsys; and Mick Tegethoff, director of AMS marketing, Mentor Graphics.

SE: Analog/mixed-signal design has always been tough. But now we’ve got low power to consider as well. How is that making the whole process tougher?

Lu: Much tougher. First of all, you need accurate models in order to have a reliable design. We have found that back-end timing is much more difficult to close because of the sensitivity of the transistors to process variations.

Matthews: There are multiple aspects to it. I agree with what Yanning was saying about the models being important. One thing we’re finding with the type of chips we’re making, mixed-signal and low power, is that we’re using processes that have already been through the fab. First we were at 180 nanometers. Then you go to 90nm. Then you’re looking at the next node. When the foundries add flash, when they add the low-power features, it changes the process. So even though you can say, ‘Oh, 90nm, that’s a 15-year-old process,’ when they add the new features to it, it changes. While the models are there, and they’ve been there for a long time, the performance of the device changes when they add flash and when they add low power. On top of that, because of the way we have to use the transistors, we’re not using them in the range that was carefully modeled. We’re using them at much lower voltage levels, in regions where the fabs have not typically modeled the performance of the device very carefully.

On the mixed-signal side, it takes a lot of cooperation between what the analog design team has to do and what the digital design team has to do. For that, we have to do a lot of top-level, mixed-mode simulation covering both analog circuits and digital circuits. You do your best to model the analog circuits behaviorally so that you can get fast cycles through, but at the end of the day it really does come down to, at some point, having to do that top-level simulation with the digital represented in Verilog and the analog in a circuit simulator of some sort. That’s also a big challenge for us: getting to the point where you can simulate a chip in some useful modes at that level of complexity.
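
To illustrate the abstraction Matthews describes, here is a minimal sketch, in Python rather than Verilog-AMS, of an analog block reduced to a behavioral function so a top-level simulation can iterate quickly. The comparator, reference voltage, and input waveform are invented for illustration; a real flow would use a real-number or Verilog-AMS model co-simulated with the RTL.

    # Illustrative only: a behavioral stand-in for an analog block so a
    # top-level "mixed-mode" loop runs fast. All names and values are invented.

    def comparator_behavioral(vin, vref):
        """Zero-delay behavioral model of an analog comparator."""
        return 1 if vin > vref else 0

    VREF = 0.6                                       # assumed reference, volts
    samples = [0.20, 0.40, 0.55, 0.62, 0.70, 0.58]   # assumed analog input per cycle

    count = 0                                        # stands in for a small RTL counter
    for vin in samples:
        count += comparator_behavioral(vin, VREF)

    print(f"counter value after {len(samples)} cycles: {count}")  # -> 2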

Tegethoff: With low power, even if you just look at an analog block, there is still quite a bit of margin that gets designed into the analog functionality today, and a lot of the way you add margin to an analog design is by increasing power. Even before you get to mixed signal, you have to become much more accurate in how you characterize your analog block, for noise and all of these things, so you’re not using power just to meet the spec of the chip. In an ultra-low-power situation that becomes even more of a concern. On top of that, you have to do the mixed signal. And if you’re really in an IoT world, you have something that’s going to have to operate on a battery for 10 years. You’re going to have to know where every tiny bit of power is going. There’s still a lot of slack in analog design. You just throw more current at it and the noise goes down. That’s how they do it.
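
To put rough numbers on the 10-year battery point, here is a back-of-the-envelope budget in Python. It assumes a CR2032-class coin cell of roughly 225 mAh at a nominal 3 V, with no derating for self-discharge, temperature, or peak-current effects, so the real budget is tighter still.

    # Average current a device can draw if it must last 10 years on one
    # coin cell (assumed 225 mAh, 3 V nominal).
    capacity_mah = 225.0
    years = 10
    hours = years * 365 * 24                      # ~87,600 hours

    avg_current_ua = capacity_mah * 1000 / hours  # microamps
    avg_power_uw = avg_current_ua * 3.0           # microwatts at 3 V

    print(f"average current budget: {avg_current_ua:.2f} uA")  # ~2.57 uA
    print(f"average power budget:   {avg_power_uw:.2f} uW")    # ~7.7 uW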

Matthews: All engineers want margin, not just analog designers. There are always going to be a lot of hooks in place for what you have to do, because you know the models may not be 100% accurate, so you may have to build in some margin because of that. On top of that, you have to add more complexity to your design to take into account that some things may not work as you originally expected. You may have to increase your refresh rates on certain things, so you have to take some of those things into account as well. And on the positive side, maybe you can actually reduce some things.

Tegethoff: I agree.

Ying: Because the devices are operating in the subthreshold region, the characterization of the models from the foundry is usually very crude. It creates discontinuities in the model, and with a circuit simulator that causes time-step problems. Whatever the problem is, an ill-conditioned matrix or whatever, it becomes very nasty. What we do with a customer on a new process is requalify the models with them, together with the foundry, to make sure everything is proper in the subthreshold region. That’s one aspect. For mixed-signal simulation, we do see a lot of these challenges, particularly now with low-power design. At the transistor level, we’ve always had low power; voltages are continuous there, so it’s not a problem. On the digital side, UPF (Unified Power Format) creates a lot of new challenges for the tools to understand the UPF so we can handle the interface properly for co-simulation purposes. Those are some of the new challenges that we see working with IoT customers. And then, a lot of times there are multiple voltage domains. Some of the issues created by these voltage domains are very, very difficult to simulate with dynamic simulation, like a circuit simulator. For example, take level shifters. You insert all the level shifters, and those things are very difficult to check from a dynamic simulation point of view. People are looking for special ways to identify these things. It could be a thin-oxide device that by accident you put in a 1-plus-volt domain, and then it would just die. Those kinds of things are some of the new challenges we see in dealing with ultra-low-power.
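
A toy model helps show why crude subthreshold characterization hurts. The Python sketch below uses an invented piecewise drain-current model; if the square-law and exponential regions are not stitched together smoothly, the current is discontinuous exactly where an ultra-low-power design biases its transistors, which is the kind of behavior that gives a SPICE-style solver time-step and convergence trouble.

    import math

    # Naive piecewise MOSFET drain-current model. All parameters are invented
    # placeholders; no foundry model looks like this.
    VTH = 0.4                 # threshold voltage, V (assumed)
    K = 2e-4                  # square-law factor, A/V^2 (assumed)
    I0 = 1e-7                 # subthreshold pre-factor, A (assumed)
    N_VT = 1.5 * 0.0258       # slope factor times thermal voltage at room temp

    def ids_naive(vgs):
        if vgs >= VTH:
            return 0.5 * K * (vgs - VTH) ** 2       # square-law saturation
        return I0 * math.exp((vgs - VTH) / N_VT)    # subthreshold exponential

    for vgs in (0.35, 0.39, 0.40, 0.41, 0.45):
        print(f"Vgs={vgs:.2f} V  Ids={ids_naive(vgs):.3e} A")
    # Just below VTH the exponential gives a current on the order of I0,
    # but at VTH the square-law term is exactly zero: the model jumps.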

Balachandran: Some of the real challenges in a mixed-signal, low-power design start with the challenges of a traditional mixed-signal design. You’ve got an analog world and a digital world, and they don’t talk to each other, normally. If you try to exchange file-format information between the analog and digital worlds, it’s lossy, so the analog designer cannot easily communicate constraints to the digital designer, and the digital designer cannot come back with a set of constraints for the analog designer. So there are two worlds, and in many companies those two worlds exist in geographically different locations. This becomes a huge challenge. Now you throw low power on top of it, and you’ve got to communicate not only the design constraints but also the power constraints from one world to the other. Even the way power is represented in the two worlds is very different, analog versus digital. There is no notion of voltage in RTL, whereas in an analog design the schematic explicitly ties every signal to a particular voltage. It is correct by construction, and if it’s not, the analog designer fixes it in an iterative loop. Within that environment, self-checking is done. In the digital environment, the whole concept of voltage came about only with CPF and UPF. That’s the way to communicate it. How do you tie these two worlds together? One world is entirely in a schematic-based environment, where the voltages are represented and connected. In the other world it’s represented in a separate file format, which is not part of the design. It’s a separate file. And the two have to interact with each other. That’s the first problem, which is the communication itself.

The second problem is what happens at the interface of the two domains. You’re switching from a zero/one logic level to some kind of an absolute voltage on the analog side, and then switching back from this absolute voltage into the zero/one world. Whether it’s simulation or implementation, you’ve got challenges on both sides. We discussed the simulation angle, but there also is an implementation angle. When you’re going through the design and implementing it, you have to make sure that whatever is connected is connected properly, that there are no hanging or open wires in those domains, and that no wrong level shifter got inserted. That is a very critical part, and you have to have really solid checkers as you go through the implementation flow. It’s not just, ‘Okay, I’ve done my simulation, I’ve done a really good analog/mixed-signal simulation, and I’m done. The rest of the design is going to take care of itself with the existing tools or the existing flow.’ That’s not the case. You’re going to have problems. So you’ve got to pay really close attention to that, all the way until the GDS is done.
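
A minimal sketch of the kind of static sign-off check Balachandran describes might look like the Python below. The netlist format, domain names, and voltages are all invented for illustration; real flows derive this information from UPF/CPF plus the physical netlist.

    # Flag domain crossings that have no level shifter (toy example).
    DOMAIN_VOLTAGE = {"VDD_AON": 0.8, "VDD_DIG": 0.6, "VDD_ANA": 1.8}

    # (driver, driver_domain, receiver, receiver_domain, has_level_shifter)
    connections = [
        ("bias_dac",   "VDD_ANA", "ctrl_fsm", "VDD_DIG", True),
        ("ctrl_fsm",   "VDD_DIG", "bias_dac", "VDD_ANA", False),  # missing shifter
        ("wake_timer", "VDD_AON", "ctrl_fsm", "VDD_DIG", True),
    ]

    for drv, d_dom, rcv, r_dom, has_ls in connections:
        if d_dom != r_dom and not has_ls:
            print(f"ERROR: {drv} ({d_dom}, {DOMAIN_VOLTAGE[d_dom]} V) drives "
                  f"{rcv} ({r_dom}, {DOMAIN_VOLTAGE[r_dom]} V) with no level shifter")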

SE: What typically goes wrong?

Balachandran: One of the main things that I’ve seen over and over again with mixed-signal/low-power customers is incorrect or missing level shifters between analog and digital domains, and that can kill the chip completely. It’s a very big deal for analog/mixed-signal designs. The other part is the desire to squeeze out every last drop of power from these designs. These designs, especially because they’re targeting the IoT space, are all running on the premise that you don’t need a battery to run them. If you can harvest energy from the environment, run it on that. Or you put it in an industrial setting, which is very harsh, and you’re not going to go there and change the battery every so often. What you need is something that sips energy from the device, not something that consumes it outright. When you are talking about sipping, you’re talking about picowatts of power. The customers I’m dealing with are saying, ‘We want a really accurate picture, and not just of the analog, because these analog designers are really, really good at cutting down the power.’ The digital designers are relying on the automatic tools because they’re not doing custom logic. They were in the past, for mixed-signal designs, because mixed-signal designs were small. But as the digital content of mixed-signal designs grows, you can no longer hand-tweak a little bit of logic. It’s not a little bit of logic anymore. It’s sometimes millions of instances, so you need a solid methodology to estimate the power very accurately and then to cut down every last bit of power from the digital logic, which is very, very important. In mixed-signal companies, this functionality often sits organizationally within the analog groups. The analog designers are learning the digital flow, and in some cases they have no idea how to reduce this power. They have to hire a whole team of experts. This is something I’m seeing across companies. It has become a really big challenge to squeeze the last picowatt of power out of these mixed-signal/low-power designs, and it is creating a problem for the industry.
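
For a sense of scale, here is a rough digital power estimate in Python using the familiar P = alpha * C * V^2 * f relationship plus a leakage term. The capacitance, activity factor, clock rate, and leakage numbers are invented; the point is how strongly supply voltage drives the result, which is why squeezing the digital side matters so much at microwatt budgets.

    # Rough digital power estimate with invented numbers.
    def digital_power_uw(alpha, c_switched_pf, vdd, f_mhz, i_leak_na):
        dynamic_w = alpha * c_switched_pf * 1e-12 * vdd ** 2 * f_mhz * 1e6
        leakage_w = vdd * i_leak_na * 1e-9
        return (dynamic_w + leakage_w) * 1e6  # microwatts

    # Assumed design: 100 pF switched capacitance, 10% activity, 10 MHz clock.
    for vdd in (1.2, 0.9, 0.6):
        p = digital_power_uw(alpha=0.1, c_switched_pf=100.0, vdd=vdd,
                             f_mhz=10.0, i_leak_na=500.0)
        print(f"VDD={vdd:.1f} V -> ~{p:.1f} uW")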

1 comment

Kev says:

“For example, take level shifters. You insert all the level shifters, and those things are very difficult to check from a dynamic simulation point of view.”

Actually they aren’t; it’s just that the methodology peddled by the EDA companies sucks. It was invented before multi-power-domain designs, DVFS, etc., and they refuse to fix it (or don’t know how). There are fairly simple ways to fix the modeling so that stuff just works, but the EDA companies are hugely resistant to that since they make plenty of money the way things are.

None of the folks above have been seen at standards committees asking for fixes to the simulation languages; some have been conspicuous by their absence or by roadblocking behavior.
