Mixed-Signal/Low-Power Design

Experts at the table, part 2: Getting analog and digital design teams to work together; the chip company executives challenge the EDA vendors.


Semiconductor Engineering sat down to discuss mixed-signal/low-power IC design with Phil Matthews, director of engineering at Silicon Labs; Yanning Lu, director of analog IC design at Ambiq Micro; Krishna Balachandran, director of low power solutions marketing at Cadence; Geoffrey Ying, director of product marketing, AMS Group, Synopsys; and Mick Tegethoff, director of AMS marketing, Mentor Graphics. To view part one, click here.

SE: Are there specialist engineers who have to collaborate on these kinds of designs? Is there any conflict about who’s doing what, is anybody holding up the project, something like that? Has that ever happened?

Ying: There’s an organizational issue. We talked about device models. But also, when you do mixed-signal simulation, how do you write a behavioral model for analog? When you do mixed-signal verification at the top level, you need to verify the analog blocks. Who gets to write this? The analog guys are not usually comfortable writing models, and the RTL guys probably don’t understand enough about analog. There’s always a tricky question: Who does the model creation, and who does the model verification? I’ve seen larger organizations that have a dedicated integration group that can bridge both sides, but at most companies either the analog side or the digital side has to do it. It’s an interesting question. I’ll turn it to you guys to see how you manage to create these models.

Lu: We’ve been through a few projects, and it kind of varies. Fortunately, I’m seeing a generation of engineers, a generation of analog designers, who have a better understanding of the digital flow. Sometimes they are pretty good script writers. At least in the projects I’m running right now, I see that gap being closed, but mainly because I have excellent analog designers who really understand digital and can write pretty good models. But you’re right. In some of my previous projects there was some disconnection there, and frankly, we did not have very thorough verification because some analog circuit models were lacking. We actually were burned by that a little bit. But generally speaking, I see analog designers getting better and better at creating models that work with the digital flow.

SE: It seems like the stereotype of crusty old analog engineers is kind of fading away.

Matthews: It’s a mix. On our side, it’s mainly the digital designers who write the models. They’re the ones who need them first. The analog guys are trying to get their layout done while the digital guys need to simulate. Regardless of who writes the models, the analog team or the digital team, the real question I have, and where I would challenge the EDA industry, is: How do we make sure that the model and the block are equivalent? We’ve created ad-hoc methods within our company to do that: thorough review processes, whoever writes the model, the other guy has to sign it off, things like that. At the end of the day, it’s relying on humans and on the quality of whatever verification we created to make sure it’s good. In the digital world, you have formal signoff techniques to make sure that your RTL and your gate-level netlist are all functionally equivalent. I’m not saying you can do the exact same thing on this side, but if there’s a technology out there that can get you there, that would be a big improvement over where we are today.
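As a rough illustration of the kind of ad-hoc equivalence check Matthews describes, the sketch below compares a behavioral model’s simulated output against the transistor-level output for the same stimulus, sample by sample. It is a minimal sketch, not Silicon Labs’ actual flow; the file names, CSV layout, and tolerance are assumptions.

```python
# Minimal model-vs-SPICE equivalence check (illustrative, not a real flow).
import csv

def load_waveform(path):
    """Read (time, value) pairs from a two-column CSV waveform dump."""
    with open(path) as f:
        return [(float(t), float(v)) for t, v in csv.reader(f)]

def waveforms_match(model, spice, abs_tol=0.01):
    """Pass if both traces share a time grid and agree within abs_tol volts."""
    if len(model) != len(spice):
        return False
    return all(abs(mv - sv) <= abs_tol
               for (_, mv), (_, sv) in zip(model, spice))

if __name__ == "__main__":
    model = load_waveform("bandgap_model.csv")  # behavioral-model output
    spice = load_waveform("bandgap_spice.csv")  # transistor-level output
    print("EQUIVALENT" if waveforms_match(model, spice) else "MISMATCH")
```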

Tegethoff: One thing I like to think about is that if you have 10 different customers, you have 10 different challenges, and it helps to break down the type of design, or the type of IC, people are doing. If you have a digital-on-top design, heavy digital with analog blocks, that’s one kind of design. If you have a completely hand-crafted analog sensor going into automotive, or whatever, that has some amount of digital coming in, that’s another. I see some difference in how these types of customers have to address their challenges. You have to verify the top level, yes, and you have to save power here and there, but how you go about it, or who will write the models, is a little bit different. So we find different behaviors based on the type of IC. Grossly speaking, it’s “this is a mostly analog IC with a little bit of digital to interface with the digital world” versus “this is a microprocessor with some analog.” Another thing that’s helpful to think through, building on what both of you said, is that you have to think of your verification as divide and conquer. A lot of our customers will spend a lot of time characterizing a block, making sure it works across all processes, all temperatures, all operating modes. Then you go to mixed-signal, and you want to make sure the right level shifter is there, the right connection is there, and everything is there. You’ve got to pay attention to both of those. Sometimes you may not need to run fully characterized SPICE at the mixed-signal, mixed-mode level, as long as you know that what you’re doing at each stage of the design is going to work together and you’re eliminating risk as you go up higher.

Balachandran: In terms of possible solutions, one would be to use assertions as a way of trying to verify the model. Again, it’s only as good as the assertions that get written, and it’s still prone to human error. It’s not a formal technique for verifying the model, but the use of assertions is probably the best the EDA industry has at this point.
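To make the assertion idea concrete, here is a minimal property check run against a simulation trace, sketched in Python rather than SystemVerilog for readability. The signal names, reference voltage, and settling deadline are illustrative assumptions.

```python
# Hypothetical assertion: whenever `enable` rises, `vout` must reach
# 0.9 * VREF within SETTLE_NS nanoseconds. All limits are illustrative.
VREF = 1.2
SETTLE_NS = 500.0

def check_settling(trace):
    """trace: time-ordered list of (time_ns, enable, vout) samples.
    Returns the times of any enable edges whose settling check failed."""
    failures = []
    prev_en = 0
    for i, (t, en, _) in enumerate(trace):
        if prev_en == 0 and en == 1:                      # rising edge
            deadline = t + SETTLE_NS
            if not any(v >= 0.9 * VREF
                       for (tt, _, v) in trace[i:] if tt <= deadline):
                failures.append(t)
        prev_en = en
    return failures

# Example: the edge at t=0 settles by t=400, so no failures are reported.
print(check_settling([(0, 1, 0.0), (400, 1, 1.15), (600, 1, 1.19)]))
```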

Ying: I agree. Model validation is a tough part, especially for mixed-signal, because it’s vector-dependent. You’re only as good as the vectors you test with. I’m not aware of any formal techniques yet, unfortunately. At this moment, one of my customers is talking about design robustness. He’s running thousands and thousands of corners, just trying to cover all the ground, or at least have confidence that the IP he designed can be used by a number of different applications. It’s more brute force, running more simulations or running smarter corner simulations, to try to take care of that problem, absent formal verification.
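A brute-force sweep of the kind Ying mentions can be as simple as enumerating every process/voltage/temperature combination and dispatching a run for each. In this sketch the corner lists are illustrative, and run_spice is a hypothetical stand-in for a simulator’s batch interface:

```python
# Brute-force PVT corner enumeration (illustrative values throughout).
from itertools import product

PROCESS = ["ss", "sf", "tt", "fs", "ff"]   # process corners
VDD     = [0.9, 1.0, 1.1]                  # supply voltage (V)
TEMP    = [-40, 27, 85, 125]               # junction temperature (C)

def run_spice(process, vdd, temp):
    """Placeholder for the simulator's batch interface; returns pass/fail."""
    ...

for process, vdd, temp in product(PROCESS, VDD, TEMP):
    run_spice(process, vdd, temp)  # 5 x 3 x 4 = 60 runs; real sweeps hit thousands
```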

Tegethoff: I would just throw in one more monkey wrench. For certain applications, like automotive, health care, and so forth, you have to start worrying about reliability. You’ve got to start worrying about other effects. If there is a failure because of aging, or a failure because of electrothermal effects, or whatever, that makes things even more complicated. We see a lot of that as we go into the automotive world and health care. It’s very different from something that is just going into a cell phone. A cell phone doesn’t kill anyone. Yet.

Balachandran: At least we don’t have proof.

Tegethoff: If it’s a car that’s being hacked or there’s some failure with electronics inside people or things people are wearing, that’s different. There are other areas that are going to keep us busy for a long time.

SE: Getting into automotive, military/aerospace, health care, there’s a lot of documentation involved. It’s a different design process.

Tegethoff: The more simulations you have to run, the more time and constraint it adds to your schedule, and the more brute force is not an answer. You have to be smart about what you’re running. It’s like health insurance: how much do you need? You’re trading off premiums against a low deductible. Design teams are making choices like that, and taking risks. It’s not just the methodology; it’s also how you know you’ve done enough before you tape out.

SE: Let’s go off on a very small tangent. What about software development tied in with all this? How does that figure into this mixed-signal, low-power, ultra-low-power process?

Ying: Good question. It depends on the IoT application. From many angles, for example for security purposes, software development itself becomes critical in those kinds of applications. That’s my view. I don’t know if you guys go through software signoff or any type of rigorous process.

Matthews: From my point of view, at least, what we’re seeing as the IoT starts to take off is a bit of a shift in our customer base, from those who have a huge software organization behind them and a very disciplined approach to writing software, to folks who just want to start writing code. They don’t want to wait around. They don’t want to develop libraries. Some people call them “makers.” They just want to be able to get their products done. What we have to do to enable that industry is abstract away the complexity of the mixed-signal part of the chips. We have to provide them libraries so they don’t have to know how to program a DC/DC converter. They don’t have to program the different power-management pieces at a low level. They just need to call basic functions, and things like that, to transition between different power modes or different power settings. That’s what I’m starting to see.
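A sketch of the kind of abstraction Matthews is describing: a vendor library that hides the DC/DC converter and power-mode registers behind a couple of named calls. Every name here, the class, the register address, the mode encodings, is hypothetical rather than an actual SDK.

```python
# Hypothetical vendor library: hides power-management registers behind
# named modes so a "maker" never programs the DC/DC converter directly.
class PowerManager:
    MODES = {"active": 0x00, "sleep": 0x01, "deep_sleep": 0x02}
    PM_CTRL_REG = 0x40                       # illustrative register address

    def __init__(self, write_reg):
        self._write_reg = write_reg          # low-level register writer (from BSP)

    def enter(self, mode):
        """Switch power modes with one call; no converter settings exposed."""
        self._write_reg(self.PM_CTRL_REG, self.MODES[mode])

# Usage (assuming a board support package supplies the register writer):
#   pm = PowerManager(board.write_reg)
#   pm.enter("deep_sleep")
```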

Lu: I kind of agree with Phil on the shift in the customer base. We see the same thing. Customers, especially in China, are more and more dependent on the total solution from the IC provider. In our business, power is really king. It’s the ultimate driving factor of everything. It’s really important to educate the firmware developer or software developer on how to program our MCU to deliver the functionality or performance with the least amount of power. You can do the same thing with all different kinds of programming constructs, but one program is the most power-efficient. That requires more education from our IC design group to the software team.
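The back-of-the-envelope arithmetic behind Lu’s point: two functionally identical programs, one busy-polling and one sleeping between events, can differ in energy by orders of magnitude. The currents and times below are illustrative assumptions, not figures for any real MCU.

```python
# Two programs, same function, very different energy (illustrative numbers).
VDD      = 3.0     # supply (V)
I_ACTIVE = 3e-3    # current while busy-polling (A)
I_SLEEP  = 3e-6    # current in sleep mode, waking on interrupt (A)
PERIOD   = 1.0     # seconds between events
T_WORK   = 1e-3    # seconds of real work per event

poll  = VDD * I_ACTIVE * PERIOD                          # CPU spins all period
sleep = VDD * (I_ACTIVE * T_WORK + I_SLEEP * (PERIOD - T_WORK))

print(f"busy-poll : {poll  * 1e3:.3f} mJ per event")     # 9.000 mJ
print(f"sleep/wake: {sleep * 1e3:.3f} mJ per event")     # 0.018 mJ, ~500x less
```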

SE: How is the tool choice process changing? Are you looking to get more from the portfolios of your various vendors?

Lu: I’m in charge of analog design tools, and I’d like to bring up two challenges to you guys. I’ll start with the digital. Our digital team has a very tough time closing timing and meeting the power requirement. Our traditional approach is all timing-based. What we’re dealing with right now is that we don’t just want to close timing. We also want to have the minimum power consumption. The tools we have are still not there yet. Power optimization is done in the late stages of the project. If it could be done a lot earlier, along with the other optimizations, that would be better. I’m saying this from a layman’s point of view, but I do see that as a challenge. On the analog side, we’re seeing that the circuits are very sensitive to the layout, not in the sense of capacitive coupling, but more in the sense of well-proximity effects. There’s something more that could be done here, too, to have those layout-related effects more accurately represented in the circuit. Certainly we do back-annotation. We do detailed extraction. We run the simulation. Sometimes it’s because the smaller-geometry processes have very stringent density requirements, and different layers have different densities. When we go to the top level, the fill may change those parasitics a little bit, which could be significant for block-level circuit performance.

Matthews: The one thing I would start with, at least on the digital side, is that when it comes to the tools, UPF and CPF have created a whole other vector in the tool flow when you talk about low power, one that was not there before. One thing I would challenge the EDA industry on is to really make sure that all the tools interpret and react to the UPF the same way. I would also challenge each company to make sure that its simulation tool, its formal verification tool, its synthesis tool, and its place-and-route tools are all interpreting and reacting the same way, even within the same company, let alone across the whole EDA industry. We found that when we had to go to the low-power flow, this was a significant adder in terms of how much effort we had to spend to get through the whole process. It’s not just one level shifter. When you have multiple power domains, you have to make sure every level shifter is inserted properly and your isolation cells are all in the right place. And of course, we’re all trying to differentiate, so we all have our little tricks to try to get to the next low-power stage, which is usually something the EDA tools have not been taught to model. We’re always trying to push the envelope in terms of what the flows can do. So that’s a challenge, and I don’t know if there’s a solution or not. It’s a big challenge.

When it comes to modeling mixed-signal blocks in UPF and CPF, there is some notion of power shutoff, but there’s a lot more complexity that goes on with power switching. If there’s a way to continue to evolve and provide more complex models that the tools can understand and interpret, that would be a big step forward.

There are definitely tools out there that do a pretty good job of dynamic power estimation: when you’re just getting your RTL, you run some simulation to get some activity, load that RTL and activity information, and you get a rough estimate of the power consumption from a dynamic perspective. What’s really lacking today, though, is how to do that early power estimation for the analog content. That leads into the analog world: let’s add more capability for modeling power, let’s put more information into our models. At the end of the day, we still have to use spreadsheets to figure out our power estimates.
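The spreadsheet Matthews mentions typically rolls per-block currents and mode duty cycles up into an average-power number. A minimal sketch of that calculation, with illustrative block names, currents, and duty cycles:

```python
# Early power budget roll-up: the "spreadsheet," recast as a script.
VDD = 1.8  # volts

BLOCKS = {                       # per-block current draw (A) in each mode
    "cpu":    {"active": 2.0e-3, "sleep": 1.0e-6},
    "radio":  {"active": 8.0e-3, "sleep": 0.5e-6},
    "analog": {"active": 0.5e-3, "sleep": 2.0e-6},
}
DUTY = {"active": 0.02, "sleep": 0.98}   # time fraction per mode, sums to 1.0

avg_power = sum(VDD * current * DUTY[mode]
                for modes in BLOCKS.values()
                for mode, current in modes.items())
print(f"average power: {avg_power * 1e6:.0f} uW")   # ~384 uW with these numbers
```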

Related Stories
Mixed-Signal/Low-Power Design, Part 1
Part 1: Adding ultra-low-power requirements to a device design is complicating the traditional process of mixed-signal IC design.
Mixed-Signal Design Powers Ahead
The design and verification of current and next-gen mixed-signal designs require a deep understanding of how they will be used.
New Approaches To Low Power Design
There is work to be done in energy-efficient architectures, power modeling and near-threshold computing, but there are many more options available today.


