Designing and verifying current and next-generation mixed-signal devices requires a deep understanding of how those devices will be used.
Mixed-signal devices are at the heart of many advanced systems today because of the need to interact with the outside world, but designing and verifying these systems is getting harder.
There are several reasons for this. First, almost all of these devices now have to be lower power than in the past, and in the analog space it’s not as simple as just dialing down part of a block. Second, designing and verifying them requires a deep understanding of the intricacies of this domain, not just the possible use cases. And finally, there is a long list of verification tradeoffs that need to be made to get these designs to tapeout.
“Cars today have a lot of mixed signal content in them, and a lot of that is becoming low power because in self-driving cars you have this need to communicate,” said Krishna Balachandran, product management director at Cadence. “There are many sensors, and all kinds of [readings] have to be picked up. Sensors are analog components primarily, which then have a digital front end because the data that is gathered has to be converted from the analog to the digital domain. Then it has to be sent to different processing units, which are digital chips, and then some kind of action has to be taken where the signal has to be converted back into analog for the car to do anything — to make a turn or prevent you from getting hit, or perform lane assistance. All of these driver assist features being developed are all mixed-signal, and they have to be low power. This is why the whole auto industry is about mixed signal, and increasingly, mixed-signal, low-power.”
Further, he contends that essentially every mixed-signal design is a low power design, driven by the automotive space as well as the Internet of Things.
Defining the power intent of a design has been the subject of standards efforts for the better part of a decade. “Specification of power intent using UPF (Unified Power Format) has been widely adopted for low power designs,” said Mary Ann White, director of product marketing at Synopsys. “UPF allows definition of a supply network that has traditionally been used for digital designs.”
However, she said many of today’s low power designs contain more and more analog content, where designers would like to extend the supply network definition to their mixed-signal designs. The challenges here include accounting for analog supplies and voltage levels that connect to digital blocks, and vice-versa.
It’s clear that progress is being made in this area. There are commercial EDA tools from Synopsys, Cadence, Mentor Graphics and others that allow designers to specify their mixed-signal power supply network entirely in UPF and automatically handle how the voltage levels will be translated across the analog-digital power domains or boundaries. But this remains a difficult challenge for design teams.
White noted that the power supply for an analog/SPICE block can be directly controlled by the UPF specification, and the tool will automatically detect if the boundary to the analog design is a signal or a supply and insert the correct interface elements. “For example, A-to-D (analog-to-digital) and D-to-A (digital-to-analog) elements are inserted for signals; and R-to-E (real to electrical/analog) or E-to-R (electrical/analog to real) are inserted for supply connections. As supplies change, the interface element characteristic will also change, such that analog voltages which rise and fall or have on/off threshold voltages can be modeled as on/off power states.”
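The behavior White describes can be sketched in a few lines of code. The following Python snippet is a simplified, illustrative model of what a D-to-A interface element conceptually does; the function name, signature, and logic are assumptions for illustration, not any tool’s actual implementation. The key idea is that a digital bit maps to a voltage that tracks the supply, and a shut-off supply maps to a powered-down analog level.

```python
# Illustrative sketch (not any vendor tool's actual behavior) of what a
# UPF-inserted D-to-A interface element conceptually does: map a digital
# logic value to an analog voltage, tracking the state of the supply.

def d2a_element(logic_value, supply_voltage, supply_on):
    """Convert a digital 0/1 to an analog voltage.

    When the supply is off, the element drives 0.0 V, which downstream
    analog models can interpret as a power-down state.
    """
    if not supply_on:
        return 0.0            # powered-down domain: no valid analog level
    if logic_value == 1:
        return supply_voltage # logic high maps to the rail voltage
    return 0.0                # logic low maps to ground

# The same digital bit means different voltages in different power states:
print(d2a_element(1, 0.9, True))   # 0.9 V in a 0.9 V domain
print(d2a_element(1, 1.2, True))   # 1.2 V in a 1.2 V domain
print(d2a_element(1, 1.2, False))  # 0.0 V when the supply is shut off
```

This also illustrates White’s point about power states: as the supply in the model changes, the voltage the element produces changes with it, so a supply ramping down naturally shows up as an off state on the analog side.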
For years the analog/mixed signal piece of designs used the lowest power and had the most consistent power usage. “Traditionally analog circuits existed right at the boundaries of the chip where you are dealing with I/O, with the outside world,” said Drew Wingard, CTO of Sonics. “The outside world is in essentially all cases analog, and unless you care a lot about when that next input you want to receive is going to be provided, you’re ‘on,’ waiting for the thing to come, and that means you’ve got analog circuits that are dissipating power, that are waiting for something to happen, which probably isn’t going to happen for a while.”
Cadence’s Balachandran agreed that a lot of things can go wrong at the boundaries between the analog and the digital domain. “The analog problem itself is well contained. There are tools, there are flows. People know how to design that. The digital problems have their own set of challenges, but from a low power perspective, those have been pretty well defined and they have been solved.”
However, he said that when there are wires crossing from one domain to the other, you don’t always know that a zero in the digital domain is converting to a zero in the analog domain. The reason is that the analog domain can have a negative value as well as a positive value. “What is a 1? If it’s a 1 in the digital domain, is that 0.5 volt in the analog domain, is it 1 volt, or 1.5 volts? Is it a high voltage? How do you interpret that? And then how do you convert from the analog to the digital again, back and forth, the analog conversion. That doesn’t exist in Verilog. It doesn’t exist in languages that are out there. Unless you go with specialized languages you don’t have this functionality. Even if you have a language that describes it, and you have extensions to the language, then you still have to have the tools understand how to do this conversion and not corrupt the logic or the intention of the logic as the design is being verified. So from a verification point, it’s very challenging. From an implementation point, it’s also challenging because you’ve got to have all these voltages converted back and forth and they need to be treated in special ways.”
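The reverse direction of the boundary problem Balachandran describes — deciding what logic value an analog voltage represents — can be sketched the same way. The threshold fractions below are illustrative assumptions, not values from any standard or tool; the point is that voltages between the thresholds are ambiguous, and a verification flow has to propagate that ambiguity rather than silently corrupt the logic.

```python
# Hypothetical sketch of an A-to-D boundary element: interpreting an
# analog voltage as a digital value. Threshold fractions (70% / 30% of
# the supply) are illustrative assumptions, not a standard.

def a2d_element(voltage, supply_voltage, vih_frac=0.7, vil_frac=0.3):
    """Interpret an analog voltage as digital 1, 0, or 'x' (unknown)."""
    if voltage >= vih_frac * supply_voltage:
        return 1
    if voltage <= vil_frac * supply_voltage:
        return 0
    return 'x'  # ambiguous region: neither a clean 0 nor a clean 1

print(a2d_element(1.0, 1.2))  # 1: well above the high threshold
print(a2d_element(0.1, 1.2))  # 0: well below the low threshold
print(a2d_element(0.6, 1.2))  # 'x': between the thresholds, ambiguous
```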
Designers also must make sure that analog constraints are passed to the digital domain, and that they are properly carried out, so to speak, in the digital domain so that the intent is kept and the signals don’t get corrupted, Balachandran added.
When it comes to verifying a mixed-signal, low-power design, there isn’t just one way to do it, but some fundamentals have been established.
Mick Tegethoff, director of AMS product marketing at Mentor Graphics, explained that in an SoC, which is a digital-on-top type of design, verification is probably where the most advanced, feature-rich mixed-signal technology resides. That is dominated by the digital side, where verification technology has evolved over the years to support huge, complex digital chips. “The engineering team has to make sure that when they want to add an analog block in the verification—they want the analog block to be simulated with SPICE accuracy—they have to do a mixed-signal simulation. But even at that level they are using real number modeling.”
In an analog-on-top design, the simplest approach is co-simulation, where there is a SPICE simulator running parallel to a Verilog simulator. The Verilog is solving the digital content, and the SPICE is solving the analog content.
“Obviously a Verilog simulator is orders of magnitude faster than a SPICE simulator. So when you do not need the accuracy on that digital logic (you don’t need the SPICE accuracy because it’s all based on standard cells) and you do timing closure on it, then that part of the simulation will run very fast, even if it is very dense. Then the gating item becomes the analog simulation,” he said. “The analog blocks in the SoC are almost like analog IPs. They get designed by custom guys, simulated in SPICE, and characterized, so by the time they get to the digital they are pretty solid. On the other hand, when it is a larger analog chip, the complexity of the verification of the analog itself drives a lot of very long SPICE simulations that can take days or weeks. But if you can take anything that is digital in nature that is doing either some control logic or some setting or tuning of the analog circuit and you throw that into the Verilog simulator, that really accelerates the simulation.”
Jay Madiraju, product marketing manager for Analog FastSPICE at Mentor Graphics, said that to improve performance engineering teams can utilize modeling. “Where there is analog-on-top function in the chip, along with some digital, to run SPICE simulations at the transistor level takes forever. So for most chips, people use behavioral modeling as a tool to help them speed up their simulation performance. Those blocks that you know are working correctly, you really don’t need them to be at the detailed transistor level. You can abstract them away into a behavioral model written in any of the standard languages, and that gives you a lot of simulation performance.”
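The payoff of this abstraction follows Amdahl’s law: replacing already-verified transistor-level blocks with behavioral models removes their share of the SPICE solve time, and the fraction that stays in SPICE limits the overall gain. The numbers below are illustrative, not benchmarks.

```python
# Back-of-the-envelope estimate (illustrative numbers, not benchmarks) of
# the speedup from moving part of a SPICE workload to behavioral models.
# Overall gain follows Amdahl's law: the un-abstracted fraction dominates.

def cosim_speedup(fraction_abstracted, model_speedup):
    """Overall speedup when `fraction_abstracted` of the SPICE workload
    runs instead in behavioral models `model_speedup` times faster."""
    remaining = (1.0 - fraction_abstracted) + fraction_abstracted / model_speedup
    return 1.0 / remaining

# Abstracting 80% of the analog into models that run 1000x faster still
# yields only ~5x overall, because 20% stays at transistor level:
print(round(cosim_speedup(0.8, 1000.0), 1))  # 5.0
```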
This is an improvement, but it’s still not perfect. “One of the taxes we paid in trying to implement the analog circuits in the high-volume, low-cost digital processes is that the analog performance of the circuits isn’t very good, but calibration using digital techniques was kind of an escape hatch that people used,” said Sonics’ Wingard. “The problem is that if your calibration routine ends up needing you to run the host processor, then you might be spending more power in calibration than you are spending in the base circuit itself. Again, when they first built these techniques, that wasn’t a concern, but it becomes one now. This is why you see this emergence of dedicated microcontroller subsystems to handle the maintenance of the low-power, analog/mixed-signal I/O. People talk a lot about sensor subsystems and things like that as one example where we had a lot, but it’s not just there.”
Time to market
But there is still more that needs to happen to help designers complete these complex designs in a timely fashion. One of the issues with mixed-signal circuits is that engineering teams are often afraid to shut them down, because these circuits don’t just turn back on.
“They have to settle and essentially recalibrate, so mixed-signal people tend to be really conservative about that kind of stuff and will give you a bulletproof description of [how the mixed-signal circuits come back up],” said Wingard. “As a result, one of the things we’re looking at hard is what we can do to be less conservative there. If we put in place hardware power control, can we bring these things down and up more quickly like we are doing with digital logic? The early work there says yes you can, so you can start to teach some of this mixed-signal stuff as if it was autonomous digital hardware. It requires some partitioning — typically there is some part of the physical layer interface which still has to be alive — but there’s still good things you can do to make it work.”
He noted that these approaches are still interface-specific, and in some cases they end up being system-specific. “I’ve got this input and I know that I’m only going to need to sample once an hour. In an IoT application, I don’t need to check the temperature every minute. Maybe I can get away with only checking it every hour. But in something else you’re measuring almost continuously because you’re afraid the temperature of your engine is going to cause you to blow a gasket.”
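The tradeoff behind Wingard’s sampling example can be made concrete with a bit of arithmetic. All the numbers below are hypothetical: shutting an analog block down between samples only pays off if the energy saved while idle exceeds the settling and recalibration energy spent on each wake-up.

```python
# Illustrative arithmetic (hypothetical numbers) for the duty-cycle
# tradeoff: average power of a block that is shut off between samples,
# including a fixed energy cost for settling/recalibration on wake-up.

def avg_power_with_shutdown(p_on_mw, t_active_s, period_s, e_wake_mj):
    """Average power (mW) when the block is off between activations."""
    energy_per_period_mj = p_on_mw * t_active_s + e_wake_mj
    return energy_per_period_mj / period_s

# A 10 mW block, active 100 ms per sample, 2 mJ per wake-up:
hourly = avg_power_with_shutdown(10.0, 0.1, 3600.0, 2.0)   # sample hourly
minute = avg_power_with_shutdown(10.0, 0.1, 60.0, 2.0)     # every minute
print(f"hourly: {hourly * 1000:.2f} uW, per-minute: {minute * 1000:.1f} uW")
# Leaving the block on continuously would burn the full 10 mW, so even
# per-minute sampling with wake-up overhead is a large win.
```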
What this comes down to is a deep understanding — as with so many other application segments — of what is called the ‘duty cycle’ in the mixed-signal space. This can differ greatly depending on the end system at which the device is targeted, so designers want to be able to take advantage of that and adjust for the operating case. “You don’t want to say, ‘Oh, because sometimes I can’t afford to shut it off, then there’s no reason for me to design the shut-off circuit.’”
That doesn’t make sense from a power perspective, and designing to the worst case possible isn’t an efficient way to design a chip. But how to push the design toward much lower power more quickly and more consistently is still a work in progress, and even an experienced engineering team’s success with one design might not be the same on the next.