Different design perspectives are creeping into the analog world as systems engineers enter the market with big-picture ideas and training.
The analog/mixed-signal world is being shaken up by a mix of new tools, an influx of younger engineers with new and broader approaches, and an emphasis on changing methodologies to improve time to market.
Analog and digital engineers have never quite seen eye-to-eye. Analog teams leverage techniques that have been around, in some cases, for decades, while digital teams rely heavily on the latest technology available. While they have co-existed in mixed-signal environments, they have largely gone about their jobs independently.
That is changing as a new generation of engineers enters the market, offering new approaches to design and creating disruption among semiconductor companies and tool providers alike.
“It’s the changing of the guard of the people who are designing electronics,” said Darrell Teegarden, mechatronics project manager in the SLE systemvision.com team at Mentor Graphics. He said a step-function change is happening now as a number of shifts converge on design teams.
“Engineers graduating today are a different breed than when we graduated,” Teegarden said. “What they expect, the way they work — all of these things are very, very different, and that’s changing a lot of the dynamics for the tool companies, but also, maybe more importantly, for semiconductor companies. They have to be system engineers. The courses they are taking [in college] now are holistic. You don’t just focus on transistor design. The senior projects they are doing are entire systems of hardware and software and sensors and actuators. It’s stuff that connects with the Internet. They want to make self-driving cars as a senior project. Their ambitions are almost as ridiculous. They plot these amazing things because they don’t know any better, and they have expectations to go with that. It’s that design perspective of whole systems, and then it’s also the expectation of how the world works, like stuff should be free. This is a challenge for companies. But that’s part of the disruption, and with disruption comes opportunity.”
To understand where this is headed, and how it impacts analog modeling, it helps to look at where analog modeling is today.
“If you talk in the context of pure analog verification, analog modeling very often refers to Verilog-A, where engineers aimed to create a model for an analog block that could describe the behavior of this block and some of its non-linear effects,” said Helene Thibieroz, senior staff marketing manager at Synopsys. “That was strictly for analog or RF — in the context of analog verification — and not meant for mixed signal.”
Some engineering teams have taken system-level approaches, doing co-simulation with Matlab to try to do some higher level of modeling, she explained. “If you now extend the concept of analog modeling to mixed-signal, where you have a need to create a behavioral model for speed and accuracy, different standards were created. The first one was Verilog-AMS, a language that includes a mix of analog and digital constructs. This approach was primarily used at first by analog designers aiming to extend their flow to mixed-signal.”
The problem is that, unlike digital tools and standards, analog has never been scalable. Thibieroz explained, “As the Verilog-AMS model is parsed by a mixed-signal simulator, the design code is split internally into a digital portion to be handled by a digital event-driven simulator, and an analog portion to be handled by an analog circuit simulator. The result is typically a performance speed-up with reasonable accuracy. Verilog-AMS has, however, several limitations that have made adoption of this language challenging, especially for modern SoCs. It is hard to scale because you need both expertise and model calibration: you need people’s expertise to create those models (someone who understands both analog and digital languages, as well as the behavior of the block they are trying to model), and you need to calibrate those models against their SPICE counterparts for each process or technology change.”
Real Number Modeling provided the second generation of behavioral modeling, in which digital simulators model the analog behavior in the digital domain using discretely simulated real values. The end result is a considerable speed-up, but lower accuracy. “For example,” she said, “Real Number Modeling does not accurately represent models having significant feedback loops. As a result, this approach has been adopted for functional verification only, but not for modeling high-precision analog blocks. Real Number Modeling also has some existing language limitations, mostly the lack of support for user-defined types that can hold one or more real values and user-defined resolution functions, and no true relationship between current and voltage. To remove those limitations, a new modeling approach, SystemVerilog nettype, was created. It provides the required enhancements for modern mixed-signal SoC verification (for example, user-defined types that can hold one or more real values), so it delivers the same performance gain with more accuracy. However, for all of those models there is still a crucial need to validate them against their SPICE counterparts.”
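The nettype mechanism she describes can be made concrete with a short sketch. The following hypothetical SystemVerilog (IEEE 1800-2012) fragment is not taken from any particular flow: it defines a user-defined type that carries both a voltage and a current, a user-defined resolution function, and a nettype built from them. The resolution policy shown here (sum the injected currents, let a non-zero driven voltage win) is deliberately simplistic and only illustrates the language mechanism; production packages use far more elaborate resolution, and these declarations would normally live in a package.

```systemverilog
// Hypothetical sketch of a SystemVerilog user-defined nettype.
// The type carries both voltage and current, which a plain real/wreal net cannot.
typedef struct {
  real V;  // driven voltage contribution
  real I;  // injected current contribution
} ee_t;

// User-defined resolution function, called whenever the net has multiple
// drivers. This policy (sum currents, last non-zero voltage wins) is only an
// illustration, not a physically complete solver.
function automatic ee_t ee_resolve(input ee_t drivers[]);
  ee_resolve = '{V: 0.0, I: 0.0};
  foreach (drivers[i]) begin
    ee_resolve.I += drivers[i].I;
    if (drivers[i].V != 0.0)
      ee_resolve.V = drivers[i].V;
  end
endfunction

// The user-defined nettype: a net of type ee_t resolved with ee_resolve.
nettype ee_t ee_net with ee_resolve;

// A trivial driver: an ideal DC source modeled as a single driver on the net.
module vsource #(parameter real VDC = 1.2) (output ee_net p);
  ee_t drv;
  always_comb drv = '{V: VDC, I: 0.0};
  assign p = drv;  // continuous assignment contributes one driver to the net
endmodule
```

Even with a richer net type like this, the model is still only as good as its calibration against the SPICE-level circuit it stands in for.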
Because of that need for validation, it often becomes challenging for digital verification teams to rely solely on those analog models while doing verification. Interactions between analog and digital have become more and more complex, and a behavioral model may not be able to fully represent the true interaction of the analog block with the rest of the design at the SoC level.
Depending on the design application and the need for accuracy, Real Number Models may suffice, or certain blocks may need to be included as SPICE blocks, Thibieroz pointed out. She pointed to certain users that employ only Real Number Models for their analog blocks because the complexity of those blocks is minimal and their transfer functions are fairly linear. “For critical blocks that need SPICE accuracy or relate to power management, other users choose not to use analog models but simulate directly with the SPICE analog blocks to get the true behavior of the circuit and therefore fully capture any interaction between analog and digital.”
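For the first category, simple blocks with fairly linear transfer functions, a real-number model can be very small. The following SystemVerilog sketch is hypothetical: a programmable-gain amplifier reduced to discretely sampled real values, with gain steps and a clipping level invented purely for illustration.

```systemverilog
// Hypothetical real-number model of a programmable-gain amplifier.
// The analog behavior is reduced to discretely sampled real values:
// a linear gain selected by a digital code, plus simple rail clipping.
module pga_rnm #(parameter real VCLIP = 1.5)   // illustrative clipping level
  (input  logic       clk,
   input  logic [1:0] gain_sel,
   input  real        vin,     // real-valued input from another RNM block
   output real        vout);

  real gain, v_ideal;

  always_comb begin
    unique case (gain_sel)
      2'd0: gain = 1.0;
      2'd1: gain = 2.0;
      2'd2: gain = 4.0;
      2'd3: gain = 8.0;
    endcase
    v_ideal = gain * vin;
  end

  // Update on the digital clock. This discrete-time view is where RNM gets
  // its speed, and also why feedback-heavy blocks are poorly served by it.
  always_ff @(posedge clk)
    vout <= (v_ideal >  VCLIP) ?  VCLIP :
            (v_ideal < -VCLIP) ? -VCLIP : v_ideal;
endmodule
```

Blocks with heavy feedback loops or precision requirements fall outside what a model like this can credibly capture, which is why the SPICE-in-the-loop option remains.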
When SPICE blocks are used, they are integrated into the digital verification methodology using technology that allows the designer to extend digital concepts such as assertions and checkers to analog — resulting in true mixed-signal verification, where both digital and analog are verified simultaneously. In this vein, Qualcomm will present a flow it developed using VCS AMS at the AMS SIG event in Austin next week.
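What extending “assertions and checkers to analog” can look like is easiest to see with a small sketch. The SystemVerilog checker below is hypothetical (the module, signal names, limits and settling allowance are invented, not from Qualcomm’s flow) and assumes the regulator output is visible as a real-valued signal; the same idea applies when the value is sampled from a SPICE block through a mixed-signal simulator’s interface.

```systemverilog
// Hypothetical mixed-signal checker: once the regulator has been enabled
// long enough to settle, its real-valued output must stay inside a window.
// All names and numbers here are illustrative.
module vreg_checker #(parameter real VMIN = 1.62,         // lower limit (illustrative)
                      parameter real VMAX = 1.98,         // upper limit (illustrative)
                      parameter int  SETTLE_CYCLES = 16)  // settling allowance
  (input logic clk,
   input logic reg_en,    // regulator enable from the digital side
   input real  vdd_mon);  // real-number view of the regulator output

  // Count cycles since enable so the check ignores the settling period.
  int unsigned en_count;
  always_ff @(posedge clk)
    en_count <= !reg_en ? 0 :
                (en_count < SETTLE_CYCLES) ? en_count + 1 : en_count;

  // The actual check: a clocked immediate assertion over an analog quantity.
  always @(posedge clk)
    if (en_count >= SETTLE_CYCLES)
      assert (vdd_mon >= VMIN && vdd_mon <= VMAX)
        else $error("vdd_mon=%0f outside [%0f, %0f]", vdd_mon, VMIN, VMAX);
endmodule
```

A checker like this can be bound to the design so the analog limits are policed continuously during ordinary digital regressions, rather than only in dedicated analog runs.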
The value of languages
Still, Teegarden has a high level of confidence in modeling languages. “Today, you have ubiquitous kinds of SPICE variants that have all of the advancements in technologies that were available in the early ’70s, and amazingly there is still a lot you can do with SPICE. But we are way past that limitation and the requirements are much greater than that now. The modeling languages are a big help for that. These languages have been around for a while and they are just finally delivering.”
But it also requires a lot of effort to use them effectively. “If you look at what people who do IC design actually do right now, it’s not that VHDL-AMS or Verilog-AMS have really run out of steam,” he said. “The natural inclination is to say, ‘If I’m going to model a combination of SPICE level and HDL and RT level’ – people just don’t use those languages in between because that’s a lot of work. It’s not because the languages are not up to it that they don’t do that work. It’s because it’s a lot of work. You’d rather just throw hardware and time at it, model the transistor level in SPICE, model the digital stuff at the RT level, and then mash it together. So the languages have been there. They just haven’t been used because of those issues.”
On the developer side, people that are trying to make use of the stuff — that’s where the sweet spot is for these languages, Teegarden suggested. “For one, the IC guys don’t want to put the transistor-level IP out in a model on the Internet. That’s not a good idea. And even if you did, it’s too slow. It’s not an impedance match for the things you need to solve, and that’s where the hardware description languages are at their best. It’s really the business model that isn’t working — the technology is fine.”
To be sure, this is a complicated task. From the very outset of the design, the engineering team has to know who is going to do what, and what parts need modeling, and in what way.
“It’s an issue of the investment, because traditionally people try to run a lot of simulation and directed tests, but they don’t have a good measure of the coverage,” said Mladen Nizic, engineering director, mixed-signal solutions at Cadence. “And they don’t have the feedback loop saying, ‘I need a test that would increase my coverage, which manages my risks.’ In other words, if I’m writing another test, I really don’t know whether I’m adding much to my verification overall. That’s where coverage and metric-driven methodology is important.”
How does that apply to analog? “Four or five years ago, when you mentioned coverage and metrics to an analog guy, they’d look at you, roll their eyes and say, ‘What’s that?’ Today, if you read industry and conference papers from users, you see a lot of engineering teams using assertions in analog, and behavioral models alongside transistor-level blocks in the context of overall mixed-signal verification, and collecting coverage and doing verification planning and test development to increase coverage. That’s really a good step,” he said.
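One hedged sketch of what “collecting coverage” can mean on the analog side: fold a real-valued quantity into coarse, named operating regions and let an ordinary SystemVerilog covergroup report which regions the tests actually hit. Everything here (the block name, the region boundaries, the full-scale parameter) is invented for illustration.

```systemverilog
// Hypothetical functional coverage on an analog quantity: bin a real-valued
// signal into coarse operating regions so a metric-driven flow can see which
// analog conditions the tests have exercised.
module vout_coverage #(parameter real VFS = 1.2)  // illustrative full-scale level
  (input logic clk, input real vout);

  typedef enum {REGION_LOW, REGION_MID, REGION_HIGH} region_e;
  region_e region;

  always_comb begin
    if      (vout < 0.25 * VFS) region = REGION_LOW;
    else if (vout < 0.75 * VFS) region = REGION_MID;
    else                        region = REGION_HIGH;
  end

  covergroup cg @(posedge clk);
    coverpoint region;  // one automatic bin per operating region
  endgroup

  cg cov = new();
endmodule
```

The resulting coverage report is what closes the feedback loop Nizic describes: it shows which analog regions a new test would actually add to, instead of leaving that to guesswork.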
The problem with models
But one traditional obstacle in applying this methodology is that models are needed. “Especially in analog, where a lot of designs are still done bottom-up, it’s easy to plug in my transistor-level description for analog blocks,” Nizic explained. “But that slows down simulation so much that trying to functionally verify a complex chip with all these different power modes and operating modes is not really practical. So I need to write a model. Now, who writes the model? Who is responsible for writing the model? How easy is it to write the model? How should I write the model so it’s reusable? And then, how do I make sure the model is kept up to date with any changes in the spec, whether the flow is top-down or bottom-up? We see a lot of users initially hesitant, but as they realize the benefits of a metric-driven approach, usually there is a champion who learns the languages, learns how to code the models, learns how to set up model verification, and then it proliferates. Often, design and verification teams either have a very specialized modeling team that works with the rest of the designers to come up with these models, or sometimes designers themselves create some of the models.”
Thibieroz agreed. “Traditionally, you will see an analog and a digital verification team, but not someone acting as a mixed-signal verification manager, i.e., the person who is going to be the link between analog and digital and has an understanding of both domains. The analog team would create analog models to represent the various SPICE blocks and characterize those using classic analog verification. The digital team will adopt the analog models to include them in the top-level simulation with very little knowledge of how those blocks were created or calibrated. The problem with that is that there is very often a disconnect, as those analog models are going to be used by the digital team in the context of digital verification, which is not correlated with the analog verification. So the test conditions at the top level are different than those at the block level, and the model itself may not include all possible interactions between analog and digital occurring at the top level. As such, you are starting to see more and more collaboration between analog and digital teams, as there is a growing concern about the accuracy and calibration of the models they are using.”
Tool vendors have been working to understand these interactions and provide solutions. Mentor Graphics has systemvision.com, which provides analog/mixed-signal sensor and actuator models that leverage VHDL-AMS. For Synopsys, it’s VCS AMS. And for Cadence, it’s Virtuoso AMS Designer.
Along with the new standards work currently underway, the analog/mixed-signal design space is changing rapidly. Old is meeting new across the digital-analog divide, whether either side wants it to or not.
Comments
One of the problems with simulators like VCS-AMS (and similar products) is that it’s a “two simulators in a box” implementation, rather than a unified simulator, as indicated by Thibieroz’s explanation above: “As the Verilog-AMS model is parsed by a mixed-signal simulator, the design code is split internally into a digital portion to be handled by a digital event-driven simulator, and an analog portion to be handled by an analog circuit simulator.”
You can also tell by the fact that there are multiple simulation products, and VCS-AMS is a stitching together of two. If you had a unified simulator, there would be only one.
The problem is then that the digital portion is usually the framework for the simulation, and it doesn’t know how to handle analog things, so you are screwed with respect to doing discrete modeling of analog things in a (digital) top-down flow – i.e. power modeling doesn’t work because (say) wreal/real-number models only handle V or I, and not V & I together, and power is V*I.
It’s a fixable problem, but EDA companies prefer to sell multiple products over just licensing features in a unified product, so it probably won’t get fixed until the open-source community starts using Xyce as a framework for mixed-signal with (say) Icarus Verilog or GHDL.
Thank you very much for your insights, Kev.