Speeding Up Analog

Experts at the table, part 1: The difference between analog and digital engineers; integration issues; modeling analog.

Semiconductor Engineering sat down to discuss analog design and how to speed it up with Kurt Shuler, vice president of marketing at Arteris; Bernard Murphy, CTO at Atrenta; Wilbur Luo, senior group director, product management for custom IC and PCB at Cadence; Brad Hoskins, director, IC design, microcontrollers at Freescale; and Jeff Miller, product manager at Tanner EDA. What follows are excerpts from that discussion.

SE: First of all, how are the design processes for analog different from those for digital?

Hoskins: I manage analog, digital and system-on-chip design teams, so I see the different designers, and they are quite different, that's for sure. The processes and the nature of the designs are quite different, which presents a lot of challenges when you are integrating analog IP into a system on chip. We are in a more digital world, so the analog design has to coexist with that, and there's a lot of generating views and models of that analog IP to go into products. That's where there are a lot of challenges for us, because that's where those two different design worlds really come together.

Luo: There's a lot of opportunity for speeding up analog. I see it in two spaces. One is on the implementation side, which is really the design, the layout, and the implementation, and the other is verification. Historically people called it simulation, but we're trying to steer the story more toward verification, because that's where a lot of the productivity boost is going to be if we can start taking advantage of what the digital guys have been doing for a long time: metrics-driven verification concepts using assertions and coverage analysis. The analog guys just ran Monte Carlo and looked at waveforms, and then what? Then you run out of time because there's too much stuff to look at. How do you build smarter testbenches? How do you reuse the components of the testbench? On the verification side there's a big area for improvement that I think a lot of companies are addressing. On the implementation side, classic automation is still there. People want to do place and route for analog, and historically the problem has been that the aesthetics just don't look good. People want manual-style-looking layout, matched wires and everything, and they are very nitpicky. Ask the digital guys, 'Have you actually zoomed into a little area to look at one of your millions of routes?' They've never looked at that. As long as it meets timing and DRC, they're done. I think there's going to be a slight shift toward that mentality on the analog side too.
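
To make the contrast concrete, here is a minimal sketch of what metrics-driven checks can look like when pointed at an analog output. It assumes a real-valued regulator output visible to a SystemVerilog testbench; the module name, voltage thresholds, and cycle counts are illustrative assumptions, not anything from the panelists' flows.

    // Hypothetical checker for a real-valued regulator output (vout).
    // All names, thresholds, and timings here are illustrative.
    module vreg_checker (input logic clk, input logic enable, input real vout);

      // Assertion: once enabled, the output must settle into its
      // regulation band within 16 cycles.
      property p_settles;
        @(posedge clk) $rose(enable) |-> ##[1:16] (vout > 1.71 && vout < 1.89);
      endproperty
      a_settles: assert property (p_settles)
        else $error("vout failed to settle, last value %f", vout);

      // Coverage: quantize to millivolts so the coverpoint stays integral
      // (covering a 'real' directly requires IEEE 1800-2012 support).
      int vout_mv;
      always_ff @(posedge clk) vout_mv <= $rtoi(vout * 1000.0);

      covergroup cg_vout @(posedge clk);
        coverpoint vout_mv {
          bins under  = {[0    : 1709]};
          bins inband = {[1710 : 1890]};
          bins over   = {[1891 : 5000]};
        }
      endcovergroup
      cg_vout cg = new();

    endmodule

The point is not the specific checks but that, as in digital verification, pass/fail and coverage become machine-readable metrics instead of waveforms to eyeball.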

Miller: I certainly think there's a possibility of that, but one of the biggest issues here is that the analog design teams have a very traditional approach. There's not been a lot of change in analog design; it's been the gray-bearded wizards of the IC world. The problem comes in when you go to these automation flows, these things that would be great from the tool provider's end of things; it's very hard to get those adopted on the design side. We've tried a lot of things in that way, and what we've come back to is accelerating what already exists: taking the existing flow and just trying to make it smoother and more efficient is the approach where we've been able to get a speedup. Back to what [Luo was] saying: if it doesn't look right, 'this is junk.' If you take two analog designers and show them each other's work, they'll probably say, 'That looks like junk.' You have to let them have complete control and practice this black magic that they've learned. It's a real challenge to get any kind of fundamental change with those designers.

Shuler: Our product and everything we deal with is 100% synthesizable RTL, but we end up dealing with the analog world through our customers. The first place we saw this was digital baseband modems, where there is more and more analog integration. Now we are seeing it on the IoT side of things, so it's more of a mixed-signal type of thing where those graybeards are starting to interact with the SoC guys, and there are some culture clashes and some methodology clashes. You're trying to make a soup, but the flavors don't always complement each other, to say the least.

Murphy: I'm also from the digital side, but we look at analog integration issues, AMS (analog/mixed-signal) integration issues into the SoC, and we actually just completed a survey where we are trying to understand what the RTL tooling side can usefully do to help with that process. Maybe nothing, but maybe it can do something. What we heard pretty consistently from everybody we've talked to is: we kind of know what we're doing on the analog side, we kind of know what we're doing on the digital side, but the intersection of the two is challenging. It's not just that everybody speaks a different language, though of course that's true; it's that the standards on the digital side have evolved pretty significantly. You have Liberty, you have UPF/CPF, SDC, and so on. You've got a three-letter soup of standards that are not necessarily well understood by the analog guys, who are really, really good at doing what they do, but now they've got to come up with a correct representation of something in certain formats, and that's sometimes where things break down. They don't quite get that right.

SE: What are the details of the techniques used to speed up analog?

Hoskins: Drawing on the example of verification and functionality: modeling the analog for digital or chip-level simulation is one of the areas that has a lot of promise for productivity. We've got such a range of different analog blocks, types of blocks, and types of modeling you could apply to them, so it's not very standardized, and that's an issue. You quite likely want a different level of modeling depending on what you're trying to achieve, and we probably started out with a very detailed level of modeling when analog was first being co-verified with digital. The trend seems to be toward something that is more focused on throughput and speed of simulation, modeling at the level that gets you that speedup. We've had issues over the years with the level of modeling, or how we've done the modeling, and that's an SoC productivity bottleneck.

SE: Specifically, how do you model it?

Hoskins: Traditionally, what we've often had is an analog designer creating an AMS model. Not always the analog designer; I prefer the working model where it's not the analog transistor-circuit-level designer who creates the model, but someone else who creates it with a view to what's needed for verification, and what level of modeling and functionality is needed at the level above. It is an issue for us getting models developed, and then if you have different levels you want to use, something at the block level, something at the chip level, the level of detail is different. One model or one type of modeling doesn't really solve it for both; that's where we do have an issue. Sometimes we need the very detailed modeling, where obviously we'll still be at the SPICE level. Other times we're at a level where we don't need to see a lot of functional interaction with the analog blocks. And then there's some stuff in between. There are a few different levels at which to tackle it. The testbench point was a good one: sophisticated testbenches are generally used in digital verification, as opposed to the very ad hoc, unsophisticated simulation environments that are used in the analog world by traditional analog designers. And then the new guys who come up get taught by the guys who show them how it was done for the last 10 or 15 years.

Luo: I agree there are different levels of abstraction you have to work out. You're doing the co-simulation, transistor-level plus digital, and you work your way up to incorporating some sort of real number model of that analog block so you can accelerate the simulation a thousand times or ten thousand times, and then layer on top of that the verification techniques. Layer on top of that the smart testbenches, the coverage analysis, constrained random stimulus, or whatever techniques you want to incorporate, but you have to take those steps. Ideally that testbench works with either level of abstraction, so you get the same testbench but now you're doing the more detailed stuff. Then I can flip it over and do the real-number-model-based version. The challenge has been how to get that model. When we talk to customers, everybody is different: it could be the verification engineer building it, it could be the analog guy, it could be a semi-automated way. We've built some tooling around that, but not everyone uses it. That's the challenge: the analog guy doesn't want to be a software programmer; even though he may be equipped to do it, that's not his thing. But the verification engineer is more of a software guy, so how do we impart just enough analog knowledge to the verification engineer so he can build a good model?
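
As an illustration of what such a real number model can look like, here is a minimal SystemVerilog sketch. The block (an 8-bit ADC), its name, and its reference voltage are invented for illustration; the point is that the analog quantity is just a 'real', so the model runs at event-driven digital speed, and the same testbench can later drive a more detailed view through the same ports.

    // Hypothetical real-number model of an 8-bit ADC. The analog input
    // is carried as a 'real', so this simulates at digital-event speed.
    module adc8_rnm #(parameter real VREF = 1.8)
                     (input  logic       clk,
                      input  real        vin,   // analog input as a real value
                      output logic [7:0] dout);

      always_ff @(posedge clk) begin
        if      (vin <= 0.0)  dout <= 8'd0;     // clamp below range
        else if (vin >= VREF) dout <= 8'd255;   // clamp above range
        else                  dout <= 8'($rtoi(vin / VREF * 255.0));
      end

    endmodule

Swapping this model for the SPICE netlist in co-simulation then becomes a configuration choice rather than a testbench rewrite, which is what makes the shared-testbench approach workable.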

SE: What are the standards for analog modeling?

Hoskins: There are standard languages for modeling, and there are multiple standards, but the trick is how to apply a given standard to a given level of functionality for the analog block, because you have a voltage regulator that puts out a voltage in response to different power modes; you have a PLL that generates a clock; you have data converters and PHYs. They are very different types of analog.
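
For the regulator case, a functional model can be as small as a mapping from power mode to output level. The sketch below is a hypothetical example in the same SystemVerilog real-number style; the mode encoding and voltage levels are assumptions, not from any standard or speaker.

    // Hypothetical regulator model that responds to power modes.
    // Mode encoding and output levels are illustrative only.
    module vreg_rnm (input logic [1:0] pmode, output real vout);
      always_comb begin
        unique case (pmode)
          2'b00:   vout = 0.0;   // off
          2'b01:   vout = 0.9;   // retention
          default: vout = 1.8;   // active
        endcase
      end
    endmodule

A PLL or a data converter needs a very different functional contract, which is exactly the standardization problem being described.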

SE: Is it theoretically possible to have a single standard way to do that for analog?

Hoskins: Yes, in theory. You need a standard language for sure, so that you can work with the tools and work with the methods. What's the right standard language? Maybe we're hitting on it for today's types of products, because today's products are vastly digital. The analog has more functionality than it used to have, perhaps, but it's still limited compared to the digital; the scale of analog hasn't increased as much as the scale of the digital. So for today, real net type modeling that's very much Verilog-based is probably the right one, but 10 years from now it might not be, because of the nature of the products. I think we've been doing too much analog-heavy modeling by analog designers over the last years, that the standard we used wasn't quite the right one, and the other ones, like real number modeling, have evolved relatively recently. I think we're getting there.
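
"Real net type modeling" refers to carrying real values on nets in Verilog-family languages; Verilog-AMS has wreal for this, and SystemVerilog (IEEE 1800-2012) has user-defined nettypes. A minimal, self-contained sketch of the SystemVerilog flavor, with invented module names:

    // A user-defined nettype carrying a real value (IEEE 1800-2012).
    // With no resolution function, only a single driver is allowed.
    nettype real real_net;

    // Hypothetical driver and receiver communicating over a real net.
    module rsrc (output real_net out);
      assign out = 0.75;                 // continuously drive a real value
    endmodule

    module rsink (input real_net in);
      initial #1 $display("received %f V", in);
    endmodule

    module rnm_top;
      real_net vdd;                      // the real-valued net
      rsrc u_src (.out(vdd));
      rsink u_snk (.in(vdd));
    endmodule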

Murphy: I think there's an opportunity, maybe, to try to standardize some of the parametric stuff. Everything you have been talking about has been mostly functionality, which is of course extremely important, but the parametrics are also important, and I still see analog designers struggling to understand the long list of three-letter standards that the digital guys are pretty familiar with. Could that be consolidated in some manner, like a meta-format in the AMS model, and then have that drive the SDCs and the various other pieces of information that the digital flow needs? Is that even the right way to do it? I was thinking you could put the power stuff in there too, but wait a minute, we've already got two power standards; why do we need yet another one? Somehow, either the analog guys have to become familiar with all these standards, or you have to find a way to collapse it down into one thing they can understand and put in one place, and then have the standards derived from that.

Miller: There's a bit of a concern there too; there isn't one way to do it even for a particular design. When you're simulating a PLL, you might have three or four different models of it. Just for one part you might have three or four different models, because you might need a real-value model when you're doing top-level simulations, but you'll probably want a Verilog-A or Verilog-AMS continuous-time simulation model of it for more detail, and then of course all the way down to the transistors. Even within this one use case we need three different standards, and then you need someone who understands the analog well enough to understand all three of those and how they fit together into the overall system. That's the real challenge.
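
To give a sense of the top of that stack, here is a hedged SystemVerilog sketch of the real-value view: an ideal clock generator with a fixed lock delay. The frequency and lock time are invented, and the loop dynamics this model deliberately ignores are exactly what the Verilog-AMS and transistor views underneath would add back.

    `timescale 1ns/1ps
    // Hypothetical top-level ("real value") view of a PLL: an ideal
    // clock source plus a fixed lock delay. Parameters are illustrative.
    module pll_rvm #(parameter real FOUT_HZ  = 1.0e9,    // output frequency
                     parameter real TLOCK_NS = 5000.0)   // lock time in ns
                    (input  logic en,
                     output logic clk_out = 1'b0,
                     output logic locked  = 1'b0);

      localparam real HALF_NS = 0.5e9 / FOUT_HZ;  // half period in ns

      // Ideal output clock, gated by the enable input.
      always begin
        if (en) #(HALF_NS) clk_out = ~clk_out;
        else begin
          clk_out = 1'b0;
          @(posedge en);
        end
      end

      // Crude lock indication: asserted a fixed delay after enable rises.
      // A lower-level model would replace this with real loop settling.
      always @(posedge en) begin
        locked = 1'b0;
        #(TLOCK_NS);
        if (en) locked = 1'b1;
      end
      always @(negedge en) locked = 1'b0;

    endmodule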

Shuler: Each one of those models is created from scratch by somebody based on a spec.

Miller: Sometimes you can go top down and say, this is the spec, and you implement the spec in decreasing levels of abstraction until you get to the transistors. More often I see it going bottom up. In analog, at least, they are so used to starting at the transistors: simulate those a whole bunch, then generate the next level up, simulate that a whole bunch, and generate the next level.


