Big Changes Ahead For Analog Design

In-house flows are unable to keep up with foundry PDKs and heterogeneous integration, but commercial EDA tools add their own set of challenges.


Experts at the Table: Semiconductor Engineering sat down to discuss the impact of heterogeneous integration on in-house analog tools, and how that is changing the design process, with Mo Faisal, president and CEO of Movellus; Hany Elhak, executive director of product management at Synopsys; Cedric Pujol, product manager at Keysight; and Pradeep Thiagarajan, principal product manager for custom IC verification at Siemens EDA. What follows are excerpts of that conversation. To view part one of this discussion, click here.


L-R: Synopsys’ Elhak; Movellus’ Faisal; Siemens’ Thiagarajan; Keysight’s Pujol.

SE: With heterogeneous integration, co-design isn’t just analog and digital. It’s package, interconnects, and data movement. How does that affect analog design?

Elhak: Analog and digital are still the key ingredients — having digital place-and-route and analog layout talking to each other. But then you also need to extend that to the package and the interposers.

Pujol: And that’s why flexibility is key in all the workflows. But on top of that, we’re also dealing with a new breed of engineers who are trained on Python and other languages, and maybe less in the traditional flows, and they’re looking to bring their Python knowledge to chip design. Some of our customers are telling us, ‘We have 100 EDA engineers who can craft the flow, and you need to fit into that so we can make this optimization on top.’ So flexibility is something we need to provide as EDA vendors so that we can fit into their workflows. They want to own it. They will rely on our optimizer for probably 80% of the work. But for some of the key problems they want to leverage their own knowledge, because they know they can make a difference there — reducing heat, reducing power — and they don’t want anyone else to have that.

Thiagarajan: There is another, bigger dependency that we haven’t talked about. Analog designers struggle through the foundry PDK’s evolution during the development process. An analog designer may start with a v0.5 PDK and do this perfect power amplifier design — verified across corners, pitch perfect — and then they get a PDK update to v0.6 or v0.7 and something changes in a device. The next time they simulate, everything is busted. And now design companies are trying to accelerate their tape-outs, so they’re doing early designs, but at advanced nodes this technology can change something even at v1.0, and the missed yield comes right back to your analog design and looks like an analog fault. There is a huge dependency in the ecosystem, and I don’t know how it’s going to get addressed.

SE: In the past, analog designers have resisted using EDA tools because they didn’t necessarily provide a clear benefit. Does that change the dynamics?

Elhak: Yes, big time. Part of this transformation is happening at the traditional analog companies. We see modernization there, even though many people reading this will be asking, ‘Are we modernizing?’ But in analog it is true. Companies are moving from their in-house simulators, their reliability analysis environments, and their variation analysis tools to commercial EDA. The reason is that there are new problems coming with advanced nodes that these traditional tools cannot handle — device models, variation, and the different types of reliability issues that come with finFETs and advanced nodes. Updating and maintaining these tools is becoming very expensive. I’ve personally seen several of these transformations.

Thiagarajan: This is exactly why big EDA companies’ partnerships with foundries early in the development cycle are going to help.

Pujol: You’re right about the problems. The homegrown tools lasted a long time. Ten years ago, most of the problems we were seeing in RF involved extracting a critical path, so we were talking about three or four nodes and fewer than 10 ports. Five years ago, that exploded to maybe 60 ports, and then 200. Now we are at more than 1,000 ports to extract the grounding. And we are not talking about really high frequency yet — maybe it’s 28 gigahertz. The frequencies will rapidly reach 300 gigahertz, and at 1 terahertz it will be even worse. Homegrown flows can’t handle that well. You need to rely on databases with traceability and other features, and this is where it gets difficult. They want their optimizers in Python, but they still rely on EDA tools. So they’re asking companies to provide APIs in the tools so they can put their own things in the GUI, but they still need to rely on EDA tools, because everything is changing so fast that it’s very difficult to use their custom tools anymore.

Faisal: When it comes to analog design, we are short of people. There aren’t enough new analog designers being trained or coming out of school. At the same time, analog is only going to grow. So how do you solve that problem? We don’t have enough analog engineers in the world, or new ones even interested in becoming electrical engineers and analog designers. Now, on the flip side, good analog designers do their hand calculations, they know what to expect, and then they verify with tools. You can do that at a sub-block level. You cannot do that at system level. But at a key block level, they typically know what to expect, they have error bars, and then simulation comes in to verify it. Because if they’re not able to do that, then you get fresh engineers who may only run sweeps, and then they do trial and error, and more trial and error. That’s a really good quality in the lab, but trial and error at design time when you don’t know where you’re going is potentially a waste of a whole lot of time and resources.

SE: So where is most of the time spent in analog design? Is it at the front-end? Is it verification? And how does it compare to digital?

Elhak: Because of the changes we’re seeing today, the amount of effort in each of these phases has changed. Traditionally, analog design was done quickly in pre-layout. I design my schematics, simulate to verify the design is correct, then start the layout and extract parasitics. If there are some problems, I fix them, do more wholesale simulation, and I’m done. Today with advanced nodes, the design parameters are on the same order of magnitude as the parasitics. It’s not just the number of parasitics, which has exploded with advanced nodes. It’s the importance of these parasitics, as well. It’s not just about changing the outcome by 5% or 10%. It’s changing how the circuit behaves, because the transistors are very small and the design parameters are on the same order of magnitude as these parasitics. As a result, the difference between pre-layout and post-layout simulation is huge, so design cannot be done in the traditional way. You cannot verify the circuit until you have a layout, and that changes how these layouts should be done. It has to be done incrementally. You need to estimate the parasitics. You need to verify as you go in a design. Traditionally, it was very fast design and then tons and tons of time spent in verification. Now it’s shifting. The design time is increasing, and verification is done as you go.
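As a rough, hypothetical illustration of Elhak’s point that parasitics now sit on the same order of magnitude as the design elements, consider this back-of-the-envelope sketch. The resistance and capacitance values below are assumed purely for illustration and are not tied to any particular node or tool.

```python
# Back-of-the-envelope sketch (hypothetical values): when layout parasitics are
# comparable in size to the intentional design elements, a pre-layout estimate
# of circuit behavior can be far off the post-layout reality.
import math

R_OUT = 20e3            # output resistance of a small driver, ohms (assumed)
C_DESIGN = 2e-15        # intentional load capacitance, farads (assumed)
C_PARASITIC = 1.5e-15   # extracted wire/device parasitic capacitance, farads (assumed)

def pole_hz(r, c):
    """First-order RC pole frequency for a resistance r driving a capacitance c."""
    return 1.0 / (2.0 * math.pi * r * c)

pre_layout = pole_hz(R_OUT, C_DESIGN)
post_layout = pole_hz(R_OUT, C_DESIGN + C_PARASITIC)

print(f"Pre-layout bandwidth : {pre_layout / 1e9:.2f} GHz")
print(f"Post-layout bandwidth: {post_layout / 1e9:.2f} GHz")
print(f"Degradation          : {100 * (1 - post_layout / pre_layout):.0f}%")
```

With these assumed numbers, a parasitic capacitance of the same order as the design capacitance cuts the bandwidth by roughly 40% — not a small correction, but a change in how the circuit behaves.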

Thiagarajan: It has to go post-layout, and you have to crank it up even further. For post-layout, you have to do full power-ground extraction. It’s a must. In this era, the voltage at the circuit is what really matters. If you are trying to design, say, with a 1 volt power supply at the circuit, and you do a pitch-perfect design and a block-level full simulation, and then you go up to the next higher level, guess what? Your C4 might be at a completely different point. There’s going to be so much IR drop that by the time the supply reaches that circuit, it’s not going to be 1 volt. So definitely do a post-layout on your block, but you have to shift your EM and IR analysis way left. Typically in a design cycle, people do schematics, pass, layouts, pass, and then toward the end, during tapeout, they do EM/IR analysis. So you find issues and it’s a rat race to the finish. You’ve got to bring your EM/IR way ahead in your schedule to make sure that your at-circuit voltage is there and isn’t messed up because of some interconnectivity with another block that the designer doesn’t own.
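To put rough numbers on the IR-drop effect Thiagarajan describes, here is a minimal hand-calculation sketch. The supply, grid resistance, and current values are assumed purely for illustration.

```python
# Minimal IR-drop sketch (hypothetical values): a block verified against an
# ideal 1 V supply can see a noticeably lower voltage at-circuit once the
# resistance from the C4 bump through the power grid is accounted for.
V_SUPPLY = 1.0    # volts at the C4 bump (assumed ideal source)
R_PDN = 0.5       # effective resistance from bump to block, ohms (assumed)
I_BLOCK = 0.12    # average current drawn by the block, amps (assumed)

v_at_circuit = V_SUPPLY - I_BLOCK * R_PDN
drop_pct = 100 * (V_SUPPLY - v_at_circuit) / V_SUPPLY
print(f"Voltage at circuit: {v_at_circuit:.3f} V ({drop_pct:.0f}% IR drop)")
```

Even this simplified DC view shows why a design verified at a clean 1 volt can misbehave once the real grid is in the path — and why the analysis belongs early in the schedule rather than at tapeout.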

Elhak: This is a very good point. The power delivery network, for example, is getting much bigger, and so it’s harder to simulate. But it’s not just that. As you said, it’s not just a source of electromigration and IR drop problems. It’s actually changing how the design behaves. It’s a very big parasitic network that you need to take into account even for the function of the circuit. So it’s not just that it’s bigger and we need to simulate it for a longer time. We need to simulate it a lot more often than we used to. Before, it was sign-off stuff — ‘I’ll do my power integrity simulation. That’s when I need my PDN.’ Now it’s part of the design, and it has to be simulated with the design. How to speed up traditional transient simulation in the presence of a large PDN — and doing that accurately with the design, not as a two-step approach, as is typically done in EM/IR — is a key technology change today. You can use a GPU to simulate the PDN, for example. All these kinds of technologies are speeding up that simulation, and it’s not just because the PDN is bigger. It’s a given that the PDN is bigger, because of the amount of parasitics and because the circuits themselves are bigger. But we have to simulate it all along, from the beginning to sign-off.

Pujol: It’s still layout-driven design. The schematic is nice to have, but it’s almost useless in many cases. The voltages we used to have in the past gave you margin, but we don’t have that margin anymore. The voltage is going down, and you need to take that into account. If you’re doing only schematics, it’s almost a dead end. We talked about the need for good analog designers. They need to have the layout in mind — how it’s constructed. If you have small schematics, you will get something that absolutely will not resemble what you have in the end. One of the things that is missing is knowledge transfer. We have a lot of EDA tools, but there’s no knowledge transfer tool. You can’t just take a bunch of schematics or layouts and port them node-to-node. There are very good tools to do that node-to-node migration today, with AI optimization, but the knowledge is not transferred. And that will get worse. So in the end, when we have less voltage and delay, it will be more complicated. On the RF side, we have dealt with that for years. The RF guys know what to avoid when they are putting things on the schematic, because you have coupling and all those kinds of issues. That’s the same approach we need to have.

Faisal: That’s a really big problem. I would expand ‘knowledge transfer’ to ‘experience transfer.’ A lot of the instincts that we have about design come from pain. They come from struggling in the lab and from simulations not converging. And the more automation we have, the further we get away from what the real issues are. So there’s a danger of new engineers being born into the world of social media and automation and believing everything, while in the real world what comes out is very different. Once you’ve had a failed chip, you know that your schematic simulation is actually a lie. Some of that is hard to teach. People have to go through it.

Pujol: We need this kind of environment to be able to co-design. Without the transfer of knowledge, it’s a black box. And time-to-market will not increase. It will always shrink.


