Unbundling Analog From Digital Where It Makes Sense

The shift toward heterogeneous integration and advanced packaging has changed the dynamics of mixed-signal design — and created some new issues.


Semiconductor Engineering sat down to discuss what's changing in analog design with the shift toward heterogeneous integration and more safety- and mission-critical applications, with Mo Faisal, president and CEO of Movellus; Hany Elhak, executive director of product management at Synopsys; Cedric Pujol, product manager at Keysight; and Pradeep Thiagarajan, principal product manager for custom IC verification at Siemens EDA. What follows are excerpts of that conversation.


L-R: Synopsys’ Elhak; Movellus’ Faisal; Siemens’ Thiagarajan; Keysight’s Pujol.

SE: Analog is typically focused on power electronics, amplifiers, filters, and RF. What else is involved, and how is it changing?

Pujol: The main focus for us is the reliability of analog. It's reliability in the model, but also in all the different steps. That includes electromigration, with analog as a key centerpiece between digital and RF, because a lot of the blocks in digital and RF rely on analog pieces. So it has to be extremely reliable under all conditions.

Thiagarajan: There are two different areas that are rapidly expanding. One is SerDes design. Memory interface designs are getting to faster speeds, and so any interfacing you see from die to die is now headed to the next level with 3D-ICs. The channel is now the key part, because that's also getting complicated. As you go through the interposer, a full end-to-end verification from a transmitter path all the way through a complicated channel, being received and equalized — that whole aspect of analog needs to be verified together. That's something I faced as a designer, because we were always used to designing the transmitter plus the channel. You take the outputs, and then you create the next cross-section of channels with an RF front end, and then you create the next cross-section with the equalization of the DFE (decision feedback equalizer) or CTLE (continuous-time linear equalization). There are a lot of assumptions that go in there, which may cause over-design. The second aspect is you need something that can help the designers to design and verify something that has IBIS (input/output buffer information specification) models, SPICE models, S-parameter models, and AMI (algorithmic modeling interface) models, which capture the equalization scheme. So to me this is more of a methodology that is getting more prevalent with interface design, and it's not limited just to die-to-die. It's also die-to-memory, die-to-accelerator, and so forth. But the methodologies are getting more stringent.
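
To make the end-to-end idea concrete, here is a minimal sketch of that transmitter-channel-equalizer co-verification, written in plain Python rather than any vendor flow. The channel taps, noise level, and equalizer coefficients are invented for illustration; in a real methodology they would come from S-parameter, IBIS, or AMI models.

```python
# Toy end-to-end link check: TX bits -> ISI channel -> 1-bit slicer with DFE -> BER.
import numpy as np

rng = np.random.default_rng(7)
bits = rng.integers(0, 2, 20_000)
tx = 2.0 * bits - 1.0                      # NRZ symbols: 0 -> -1, 1 -> +1

# Discrete-time channel: main cursor plus two post-cursors (ISI), plus noise.
# These tap values are assumptions standing in for an extracted channel model.
h = np.array([1.0, 0.6, 0.3])
rx = np.convolve(tx, h)[: len(tx)] + rng.normal(0.0, 0.15, len(tx))

def slice_with_dfe(samples, post_taps):
    """Slicer with a decision-feedback equalizer: subtract the ISI
    contributed by past decisions before making the current one."""
    decisions = np.zeros(len(samples))
    for n in range(len(samples)):
        isi = sum(post_taps[k] * decisions[n - 1 - k]
                  for k in range(len(post_taps)) if n - 1 - k >= 0)
        decisions[n] = 1.0 if samples[n] - isi > 0 else -1.0
    return decisions

raw_errors = int(np.sum((rx > 0) != (bits == 1)))
eq = slice_with_dfe(rx, post_taps=h[1:])
dfe_errors = int(np.sum((eq > 0) != (bits == 1)))
print(f"bit errors without equalization: {raw_errors}, with 2-tap DFE: {dfe_errors}")
```

Even at this toy scale the point stands: the raw channel fails while the equalized link passes, and that is only visible when transmitter, channel, and equalizer are simulated together.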

Faisal: My perspective is a little different. I'm more of a chip designer, an analog designer by trade. In my view, the boundary between analog and digital is artificial. A lot of the digital stuff that you see in advanced nodes is actually pretty analog. If you want to build a standard cell, an inverter is looking more like an amplifier. If you want it to perform at multi-gigahertz speeds, you need to have a lot of the analog concepts in mind when you're designing it. Over the next 5 to 10 years, the boundary between what people call analog versus digital probably will get blended. Some of the digital guys will have to train in more of the analog concepts. And the analog folks will have to learn some of the software, as well as the digital, which will have an impact on performance. And then, coming in from the silicon side and the IP side, it's really interesting what's happening. If you want to design anything for 2nm, even digital is going to look like analog. But at the same time, with chiplets, people don't want to redesign analog over and over. That's one of the drivers for keeping chiplets at an older node where analog is happier, and then only moving your digital logic. You still need an interface to do something, and that interface is analog. And there's another step here. Analog is going to look more like RF. It's already happening. When somebody designs the clock network on a big SoC, people are doing HFSS (high-frequency structure simulator) simulations to design their network on a big digital chip. But that's an RF problem. It's an analog signal. So there is all kinds of stuff happening, and your digital team needs to put that together, and they need to learn new skills in new areas. I see a lot of blending happening now, and it will continue in the near future.

Elhak: It's true that the building blocks of digital are becoming analog, but it's also blurring from the other side, because analog also is becoming digital. There are two main trends. One is what Pradeep mentioned about SerDes. In general, there are two types of circuits that exist on digital SoCs that are analog, if we don't count memory. There is the memory peripheral, and there is the high-speed I/O — the SerDes. Those are designed in the same node that the digital SoC is designed on. So when the digital SoC goes to 18 angstroms, analog is going to 18 angstroms with it. These blocks cannot be designed the same way analog circuits were designed before. They have big digital content. They have firmware. They need to be calibrated for process issues. They need to be programmed. So there is more digital content now in these analog blocks than in traditional analog. The second trend is the traditional analog, which includes amplifiers and data converters and PLLs. Analog companies are now moving these blocks from basic components to systems, and these systems also include processing and digital and firmware. The amplifier is no longer just an amplifier. It's part of a bigger system with software, and digital for programming, control, and configurability. So even those traditional analog blocks, like RF radios, are encapsulated by a big digital layer on top. That creates a big difference in how analog circuits are designed.
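
As a toy illustration of the calibration loop Elhak describes, the sketch below binary-searches a trim-DAC code to null a comparator's process-dependent offset, the way on-chip firmware might. The offset range, DAC resolution, and threshold behavior are all assumptions made for illustration.

```python
# Firmware-style calibration sketch: find the trim-DAC code that cancels
# a comparator's random process offset via binary search.
import random

OFFSET_MV = random.uniform(-20.0, 20.0)   # unknown process-dependent offset
DAC_BITS = 8
DAC_STEP_MV = 40.0 / (1 << DAC_BITS)      # assumed trim range covers +/-20 mV

def comparator(trim_code: int) -> bool:
    """True if the trimmed input still reads above zero (offset not yet cancelled)."""
    trim_mv = (trim_code - (1 << (DAC_BITS - 1))) * DAC_STEP_MV
    return OFFSET_MV + trim_mv > 0.0

def calibrate() -> int:
    lo, hi = 0, (1 << DAC_BITS) - 1
    while lo < hi:                         # SAR-style binary search over trim codes
        mid = (lo + hi) // 2
        if comparator(mid):
            hi = mid                       # result still positive: push trim lower
        else:
            lo = mid + 1
    return lo

code = calibrate()
print(f"offset {OFFSET_MV:+.2f} mV cancelled with trim code {code}")
```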

SE: What you’re describing is a mix of analog and digital, whether it’s big D/small A, or big A/small D, and that’s been going on for a while. But as we get into chiplets, does analog now stay as a mixed-signal type of design, or does it just become a component developed at whatever is easiest, so maybe 90nm or 65nm, as opposed to shrinking everything down to 3nm or 2nm?

Thiagarajan: Analog will always exist. At the heart of everything, you need the time-varying voltage signals and currents, which eventually set the tone for conversion to digital bits. So analog is there to stay. Now, when you talk about chiplets, the beauty is that you now can do a transmitter design in a certain process technology and you can do a receiver design in a different process technology. And then you just have to co-verify it using a mixed-signal methodology. That's the advantage of chiplets. It can be completely different process nodes, completely different companies that implement it. As long as you have a way to co-simulate it, then you can build any kind of multi-die heterogeneous chips that you want.

SE: But they really don’t go together that easily, right?

Thiagarajan: Not yet. That’s why you need a multi-technology simulation capability.
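
A minimal sketch of what such a capability has to check, with invented numbers: a transmitter behavioral model from one assumed process drives a receiver model from another, and a common testbench flags interface-level violations. A production flow would use each die's own SPICE or behavioral models rather than these toy classes.

```python
# Toy "multi-technology co-simulation": TX from one process, RX from another,
# checked against a shared die-to-die interface.
from dataclasses import dataclass

@dataclass
class TxModel:
    vdd: float                 # supply of the transmit die (assumed)
    def drive(self, bit: int) -> float:
        return self.vdd if bit else 0.0

@dataclass
class RxModel:
    vdd: float                 # supply of the receive die (assumed)
    def sample(self, v_in: float) -> int | None:
        vih, vil = 0.7 * self.vdd, 0.3 * self.vdd   # assumed input thresholds
        if v_in >= vih:
            return 1
        if v_in <= vil:
            return 0
        return None            # indeterminate region: interface violation

tx, rx = TxModel(vdd=0.75), RxModel(vdd=1.2)
for bit in (0, 1):
    got = rx.sample(tx.drive(bit))
    status = "ok" if got == bit else "LEVEL VIOLATION"
    print(f"sent {bit}, received {got}: {status}")
```

Neither die's model is wrong in isolation; the violation only appears when both technologies are simulated against the same interface, which is the point of co-simulation.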

Faisal: I have a little bit of a contrarian point of view. People talk about big digital/small analog, but if you look at a digital chip, what are your largest networks by node count? The pure definition of a network is a bunch of nodes connected together with information moving between them. The biggest one is power delivery. The second biggest is clock. And then maybe it's your data movement. Those are analog structures, and it's analog designers who are designing those things. So from a compute perspective you can say a lot of it is digital, but the biggest problems in the chips are analog, even at 3nm and 2nm, where it's mostly digital circuitry. The bottleneck is electrical. Analog is here to stay, and digital will become analog before we know it. And then maybe the next step is quantum. But most of my team is analog, and I love hanging out with the analog designers because they are multi-dimensional thinkers.

Thiagarajan: Exactly. And debug is extremely important, because you have these complicated mixed-signal structures that you no longer can just simulate by themselves. You have to simulate a bigger cross-section with the power network, signal interconnectivity, and noise on different voltage domains, extracting it and simulating it. But if something goes wrong, what are you going to do? You need to have a Plan A/Plan B debug capability to turn on or turn off a backup mechanism. That's going to affect your area, unfortunately. But you have to consider that as you go into these complicated, smaller processes.

Pujol: If you go high in frequency, then every interconnection becomes a problem. Even in big digital chips or boards, most of the digital problems are coming from propagation. We also need to bring in electromagnetic extraction, because that's very important. Even for the power delivery network, you have issues with the placement of vias. Most of the digital is standardized. With chiplets, packages, and boards, they all boil down to interconnect problems, filtering problems, and a way to evacuate the heat. With a power delivery network and RF, we need to make sure all the blocks are working together.
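
Two of the interconnect effects Pujol mentions reduce to quick arithmetic. The sketch below computes the reflection from an impedance mismatch and the flight time across a package trace; every number in it (impedances, trace length, dielectric constant, data rate) is an assumption for illustration, not the output of an electromagnetic extraction.

```python
# Back-of-envelope interconnect checks: reflection coefficient and flight time.
import math

z0 = 50.0            # assumed trace characteristic impedance, ohms
z_load = 65.0        # assumed mismatched load (e.g., via/ball), ohms
gamma = (z_load - z0) / (z_load + z0)          # reflection coefficient
print(f"reflection coefficient: {gamma:+.3f} "
      f"({abs(gamma) * 100:.1f}% of the incident edge bounces back)")

length_mm = 12.0     # assumed package/interposer trace length
er_eff = 3.8         # assumed effective dielectric constant
c_mm_per_ns = 299.79
delay_ns = length_mm * math.sqrt(er_eff) / c_mm_per_ns
print(f"propagation delay: {delay_ns * 1e3:.1f} ps "
      f"(vs. a 62.5 ps unit interval at 16 Gb/s)")
```

With these assumed numbers the trace delay already exceeds one unit interval, which is why propagation, not logic, dominates the problem.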

SE: But it’s separate now, right?

Pujol: Yes, and that’s the problem. It needs to be co-designed. If you separate it, which is what’s been done for years, where you separate RF and analog and digital, that creates a lot of issues.

Elhak: With multi-die, analog can stay on whatever node it's designed in. You don't need to design it again. But there needs to be communication between these dies, whether that's UCIe or some other standard. That communication is done either as analog over copper or even as photonics. It may need a very high-frequency analog circuit on each of those dies so they can communicate with each other. So even with multi-die, it's not just an opportunity for analog to stay on the older nodes. It's also bringing analog to the most advanced ones.

SE: And that’s a key point. You still need to move data, which affects your power budget. And if you have noise in one area, how does it affect another area? Is that still a problem? Can you buffer it in different ways than we used in the past? And if so, what impact does that have?

Faisal: Right now the way design is split is that there is analog design, digital design, and signal translation from RTL to analog to control it. That has to be a lot more involved, and even RF needs to come in. Ideally you have infinite compute and SPICE licenses, because when you put a boundary between different parts of digital and analog, you end up simplifying it, so you're more pessimistic than not. In digital chips, when you are margining an interconnect, 10% or 20% of your cycle time is your margin. But when you look under the hood, you may be able to optimize 5% out of that. Many tens of watts of power can be saved at the chip level. But the only way to get that pessimism out is to have RF insights into it. So now, if you add 3D to it, you have another level of boundaries. It could be different process nodes. It could be completely different foundries that it comes from. It's a thicker boundary than before, when it was just analog going to digital, with the same ground for everything. How do you hook up a ground plane for a 3D-IC? Your RF return paths for the currents are going to be longer. So the boundaries are going to be thicker and the problems will be more complicated. But there also will be more opportunities.
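
Faisal's margin arithmetic can be made concrete with a back-of-envelope calculation. The sketch below assumes a 150 W digital core, a crude delay-versus-voltage model, and 5% of cycle-time pessimism reclaimed; every number is an assumption, but it shows how a few percent of timing margin turns into double-digit watts.

```python
# Back-of-envelope margin-to-power conversion: reclaimed cycle-time guard-band
# allows a lower supply at the same frequency, and dynamic power scales as f*V^2.
f_ghz = 3.0                 # assumed clock frequency (held constant)
vdd = 0.80                  # assumed nominal supply, volts
p_dyn_w = 150.0             # assumed dynamic power of the digital core, watts

margin_reclaimed = 0.05     # 5% of cycle time recovered from interconnect pessimism
# Crude delay model: path delay ~ 1/V, so a 5% shorter required delay
# allows roughly a 5% lower supply at the same frequency.
vdd_new = vdd * (1.0 - margin_reclaimed)
p_new = p_dyn_w * (vdd_new / vdd) ** 2        # P ~ f * V^2, f unchanged
print(f"supply: {vdd:.2f} V -> {vdd_new:.2f} V")
print(f"dynamic power: {p_dyn_w:.0f} W -> {p_new:.1f} W "
      f"(saves {p_dyn_w - p_new:.1f} W)")
```

On a larger chip the same few percent of reclaimed margin scales toward the many tens of watts Faisal describes.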

Thiagarajan: There are a couple different ways to tackle this. Detectability is first. You have to be able to detect where the problem is in your multi-die channel. Once you detect it, then replaceability is next. It's almost like a divide-and-conquer scheme, where you identify the problem and replace it. Can you cut this piece of the die out and replace it with something else if you know there's a problem on that die? That's one aspect. And you can drill a little deeper within that die. Let's say there's a PLL that is starting to malfunction because of a localized heating area. Can you be smart and pre-design a dual-PLL scenario as a backup for this? That's an inherent thing. But if there's a problem in the package or the interposer, then what do you do? Is there even a way to have a backup for that? Maybe you could if you had a substrate and then an interposer scheme. If there's a problem in the substrate, you kill the substrate in future chips. If there's a problem in the interposer, you kill that component. So you identify where the problem is, isolate it, and try to find a way to inherently correct it. But if you can't do that, you replace it.
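
As a sketch of the pre-designed backup Thiagarajan describes, the code below models a watchdog that polls a lock indicator and muxes the clock over to a spare PLL when the primary loses lock. The PLL behavior and failure probability are invented for illustration.

```python
# Dual-PLL failover sketch: watchdog polls lock and switches to the spare.
import random

class Pll:
    def __init__(self, name: str, fail_prob: float):
        self.name, self.fail_prob, self.locked = name, fail_prob, True
    def poll(self) -> bool:
        if self.locked and random.random() < self.fail_prob:
            self.locked = False           # e.g., localized heating upsets the loop
        return self.locked

primary = Pll("PLL0", fail_prob=0.02)     # assumed per-cycle failure probability
backup = Pll("PLL1", fail_prob=0.0)       # spare kept warm as the Plan B path
active = primary
for cycle in range(200):
    if not active.poll():
        print(f"cycle {cycle}: {active.name} lost lock, muxing to backup")
        active = backup                   # clock mux switch to the spare PLL
print(f"final clock source: {active.name}")
```

The spare PLL costs area whether or not it ever switches in, which is exactly the trade-off flagged above.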

Faisal: That's similar to the reliability techniques the digital guys are very familiar with, like error checking everything. That's now going into mixed-signal design. The analog guys are leveraging redundancy and error checking to make sure their mixed-signal design is flawless. Redundancy is really key, but you can't do that at 2nm because you can't fit any more in. So it requires more intelligent design techniques.

Pujol: One problem is that the workflows are different. At some point we will need to unify some of the workflows. If you look at digital, everything is pretty much synthesized and streamlined with a lot of verification. That's less true in analog, and even less true in RF, where it's more artisan work. But there are things that can be taken from digital for analog, as well as for RF. So we need to co-design this. But if it's in a sandbox you will run into trouble, because some of the proven technologies we use in digital have not gotten to RF. That means the swap-and-replace that you would like to have would be much more difficult, because we don't have a streamlined workflow that starts from digital and goes to RF. Even if analog is on top, or analog/RF is on top, the workflow that is in digital needs to be sent back to the other parts. Otherwise, all of the optimization that we need in order to gain some power and remove heat will be lost.

Thiagarajan: We need AI/ML in analog.

Elhak: I agree. Verification is key here, but it starts much earlier than that. The way we are describing how analog circuits are changing, like moving up in frequency to become just like an RF circuit, moving to advanced nodes, and analog companies moving from components to systems — all of this requires co-design, not just co-simulation or verification. If you have a multi-die design, where you have analog, RF, photonics, and digital, it needs to be designed as a system.
