Making Analog Easier

Pushing polygons may seem somewhat removed from system-level design, but new techniques are coming to an SoC near you.


By Clive “Max” Maxfield

I’m a digital design engineer by trade. All of those wibbly-wobbly effects that are characteristic of the analog domain make me nervous, and if something makes me nervous I tend to look the other way and hope it will go away. But analog isn’t going anywhere. On the contrary, the increasing amounts of analog/mixed-signal (AMS) functionality that feature in today’s System-on-Chip (SoC) designs are making AMS the gating factor to success.

For those of us who come from a digital background, it can be difficult to wrap our brains around what’s happening in the analog realm with regard to design and physical implementation. So, just to set the scene, let’s start by considering a high-level view of the digital SoC design flow; we’ll then contrast this with its traditional analog counterpart; and finally we’ll consider some incredibly cool “stuff” with regard to analog design and physical implementation that’s heading towards us like a runaway express train.

One of the things that characterizes the digital portion of a modern SoC design flow is the extreme amount of automation that’s involved. The whole process starts when someone gets a capriciously cunning idea as to “the next big thing,” as illustrated in Figure 1.


Figure 1. A high-level view of the digital design flow.

There are two main concepts that are central to the digital flow: the use of intellectual property (IP) and the combination of high-level representations and synthesis technology. Early in the process, for example, the digital design team will select a bunch of IP blocks from their grab-bag of goodies—perhaps a CPU and/or a DSP, maybe a handful of peripheral and accelerator cores, possibly some interface functions, and so forth. This IP can account for a very large piece of the puzzle in terms of the SoC’s overall functionality.

When it comes to the “secret sauce” that will differentiate this product from its competition, the digital design engineers typically describe the required functions at the register transfer level (RTL) of abstraction (the IP blocks will also typically be specified in clear, encrypted, or obfuscated RTL).

Following a quick functional simulation (yes, I really am glossing over the complexities), synthesis technology is used to translate all of the high-level blocks forming the design into their gate-level equivalents. We then have access to incredibly sophisticated technology to generate a floorplan and to place the gates. This is followed by mind-bogglingly clever automated routing and optimization. In turn, this is followed by parasitic extraction, whose results are used to further refine simulation, timing analysis, signal integrity analysis, and so forth.

Once again, the above is a very high-level and simplistic view of the process that is intended only to illustrate the extreme amount of automation that permeates the digital portion of the flow.

This style of working – the use of IP along with RTL representations and synthesis technology – makes it relatively easy to capture and implement the digital portion of the design. (By “relatively easy,” I mean as compared to doing everything at the gate-level by hand. Can you imagine capturing a large SoC design as a bunch of logic gates and then placing and routing these gates by hand? As we shall see, that’s what the analog folks have to do.)

Furthermore, this style of working facilitates one’s ability to migrate a design from one foundry to another and/or one technology node to another. If you have an existing design at 65 nm and you want to migrate it to a 45 nm process, for example, you just swap the cell library, modify your constraints, press the “Go” button, and let the synthesis, floorplanning, placement, and routing engines perform their magic (I know this process is nowhere near as easy as I’m portraying it here … but it would certainly appear to be this straightforward to any analog folks looking at this flow).

A Conventional Analog Design Flow

So, what do you think life is like on the analog side of the fence? What amazingly cunning tools do those guys typically have at their disposal? Honestly, when you discover the truth, it makes you want to cry. Let’s consider a high-level view of the analog portion of the flow as illustrated in Figure 2.


Figure 2. A high-level view of the analog design flow.

Here’s the way it goes: We start with someone having a bright idea (with regard to the analog portion of the SoC), and everyone jumps up and down saying how wonderful it is. So all we have to do is implement it.

Analog IP? Don’t make me laugh (at least not process-portable analog IP). The best we can hope for is that we might be able to re-use some transistor-level schematics as starting points for portions of the design. The rest of the design will be captured as new transistor-level schematics.

Simulation is performed using a SPICE-like simulator or a fast-SPICE equivalent; synthesis doesn’t come into the picture at all. Generating a floorplan and placing the transistors and other components is performed by hand. Similarly, routing the design is performed by hand.

If we peer back through the mists of time to the 1980s, parameters such as the sizes of the transistors and other components were specified by the circuit designers as attributes in the schematic. These attributes were then used by the SPICE simulator, and also by the layout designer, who literally generated the various components (and later the routing) at the polygon level. (Actually, this is something of a simplification, because some attributes were – and still are to this day – communicated to the layout designer as text annotations in the schematic, or via a separate text document, or by paper and pencil.)

Sometime around the early 1990s, Cadence introduced the concept of PCells (Parameterized Cells), which are described in a proprietary Lisp-like scripting language called SKILL. In this case, the circuit designers place PCell symbols in the schematic and then associate parameters with these entities.

Each PCell then uses its associated parameters to automatically generate the preliminary layout for that cell. The reason I say “preliminary” is that some layout designers will eventually convert the PCell representations into their polygon equivalents (this process is known as “smashing”) and then start “tweaking” these polygons by hand. The layout designers also have to do a bunch of other stuff like abutting, well-merging, interdigitation, and row-stacking in order to create a more compact layout. And then they get to do the routing by hand.

To provide a sense of scale, if we had a 30,000-transistor mixed-signal SERDES block, for example, the complete layout for this block could easily take a team of one or two layout designers two to three months (two-thirds of this would be the placement; one-third the routing).

I don’t know about you, but this doesn’t strike me as being a lot of fun.

A Brave New World

Wouldn’t it be great if the analog folks could use some special language to specify a function at a high level of abstraction, automatically synthesize this representation into an optimized transistor-level netlist, and then automatically place and route this netlist? Well, yes it would, but we aren’t there yet, so what can we actually do today?

One technique that has been around for quite some time is to hand-create a transistor-level netlist, to somehow specify which parameters can be varied and over what range of values, to define some way to measure the “goodness” of the output(s) and other criteria like power consumption, and to then kick off a long series of simulations that sweeps the various parameters across their respective ranges. The problem is that this approach works only with relatively small circuits, and even so it can take a huge amount of time and computational resources.
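
To make this concrete, here’s a minimal sketch of such a sweep in Python. (Everything here is hypothetical: `run_spice` stands in for whatever simulator wrapper you actually use, and I’ve substituted a toy square-law hand-analysis model so the sketch runs standalone.)

```python
import itertools
import math

def run_spice(netlist, params):
    # Hypothetical stand-in for a real SPICE run. In practice this would
    # invoke the simulator on `netlist` with `params` substituted; here a
    # toy square-law model supplies plausible numbers so the sketch runs.
    gm = math.sqrt(2 * 0.5e-3 * (params["W1"] / params["L1"]) * params["Ibias"])
    return {"gain": gm * 100e3, "power": 1.8 * params["Ibias"]}

def goodness(result):
    # Our measure of "goodness": reward gain, penalize power consumption.
    # The weighting is entirely the designer's call.
    return result["gain"] - 1e6 * result["power"]

# The parameters we are allowed to vary, and their candidate values.
sweep = {
    "W1": [1e-6, 2e-6, 4e-6],        # input-pair width (m)
    "L1": [0.18e-6, 0.36e-6],        # input-pair length (m)
    "Ibias": [10e-6, 20e-6, 40e-6],  # tail current (A)
}

# Brute-force sweep: one simulation per combination. Even this tiny
# example needs 3 * 2 * 3 = 18 runs; real sweeps explode combinatorially,
# which is exactly why this approach only works for small circuits.
names = list(sweep)
best_score, best_params = float("-inf"), None
for values in itertools.product(*(sweep[n] for n in names)):
    params = dict(zip(names, values))
    score = goodness(run_spice("opamp.sp", params))
    if score > best_score:
        best_score, best_params = score, params

print("Best configuration:", best_params, "with score", best_score)
```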

One company that is doing some really exciting things in the analog/mixed-signal arena is Magma Design Automation, with its AMS design platform called Titan. Using Titan acceleration technology, designers can code AMS functions as equations. Once the equations have been captured, the user can specify a target process/technology and Titan will generate an optimized implementation for that function.
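
I haven’t seen Titan’s input format, so don’t take the following as anything other than my own back-of-the-envelope illustration of the general idea: classic square-law hand-analysis equations, written down once, and then solved for whatever target specification and process you throw at them.

```python
import math

def size_for_bandwidth(gbw_hz, c_load, v_ov, kp, length):
    # Solve the textbook equations GBW = gm / (2*pi*CL) and
    # gm = 2*Id/Vov = sqrt(2*kp*(W/L)*Id) for the current and width.
    gm = 2 * math.pi * gbw_hz * c_load       # required transconductance
    i_d = gm * v_ov / 2                      # from gm = 2*Id/Vov
    width = gm**2 / (2 * kp * i_d) * length  # from gm = sqrt(2*kp*(W/L)*Id)
    return i_d, width

# The same equations retargeted to two "processes" simply by changing
# the process constants (the numbers here are made up for illustration).
for node, kp, length in [("65 nm", 400e-6, 65e-9), ("45 nm", 500e-6, 45e-9)]:
    i_d, w = size_for_bandwidth(gbw_hz=100e6, c_load=2e-12, v_ov=0.2,
                                kp=kp, length=length)
    print(f"{node}: Id = {i_d * 1e6:.1f} uA, W = {w * 1e6:.2f} um")
```

The point being that the equations get captured once; only the process constants change when you migrate.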

My understanding is that writing and testing the equations can add 20% to 50% to the overall design cycle, which is something of a pain. However, once you’ve done this the first time, you can re-use this function in future projects. This technique has several advantages, including facilitating architectural exploration and also facilitating the migration of functions from one process/technology node to another.

Another company that is well worth watching is Ciranova. A couple of years ago they came up with something called PyCells. These are the equivalent of PCells, except that they are captured in the open-source Python language. They also have PyCell Studio, which provides a complete standalone environment for creating PyCells that can be used with any OpenAccess tool (including tools from Cadence).
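
I have no intention of reproducing Ciranova’s actual PyCell API here; the following toy Python class is purely my own illustration of the underlying concept: parameters and technology rules go in, geometry comes out.

```python
# Toy illustration of the parameterized-cell concept in plain Python.
# (This is NOT Ciranova's PyCell API -- just the general idea.)

class ToyResistorCell:
    def __init__(self, resistance_ohms, tech_rules):
        self.r = resistance_ohms
        self.tech = tech_rules  # per-process constants and design rules

    def generate_layout(self):
        # R = Rs * (L/W), so the number of squares follows directly
        # from the requested resistance and the sheet resistance.
        width = self.tech["min_poly_width"]
        squares = self.r / self.tech["poly_sheet_res"]
        length = squares * width
        # A single rectangle stands in for the resistor body; a real
        # cell would also generate contacts, dummies, guard rings, etc.
        return [("poly", 0.0, 0.0, length, width)]

# Retargeting the same cell to a different process is just a matter of
# handing it a different set of technology rules (made-up numbers):
rules = {"poly_sheet_res": 10.0, "min_poly_width": 0.12e-6}
cell = ToyResistorCell(resistance_ohms=5000, tech_rules=rules)
print(cell.generate_layout())
```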

Now your first reaction may be: “Ho-hum, what’s all the excitement about PyCells?” Well, actually they are jolly exciting, because as part of their implementation the folks at Ciranova have managed to fully separate the design constraints from the implementation technology. And why is this important? Well, in 2008 Ciranova introduced a tool called Helix, which performs automatic floorplanning and placement of an analog design.

As part of this process, Helix automatically executes all of the tasks that layout designers traditionally perform by hand, including abutting, well-merging, interdigitation, and row-stacking… and the result is a correct-by-construction, production-ready, DRC-clean placement. (How is all this possible? Well, in addition to being fully multithreaded, Helix employs incredibly cunning genetic algorithms, but I can say no more about this because I am bound to secrecy.)
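
Ciranova hasn’t published the details, and I’m not going to guess at them, but the generic genetic-algorithm recipe itself is no secret. Here’s a toy Python sketch applying it to a deliberately simplified placement problem: ordering cells in a single row so as to minimize total wire length.

```python
import random

N_CELLS = 12
# Thirty random two-pin nets connecting pairs of cells.
NETS = [(random.randrange(N_CELLS), random.randrange(N_CELLS))
        for _ in range(30)]

def wirelength(order):
    # Fitness: total distance between connected cells in this ordering.
    pos = {cell: i for i, cell in enumerate(order)}
    return sum(abs(pos[a] - pos[b]) for a, b in NETS)

def crossover(p1, p2):
    # Keep a random slice of one parent, then fill in the remaining
    # cells in the order they appear in the other parent (so the child
    # is always a valid permutation, as a placement ordering must be).
    i, j = sorted(random.sample(range(N_CELLS), 2))
    child = p1[i:j]
    child += [c for c in p2 if c not in child]
    return child

def mutate(order):
    # Swap two randomly chosen cells.
    i, j = random.sample(range(N_CELLS), 2)
    order[i], order[j] = order[j], order[i]

# Evolve a population of candidate orderings.
population = [random.sample(range(N_CELLS), N_CELLS) for _ in range(50)]
for _ in range(200):
    population.sort(key=wirelength)   # fittest (shortest) first
    survivors = population[:10]       # simple elitist selection
    children = []
    while len(children) < 40:
        child = crossover(*random.sample(survivors, 2))
        if random.random() < 0.3:
            mutate(child)
        children.append(child)
    population = survivors + children

population.sort(key=wirelength)
print("Best wirelength found:", wirelength(population[0]))
```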

A design comprising a few hundred transistors can be fully placed by Helix in a minute or so; a design involving say 30,000 transistors might take a few hours (compare this to multiple layout designers slaving for weeks or months as discussed above). Quite apart from anything else, this dramatically changes the picture with respect to migrating an existing design to a new process/technology node as illustrated in Figure 3.


Figure 3. PLL netlist placed by Helix, transistors resized, same constraints, two technology files, runtime 30 seconds.

But wait, there’s more, because I hear that, slaving away in their secret underground bunker, the boffins at Ciranova (“They don’t let us out very often…”) are currently beta testing what they are at pains to call a “Trial Router.” Basically, this Trial Router can auto-route a design comprising a few hundred transistors in just a few seconds; a design involving say 30,000 transistors might take 30 minutes or so.

Now I’m not saying that you would take the results from this Trial Router and proceed directly to tape-out. In reality, the layout designers may end up throwing a lot (or all) of the Trial Route results away and re-doing it all by hand.

So what’s the deal? Well, the point is that the designers need to get a feel for the electrical performance that can be achieved by the design, and they need this information as speedily as possible. Thus, the reason this Trial Router technology is so exciting is that you can quickly extract highly accurate parasitic values for the circuit and get a really good feel for how the circuit will perform.

I mean, consider a design that would normally take two months to place and a further month to route by hand. If you can generate a production-ready placement in a couple of hours and then perform a first-pass Trial Route in 30 minutes, tell me that this isn’t exciting (I won’t believe you).

And as for the future…

As the American inventor Charles Franklin Kettering famously said: “My interest is in the future because I am going to spend the rest of my life there.” So what will the future hold with respect to analog synthesis?

Well, some folks think that true, top-to-bottom analog synthesis is a pipe-dream. Over the years there have been some amazing failures in this area, with companies claiming all sorts of things that never came to pass.

What about the technique of specifying a transistor-level netlist and then varying a bunch of parameters to determine the optimum circuit configuration? Well, this really falls under the heading of “optimization” rather than “synthesis”.

A slightly more sophisticated approach might be to specify a block in terms of its transfer function and to use a computer to sift through hundreds or thousands of different topologies, playing with the parameters for each topology. (When you come to think about it, this is really just circuit optimization with the addition of a couple more parameters.)
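
By “transfer function,” I mean something along the lines of the textbook second-order low-pass response, where the specification boils down to a corner frequency and a quality factor (this is just a generic example of the kind of starting point I have in mind, not any particular tool’s input format):

```latex
H(s) = \frac{\omega_0^2}{s^2 + \frac{\omega_0}{Q}\,s + \omega_0^2}
```

The hypothetical synthesizer’s job would then be to find a topology (Sallen-Key, multiple-feedback, gm-C, and so on) and a set of component values whose response matches this specification.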

I think that the folks at Magma are doing some very interesting work and have certainly solved part of the puzzle, but (trust me on this) they would be the last to use the term “analog synthesis,” which is regarded as bad karma by the majority of analog designers.

Meanwhile, the folks at Ciranova have taken the approach that computers are great at performing “grunt work”, so they’ve automated the drudgery of placement and (“Trial”) routing. Although this doesn’t sound glamorous, it’s actually a rather amazing achievement.

But as for the ability to create a high-level specification and then synthesize a circuit topology… well, I’m not so sure (please feel free to quote me on this).


