Leveraging The Past

First of two parts: Not every design today is targeted at 20nm. In fact, a large number of designs utilize the stability and reliability of less-than-cutting-edge manufacturing processes and add the latest design and verification techniques.

By Ann Steffora Mutschler
It’s easy to forget that not every design today is targeted at 20nm, given the amount of focus put on the bleeding edge of technology. But in fact a large number of designs take advantage of the stability, reliability and lower mask costs of older manufacturing nodes while incorporating new design and verification techniques, with 2.5D designs being a prime example.

“It’s interesting because with a lot of these older processes there are a lot of interesting things you can still do with them—if you’re willing to wring out the optimum solution that you want,” pointed out Kevin Kranen, director of strategic alliances at Synopsys. “There’s a lot of ways you can go. You can fit some pretty interesting stuff onto a 180nm or 130nm process that’s been fully depreciated and do some neat things. All across the board from the digital side you can actually fit a fairly significant processor.”

For example, companies such as ARM and its IP partners (Synopsys included) build smaller 32-bit processors that take up only a small amount of the die even on a 130nm or 180nm process, he said.

Carey Robertson, product marketing director at Mentor Graphics, said part of the lure of an older process node is the reliability. “One of the reasons why we see a heightened awareness or heightened concern about reliability is that these sets of designs are more complex than the first sets of designs coming through 65 and 40. What we mean is that the first sets of designs through any new node are the leading-edge digital or the leading-edge chips for mobile and wireless and small form factor applications for low power—the low-power, high-performance type of applications.”

Those first-node applications have fairly consistent power environments. If they’re low power, they’re going into the lowest voltage possible. For high-performance digital, such as high-performance computing, there too it may not be the lowest Vdd but it’s well understood and not terribly complex. “You want to get the smallest chips or the smallest consuming power,” he said.

What’s different, technically, is that many of the chips coming through the 65nm and 40nm nodes now are the automotive chips. These are high-voltage chips with multiple Vdds and higher voltages. “It’s not uncommon to talk about Vdds of 5, 10, even 50 volts. And they are not operating, like the leading-edge digital, in a nice box or a server farm that’s air-conditioned. We’re talking about high-voltage applications that exist in high, high temperatures that are near brake pads and much more complicated environments,” Robertson said.

No matter what the node, circuit designers have always had to deal with process variation, noted John Stabenow, group director of Virtuoso product marketing at Cadence. “Twenty-five to 30 years ago, some of that work might have been a slide rule problem—that is, trying to model a device and see how it might behave in various circumstances. EDA automated some of that. In the last 20 years, the automation in the digital world has been great, but in the analog domain it’s never really taken hold.”

The amount of new technology getting used upstream comes down to that variation. And managing that variation includes simulation, using models from the foundry, he explained. Those models are robust but they represent an ideal simulation based on temperature, for example. “The first thing that designers are finding they want to do more and more is the sensitivity analysis: Where is the circuit sensitive to variation and what do you do about it? But one of the things that happens at advanced nodes is those Monte Carlo runs can turn into hundreds of thousands of runs or more, so it’s a very taxing issue for simulation.”
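To make the idea concrete, here is a minimal, hypothetical sketch of a Monte Carlo sensitivity screen. The circuit metric, parameter names and distributions are invented for illustration and stand in for what a SPICE simulator and foundry model cards would actually provide; the point is only to show how run counts grow and how per-parameter sensitivity can be estimated from the samples.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical process parameters (mean, sigma) -- placeholders for
# what a foundry model card would actually supply.
params = {
    "vth":  (0.45, 0.015),    # threshold voltage (V)
    "tox":  (2.1e-9, 5e-11),  # oxide thickness (m)
    "leff": (1.3e-7, 4e-9),   # effective channel length (m)
}

def circuit_metric(vth, tox, leff):
    """Stand-in for a SPICE run: returns a made-up delay figure.
    In practice every evaluation would be a full circuit simulation."""
    return 1e-9 * (vth / 0.45) * (tox / 2.1e-9) ** 0.5 * (leff / 1.3e-7)

n_runs = 10_000  # real sensitivity studies can reach hundreds of thousands of runs
samples = {name: rng.normal(mu, sigma, n_runs) for name, (mu, sigma) in params.items()}
metric = circuit_metric(samples["vth"], samples["tox"], samples["leff"])

# Rank parameters by correlation with the metric -- a crude sensitivity screen.
for name, vals in samples.items():
    corr = np.corrcoef(vals, metric)[0, 1]
    print(f"{name:5s} correlation with delay: {corr:+.3f}")

print(f"delay mean = {metric.mean():.3e} s, sigma = {metric.std():.3e} s")
```

Correlation-based ranking is the simplest possible screen; commercial tools use far more sophisticated sampling and variance-reduction schemes, which is what makes the large run counts described above tractable.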

As EDA vendors improve sensitivity analysis for Monte Carlo runs at the advanced process nodes, those lessons are being leveraged upstream at older nodes. But that doesn’t mean there aren’t other challenges.

“When you think of someone doing a 90nm process node, or 130 or 180, they’ve been doing it today the way they did it five years ago, so when we try to introduce something like this on its own just out of the blue, we get this objection, ‘I’ve been doing this forever the way I do it. I’m successful. Why should I change?’ But when that same company—we work with companies that are going from 20 all the way to 180nm—starts to see the advantage as described with the Monte Carlo optimization, that’s when you open the door to circuit designers upstream.”

Cadence said it is currently engaged with a customer looking at Monte Carlo optimization at 130nm, while Mentor Graphics has worked with TowerJazz for PERC rule checks at 130 and 180nm.
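For a sense of what such a rule check involves, here is a toy sketch in the spirit of an electrical-rule (PERC-style) deck. This is not Calibre PERC syntax, and the net names, device ratings and data model are all invented; it only illustrates the kind of voltage-domain reliability rule these checks encode.

```python
# Illustrative only: flag devices whose voltage rating is below the supply
# domain they are tied to. All data here is fabricated for the example.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    rated_volts: float   # maximum voltage the device is qualified for
    net: str             # supply net the device is connected to

supply_domains = {"VDD_CORE": 1.2, "VDD_IO": 5.0, "VDD_HV": 48.0}

devices = [
    Device("M1", rated_volts=1.5, net="VDD_CORE"),
    Device("M2", rated_volts=5.5, net="VDD_IO"),
    Device("M3", rated_volts=5.5, net="VDD_HV"),   # under-rated: should be flagged
]

violations = [d for d in devices
              if supply_domains.get(d.net, 0.0) > d.rated_volts]

for d in violations:
    print(f"VIOLATION: {d.name} rated {d.rated_volts} V but tied to "
          f"{d.net} ({supply_domains[d.net]} V)")
```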

Who is taking advantage?
Overwhelmingly, the application area that is recognizing the reliability, cost and technical benefits of older process nodes is the automotive space. “We’re seeing more forward thinking on advanced techniques from the automotive folks than there are in other places,” Cadence’s Stabenow said. “And it’s not a negative, but they’re on old process nodes. Some of these guys are using 0.25-micron, 350-nanometer, high-voltage type devices.”

Paul Lai, manager of strategic alliances at Synopsys, observed that there is a huge demand for the 0.18-micron process for automotive applications where engineering teams are looking for high-voltage processes and want the chip to work for the next 10 or 20 years without breaking down.

Also in the automotive space, “One of the markets we’ve seen pop up recently is 700-volt devices for driving LED lights—and that requires a lot of onboard digital to control and manage all of the power in such a small device—power management, thermal management, and all that. We do see a lot of innovation going on with what used to be mainstream technologies driven really by new markets,” noted Ed Lechner, director of product marketing for analog at Synopsys.

Another concept that contributes to the longevity of these nodes is the interest in chip stacking or the integration of multiple individual dies, Mentor’s Robertson said. “That idea is either with an interposer layer putting multiple dies in different nodes or into a 3D stack. For the 65 and 40nm nodes, that’s where you’re going to put your more sophisticated analog parts that don’t need the 28 or 20nm density, but they will be higher voltage, probably with different thermal characteristics, and there again different reliability concerns than the first designs that came through 65 or 40.”

In all fairness, it’s not always smooth sailing, as engineering teams question whether they can take advantage of tools that did not exist when these nodes were new, such as tools to optimize, redesign or be more efficient at the node, observed Dr. Shafy Eltoukhy, vice president of manufacturing operations at Open-Silicon.

This tends to be the case with derivatives of older products already in the market, where the derivative needs different functions, needs lower power, or needs to target a consumer or low-power market, and the original process node cannot deliver the power or speed being sought. In that case the design probably has to start almost from the very beginning, because a lot of IP and other content has to be ported, Eltoukhy said.

Coming in part two next month: Design challenges of working at older process nodes.


