Back To The Future

More functionality, updated processes and a push for more reliability are changing the design process as far back as 180nm.


The push to the next process node typically has meant that designs get simpler at existing and older nodes because the process technology is more mature and there have been so many chips developed at those nodes—many billions of them—that every possible corner case has been encountered hundreds, if not thousands, of times.

That all makes sense in theory, but several key things have changed:

  • Well-tested processes are being tweaked to reduce voltages and to cut leakage current, which has roughly doubled at each new node.
  • More functionality means that even at well-established nodes, some of the same techniques used at advanced nodes, such as multiple power domains, now need to be part of the planning process.
  • Many of the chips being developed at these established nodes are being used in safety-critical applications, or in automotive applications where they are expected to be more reliable and last for a decade or more. That makes the verification much more stringent, and what used to be done on a spreadsheet now requires some of the most advanced methodologies and tools available.

Despite all of this, building at established nodes remains the simpler, faster, and less costly option for chipmakers that aren’t required to move to the next node for reasons such as form factor, density, or because their competitors are all moving forward. But fewer companies can afford the alternative. The economics of that decision rest largely on either massive volume to recoup NRE, or on price insensitivity, as with a server chip. Even there, the number of companies moving forward is dwindling, while the number of companies working at established nodes is increasing.

“Not many people are going to migrate to the most advanced nodes,” observed Radhakrishnan Pasirajan, vice president of silicon engineering at Open-Silicon. “That will require multi-patterning, and most companies don’t need it. For many applications, 28nm is fine. For some, 130nm is okay.”

Pasirajan noted that most of the process changes at established nodes are refinements to enhance performance or lower power. “We’re also seeing interest in FD-SOI, and we’ve done a pilot run for that. From a power point of view, it’s very good. You can operate at multiple voltage levels. At 28nm, your power is less for the same complexity, and it’s about 20% fewer masks than moving to finFETs. The only limiter is that it is not high frequency.”

This is particularly good for applications across many vertical markets—think automotive, industrial, medical and even consumer—where performance isn’t as critical as it is for a computer processor.

“The key is don’t count these nodes out yet,” said Mike Gianfagna, vice president of marketing at eSilicon. “There’s a higher level of integration than what we saw before. This isn’t your grandfather’s process technology where there was one power domain and fewer blocks. We’re seeing the integration of MEMS sensors, power controls, fusion sensors—which at this point aren’t well understood—and we’re still seeing new yield learning on older processes. But even with all of that, it’s a lot less engineering effort to turn out a chip than at 20 or 16nm, which makes it very attractive.”

New pitfalls
This isn’t all as simple as it sounds, though. One of the big changes at established nodes involves more voltages. While it still isn’t possible to use 0.7-volt memories with a 130nm process technology, there is an effort to lower voltages even at older nodes.

“We do see foundries tightening margins and lowering voltages,” said Steve Carlson, group marketing director at Cadence. “We’re also seeing design teams using more advanced tools at these nodes, especially on the validation side as more technologies are being combined, as well as system-level analysis, timing and planning. That could include flash, RF and logic, along with more high-speed IP. If you can lower the Vdd on IPs, that’s a big deal at these nodes. But if you lower the voltage and try to use the same cell, some of the cells will break, so you need to tighten the margins.”

This is good news for the EDA industry, because it means a growing number of the most advanced tools and new IP are being sold at established nodes, where historically there has been only limited growth.

“There is a lot of very complex work going on at established nodes,” said Mary Ann White, director of product marketing for the Galaxy Design Platform at Synopsys. “TSMC is releasing ultra low power versions of 55nm and 40nm. Ultra low power is extremely popular there. This is a change from a design perspective because the IoT is all about power. That means you need to do things at these nodes that you didn’t do before, including multiple power domains and dealing with larger designs even at established nodes. We’re also seeing hierarchical flows at these nodes, which never happened before.”

Still, as Carlson noted, lowering the voltage also can have an impact on reliability. You can’t take voltage down to 0.6 volts or 0.5 volts at 90nm. But you can push it further than it was pushed in the past.

“Going from 1 volt to 0.8 volts, it will still be reliable,” said White. “Going from 1 volt to 0.7 or 0.65 volts, it won’t work.”
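The voltage figures White cites map directly onto the classic CMOS dynamic-power relation, P ≈ C·V²·f: because power scales with the square of supply voltage, even a modest Vdd reduction pays off disproportionately. A minimal sketch (an illustration, not anything from the article or a specific foundry's data) of that scaling:

```python
def dynamic_power_ratio(v_new: float, v_old: float) -> float:
    """Return new dynamic power as a fraction of the old.

    Assumes switched capacitance C and clock frequency f are held
    constant, so P = C * V^2 * f reduces to a (V_new / V_old)^2 ratio.
    """
    return (v_new / v_old) ** 2

# Dropping Vdd from 1.0 V to 0.8 V cuts dynamic power to 64% of the
# original -- a 36% saving from a 20% voltage reduction.
print(f"{dynamic_power_ratio(0.8, 1.0):.2f}")
```

In practice the saving is smaller than the square law suggests, since frequency often has to drop with voltage and leakage doesn't follow the same curve, but the quadratic term is why lowering Vdd on IP "is a big deal at these nodes."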

Making all of this work means the tools also need to be tweaked to deal with these issues. It’s a lot of work on the part of EDA companies, even though engineering teams typically don’t see it.

“Most of the tools issues are inside baseball,” said Joseph Sawicki, vice president and general manager for the Design-to-Silicon division of Mentor Graphics. “We might qualify the parasitics at 28nm and tune for a new process, but the customer never actually sees that. There is three months of pain on our side.”

Companies wouldn’t undergo that pain if they didn’t see an upside, though. “It does look like established nodes will have longer lives, and 28nm is the poster child for that,” said Sawicki. “While it is not a huge market yet, there is a noticeable impact on our business. What’s also true is that we’re seeing designs happening faster, but they are better and more complete and more reliable.”

Rethinking the system
With this push into more complex chips at established nodes, another mindset is required. While people have been talking about SoCs for the better part of a decade, the reality is that they have been mainstream only for the past few process nodes. Turning older nodes into SoC platforms is as significant for companies just moving to those nodes as it was for early adopters of this approach—with a twist: the tools and the methodologies are now much more established, and everyone understands where the corner cases are and what to do about them.

“In the past, there might have been two or three microcontrollers, whereas now they’re designing this all as a system,” said Kurt Shuler, vice president of marketing at Arteris. “So you may have one ARM CPU doing three different tasks for functions that were previously in three MCUs. You certainly need to manage the power domains and frequency domains, and we’re seeing a lot of attention being paid to that. The consumer electronics guys are used to that, but the industrial and automotive guys are not.”

There are distinct advantages to using established nodes for even some advanced functions, though, particularly for those kinds of markets.

“This is a direct result of using IP assets that don’t need a super-highly integrated chip,” said Drew Wingard, chief technology officer at Sonics. “And they fit nicely into a 65/90nm fab with mixed signal. What’s happening now is that foundries are back-fitting technologies for leakage—40nm was not great for leakage—and the characteristics of designs are different than what the process was originally optimized for. So you’re seeing different technologies on older nodes, and more aggressive dynamic power management. But they’re also licensing more and designing less.”

But Wingard also noted that it isn’t so easy to make this work.

“You can run into issues, if you’re not careful, as to how to hook all this stuff together,” he said. “In addition to that, mixed signal IP has to be redesigned, so there is an ecosystem cost. You need to characterize and optimize the IP to take advantage of all the new things. There’s a lot more AXI around than in previous years.”

Time will tell just how fast companies continue to migrate to the most advanced nodes, and potentially new architectures such as 2.5D and 3D IC, which can take advantage of a variety of process geometries. Timing on that remains fuzzy. But what is becoming clear is there is a renewed interest in moving in a different direction, or multiple directions, and none of them is in line with Moore’s Law.
