Supporting LP In New Process Nodes

Nodes and tools, while making great strides, are not always utilized at the same pace in all situations.

Manufacturing process nodes and EDA tools are advancing all the time, but they are not always adopted at the same pace. And from a tools perspective, there are challenges in supporting low power at new process nodes while maintaining and improving support for the existing ones.

One way design teams address this is by leveraging the most advanced software on less-than-bleeding-edge designs.

To this end, Michael Buehler-Garcia, senior director of marketing for Calibre Design Solutions at Mentor Graphics, explained that the company’s tools generally are developed to meet the requirements of leading-edge process nodes, but the company is finding that customers at established nodes also need advanced software solutions.

One example is advanced circuit checks in the Calibre PERC tool, he said. “Initially this technology was used to make more precise checks for ESD protection and electrical overstress (EOS) in advanced chips where the gate counts and number of power domains in the device outstripped the ability of manual techniques and ‘Band-Aid’ approaches using a variety of scripts and tools not actually intended for that purpose. More and more we are seeing Calibre PERC applied to established nodes where capacity to handle huge gate count is not the issue, but rather the growing complexity of the logic design itself. For example, today many applications using established nodes also have a demand for better power efficiency, which is resulting in designs with many operational modes to enable power down for logic not in use.”

This leads to a huge number of power domains within the chip, making traditional checks for ESD and EOS extremely complex. As a result, designers may use the advanced capabilities of Calibre PERC, even when leading edge processes are not required, Buehler-Garcia pointed out. “Similarly in the transportation segments we see the need to ensure high reliability in increasingly complex ICs, even though the devices are not necessarily at the leading edge of process nodes. This results in a demand for tools to do more sophisticated circuit checking to detect design flaws that could impact long-term reliability and would not usually be found during production testing.”
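To illustrate the scaling problem Buehler-Garcia describes, here is a minimal Python sketch, with made-up domain counts and no connection to how Calibre PERC actually works, showing how the number of domain-crossing interfaces that ESD/EOS checks must cover grows roughly quadratically with the number of power domains, which is why manual and script-based approaches run out of steam:

```python
# Illustrative only: how many inter-domain interfaces need ESD/EOS review
# as the power-domain count grows (hypothetical figures, not tool behavior).
from itertools import combinations

def crossing_pairs(num_domains):
    """Every pair of power domains is a potential domain-crossing interface."""
    return len(list(combinations(range(num_domains), 2)))

for domains in (4, 16, 64):
    print(f"{domains:3d} power domains -> {crossing_pairs(domains):5d} "
          "domain-crossing interfaces to review")
```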

Likewise in MEMS design, Mentor sees designers using advanced equation-based DRC rules on well-established process nodes. “Why? Because MEMS have curved structures, and by applying an equation to the curve (as opposed to a lookup table) you can do faster and more accurate physical validation,” he said.
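As a rough illustration of that idea, the Python sketch below contrasts an equation-based spacing rule evaluated directly from local curvature with the coarser value a lookup table would return. The rule and all numbers are hypothetical, not a real foundry deck or Calibre syntax:

```python
# Hypothetical curvature-dependent spacing rule for a curved MEMS edge.
# An equation-based check evaluates the required spacing directly from the
# local radius; a lookup table can only approximate it at discrete steps.
import bisect

def required_spacing_eq(radius_um):
    """Equation-based rule: tighter curves need more clearance (toy formula)."""
    return 0.5 + 2.0 / max(radius_um, 1e-6)

# Coarse lookup-table version of the same intent: (max radius, spacing).
TABLE = [(1.0, 2.5), (2.0, 1.5), (5.0, 0.9), (float("inf"), 0.6)]

def required_spacing_lut(radius_um):
    idx = bisect.bisect_left([r for r, _ in TABLE], radius_um)
    return TABLE[idx][1]

for r in (0.8, 1.5, 3.0, 8.0):
    print(f"radius {r:4.1f} um: equation {required_spacing_eq(r):.2f} um, "
          f"lookup table {required_spacing_lut(r):.2f} um")
```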

In all of these cases, Mentor finds that engineering teams value a common platform to address varying needs, since it minimizes the number of flows to maintain, and reduces the learning load for the designers.

16/14nm benefits, challenges
Mentor is hardly alone in this observation. Mary Ann White, director of product marketing for the Galaxy Design Platform at Synopsys, noted that with 16/14nm finFET processes coming into production toward the end of this year, the performance, power and density are very compelling compared with 28nm (2X better density, 30% to 50% lower total power). “However, they do bring along some challenges, which required some new and not-so-new (i.e., techniques that also work on larger, more established nodes) techniques.”

She explained that one facet of the 3D effect of multi- or tri-gate transistors is that they have about 2X to 3X more capacitance than their planar equivalents, and that extra capacitance translates into more dynamic power overall.
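A minimal worked example of that effect, using the standard switching-power relation P ≈ α·C·V²·f with purely illustrative numbers rather than characterized silicon data, shows how a 2X to 3X capacitance increase can outweigh a modest supply-voltage reduction:

```python
# Standard dynamic (switching) power relation: P ~ alpha * C * V^2 * f.
# All numbers below are illustrative assumptions, not measured silicon data.
def dynamic_power(alpha, cap_farads, vdd, freq_hz):
    return alpha * cap_farads * vdd**2 * freq_hz

planar = dynamic_power(alpha=0.2, cap_farads=1.0e-9, vdd=0.9, freq_hz=1.0e9)
finfet = dynamic_power(alpha=0.2, cap_farads=2.5e-9, vdd=0.8, freq_hz=1.0e9)
print(f"planar-like node : {planar*1e3:.1f} mW")
print(f"finFET-like node : {finfet*1e3:.1f} mW "
      f"({finfet/planar:.2f}x, capacitance up 2.5x, Vdd down)")
```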

“Leakage is very attractive on the FinFET transistor, but now it’s all about the dynamic power. Synopsys has a multitude of dynamic power optimization techniques that can help defray some of these effects. Many of these techniques were newly introduced over the past couple of years, such as more advanced clock gating and CTS, power-aware placement, support of multi-bit registers, etc. The good news is that these techniques work for the advanced as well as established process nodes.”
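As a rough first-order picture of why techniques such as advanced clock gating help, the sketch below, with assumed power numbers and a hypothetical gating overhead, models a register bank whose switching power scales with the fraction of cycles its enable is actually active:

```python
# First-order model of clock-gating benefit (illustrative numbers only):
# a gated register bank stops clocking when its enable is low, so its
# switching power scales with the fraction of active cycles, plus a small
# assumed overhead for the gating cell itself.
def gated_power(base_power_mw, active_fraction, gating_overhead_mw=0.05):
    """Dynamic power of a register bank once clock gating is applied."""
    return base_power_mw * active_fraction + gating_overhead_mw

ungated_mw = 10.0  # assumed power of an always-clocked register bank
for active in (1.0, 0.5, 0.1):
    print(f"enable active {active:4.0%}: gated "
          f"{gated_power(ungated_mw, active):5.2f} mW vs ungated {ungated_mw:.1f} mW")
```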

But what is true for most (if not all) advanced process geometries is that the metal pitches are not the same on every layer, White added. “They can be the same for the first 2 to 3 layers, but can go up to 12X or more on the top metal layers (M10+). This means that RC on each metal layer is different, so those values, as well as via resistance need to be taken into account during various optimization steps (synthesis, timing, routing, buffering). This metal layer-awareness may not necessarily be low power focused since ‘layer promotion’ tends to happen for critical nets, but it can help with overall power in the end.”
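The effect White describes can be seen with a back-of-the-envelope lumped-RC delay estimate. The per-layer resistance and capacitance values below are invented for illustration; the point is simply that promoting a long net to a thicker upper metal layer trades a little extra capacitance and via resistance for much lower wire resistance:

```python
# Why 'layer promotion' helps: a simple lumped-RC delay estimate for a long
# net routed on different metal layers. R and C per mm are assumed values,
# not real extraction data for any node.
LAYERS = {
    # layer: (resistance ohm/mm, capacitance fF/mm) -- assumed values
    "M2 (thin)":   (900.0, 180.0),
    "M6 (mid)":    (250.0, 200.0),
    "M10 (thick)": ( 60.0, 230.0),
}

def lumped_rc_delay_ps(r_per_mm, c_per_mm_ff, length_mm, via_r_ohm=20.0):
    """0.69 * R * C lumped estimate plus a fixed via resistance penalty."""
    r = r_per_mm * length_mm + via_r_ohm
    c = c_per_mm_ff * length_mm * 1e-15
    return 0.69 * r * c * 1e12  # picoseconds

for name, (r, c) in LAYERS.items():
    delay = lumped_rc_delay_ps(r, c, length_mm=2.0)
    print(f"{name:12s}: {delay:7.1f} ps for a 2 mm net")
```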

Aveek Sarkar, vice president of product engineering and support at Ansys-Apache, said the transition to 14/16nm is being driven by the leading semiconductor design companies in the mobile and high-performance computing space. But for most other applications, including automotive, set-top boxes and consumer electronics, 40/28nm continues to be the process technology of choice, given the relative maturity and price point of those nodes. From an EDA perspective, the impetus is not only to enable solutions based on the next generation of process nodes, but also to propagate the innovations developed for the leading nodes and make them applicable in a meaningful manner to 40/28nm users.

“For example, accuracy of power noise analysis for 14/16nm is paramount given the lower supply voltage and higher power noise (or reduced noise margin from the combined effect),” he said. “So the inclusion of package model parameters in an accurate and detailed manner, often at per-bump level, is necessary to meet the accuracy requirements. But the work done to enable such a true chip-package co-design flow benefits not only the design teams on 14/16nm, but also benefits the folks working on 40/28nm. They can leverage the simplified usage methodology and the higher level of accuracy to reduce the cost of their packages and re-work their chip power delivery to eliminate over-design.”
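A highly simplified sketch of the kind of chip-package interaction Sarkar refers to: including per-bump package resistance and inductance in a first-order supply-droop estimate, then comparing that droop against the noise budget at different supply voltages. Every value is an assumption for illustration, not a sign-off methodology:

```python
# First-order supply-droop estimate including a per-bump package model.
# All values here are assumptions for illustration, not sign-off data.
def supply_droop_mv(i_amps, di_dt, n_bumps, r_bump_ohm=0.08, l_bump_h=0.08e-9):
    """IR drop plus L*di/dt through n_bumps package bumps in parallel."""
    r_eff = r_bump_ohm / n_bumps
    l_eff = l_bump_h / n_bumps
    return (i_amps * r_eff + l_eff * di_dt) * 1e3  # millivolts

droop = supply_droop_mv(i_amps=20.0, di_dt=2.0e10, n_bumps=150)
for vdd in (0.9, 0.7):
    budget_mv = vdd * 1e3 * 0.10  # assume 10% of Vdd as the noise budget
    print(f"Vdd={vdd:.1f} V: droop {droop:.1f} mV against a {budget_mv:.0f} mV "
          f"budget ({droop / budget_mv:.0%} of margin)")
```

The same droop consumes a larger share of the margin at the lower supply, which is the squeeze that makes detailed package models worthwhile even on 40/28nm.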

Capacity to handle the increasingly larger designs, while maintaining the required accuracy, is another challenge that EDA providers have to resolve, Sarkar continued. “The technologies that are being introduced like distributed computing to enable the ultra-large 14/16nm designs are going to benefit both them and those on 40/28nm.”

Reliability analysis is another challenging area for EDA solution providers, he said. “Here as well, the work done to enable the complex EM and ESD rules in 14/16nm immediately benefit the 40/28nm adopters, as they can leverage the increased accuracy, sign-off coverage and analysis methodologies.”

Controlling dynamic power
Another aspect of new low power process nodes is that leakage power, while improved, stays very close to that of existing process nodes, according to Anand Iyer, director of product marketing at Calypto. “Hence, designers are not planning to use any new techniques for leakage control. But dynamic power and power density are increasing in these nodes. FinFET technologies are imposing severe restrictions on dynamic power consumption. As a result, EDA should focus on dynamic power reduction.”

He suggested two ways of controlling dynamic power. First, eliminate switching activity that does not contribute to the functionality of the design. The engineering team needs to perform deep sequential analysis on the design to understand the circuit functionality and come up with enable conditions that eliminate switching activity that wastes power. Second, use dynamic voltage and frequency scaling (DVFS). Here, designers need to understand the hardware/software interactions to determine where frequency can be scaled, which can be achieved through power exploration at the system level.
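For the second technique, a small sketch of why DVFS pays off: dynamic power scales roughly as C·V²·f, so reducing both voltage and frequency during low-demand phases cuts power much faster than frequency scaling alone. The operating points below are hypothetical, not taken from any real part:

```python
# Hypothetical DVFS operating points: dynamic power scales as C * V^2 * f,
# so dropping both voltage and frequency during low-demand phases gives a
# much larger saving than frequency scaling alone (illustrative numbers).
OPERATING_POINTS = [
    # (label, vdd_volts, freq_ghz) -- assumed table, not a real PMIC config
    ("turbo",   0.90, 2.0),
    ("nominal", 0.80, 1.5),
    ("low",     0.65, 0.8),
]

def rel_dynamic_power(vdd, freq_ghz, ref_vdd=0.90, ref_freq=2.0):
    """Dynamic power relative to the 'turbo' reference point."""
    return (vdd / ref_vdd) ** 2 * (freq_ghz / ref_freq)

for label, vdd, f in OPERATING_POINTS:
    print(f"{label:8s}: {vdd:.2f} V @ {f:.1f} GHz -> "
          f"{rel_dynamic_power(vdd, f):.0%} of turbo dynamic power")
```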

The system-level, low-power view
From a system-level view, the biggest challenges are the different technology effects and how to characterize and represent them at higher levels of abstraction. “Issues like variability introduced by manufacturing are getting far more complicated at smaller nodes, and characterization is challenging. Similarly, the balance of energy consumed by leakage, dynamic switching and short-circuit effects is changing significantly from node to node, in turn changing the required methodologies,” offered Frank Schirrmeister, group director for product marketing of the System Development Suite at Cadence.

In addition to these variability and characterization accuracy challenges, there is also a growing trend toward the use of sub-threshold voltages for reduced energy consumption, pointed out Koorosh Nazifi, engineering group director for low power and mixed signal initiatives at Cadence. “This trend is somewhat independent of process node and is being driven by increased demand for wearable devices. The reduced operating voltages (0.5V and below) do present added challenges to the scalability of the standard cell libraries currently offered and used by designers. Not all functions scale properly at lower voltages, and increased variability at lower voltages may require new techniques to more accurately characterize and analyze the timing and power consumption of new circuit designs.”

Leveraging interfaces
Interfaces tend to cut across all markets. In fact, the most advanced high-speed interfaces are showing up on both low-power and compute-intensive applications because the need for performance is ubiquitous and growing.

Sunil Bhardwaj, director of foundry IP marketing at Rambus, said new nodes provide new opportunities, but so do demanding power envelopes. “A good example of this is the introduction of 14/16nm processes, which allows the development of 28G SerDes transitioning to 56Gbps Serial Link interfaces needed for next-generation backplane and datacenters, where mW/Gbps is as important a metric as raw I/O speed. Interfaces such as LPDDR4/3 can become an attractive option for traditional non-mobile applications whereby the performance and low-power benefits can now be realized on one process flavor.”
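The mW/Gbps figure of merit Bhardwaj mentions is numerically the same as energy per bit in pJ/bit, as the short sketch below shows. The lane power numbers are placeholders, not measured figures for any 28G or 56G PHY:

```python
# Energy-per-bit figure of merit for a serial link: mW/Gbps is numerically
# identical to pJ/bit. Power numbers are placeholder assumptions.
def pj_per_bit(power_mw, rate_gbps):
    return power_mw / rate_gbps  # mW / Gbps == pJ / bit

for rate, power in ((28.0, 150.0), (56.0, 250.0)):
    print(f"{rate:4.0f} Gbps lane at {power:.0f} mW -> "
          f"{pj_per_bit(power, rate):.1f} pJ/bit")
```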

Finally, there is also a business perspective on the use of older process nodes.

“The uncertainty associated with the ‘path of the future transistor’ will have semiconductor companies continually reviewing their investment strategies,” said Mark Baker, director of product marketing at Atrenta. “Those observing the future trends will be looking to optimize PPA on current or legacy processes. The opportunity to serve this mainstream market lies with maturing technologies that provide productivity improvements sought by this customer segment.”

As it relates to PPA, he expects that design teams will have intense focus on the following:

  • Quality of the RTL design. Does RTL pass signoff metrics relative to structural checks?
  • Quality of the design constraints. Can timing intent signoff be achieved? In other words, does the design intent match the defined constraints as it relates to achieving timing closure?
  • Quality of the power optimization. Can the power budget be achieved? Power exploration and guidance on power reduction opportunities will be critical.

“Ultimately,” Baker concluded, “design teams will be more aggressive on process margins, expecting more efficiency from design methodologies and EDA technology. These design teams will require the capability to analyze and signoff at RTL against these objectives.”


