The Next Steps

What it will take to meet performance, power and price goals for 3D-IC and sub-20nm designs.


By Aveek Sarkar
Remaining competitive in today’s semiconductor market means IC designers must meet performance, power and price targets for their designs, regardless of the end application.

Meeting these mutually conflicting goals requires enlisting several architectural and design techniques, including three-dimensional (3D) or stacked-die architectures that can help meet performance and power targets by extending integration capabilities beyond traditional system-on-chip (SoC) methodologies. The evolution of low-power design techniques such as MTCMOS, voltage islands, and DVFS over the last 10 years, along with more recent trends toward sub-1V supply voltage levels, helps address stringent power requirements. On-die voltage regulators (LDOs) are now commonly used to minimize the impact of package/PCB noise when ICs operate at reduced voltage levels, allowing designers to maintain the supply voltage necessary to meet performance goals. Successfully reducing power, increasing signal bandwidth and managing cost requires simultaneous optimization across the chip, package, and board. For chips migrating to 28/20 nanometer (nm) process nodes and considering stacked-die technologies, the ability to model and accurately predict power/ground noise and its impact on the IC is essential for successful advanced low-power designs and their associated systems.

Reduced supply voltages in the sub-1V range are now typically used to reduce an IC’s power consumption and to meet reliability requirements. However, the current trends in high-performance SoC designs with increased functionality (more logic per square micron area), higher operating speeds (3GHz+), and cost focus (less expensive package/PCB) lead to higher levels of noise. The combination of reduced supply voltage levels and higher levels of noise can significantly degrade the performance of the transistors or cells in these ICs. This is especially true for cells in timing critical paths or clock tree networks. Accurate modeling of the power/ground noise is therefore key to predicting both the final operating voltage and operating speeds of the device and accompanying system.
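To make the sensitivity concrete, the following sketch uses the well-known alpha-power-law delay model (delay proportional to V/(V − Vth)^α) to estimate how a supply droop slows a gate. All numbers here (nominal voltage, threshold, α, droop magnitude) are illustrative assumptions, not data for any specific process.

```python
# Illustrative only: alpha-power-law delay model with assumed parameters.
def gate_delay(vdd, vth=0.3, alpha=1.3, k=1e-12):
    """Relative gate delay at an effective supply voltage vdd (volts)."""
    return k * vdd / (vdd - vth) ** alpha

nominal = gate_delay(0.9)          # 0.9 V nominal supply
noisy   = gate_delay(0.9 - 0.09)   # 10% combined power/ground droop

print(f"delay increase: {100 * (noisy / nominal - 1):.1f}%")
```

With these assumed parameters a 10% droop slows the gate by roughly 10%, which is why cells on timing-critical paths and clock trees are the first casualties of unmodeled noise.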

As designs migrate to more complex technology nodes (e.g., 20nm), and include even more functionality in the same piece of silicon, the accuracy of the power noise simulation and the coverage it provides become increasingly important. Accurate modeling and simulation of the power/ground noise requires the following:

  1. Inclusion of package and PCB parasitics (S-parameter or RLCK models);
  2. Accurate extraction and modeling of on-chip PDN parasitics (inductance, resistance and capacitance);
  3. Determination of the chip’s various operating modes and transitions in a ‘dynamic manner’ (as compared to the ‘static mode’ in which all devices draw current during the simulation);
  4. Consideration of through silicon vias (TSVs), interposer, and micro-bumps relevant to 3D/2.5D design modeling and simulation;
  5. Ability to simulate all of the above in time-domain, considering the chip (with its switching logic), the LDO and the package/PCB parasitics (primarily inductance).
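The time-domain simulation in item 5 can be sketched at its simplest as a lumped circuit: a package modeled as a series resistance and inductance feeding an on-die decoupling capacitor, with the switching logic drawn as a triangular current pulse. This is a deliberately minimal stand-in for a real extracted PDN; every value below is an assumption for illustration.

```python
# Minimal time-domain PDN noise sketch: series package R-L into an on-die
# decap, driven by a triangular load-current pulse. Values are illustrative.
VDD, R, L, C = 0.9, 0.01, 1e-9, 50e-9   # supply (V), package R (ohm), L (H), decap (F)
dt, steps = 1e-12, 20000                 # 1 ps step, 20 ns window

def load_current(t, i_peak=2.0, t_rise=1e-9, t_fall=1e-9):
    """Triangular current pulse approximating a burst of logic switching."""
    if t < t_rise:
        return i_peak * t / t_rise
    if t < t_rise + t_fall:
        return i_peak * (1 - (t - t_rise) / t_fall)
    return 0.0

v, i_l = VDD, 0.0          # die voltage, inductor (package) current
v_min = VDD
for n in range(steps):
    t = n * dt
    # Inductor: L di/dt = VDD - i*R - v;  Decap: C dv/dt = i_l - i_load
    i_l += dt * (VDD - i_l * R - v) / L
    v   += dt * (i_l - load_current(t)) / C
    v_min = min(v_min, v)

print(f"worst-case droop: {1000 * (VDD - v_min):.1f} mV")
```

Even this toy model shows the essential trade-off: a fast current transient outruns the package inductance, so the decap alone must supply the charge, and the droop scales with the charge demanded divided by the decap value.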

Power noise analysis is a full-chip problem. Unlike timing or cross-talk analysis, it cannot be partitioned, because the noise or current flow in one part of a design may significantly affect another part of the design. This is due to shared power delivery network (PDN) routing on the chip or in the package. Full-chip capacity, along with inclusion of the package/PCB model, is therefore essential for the accuracy of the final simulation results. A hierarchical power grid parasitic extraction and modeling technology that delivers full-chip capacity and performance without sacrificing sign-off accuracy is crucial.

Modeling the activity on the chip in a ‘dynamic’ (or time-domain) manner is a complex problem. Gate-level test benches or vectors often become available too late in the design cycle to be useful. Vector-less approaches allow designers to explore various weaknesses in their chips by using toggle and power targets. However, as designs become more complex and different IP blocks converge on the same piece of silicon, relying on a single vector-less simulation to identify all the issues is not practical or realistic. One approach to expand the simulation coverage is to leverage register transfer level (RTL) vectors. However, accurate logic (or event) propagation engines are needed inside the power noise simulation tools in order to leverage the RTL vectors and derive switching activity for the rest of the design. Additionally, it is critical to identify the right set of clock cycles to simulate from a long RTL VCD. To achieve expanded sign-off coverage, full-chip power noise analysis tools must provide the flexibility to include all of these activity modes (vector-less, RTL VCD, and gate-level VCD) simultaneously in the full-chip simulation – in which one block can be simulated using RTL VCD activity, another using gate-level VCD, and the rest of the chip using the vector-less mode.
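The mixed-mode idea can be sketched as follows: each block's average switching current comes from a toggle rate, regardless of whether that rate was measured from a gate-level VCD, propagated from an RTL VCD, or set as a vector-less target. The block names, toggle rates, and capacitances below are invented for illustration, and the standard dynamic-power relation I = α·C·VDD·f is used.

```python
# Hypothetical mix of activity sources feeding one full-chip estimate.
VDD, FREQ = 0.9, 3e9   # assumed 0.9 V supply, 3 GHz clock

# block: (activity source, toggle rate per clock, switched capacitance in F)
blocks = {
    "cpu_core": ("gate_vcd",   0.18, 4e-9),
    "gpu":      ("rtl_vcd",    0.25, 6e-9),
    "uncore":   ("vectorless", 0.10, 3e-9),
}

def avg_current(toggle_rate, c_switched):
    # I_avg = alpha * C * VDD * f  (average dynamic current draw)
    return toggle_rate * c_switched * VDD * FREQ

total = sum(avg_current(a, c) for _, a, c in blocks.values())
for name, (src, a, c) in blocks.items():
    print(f"{name:9s} [{src:10s}] {avg_current(a, c):.2f} A")
print(f"total average current: {total:.2f} A")
```

A real tool does this per clock cycle and per grid node rather than per block average, but the principle is the same: heterogeneous activity sources reduce to a common switching-current representation before the full-chip solve.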

Use of on-chip LDOs is now becoming more mainstream, providing a more robust power supply for noise-sensitive or low-power portions of the design. However, LDO design must consider several factors:

  1. Maintain robust output voltage for all operating scenarios;
  2. Supply the required current for all operating modes;
  3. Prevent the transfer of package/PCB noise onto the chip (and vice-versa).

Simulating a design using an LDO not only requires the complete modeling of the chip with its logic and power grid elements, but also necessitates accurate behavioral modeling of the LDO circuit. The model needs to capture all of the key operating behaviors of the LDO, including change in output supply voltage for different load current scenarios. By employing this model in full-chip static and dynamic simulations, it is possible to predict scenarios of high drop at the LDO output due to excessive or fast current draw, or from line regulation issues caused by noise on the package/PCB traces that connect the on-board power supply (VRM) to the LDO input.
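A minimal behavioral LDO model capturing two of the behaviors above (load regulation and dropout) might look like the sketch below. This is not any specific product's model; the reference voltage, output resistance, dropout voltage, and current limit are all assumed values.

```python
# Illustrative steady-state behavioral LDO model with assumed parameters.
def ldo_vout(v_in, i_load, v_ref=0.8, r_out=0.02, v_dropout=0.15, i_max=2.0):
    """Output voltage: reference minus load-regulation droop, clamped by dropout."""
    if i_load > i_max:
        raise ValueError("load exceeds the LDO's rated current")
    regulated = v_ref - r_out * i_load        # load regulation droop
    headroom  = v_in - v_dropout              # dropout limit from input supply
    return min(regulated, headroom)

print(ldo_vout(1.0, 0.5))   # normal operation: output tracks the reference
print(ldo_vout(0.9, 0.5))   # input noise pushes the LDO into dropout
```

The second call illustrates the line-regulation failure mode described above: noise on the package/PCB traces sags the LDO input until the output is no longer set by the reference but by the available headroom.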

Emerging chip and packaging technologies for stacked-die and 3D-IC architectures help reduce IC power consumption while enabling higher levels of integration. As designs migrate to stacked-die structures by employing silicon interposer and TSVs, native support for simulating 3D-IC power noise and reliability issues is needed. A flexible simulation environment would allow design teams to explore various design configurations, which is important for early prototyping and planning of complex structures. Having a highly versatile simulation environment would enable designers to qualify the input data, review overall design weaknesses, debug specific hotspots, and provide feedback to help develop a more robust advanced chip design.
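For early prototyping of such structures, even a back-of-envelope model of the extra DC resistance a stacked die sees through its micro-bumps and TSVs can guide planning. The TSV geometry, bump resistance, and counts below are assumptions for illustration only.

```python
import math

# Back-of-envelope sketch: extra DC PDN resistance through a TSV/micro-bump
# power path in a stacked die. All dimensions and counts are assumed.
def tsv_resistance(height=50e-6, diameter=5e-6, rho=1.7e-8):
    """DC resistance of one copper TSV (ohms): rho * length / area."""
    area = math.pi * (diameter / 2) ** 2
    return rho * height / area

n_tsvs  = 400        # TSVs in parallel on the power net
r_bump  = 20e-3      # per micro-bump resistance (ohm)
n_bumps = 400

r_path = tsv_resistance() / n_tsvs + r_bump / n_bumps
print(f"added PDN resistance: {1000 * r_path:.3f} mohm")
```

Multiplying this path resistance by the stacked die's current draw gives a first-order IR-drop contribution; a real flow replaces this with extracted TSV, interposer, and micro-bump models in the full-chip simulation.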

To remain competitive in today’s semiconductor market, chip designers must adopt an accurate and comprehensive chip-package-system simulation environment that models the actual behavior of an IC and its impact on the system. And they need tools that help them meet the power, performance and price goals associated with complex ICs designed in 28/20nm technologies, with billions of gates reaching 3GHz+ speeds.

—Aveek Sarkar is vice president of product engineering and support at Apache Design, a division of ANSYS.
