FinFET-Based Designs: Package Model Considerations

3D transistors improve performance and reduce energy consumption, but they also add new design challenges.


The use of FinFET devices in next-generation high-performance, low-power designs is a fundamental shift underway in the semiconductor industry. Through their smaller sizes, tighter Vth control and higher drive strengths, these devices enable higher performance and increased integration while reducing overall energy consumption. But along with these advantages, FinFETs introduce new design challenges and exacerbate existing ones.

RTL-driven power analysis and reduction is becoming an integral part of the mobile and other low-power chip design process. Consistency of the RTL-based power numbers with the final gate-level power estimates is key to establishing confidence in this design-for-power methodology, and physical-aware RTL power analysis is particularly important for FinFET-based designs for a variety of reasons. At the same time, using power analysis to drive reduction opportunities ensures that the “power bugs” identified, or the design changes suggested, translate into meaningful power savings after the design is synthesized. This targeted approach helps designers focus on bringing their designs within their tight power budgets.

Additionally, these devices operate at ultra-low, sub-1V nominal supply voltages in the presence of elevated power/ground network fluctuations. Ensuring reliable and consistent voltage levels across the chip requires verification and sign-off of the power delivery network across the chip, package and board. Traditionally, this verification is performed by simulating the chip, or parts of the chip, using a hierarchical or divide-and-conquer approach. But these techniques fall short of sign-off-quality results due to modeling limitations or a lack of consideration of the underlying physics.

For sign-off-quality SoC-level power noise analysis, three key aspects of the SoC have to be modeled accurately and in their entirety: (a) the underlying circuit elements, (b) the on-chip power grid parasitics, including those of power gates and other structures, and (c) the package/PCB interconnect impedances. To ensure SPICE-level correlation, the time-domain switching currents of the standard cells, memories and other IP need to be accurately captured, with consideration for instance-specific output load, input slope, switching state and instantaneous supply voltage. The device capacitances and the associated series resistances also need to be included. Since power analysis is fundamentally a statistical problem, the switching states can be determined by vectorless techniques that highlight weaknesses in the design resulting from simultaneous switching of cells. In addition, RTL activities from the RTL-driven power analysis flow can be used to perform SoC sign-off.
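
As a rough illustration of the instance-specific dependencies, consider how a per-instance current pulse might be scaled. This is a hypothetical Python sketch, not any particular tool’s characterization model; the reference values, names and scaling rules are illustrative assumptions only.

    def switching_current(t, c_load, slew_in, vdd_inst,
                          c_ref=2e-15, slew_ref=50e-12, vdd_ref=0.8,
                          i_peak_ref=120e-6, t_width_ref=40e-12):
        """Triangular switching-current pulse for one cell instance at time t.

        Peak current grows with output load and instantaneous supply voltage;
        the pulse widens with heavier load and slower input slew. All scaling
        rules here are illustrative assumptions, not characterized data.
        """
        i_peak = i_peak_ref * (c_load / c_ref) * (vdd_inst / vdd_ref)
        t_width = t_width_ref * (0.5 * c_load / c_ref + 0.5 * slew_in / slew_ref)
        if t_width > 0.0 and 0.0 <= t <= t_width:
            # Ramp up to the peak at mid-pulse, then back down to zero.
            return i_peak * (1.0 - abs(2.0 * t / t_width - 1.0))
        return 0.0

    # Example: a cell driving 4 fF with a 60 ps input slew at a sagging 0.75 V rail.
    i = switching_current(20e-12, c_load=4e-15, slew_in=60e-12, vdd_inst=0.75)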

One of the underappreciated factors that increasingly influences the accuracy of SoC power noise analysis is the correctness and validity of the package and PCB models. More often than not, SoC-level sign-off analysis is performed with no package model at all, or with a simplified lumped model of the package. But for today’s complex designs, which have thousands of bumps at the interface between the chip and the package, such approaches can produce incorrect SoC-level results and obscure design issues. For example, “lumped” models collapse all the bumps into a few terminals, or more often use just one terminal per domain to connect the package to the chip. This amounts to using a single “effective L” to represent the package parasitics.

But if the layout is non-ideal or segmented for a particular domain in one or more layers of the package, then the inductance seen by bumps in the weaker parts of the package will be significantly higher than that seen by bumps elsewhere. Additionally, power/ground bumps along the periphery of the chip near the I/O region tend to have higher inductance than bumps in the center of the chip due to routing congestion. And if some of the package planes serve as an extended “RDL” for the chip, especially for power-gated internal domains, such lumped models can fail to establish the proper circuit connections.
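
For intuition, here is a hypothetical Python sketch (with made-up inductance values) of how collapsing per-bump inductances into a single parallel-combination “effective L” hides the weak bumps entirely:

    # 90 well-routed bumps and 10 weak ones in a segmented region (henries).
    bump_L = [12e-12] * 90 + [45e-12] * 10

    # The lumped model reduces them all to one parallel combination.
    L_eff = 1.0 / sum(1.0 / L for L in bump_L)
    print(f"effective L = {L_eff * 1e12:.2f} pH")  # ~0.13 pH

    # The weak bumps actually see 45 pH each -- far worse than the
    # single number the lumped model works with.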

Moreover, in such lumped models, all the bumps on the chip side are “shorted” together into one (or a few) terminal(s), and likewise on the package side. So the voltages across all the bumps are forced to be the same, and the current flow gets averaged out. But if one part of the chip has much higher activity (and hence higher switching current), then the bumps in that part of the design will see much higher noise than bumps over less active regions. The bumps over the active regions will also carry more current and could suffer from reliability issues.

Lumped approaches tend to give optimistic results: once the bump-specific inductance is considered along with the chip-region-specific current, the Ldi/dt effect worsens. As explained earlier, “lumped” models average away and obscure design issues, and prevent the package layout from being considered as an extension of the chip design. More importantly, such a model predicts the voltage drop across the package incorrectly by averaging out the current flow, impedance and other factors.
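
A hypothetical back-of-the-envelope sketch in Python (all inductances and current ramps are illustrative assumptions) shows how this averaging hides the worst-case Ldi/dt noise:

    bumps = [
        # (inductance in H, di/dt in A/s) for four representative bumps
        (10e-12, 2e9),   # center bumps over a quiet region
        (10e-12, 2e9),
        (15e-12, 8e9),   # peripheral bumps over a highly active region
        (15e-12, 8e9),
    ]

    # Per-bump view: each bump sees its own L * di/dt.
    worst_per_bump = max(L * didt for L, didt in bumps)

    # Lumped view: one effective L carries the averaged (total) current ramp.
    L_eff = 1.0 / sum(1.0 / L for L, _ in bumps)
    v_lumped = L_eff * sum(didt for _, didt in bumps)

    print(f"worst per-bump noise: {worst_per_bump * 1e3:.0f} mV")  # 120 mV
    print(f"lumped-model noise:   {v_lumped * 1e3:.0f} mV")        # 60 mV

Here the lumped model underestimates the worst-case noise by 2x, precisely because it spreads the high current ramp of the active region across all the bumps.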

The use of “per-bump” models, in which the parasitics are captured at the individual bump level in a distributed and fully coupled model, provides the desired accuracy and facilitates the identification of design issues that would otherwise go undetected. Figure 1 highlights this issue for a flip-chip design in which a lumped-model-based analysis shows “uniform” voltage across all the bumps, while a “per-bump” model reveals the true voltage drop scenario, capturing the variation in both the chip switching current and the package impedance across the chip.

Figure 1: Comparison of voltage drop from a lumped versus a per-bump model.

However, not all per-bump models are created equal; some are more appropriate for SoC power noise time-domain simulations than others. Models in which the ground network parasitics are folded into the power domain parasitics, or in which one of the ground terminals serves as the reference for the other terminals, yield inaccurate SoC-level analysis and complicate power noise debug, since differential measurements become necessary. A “per-bump” model that is more “physical” (such as an RLCK model versus a broadband SPICE or S-parameter model), captures the power and ground network parasitics individually, and maintains fidelity over the relevant frequency range is better suited for SoC-level sign-off analysis and SoC convergence.
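
To see why folding the ground parasitics into the power side is problematic, consider this hypothetical Python sketch (the voltage, inductance and di/dt values are illustrative assumptions):

    VDD, didt = 0.8, 5e9            # nominal supply (V), current ramp (A/s)
    L_pwr, L_gnd = 20e-12, 20e-12   # separate power and ground bump inductances

    # Physical model: the power pin sags while the ground pin bounces up.
    v_pwr = VDD - L_pwr * didt      # 0.70 V
    v_gnd = 0.0 + L_gnd * didt      # 0.10 V

    # Folded model: both inductances lumped on the power side, ideal ground.
    v_pwr_folded = VDD - (L_pwr + L_gnd) * didt  # 0.60 V
    v_gnd_folded = 0.0

    # The differential supply agrees (0.60 V either way), but the folded
    # model reports the wrong absolute node voltages, so ground bounce is
    # invisible and every debug measurement must be taken differentially.

The folded model preserves the differential number but misplaces the noise, which is exactly what complicates debug at the SoC level.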

However, package models appear as a “black box” to chip teams, and the drop across the package is difficult to relate to the actual package layout. So the chip team typically settles for a package impact number while the package team designs the package on its own. As designs move to more complex FinFET technologies, such processes will have to change. Design teams will look at package and chip routing together and will debug the voltage drop across the package and the chip simultaneously, not in two separate sessions (or groups) as is done today. A true co-visualization and co-analysis environment is needed to facilitate such chip-package design convergence.

Another aspect FinFET-based designs need to address is the increased prominence of reliability effects such as electromigration and ESD, as FinFET-based designs are going to be limited more by EM and thermal effects than by other causes. More on this will be covered in an upcoming blog.


