Part 3: Why a simulation-driven product development process is necessary
In the first two parts of this series, we reviewed the challenges design teams face as they grapple with increasing power consumption, tighter schedules and the drive to reduce costs. Both a top-down and a bottom-up analysis framework were proposed to help control these challenges. In part 2, specific challenges were outlined, including power budgeting, power and signal integrity, device reliability, thermal stress and electromagnetic interference (EMI) / electromagnetic compatibility (EMC) compliance. The consistent theme connecting these problems is that none of them can be solved by any one team. They require a unified, collaborative approach across multiple teams, even when those teams are separated by geographical or organizational boundaries.
If one team elects to meet the power budget by reducing the supply voltage by 200 mV, then another team must ensure that system performance can still be achieved at the reduced voltage level. Thermal planning needs to start at the beginning of the design process, feeding data back to both the system and integrated circuit (IC) design teams. The traditional silo-based, single-physics, single-component design methodologies must be replaced by a simulation-driven product development process that takes a strategic, creative approach to the inherent multiphysics nature of electronic products.
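To make the stakes of that 200 mV decision concrete, consider the first-order CMOS dynamic power relation P = C·V²·f. The following minimal sketch shows how much dynamic power such a reduction saves; the nominal voltage, switched capacitance and clock frequency are illustrative assumptions, not figures from any actual design.

```python
# First-order estimate of dynamic power savings from a supply-voltage
# reduction, assuming P_dyn = C_eff * V^2 * f (CMOS switching power).
# All numbers below are illustrative, not from any specific design.

def dynamic_power(c_eff_farads: float, v_supply: float, f_hz: float) -> float:
    """Dynamic switching power in watts: P = C_eff * V^2 * f."""
    return c_eff_farads * v_supply**2 * f_hz

V_NOM = 0.90           # nominal supply voltage (V), assumed
V_RED = V_NOM - 0.20   # supply reduced by 200 mV
C_EFF = 1.0e-9         # effective switched capacitance (F), assumed
F_CLK = 1.0e9          # clock frequency (Hz), assumed

p_nom = dynamic_power(C_EFF, V_NOM, F_CLK)
p_red = dynamic_power(C_EFF, V_RED, F_CLK)
print(f"Nominal: {p_nom * 1e3:.1f} mW, reduced: {p_red * 1e3:.1f} mW "
      f"({(1 - p_red / p_nom) * 100:.0f}% savings)")
# The savings come at a cost: a lower supply voltage erodes timing
# margin, which is why another team must re-verify performance.
```

With these assumed values the quadratic dependence yields roughly a 40% dynamic power reduction, which is exactly why the decision cannot be made by the power team alone.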
Why Is This Approach Needed?
In part 1 of this series, we looked at an example of a tablet computer and its underlying complexity. Delivering a competitive, commercially viable tablet means that various design aspects must be understood much earlier, before any of the sub-components are manufactured for prototypes. Simulations must be performed to validate the individual sub-components and their interactions with each other. The design of each sub-component (e.g., an IC) requires multiple discrete steps that need to be verified separately. These steps include power analysis at the register transfer level (RTL) stage, power noise simulation for the system-on-chip (SoC) and for the system, near- and far-field EMI analysis, input/output (I/O) double data rate (DDR) timing, and thermal profiling (see the example analyses shown in figure 1).
Figure 1: Various simulations needed to design and verify a tablet device.
Each of these simulations focuses on goals specific to a particular sub-component, optimizing an associated set of design parameters. However, the accuracy and validity of the results of any one simulation depend on the correctness of the models and data generated by the others. For example, performing an accurate system-level EMI or power integrity analysis requires representative electrical models of the ICs, package(s), board and cables. Accurate voltage drop analysis for a high-performance mobile SoC requires detailed, accurate models of the package and the printed circuit board (PCB). For the former, a chip-aware system analysis methodology is required; for the latter, a system-aware chip simulation framework is necessary.
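As a concrete instance of the power integrity problem, a common first-pass budget is the target-impedance rule of thumb, Z_target = (V_dd × allowed ripple) / I_transient: the impedance of the power delivery network (PDN) seen by the die must stay below this value across the frequencies of interest. The sketch below checks a simple series-path-plus-decap lumped model against that budget; all element values are assumptions for illustration, not taken from any real package or board.

```python
import numpy as np

# First-pass power-integrity budget: the PDN impedance seen from the
# die should stay below Z_target = (Vdd * allowed_ripple) / I_transient.
# All element values below are illustrative assumptions.

VDD = 0.9           # supply voltage (V)
RIPPLE = 0.05       # 5% allowed supply ripple
I_STEP = 10.0       # worst-case transient current step (A)
z_target = VDD * RIPPLE / I_STEP
print(f"Z_target = {z_target * 1e3:.2f} mOhm")

f = np.logspace(4, 9, 400)            # 10 kHz to 1 GHz
w = 2 * np.pi * f

# Series path to an (assumed ideal) regulator: spreading resistance
# plus package/board inductance.
z_series = 0.5e-3 + 1j * w * 50e-12   # 0.5 mOhm, 50 pH

# Decoupling branch at the die: ESR + ESL + bulk capacitance.
z_decap = 0.2e-3 + 1j * (w * 10e-12 - 1.0 / (w * 100e-6))

# Impedance seen at the die: the two branches in parallel.
z_pdn = 1.0 / (1.0 / z_series + 1.0 / z_decap)
worst = np.abs(z_pdn).max()
print(f"max |Z_pdn| = {worst * 1e3:.2f} mOhm "
      f"({'meets' if worst <= z_target else 'violates'} the target)")
```

Even this toy model shows the cross-team nature of the problem: the high-frequency violation it reports is dominated by package and decap parasitics, which only the package, board and chip models together can capture.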
Depending on where a component or subsystem resides within the overall system, the simulation framework can be very detailed, providing the highest level of accuracy at the component or subsystem level, or more abstracted, providing greater visibility into the performance of the entire system. This matters because, at the system level, effects that may not manifest at the component level must be taken into account.
Consider how this approach applies to the design of an antenna system for a phased-array radar, viewed through its hierarchical parts. At the lowest level is an individual antenna element (a flared notch or Vivaldi antenna, for example), which is assembled into an array. The array antenna is mounted within an aerodynamic radome that may have a frequency selective surface (FSS) applied to it. Finally, the radar system is mounted on an aircraft.
While designing the individual antenna elements, very detailed analyses are performed to ensure each component meets its requirements. Once these elements are placed in the array, the analyses must be repeated to ensure that interference between elements does not degrade their behavior. At the radome or aircraft level, more abstract models are used to predict how the array will behave while operating at Mach 2.0, for example. At this stage, other effects, such as the impact of the thermal signature on the electromagnetic profile of the system, must be addressed. All of these simulations must be performed in a virtual environment, enabling optimization across multiple parameters to meet the various operating standards before the first physical prototype is built.
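At the element-to-array step, the first-order behavior is captured by the classical array factor; the element-level detail and mutual coupling described above are precisely what this idealization omits. A minimal sketch, assuming a uniform linear array with an illustrative frequency, element count and spacing:

```python
import numpy as np

# Idealized array factor of an N-element uniform linear array:
#   AF(theta) = sum_n exp(j * n * (k*d*sin(theta) - beta)).
# Real analyses must add the element pattern and mutual coupling;
# this sketch shows only the first-order array behavior.

C0 = 3e8                      # speed of light (m/s)
FREQ = 10e9                   # X-band frequency (Hz), illustrative
LAM = C0 / FREQ
N, D = 16, LAM / 2            # 16 elements, half-wavelength spacing
STEER_DEG = 20.0              # commanded beam-steering angle

k = 2 * np.pi / LAM
beta = k * D * np.sin(np.radians(STEER_DEG))  # progressive phase shift

theta = np.radians(np.linspace(-90, 90, 1801))
psi = k * D * np.sin(theta) - beta
af = np.abs(np.exp(1j * np.outer(np.arange(N), psi)).sum(axis=0)) / N

peak = np.degrees(theta[np.argmax(af)])
print(f"Beam peak at {peak:.1f} deg (commanded {STEER_DEG} deg)")
```

The gap between this closed-form idealization and the detailed element and platform simulations is exactly the fidelity that the hierarchical methodology manages level by level.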
Similarly, when designing the power delivery network of an IC, the work starts at the intellectual property (IP) level, such as an ARM processor core or an L2 cache. To validate the inner details of the IP, the dynamic voltage drop analyses must be very detailed, down to the lowest level of metal, and must span multiple use models and scenarios. When moving to the SoC level, which may employ multiple cores, such detailed simulations can take much longer or may be computationally infeasible.
Abstracted models of the cores or IP must be created in a way that preserves the fidelity of the analysis (current flow through the shared package and redistribution layer, or RDL) while allowing quicker turnaround. This enables more analyses to be performed and builds greater sign-off confidence by validating the proper connection of the IP and cores at the SoC level, their coupling to each other, and the impact of the package on them.
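A hypothetical illustration of this kind of abstraction: thousands of per-instance demand-current waveforms are collapsed into one aggregate waveform per package port, preserving exactly the current that the shared package and RDL see while discarding instance-level detail. All array sizes and statistics below are made up for the sketch.

```python
import numpy as np

# Hypothetical model abstraction: collapse per-instance current
# waveforms into one lumped signature per package bump/port,
# preserving the total current the package and RDL actually see.

rng = np.random.default_rng(0)
N_INSTANCES, N_SAMPLES, N_PORTS = 10_000, 500, 8

# Detailed view: one demand-current waveform per instance (A), plus
# a mapping of each instance to the bump/port it draws through.
i_detail = rng.gamma(2.0, 5e-6, size=(N_INSTANCES, N_SAMPLES))
port_of = rng.integers(0, N_PORTS, size=N_INSTANCES)

# Abstracted view: per-port aggregate waveforms. Summation preserves
# current fidelity at the shared package/RDL interface.
i_port = np.zeros((N_PORTS, N_SAMPLES))
np.add.at(i_port, port_of, i_detail)

# Total current is unchanged; only on-die layout detail is discarded.
assert np.allclose(i_port.sum(axis=0), i_detail.sum(axis=0))
print("Compact model:", i_port.shape, "vs detailed:", i_detail.shape)
```

The compact model is three orders of magnitude smaller here, which is what makes repeated SoC-level analysis runs practical.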
Moving to the system level, even the SoC-level view is too detailed, and more compact models are required. These models must still capture the electrical behavior of the IC and produce results consistent with those obtained from a detailed simulation. They must also be application-specific, matched to the system-level analysis being performed, whether it is thermal, EMI, power or signal integrity.
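One way to picture "application-specific" is as a set of exported views of the same SoC, each keyed to the analysis it serves. The sketch below is purely illustrative; the view names and contents are assumptions, not any vendor's actual model format.

```python
from dataclasses import dataclass

# Illustrative only: one compact "view" of the same SoC per
# system-level analysis. Names and fields are assumptions.

@dataclass(frozen=True)
class SocModelView:
    analysis: str   # which system-level analysis the view serves
    contents: str   # what the compact model must capture

SOC_VIEWS = {
    "thermal": SocModelView(
        "thermal", "power map + package thermal resistance"),
    "emi": SocModelView(
        "emi", "switching-current spectra at core and I/O ports"),
    "power_integrity": SocModelView(
        "power_integrity", "per-port transient current + on-die PDN"),
    "signal_integrity": SocModelView(
        "signal_integrity", "I/O buffer models + pad parasitics"),
}

def view_for(analysis: str) -> SocModelView:
    """Pick the compact SoC model matching the system-level analysis."""
    return SOC_VIEWS[analysis]

print(view_for("thermal").contents)
```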
The IP, the SoC and the system design are often supplied by different business units or even different companies. Data sharing between these teams must preserve and protect each team's IP while providing sufficient data for accurate simulation at the next level. Once these teams are confident that their IP is protected, they are more likely to share and collaborate.
Summary
A simulation-driven product development process is required to meet the challenges of rising power consumption, tighter design schedules and shrinking product margins. A methodology that incorporates both top-down and bottom-up considerations is necessary. To make this methodology successful, a strong collaborative framework must be adopted by all parties involved. Such a framework helps ensure that data and specifications are protected and shared effectively, driving the convergence that co-design and co-analysis require to solve these growing challenges.