Best practices for dealing with co-dependent effects and making in-design changes.
We find ourselves at an interesting point in the evolution of semiconductor technologies and electronic systems as a whole. As Moore's Law continues to charge forward, tools and workflows continue to adapt to major trends, including:
The increase in design complexity and size has been well documented over the past few decades, driven directly by the ability to pack more functionality onto chips as transistor feature sizes shrink exponentially.
Chip and electronics system designers have traditionally required simulation tools to keep pace with these larger, more complex designs by increasing runtime efficiency and capacity with more efficient algorithms, compute technologies, and novel ways of approaching design flows.
More recent tool requirements stem from the need to keep up with new paradigms in multi-physics and multi-domain interactions: not only accounting for interdependent effects in a unified simulation, but also translating the resulting simulation data into meaningful insights that lead to actionable design changes. For example, one needs to be able to correlate the results of a thermal simulation involving the package, heat sink, and airflow with the chip's final susceptibility to wire/via electromigration, and to visualize the different culprits leading to the ultimate violations.
This is an example of one physical effect (heat) having an adverse effect on another physical property (electrical reliability of wires and vias). Changes to the chip's power grid topology, changes to the package/heat sink/ambient temperature, or changes to the activity distribution of the on-chip logic gates could all influence the final outcome. Quite a few variables to take into account! Narrowing down root causes and making meaningful changes aren't possible without a pragmatic turnaround time for each design change and the ability to assess the multi-domain effects together.
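To make the heat-to-reliability coupling concrete, here is a minimal sketch of why a thermal result feeds directly into electromigration checks. It derates a wire's EM current-density limit with temperature using Black's equation; the activation energy and current exponent used here are illustrative assumptions, not foundry data, and real sign-off tools apply per-layer, per-via models.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K


def em_current_limit(j_ref_ma_um, t_ref_c, t_c, ea_ev=0.9, n=2.0):
    """Derate an electromigration current-density limit from a reference
    temperature to an operating temperature, holding lifetime constant.

    Black's equation: MTTF ~ J**(-n) * exp(Ea / (k*T)).  Fixing MTTF,
    the allowed J scales as exp((Ea / (n*k)) * (1/T - 1/T_ref)).

    j_ref_ma_um : EM limit (mA/um) specified at t_ref_c (deg C)
    ea_ev, n    : activation energy and current exponent (assumed values)
    """
    t_ref_k = t_ref_c + 273.15
    t_k = t_c + 273.15
    scale = math.exp((ea_ev / (n * K_B)) * (1.0 / t_k - 1.0 / t_ref_k))
    return j_ref_ma_um * scale


# A wire running hotter than its reference temperature must carry less
# current to meet the same lifetime target, so a local hot spot found in
# thermal analysis tightens the EM budget on the wires passing through it.
limit_hot = em_current_limit(j_ref_ma_um=1.0, t_ref_c=105.0, t_c=125.0)
```

With these illustrative parameters, a 20 °C rise cuts the allowed current density roughly in half, which is why a package- or heat-sink-level change can turn a clean EM report into a violating one.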
Multiple domains once treated as entities to be signed off independently (e.g., timing and dynamic voltage drop, or the chip design and package and board design) are now interrelated and require co-simulation. Even within a chip itself, the analog and digital sections can experience power noise coupling through the power grid or substrate, or the switching voltage waveforms of I/O signals on the periphery can be impacted by the core’s power noise as well as the printed circuit board (PCB) traces.
In addition to being able to capture these co-dependent effects through simulation, designers can greatly benefit from the ability to quickly iterate "in-design" changes without being limited by the runtime hit often associated with sign-off checks. Using ANSYS solutions, sign-off-quality features that traditionally highlighted potential issues near the end of a project can now be incorporated into the design process itself, helping to identify and fix as many issues as possible and produce a better design for the sign-off stage. Getting feedback on design changes within hours instead of days changes the nature of the design flow itself, and ultimately enables minimal over-design and cost savings.
ANSYS has pioneered approaches to address these diverse needs, leveraging over 14 years of market focus in chip power integrity and reliability, and nearly 5 years in chip-aware system analysis. Learn more about how in-design analysis, distributed computing, and co-analysis capabilities are employed in the upcoming webinar, Accelerate Sign-off Convergence Through In-design Analysis, to be held on Sept. 14 at 4 p.m. EDT (1 p.m. PDT).