Elasticity Without Compromise

Compute power to infinity and beyond.


At the dawn of the twenty-first century, the available RAM on a physical machine dictated, and ultimately limited, the size of designs that Ansys HFSS could simulate. This prompted engineers to buy incredibly expensive pieces of hardware — well into the six figures — to solve their most challenging problems.

By necessity, engineers learned to “divide and conquer” their largest, most challenging designs by splitting a model’s geometry up into multiple regions, then stitching the results back together at a later stage in the process. This “divide and conquer” technique proved error-prone and fundamentally compromised the model’s fidelity since not all electromagnetic coupling was considered.

What if there is another way to reliably run extremely large, highly accurate electromagnetic models to deliver critical design data that improves end products?

More compute power, less cost
Thanks to Ansys HFSS’ distributed memory matrix solving technology (DMM), design sizes are no longer dictated by the amount of memory on a single machine. DMM enables engineers to optimally leverage elastic hardware infrastructures by networking together multiple machines to solve their largest problems. Engineers now use HFSS to solve larger and more complex models than ever thought possible — without compromising accuracy.

Instead of buying one expensive workstation with an enormous amount of memory, users can chain together smaller, less expensive workstations in a standard cluster through a parallel elastic machine configuration. This greatly reduces hardware costs, making large-scale simulations much more accessible.

For example, to solve a problem that requires 128 gigabytes of RAM, an engineer could network together four workstations equipped with 32 gigabytes each. Because RAM pricing scales nonlinearly with capacity, those four smaller machines cost significantly less than a single large 128-gigabyte machine.
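The cost argument above can be sketched with a few lines of arithmetic. All prices below are made up purely for illustration; they are not Ansys figures or real hardware quotes:

```python
# Hypothetical price list illustrating nonlinear RAM pricing:
# the per-gigabyte cost of a high-capacity configuration is
# disproportionately higher (all dollar figures are invented).
PRICE_USD = {32: 3_000, 128: 25_000}  # workstation cost by RAM size (GB)

cluster_cost = 4 * PRICE_USD[32]  # four 32 GB workstations -> 128 GB total
single_cost = PRICE_USD[128]      # one 128 GB machine

print(f"Cluster of four 32 GB machines: ${cluster_cost:,}")
print(f"Single 128 GB machine:          ${single_cost:,}")
assert cluster_cost < single_cost  # same aggregate memory, lower spend
```

With these illustrative numbers, the four-machine cluster delivers the same 128 gigabytes of aggregate memory for less than half the price of the single large machine.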

This elastic approach also creates tremendous flexibility to solve a wider range of problems. Connecting two machines can solve a problem roughly twice as large. Connecting four machines can solve an even larger HFSS model. And by adding even more, the sky’s the limit. Today it is not unusual for HFSS users to solve problems that are terabytes in size, simulating designs that just a few years ago would have been impossible.

Gearing up
Running an HFSS simulation across your workstation fleet couldn’t be simpler. After installing the HFSS software on each of your machines, simply designate those machines’ cores to run the entire simulation. All the machines collaborate to solve the problem, using an integrated auto-HPC algorithm. An even better workflow is realized by integrating HFSS with a high-performance computing (HPC) scheduler like Windows HPC, which automatically identifies resources and queues simulations to run on such large compute clusters — all via an integrated “submit job” launch dialog.

Using this powerful HPC solution, customers are solving designs that are much larger than they ever thought possible.

Enter the cloud
But what if your organization cannot afford the hardware cost of multiple computers or is looking for a more cost-effective solution? No worries: HFSS’ parallel elastic machine capability is easily leveraged on Ansys Cloud, built on Microsoft Azure. Launching to Ansys Cloud is fully integrated into the familiar HFSS desktop via a simple “submit jobs” dialog. Ready access to this massive compute power has proven its value in the wake of the paradigm shift imposed by the COVID-19 pandemic, in which many engineers work from home, lack on-premises computational power, and need instant access to massive amounts of hardware. Together, HFSS and Ansys Cloud have made this a reality.

This helps engineers — from startups to large multinationals — reliably solve incredibly large and complex models with golden fidelity while eliminating physical equipment and storage device expenses. What’s the endgame? More engineers can innovate, dream big and accomplish tasks never thought possible, leading to end products that perform better within their systems at lower cost.

For example, semiconductor engineers analyzing an integrated circuit need to model many complex interconnect layers and access foundry technology parameters that are typically encrypted. The Ansys RaptorH product provides an optimized platform for launching HFSS specifically for on-silicon analyses. It includes full native handling of GDSII data and access to encrypted foundry tech files for quickly and easily running golden HFSS on chip data. It supports the same distributed HFSS elastic compute capabilities to simulate even the largest semiconductor designs.

HFSS’ elastic machine configuration on Ansys Cloud helps engineers solve far larger IC, package, board, and system designs without the compromises associated with capacity-driven partitioning. This, in turn, enables more aggressive design rules, smaller design margins, and a more compact and powerful design. What does that mean? Engineers can incorporate more functionality into a smaller footprint, achieve lower power consumption and design more powerful solutions for their markets at a lower cost.

Using Ansys Cloud has never been easier. After “swiping your card” to purchase Ansys Elastic Units, you’re off and running as your web browser connects to a new, yet familiar interactive HFSS user interface where you can upload your design, run your HFSS simulation and perform post-processing.

For a closer look at cloud computing for electromagnetic designs, learn more here. For a deep dive into all things Ansys HFSS, check out the many articles, white papers and webinars here.



