OPINION

EDA’s Dr. Jekyll and Mr. Hyde

The need for tool integration and the reality of competition create a Jekyll and Hyde dichotomy for the EDA industry


By Joe Davis

The need for tool integration and the reality of competition create a Jekyll and Hyde dichotomy for the EDA industry. Customers are demanding functionality that requires tight integration between tools even when they come from competing EDA companies. We have some examples of success, but this remains a challenging area. Customers can help or sit on the sidelines, but integration will only come where customers make it a business issue.

Whether EDA vendors like it or not, most design groups combine tools from multiple vendors to create their design flows. While every EDA vendor strives to provide a suite of tools for a complete physical implementation offering, the reality is that no single EDA vendor currently offers “best in class” tools across the entire implementation flow. In fact, there are dominant tools in many, if not most, of the positions in the design flow, and no single vendor holds a sufficiently strong position such that a majority of customers are willing to truly consolidate to that single vendor.

Figure 1 shows a high-level view of the physical implementation flow and the de facto leader in each category. If you want to build a complete custom and digital flow using the “market leader” in each segment, you necessarily need integrations between tools from multiple vendors. In the past, this integration was achieved mainly through exchanges of standard or pseudo-standard file formats, with scripting to glue the control flow together.

Figure 1: High-level physical implementation flow with de facto leaders in each segment


Each of the connections in this diagram requires, at a minimum, data and control communication. Historically, design data has been passed in files using standard or pseudo-standard interchange formats, and tool control has been implemented through user-level scripting interfaces; a minimal sketch of this kind of glue follows the list below.

Standard file interchange formats:

  1. Layouts—GDSII, OASIS, and LEF/DEF. (All three are well-established industry standards)
  2. Netlists—SPICE (each EDA vendor has its own variant, so this is a pseudo-standard), Verilog (IEEE standard)
  3. RTL—VHDL, Verilog (IEEE standards)
  4. Parasitics—SPEF (IEEE standard), DSPF (pseudo-standard)
  5. .LIB—timing information for place & route tools. (Open standard)
  6. Power—UPF (Accellera standard), CPF (Si2 standard)
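
To make the “files plus scripting” model concrete, here is a minimal sketch of the kind of glue script design teams maintain to drive a multi-vendor batch flow. The tool names, options, and file names are hypothetical placeholders, not real vendor command lines; the point is only that every hand-off rides on one of the standard formats above, while the control flow lives in a user-maintained script.

    #!/usr/bin/env python3
    # Illustrative flow "glue" only: every tool command and option below is a
    # hypothetical placeholder, not an actual vendor command line.
    import subprocess
    from pathlib import Path

    WORK = Path("work")

    def run(stage, cmd):
        # Run one flow stage as a batch job and stop the flow if it fails.
        print(f"[{stage}] {' '.join(cmd)}")
        subprocess.run(cmd, check=True, cwd=WORK)

    def main():
        WORK.mkdir(exist_ok=True)
        # Synthesis: RTL (Verilog) in, gate-level netlist (Verilog) out.
        run("synthesis", ["synth_tool", "-rtl", "top.v", "-lib", "stdcells.lib",
                          "-out_netlist", "top_netlist.v"])
        # Place & route: netlist plus LEF in; DEF/GDSII layout and SPEF parasitics out.
        run("place_route", ["pnr_tool", "-netlist", "top_netlist.v", "-lef", "tech.lef",
                            "-out_def", "top.def", "-out_gds", "top.gds",
                            "-out_spef", "top.spef"])
        # Sign-off: each verification tool reads back the standard files written upstream.
        run("drc", ["drc_tool", "-gds", "top.gds", "-rules", "foundry.rules"])
        run("sta", ["sta_tool", "-netlist", "top_netlist.v", "-spef", "top.spef",
                    "-lib", "stdcells.lib"])

    if __name__ == "__main__":
        main()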

For instance, users of Mentor’s Calibre verification tools have long been able to go to a menu in Cadence’s Virtuoso or Synopsys’ IC Compiler, and call Calibre DRC, which then asks the design tool to stream out a GDSII file and initiate a batch Calibre verification run. The results are then highlighted back into the layout and schematic portions of the design tool for correction. All of this is achieved with very simple, robust interfaces that are mostly at the “user” level, meaning that experienced and motivated users could create the same interface using standard user documentation of the tools, if they really wanted to do so. This type of “tool integration” is quite common, and is the simplest level of integration in EDA flows.
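As a rough illustration of how such a menu-level integration is typically wired, the sketch below shows the round trip in Python. The design-tool session object, the verifier command line, and the results format are hypothetical stand-ins rather than any vendor’s actual interface, but the shape is the same: stream out, run in batch, read results back, highlight.

    # Sketch of a menu-level "tool integration" inside a layout editor.
    # The session API (stream_out, highlight), the verifier command line, and the
    # results file format are hypothetical stand-ins for illustration only.
    import subprocess

    def parse_results(path):
        # Hypothetical results format: one violation per line,
        # "rule_name layer x1 y1 x2 y2".
        with open(path) as f:
            for line in f:
                rule, layer, *coords = line.split()
                yield rule, layer, tuple(float(c) for c in coords)

    def run_batch_drc(session, rule_file):
        # 1. Ask the design tool to stream the current cell out as GDSII.
        gds_path = session.stream_out(format="GDSII", path="current_cell.gds")

        # 2. Launch the sign-off DRC tool as a separate batch process.
        subprocess.run(["signoff_drc", "-layout", gds_path, "-rules", rule_file,
                        "-results", "drc_results.txt"], check=True)

        # 3. Highlight each violation back in the layout view for correction.
        for rule, layer, bbox in parse_results("drc_results.txt"):
            session.highlight(layer=layer, bbox=bbox, note=rule)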

The next level of integration is for the verification tools to read the data directly from the design databases. Luckily, we also see this type of integration in the industry, even if it is not as well-known. The OpenAccess database is an open standard that is readily accessible to all vendors and users. LEF/DEF is owned by Cadence, but published by Si2, so it is almost an open standard. Synopsys provides select partners with access to its MAP-in libraries, which enables 3rd-party tools to read directly from the Synopsys Milkyway database. Again, as an example, Calibre is able to read directly from OpenAccess, LEF/DEF, and even Milkyway databases, and write selected results back (e.g., “back-annotate”) to these databases. This “data integration” requires the companies to exchange specifications or APIs, which is a closer collaboration than using standard file exchanges, but does not enable dynamic verification during design.
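A sketch of what data-level integration looks like from the verification tool’s side follows. The real OpenAccess API is a C++ library; the oa_binding module, its functions, and the marker layer here are hypothetical, used only to show the shape of the approach: open the design database directly, walk the live data model, and back-annotate results into it rather than exchanging files.

    # Illustration of "data integration" against a shared design database.
    # "oa_binding" and everything on it is a hypothetical wrapper for this sketch;
    # the real OpenAccess API is C++ and the details differ.
    import oa_binding as oa  # hypothetical module

    def verify_from_database(lib, cell, view, rule_checker):
        # Open the design in the shared database directly -- no GDSII stream-out.
        design = oa.open_design(lib, cell, view, mode="a")
        try:
            # The checker walks the live data model instead of a file snapshot.
            violations = rule_checker.check(design.top_block())
            # "Back-annotate": write marker shapes into the database so the design
            # tool can display them, instead of handing back a results file.
            for v in violations:
                design.create_marker(layer="DRC_MARKER", bbox=v.bbox, text=v.rule)
            return violations
        finally:
            design.close()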

The reason that these limited types of tool-level and data-level integration have worked for so long is that, for older technologies, the design could converge with only a few loops between the design tool and the verification tool. Sign-off tools are meant to be absolutely accurate and to work on large designs, so they can take hours to run and produce volumes of data. Design tools therefore use approximations of the sign-off models in their inner loops; while limited, these approximations are very fast and get “close enough” that the design converges quickly between the design tool and the sign-off verification run. As always, the design flow is a constant trade-off between gain and bandwidth, or accuracy and speed.

But what happens when the internal approximations aren’t “good enough” any longer? The larger the gap between the design tool’s approximations and the sign-off tool’s qualified ruledecks, the more batch iterations are required between design implementation and verification. More iterations mean longer design times, more hardware, and more designers…meaning more money. More money is good on the income side, but, in this case, we are talking design cost…so, not so good.

The only way to avoid these costs is to eliminate the gap between the internal verification and the sign-off verification. Now, it isn’t really viable to expect design creation tools to take on the full complexity and scope of foundry-qualified ruledecks, so the real answer is a tight integration between the design creation tool and the sign-off verification tools. Of course, this conclusion isn’t a huge intellectual leap, and almost every EDA company has made it. In fact, there are numerous examples of companies creating “in the loop” offerings to combine their own design and verification tools—Magma Talus and Quartz, Synopsys IC Compiler and IC Validator, Mentor Olympus-SoC and Calibre…and many more.

The reason for EDA companies to create such integrated offerings is obvious—the integrated flow can be much more productive and reduce overall cycle time and resources. You’ll notice, of course, that all of these integrated offerings only contain their own tools. If you refer back to the diagram, few of the key positions are held by the same company, which means that a customer would likely have to change one of their tools to adopt this “integrated within a single EDA vendor” offering.

One reason for only integrating between tools from a single EDA vendor is, of course, product strategy. The other, though, is that it can be very difficult to make the integrated offering actually more productive than the separate tools. A truly productive integration between design and sign-off verification tools must meet the following requirements:

Requirement (integration implication):

  1. Exactly the same results as the sign-off tool (must use the same sign-off engine)
  2. Interactions must stay in the design tool (access to tool states and control of GUI elements)
  3. Interactive speed (use of in-memory data)
  4. Tools delivered independently by EDA companies (modular integration with robust interfaces)

Clearly, it is far easier to achieve these requirements if you only integrate between your own tools. Even then, such tight integration is not simple to execute. Multiple groups within the same company don’t necessarily share release schedules or even product release processes, let alone such subtle but important details as compiler and linker versions, which matter if you are going to enable in-memory integration. The two biggest challenges are 1) defining a way to exchange data from the in-memory model of the design tool, and 2) the use model.

Even the first item is both a technical and a business issue. The design tool, of course, has internal APIs for data management already, but they are almost never public. How do you define, control, and license access to your proprietary database in a way that is sufficiently powerful to create a useful solution, yet still maintain your ability to enhance the tool to meet ever-increasing requirements from your own customers? Oh, and you almost certainly have competing sign-off tools, so how do you facilitate a useful integration between your design tool and a competitor’s sign-off tool and still enable your own sign-off tool to compete?
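One common answer, sketched below purely as an illustration, is to publish a narrow, versioned interface rather than the database itself: the design tool exposes only the views and write paths an external sign-off engine needs, and keeps its internal data structures private. All of the class and method names here are hypothetical and do not correspond to any real product API.

    # Hypothetical shape of a narrow, versioned interface over a proprietary
    # in-memory design model; none of these names come from a real product.
    from abc import ABC, abstractmethod
    from typing import Iterable, Tuple

    API_VERSION = (1, 0)  # partners integrate against a declared version, not the internals

    Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

    class LayoutView(ABC):
        # Read-only view of the design that an external engine is allowed to see.
        @abstractmethod
        def cell_name(self) -> str: ...
        @abstractmethod
        def shapes_on_layer(self, layer: str) -> Iterable[Box]: ...

    class ResultSink(ABC):
        # The only write path offered back into the design tool.
        @abstractmethod
        def add_marker(self, layer: str, bbox: Box, message: str) -> None: ...

    class SignoffEngine(ABC):
        # What an external sign-off tool implements in order to be called in-memory.
        @abstractmethod
        def check(self, view: LayoutView, results: ResultSink) -> int:
            """Run checks against the live view; return the number of violations."""

Keeping the surface this small is what makes the access controllable and licensable: the vendor can evolve its internal database freely as long as the published interface and its version contract are preserved.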

Why is the use-model a challenge? Well, it is all about control and credit. Which tool drives the interaction, and therefore “controls” the integration? Which tool is in control defines who gets “credit” for the value of the integrated tool (meaning who gets to charge for it). These integrations take non-zero work, so each company in the partnership has to ask “What’s in it for me?” You can see why tight integration between independent tools often doesn’t get off the ground.

In the custom design world, OpenAccess solves both problems rather well by defining a public in-memory API for the data model used in custom design tools. Since OpenAccess provides open access to the in-memory data, the first challenge is clearly solved. The second challenge of defining who gets the credit is addressed by dramatically reducing the unique work that is required in order to do the integration. The less work that there is to do for the integration, the easier it is to find a win/win arrangement.

In the P&R world, there isn’t such a public API, so each P&R tool currently has a tight integration only with its own sister sign-off tools. However, it isn’t such an insurmountable task to define a win/win when there is a big enough problem to be solved. This is where the voice of the customer is important.

Ultimately, that’s what it always comes down to—customer demand. What do you need in your physical implementation flow to be successful two years from now? Or next year? Or even today? What issues are standing in the way of creating better designs and getting them to market faster? It may be that fundamentally new technology is needed, but sometimes all of the pieces already exist—they’re just not put together in the most productive way. If it turns out that direct integration between independent tools is the best or most expedient answer to those questions, you, the customer, can make it happen. It isn’t as simple as just making a phone call—tight integration between separate tools and companies isn’t quite that easy!—and an investment on your part is required to make the business and technical needs clear and validate the end result. However, with a little effort, you get to be the first to benefit from the results, and maybe grab a little market advantage in the process.

About the Author


Joe Davis has worked on both sides of the EDA industry—designing ICs and developing tools for IC designers and manufacturers. He is currently the Product Manager for Calibre interactive and integration products at Mentor Graphics. Joe earned his BSEE, MSEE and Ph.D. in Electrical and Computer Engineering from North Carolina State University. When he is not applying his expertise in data visualization and engineering workflow, Joe enjoys sailing, gardening, hiking, and living and working in new places and cultures.


