Three major phases in the development of EDA tools and the need to adapt to modern computing fabrics.
Cadence has a new white paper out on Computational Software. I’ve written on these topics before in earlier posts.
To set the scene, here is the abstract from the white paper:
Electronics technology is evolving rapidly, becoming pervasive in our lives. There are more smartphones in use than there are people on earth, driver assistance is now common in automobiles, commercial airplanes have increasingly sophisticated infotainment, and wearable health monitors exist for a plethora of missions. Every device is generating and communicating massive amounts of data, including audio and video, creating an explosion of information. Even with today’s technological innovation, there is not enough network bandwidth, cloud compute, and storage to capture, process, and analyze all this data.
If I had to trace the development of EDA tools on a decadal timescale, I’d say we have gone through three major phases.
Phase 1
Phase 1 was state-of-the-art algorithms encapsulated in tools: the best routing algorithm, the best timing algorithms, the best synthesis algorithms, and so on. Often these tools were the only product of an EDA company founded by experts in that domain. Semiconductor companies had large CAD groups (hundreds of people) who would take these tools and develop flows around them with scripting languages so that they could actually create a chip. This was the era of “best-in-class point tools”. A second part of phase 1 was when companies like Cadence acquired some of these companies to create a broader product line. For example, Cadence acquired Tangent for gate-array and cell-based place and route (P&R), Gateway for Verilog simulation, and Valid for printed-circuit-board design (Allegro). It had developed in-house (going back to SDA and ECAD days) the Virtuoso environment for custom and analog layout, Dracula for DRC, and some other products, like Symbad, that turned out to be dead ends. There’s not a firm division between the phases in my story, but at the end of phase 1 the EDA industry consisted mostly of a number of companies like Cadence with a fairly broad product line (including some major holes), plus a lot of single-product startups.
Phase 2
Phase 2 was the recognition that these state-of-the-art algorithms had to start working together. For example, timing-driven P&R requires a timing engine to be part of the tool. This caused problems for the large CAD groups when the timing engine in the P&R tool was different from the timing engine in the simulator or, once it came along, static timing analysis. The tools were not integrated, and it was obviously unsatisfactory if the P&R thought it had met timing but the timing analysis disagreed. After all, how do you tell the P&R to try harder? In practice, especially with synthesis, you overconstrain the problem as if you were negotiating with the tool: you ask for 600MHz and hope to get 500MHz. The solution was to have common engines for common functions so that there would be only one answer to any given question. “A man with one watch knows what time it is; a man with two watches is never sure.” However, that was a massive undertaking, akin to changing the oil on a car without stopping. Swapping out, say, the timing engine in a synthesis tool without breaking it is not a simple job. By the end of the second phase, EDA tools generally shared placement, timing, extraction, design-rule checking, and other engines.
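As a toy illustration of that “one watch” idea, here is a minimal Python sketch, with entirely hypothetical class and function names (not any real EDA API), in which a placement step and a signoff step both query the same timing engine, so they can never disagree about whether a path meets timing.

```python
# Hypothetical sketch: one shared timing engine queried by every tool step,
# so placement and signoff always agree on slack. Not a real EDA API.

from dataclasses import dataclass

@dataclass
class Path:
    name: str
    delay_ps: float          # estimated path delay in picoseconds

class TimingEngine:
    """Single source of truth for timing questions."""
    def __init__(self, clock_period_ps: float):
        self.clock_period_ps = clock_period_ps

    def slack_ps(self, path: Path) -> float:
        return self.clock_period_ps - path.delay_ps

    def meets_timing(self, path: Path) -> bool:
        return self.slack_ps(path) >= 0.0

def placement_step(paths, timing: TimingEngine):
    # Placement works on whichever paths the shared engine says are failing.
    return [p for p in paths if not timing.meets_timing(p)]

def signoff_step(paths, timing: TimingEngine) -> bool:
    # Signoff asks the same engine, so its verdict cannot contradict placement's.
    return all(timing.meets_timing(p) for p in paths)

if __name__ == "__main__":
    engine = TimingEngine(clock_period_ps=2000.0)   # 500 MHz target
    paths = [Path("cpu/alu", 1800.0), Path("cpu/mul", 2150.0)]
    print("placement must fix:", [p.name for p in placement_step(paths, engine)])
    print("signoff clean:", signoff_step(paths, engine))
```

With two separate timing engines, the two steps could each be internally consistent yet give different answers to the same question; sharing one engine removes that possibility by construction.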
Phase 3
Phase 3 is where we are today. Processor speeds capped out and processor companies delivered increasing power through multi-core. Some EDA algorithms could take advantage of this fairly easily (many DRC rules can be checked independently, for example). But many, notably simulation with a global concept of time and causality, struggled. Large semiconductor and system companies put in place server farms with tens of thousands and then hundreds of thousands of processors. EDA tools had been created in an era where the main way to get more compute power was to wait for a faster processor. Now, with big data centers and multi-core, there were vast amounts of compute power available but in a way that the design tools had not been architected to take advantage of. Phase 3 would be the repartitioning of these tools to leverage modern computing fabrics.
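To make the “some algorithms parallelize easily” point concrete, here is a small Python sketch, with invented rule functions and a stand-in layout (not a real DRC engine), that fans independent design-rule checks out across cores using only the standard library. This is roughly the kind of repartitioning phase 3 calls for.

```python
# Hypothetical sketch: independent DRC rules checked in parallel across cores.
# The rules and "layout" here are stand-ins, not a real DRC engine.

from concurrent.futures import ProcessPoolExecutor

def check_min_width(layout):
    return ("min_width", all(w >= 0.05 for w in layout["widths"]))

def check_min_spacing(layout):
    return ("min_spacing", all(s >= 0.07 for s in layout["spacings"]))

def check_via_enclosure(layout):
    return ("via_enclosure", all(e >= 0.02 for e in layout["enclosures"]))

RULES = [check_min_width, check_min_spacing, check_via_enclosure]

def run_drc(layout):
    # Each rule is independent of the others, so each can run on its own core.
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(rule, layout) for rule in RULES]
        return dict(f.result() for f in futures)

if __name__ == "__main__":
    layout = {
        "widths": [0.06, 0.08, 0.05],
        "spacings": [0.07, 0.09],
        "enclosures": [0.02, 0.03],
    }
    print(run_drc(layout))   # e.g. {'min_width': True, 'min_spacing': True, ...}
```

Simulation is the awkward case: a single global notion of time and causality means the work cannot simply be cut into independent pieces like this.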
Traditionally, the EDA process was split into different steps: synthesis, placement, routing, extraction, timing signoff. With common engines, these worked together better than in phase 1, when all the engines were different. But a better way was to partition the design, as much as possible, so that all the phases interfaced tightly on common data structures but no single step had to handle the whole design at once. When that works, a big design can be scaled across many cores and servers. Some algorithms are still really hard to parallelize, such as placement, which is inherently global, at least at the start of the process. But other algorithms are more straightforward. For example, detailed routing on one part of a chip doesn’t really interact with routing on another part of the chip once the global router has divided up the task.
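Here is a deliberately simplified Python sketch of that partitioning idea. The global router and region-level router are stand-ins I have invented for illustration; the point is only that once nets have been assigned to regions, each region’s detailed routing can run on a separate worker with no interaction between regions.

```python
# Hypothetical sketch of phase-3 style partitioning: after global routing has
# assigned nets to regions, each region is detail-routed independently.

from concurrent.futures import ProcessPoolExecutor

def route_region(region_id, nets):
    # Stand-in for detailed routing confined to one region of the chip.
    # Real detailed routing works on geometry; here we just tag the net names.
    return region_id, [f"{net}:routed" for net in nets]

def global_route(all_nets, num_regions):
    # Trivial stand-in for a global router: hash nets into regions.
    regions = {i: [] for i in range(num_regions)}
    for net in all_nets:
        regions[hash(net) % num_regions].append(net)
    return regions

def detailed_route(all_nets, num_regions=4):
    regions = global_route(all_nets, num_regions)
    # Each region is routed in its own process; regions never talk to each other.
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(route_region, rid, nets)
                   for rid, nets in regions.items()]
        return dict(f.result() for f in futures)

if __name__ == "__main__":
    nets = [f"net_{i}" for i in range(16)]
    for rid, routed in detailed_route(nets).items():
        print(rid, routed)
```

The same structure scales from a handful of local cores to a server farm: the partitioning, not the individual algorithm, is what lets the tool soak up the available compute.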
Even interactive programs benefit from this tighter integration, being able to open an editing window on a chip from inside the package editor, or vice versa, without having to explicitly read and write files, and switch tools.
Another part of phase 3 has been the addition of deep learning approaches to guide algorithm selection under the hood. A lot of the work of running EDA tools, towards the end of a design anyway, is looking at the results from one run, tweaking a few parameters, and then doing another run. Deep learning allows the tool to tweak the parameters itself in an intelligent way. In this case, intelligent means learning from other similar designs with the same design style at the same company, and also learning from earlier runs of the tools on the same design.
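As a deliberately simplified sketch of the “learn from earlier runs” loop, here is a small Python example. The tool, its single effort parameter, and its quality score are all invented for illustration, and the “learning” is just a local search over past results rather than a real deep learning model; the shape of the loop is what matters.

```python
# Hypothetical sketch: use the results of earlier runs on the same design to
# pick the next parameter setting, instead of a human tweaking it by hand.
# The "tool", its parameter, and its quality metric are stand-ins.

import random

def run_tool(effort: float) -> float:
    # Stand-in for an expensive P&R run: returns a quality score (higher = better).
    # The true relationship is unknown to the optimizer; here it peaks near 0.7.
    return 1.0 - (effort - 0.7) ** 2 + random.uniform(-0.02, 0.02)

def suggest_next(history):
    # Minimal "learning": start from the best setting seen so far and explore
    # around it, narrowing the exploration as more runs accumulate.
    if not history:
        return 0.5                                   # default first guess
    best_effort, _ = max(history, key=lambda h: h[1])
    step = max(0.3 / len(history), 0.02)             # shrink step over time
    return min(1.0, max(0.0, best_effort + random.uniform(-step, step)))

if __name__ == "__main__":
    history = []                                     # (effort, score) from past runs
    for i in range(8):
        effort = suggest_next(history)
        score = run_tool(effort)
        history.append((effort, score))
        print(f"run {i}: effort={effort:.2f} score={score:.3f}")
    print("best so far:", max(history, key=lambda h: h[1]))
```

A production tool would replace `suggest_next` with a model trained across many designs and many parameters, but the loop of run, record, and let the tool choose the next knob setting is the same.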
So that brings us to the three key innovations of modern (late-stage phase 3) computational software: common engines and data structures shared across tools, algorithms repartitioned to scale across massively parallel compute fabrics, and machine learning that guides the tools using earlier runs and similar designs.
As it happens, I went to the San Diego Automotive Museum in about 2000, back when I last worked for Cadence. We had a big internal engineering conference at a hotel in the area, and we had dinner one evening in a couple of the museums in Balboa Park. One car there could not only change its oil without stopping, but you could also change a wheel or fill up its huge fuel tank (the trailer behind, which also held water and oil). Even…gasp…make a phone call (on what has become known as 0G, radiotelephones). It went 6,320 miles without stopping in 1952, refueling on airport runways alongside a moving fuel truck.
Once again, the white paper can be downloaded here.