Toward Domain-Specific EDA

Is the tools market really changing, or has this always been the case?

More companies appear to be creating custom EDA tools, but it is not clear whether this trend is accelerating, or what it means for the mainstream EDA industry.

Whenever there is change, there is opportunity. Change can come from new abstractions, new options for optimization, or new limitations imposed on a tool or flow. For example, the slowing of Moore’s Law means that sufficient progress in performance, power, or cost can no longer be made from one version of a product to the next simply by moving to the next node. The design itself must be improved, margins shrunk, or the product re-architected.

One such change that is starting to find its way into the design methodology is the shift from static tools to dynamic ones. A static tool looks at the design and optimizes it independent of any particular use case or scenario. Dynamic optimization adds one or more scenarios as input to the optimization process, allowing the tools to perform more focused optimizations. This started with power optimization when performing clock or power gating, which used to be a static operation. These techniques can be improved further by knowing exactly how and when parts of the design need to be active. This also is driving a resurgence in processor design, where custom processors can be created that are optimal for specific tasks.
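
To make the distinction concrete, here is a toy sketch of a power-gating decision made statically versus with workload scenarios as input. Everything in it is hypothetical: the block names, idle fractions, scenario weights, and overhead threshold are invented for illustration and do not come from any real tool or flow.

```python
# Illustrative sketch only: choosing which blocks to power-gate.
# A "static" flow uses the worst-case idle estimate; a "dynamic" flow
# weighs per-scenario activity. All names and numbers are hypothetical.

# Idle fraction of each block under three usage scenarios.
SCENARIOS = {
    "video_playback": {"dsp": 0.10, "gpu": 0.05, "crypto": 0.95},
    "web_browsing":   {"dsp": 0.80, "gpu": 0.40, "crypto": 0.90},
    "standby":        {"dsp": 0.99, "gpu": 0.99, "crypto": 0.99},
}
SCENARIO_WEIGHTS = {"video_playback": 0.2, "web_browsing": 0.5, "standby": 0.3}
GATING_OVERHEAD = 0.15  # leakage saved must outweigh wake-up/isolation cost

def static_choice(blocks):
    # Static: gate only blocks that are idle even in the worst-case scenario.
    return [b for b in blocks
            if min(s[b] for s in SCENARIOS.values()) > GATING_OVERHEAD]

def dynamic_choice(blocks):
    # Dynamic: gate blocks whose scenario-weighted idle time beats the overhead.
    return [b for b in blocks
            if sum(SCENARIO_WEIGHTS[name] * s[b]
                   for name, s in SCENARIOS.items()) > GATING_OVERHEAD]

blocks = ["dsp", "gpu", "crypto"]
print("static :", static_choice(blocks))   # conservative: only 'crypto'
print("dynamic:", dynamic_choice(blocks))  # workload-aware: all three blocks
```

The static choice has to assume every scenario is possible at any time, so it gates almost nothing; the scenario-aware version can justify gating blocks that are busy only occasionally.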

Semiconductor companies always have created some of their own EDA tools. “In the ’80s most semiconductor and ASIC companies had their own tools,” says Simon Davidmann, founder and CEO of Imperas Software. “But then there were resource issues and customers wanted a more standardized approach. The industry transformed from proprietary solutions in the design and semiconductor companies to being an industry driven by standards, trying to build a common solution that was applicable to everybody.”

There was still room for some specialized tooling. “Every design house has some design or data management problem that is exclusive to them,” says Rob Aitken, technology strategist at Synopsys. “Sometimes, after they create a solution, they don’t want their competitors to get that, and so they keep it in house. They may have come to the conclusion that this is the only way to solve it, and it may be for any number of reasons, but eventually a more broadly applicable EDA solution could work for them.”

Tools are constantly in a state of flux. “The EDA business has to have a big enough market for it to justify investments in the tools,” says Neil Hand, director of strategy for design verification technology at Siemens EDA. “The thing that really limits that, when it comes to industry-specific, or application- or domain-specific solutions, is how generalized the problem becomes. And then the second part is what languages or capabilities exist to encapsulate that generalization.”

Some domains are large enough to support dedicated solutions. “Domain-specific is not something new,” says Tom Feist, an embedded entrepreneur and contractor for OpenROAD. “The FPGA industry is an example where EDA and academia have attacked this challenge with solutions that include MATLAB, OpenCL, C/C++, Python, and Simulink-based design. National Instruments with LabVIEW is another example.”

There is always a balance between specificity and flexibility. “Domain-specific systems run into an interesting overlap of technical and economic problems,” says Duaine Pryor, EDA technology consultant. “As you make them general enough to garner a market that justifies a leading-edge development, they lose the value that comes from technical advantage gained through specialization. Of course the reverse is true, as well. That propagates through the whole value chain.”

Markets and industry dynamics change. “There are some companies with a lot of resources at the bleeding edge of their domains, trying to find ways to go further than where the EDA companies can take them,” says Imperas’ Davidmann. “That’s why some companies are being acquired into semiconductor companies, where they chew up and spit bits out as a way to get that expertise in house. I’m sure that Apple’s success with its M1 and M2 is because they have so much tooling inside.”

Anyone who uses the latest nodes knows the pressures that are on them. “With semiconductor scaling slowing, or failing, there is a need for architectural innovation and domain-specific optimization,” says Zdeněk Přikryl, CTO for Codasip. “Raising the abstraction level and efficient design automation enables faster design cycles, and hence time to market.”

Also, many new technologies are being inserted into design flows. “Anytime you start talking about new technologies, photonics, for example, you may find a gap between what’s commercially available and what’s needed,” says Jeff Roane, product manager at Cadence. “But that gap is quickly closed, as soon as the need arises, to the point where it makes financial sense for one of the big players to develop something.”

It takes time to build up the necessary expertise. “The quantum EDA field will have to cross barriers between physics and engineering,” says Mohamed Hassan, quantum solutions planning lead for Keysight Technologies. “This is a daunting task. The two fields typically use different terminologies and nomenclatures. Currently, the quantum hardware design cycle spans multiple tools, in multiple domains, in a discordant fashion, with multiple gaps in between that are typically filled by extra effort that is highly dependent on the knowledge and experience of the designer.”

The ESL failure
The electronic system level effort from the late ’90s was an attempt to introduce a new abstraction along with new languages. “It started out with broad goals and wound up being fairly narrowly targeted to datapath-centric and similar kinds of algorithmically straightforward design,” says Synopsys’ Aitken.

The market does continue to grow and evolve for some of the tools developed as part of that flow. “The system level co-processor hardware/software codesign and optimization does start to look more like a real disruption, but it has a real ‘back to the future’ flavor,” says Pryor. “The industry initially hit the problem when many systems — cell phones, in particular — acquired more of a heterogeneous computing architecture. Some good solutions were generated, but became niche products because of a combination of economic factors and engineering silos. Design by optimization, high-level synthesis, domain-specific languages, and other developments in the last 20 years could make this area more tractable than at the millennium.”

ESL also was deflected by the growing IP market. “Today we see this notion of tools plus IP,” says Cadence’s Roane. “You see processor IP, memory IP, interconnect IP, interface IP, even the algorithm stuff that’s covered by high-level synthesis today. But if you look at the kinds of designs that are really amenable to high-level synthesis, it’s algorithmic designs. The whole notion of tools-plus-IP is something that’s already in play today, and you are going to see more of that.”

Virtual prototypes hold many pieces of it together. “Domain-specific EDA may help generate parts of the virtual prototype, such as processors or other components used in the SoC,” says Codasip’s Přikryl. “So, in one aspect, domain-specific EDA is enabled by the virtual prototype, where each of the verticals is significantly accelerated and optimized by dedicated flows suited for those functions. If I make the parallel to the software world, we can have code written in several languages and glue everything together in the linker. It’s similar in the hardware world. We just use different integration methods.”

As abstractions get raised, the workloads become increasingly important. “Years ago, you could optimize power in the layout, and that’s all people really could do,” says Siemens’ Hand. “And then power became part of synthesis and the implementation tradeoff. Then it became part of the high-level synthesis tradeoff. Now it’s become part of the processor optimization tradeoff, and we’re going to go up and it will become part of the system-level tradeoffs.”

Those workloads are driving design practices. “The hyper-scalers are doing chip design because their specific workloads are unique and different from the workloads that their suppliers are targeting,” adds Roane. “You can do those tasks with off-the-shelf processors, but that comes at a high cost in terms of power consumption. And you’re probably not going to get the best performance compared to a custom implementation. We see a lot of hyper-scalers doing chip designs today because they’re trying to lower power and improve the performance for specific workloads that are unique to them.”

Machine learning also is creating some unique flows. “We are seeing many domain-specific architecture languages being created,” says Aitken. “When you think about it from an EDA standpoint, it’s definitely an opportunity for a customized design approach, starting from the language you use to describe these things. How does a synthesis flow that’s optimized to a particular structure differ from a synthesis flow as it exists now? How do you tailor an algorithm that’s going to produce a customized block?”
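
As a purely hypothetical illustration of what such a description might look like, the sketch below captures an ML accelerator block in a small Python-embedded form that a domain-specific generator could consume. The Accelerator class and its fields are invented for this example and do not correspond to any existing language or tool.

```python
# Hypothetical example of a high-level, domain-specific description that an
# ML-accelerator generator might consume. Nothing here maps to a real tool.
from dataclasses import dataclass

@dataclass
class Accelerator:
    pe_rows: int          # systolic-array height
    pe_cols: int          # systolic-array width
    datatype: str         # e.g., "int8" or "bf16"
    weight_sram_kb: int   # on-chip weight buffer size
    dataflow: str         # e.g., "weight_stationary" or "output_stationary"

edge_npu = Accelerator(pe_rows=16, pe_cols=16, datatype="int8",
                       weight_sram_kb=512, dataflow="weight_stationary")

# A domain-specific flow could take a description like this and emit RTL, a
# compiler target, and a performance model, instead of starting from
# hand-written Verilog and a generic synthesis script.
print(edge_npu)
```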

Tool development
In the past, a lot of the domain-specific tooling came from startups. “They would see an opportunity where customers were demanding something that was not being fulfilled by EDA,” says Davidmann. “We pivoted from being a simulation company through to verification because of the demand created by RISC-V and the need for an ecosystem for the verification of processors. There are a handful of companies building solutions because the customers need it, but Big EDA hasn’t gotten there yet. Small companies are creating this, and over time there’ll be consolidation.”

This is also driving interest in open-source EDA. “One of the compelling reasons to use open source has been the ability to modify the tool for their special needs,” says OpenROAD’s Feist. “This could be for security or leveraging features like machine learning. Google has been a big proponent of open source, and it is not because the tools are too expensive for them. It is because they want a competitive advantage, and if they give their secret sauce to the EDA vendors, then everyone has it.”

One such open-source flow, shown in Fig. 1, has been put together by efabless.

Fig. 1: OpenLANE flow built on OpenROAD. Source: efabless

Some large EDA companies are buying into this trend. “Open standards allow people to plug into flows,” says Hand. “The ability to add interfaces into tools is important, and academic collaborations are important. Traditionally, that’s one of the areas where EDA does need to improve. There have been cases in the past where there were very tight collaborations between academia and EDA. In more recent times that has gone away, and we need to get back to it.”

One driver for that may be access to data. “The hyper-scalers spend enormous amounts of time gathering data, processing data, and keeping each other from having access to theirs,” says Aitken. “In terms of chip data, consider on-die monitors. You can use these to gather information while the chip is running, and you can learn things. Big EDA is not giving you data. They are giving you a means to gather your own data and do whatever it is that you want with it. There’s a further ML-style role, where the relevant data exists both within Synopsys and within the user base. For example, when a tool or flow has a bunch of knobs, what happens when you tune them in different ways? Where do you get the best answer?”

Hand agrees. “We work with customers and have added interfaces into tools that allow them to extract information and put it into their data lake. Then they can do their own deep analysis with information about their design, and they are building their own capabilities. That may be unique to their needs, because they are taking advantage of the fact that they can apply additional information about the design. We are not privy to that information.”
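
A minimal sketch of what that kind of knob exploration and data gathering could look like in-house is shown below. The knob names, value ranges, and quality-of-results metrics are all invented for illustration, and run_flow is a stub standing in for launching a real tool and parsing its reports.

```python
# Illustrative only: sweep a few hypothetical tool knobs and log the results,
# the kind of data a design team might push into its own data lake.
import csv
import itertools
import random

KNOBS = {
    "effort":         ["low", "medium", "high"],
    "target_freq":    [500, 750, 1000],   # MHz
    "placement_seed": [1, 2, 3],
}

def run_flow(config):
    # Stub for launching an implementation run; a real version would invoke
    # the tool and parse its timing and power reports.
    random.seed(hash(tuple(sorted(config.items()))))
    return {"wns_ps": random.uniform(-50, 20), "power_mw": random.uniform(80, 120)}

with open("qor_sweep.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(list(KNOBS) + ["wns_ps", "power_mw"])
    for values in itertools.product(*KNOBS.values()):
        config = dict(zip(KNOBS, values))
        qor = run_flow(config)
        writer.writerow(list(values) +
                        [round(qor["wns_ps"], 1), round(qor["power_mw"], 1)])
```

Every row in the resulting CSV ties one knob combination to its outcome, which is the raw material for the kind of tuning analysis described above.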

The creation of tools will often require multiple people to come together. “Quantum EDA is envisioned as the software and tools that will streamline the workflow and enable the automation of quantum hardware design, whether it is based on superconducting qubits, trapped ions, spin qubits, integrated optics, or cold atoms,” says Keysight’s Hassan. “The hardware basis spans broad areas of development, from superconducting microwave circuits to optics and integrated photonics, which widens the quantum EDA opportunities but makes it challenging for a focused effort. A steep knowledge barrier keeps many engineers out of this hot emerging field, and it is very different from how the current mature EDA design cycle is devised, for instance, for designing integrated circuits.”

In other cases, an application domain places new demands on existing tools and flows. “Autonomous vehicles, whether they be robots, cars, or planes, bring in a whole new set of requirements,” says Hand. “It adds new functional safety aspects, or a new focus on non-determinism that has to be managed throughout the flow.”

Simple changes can have big implications. “If you look at multi-die systems, where you start incorporating things beyond regular CMOS — whether they’re novel memories or whether they are CMOS from different processes — you run into a problem,” says Aitken. “You can coerce an existing set of EDA tools to work with that, and you can coerce an existing set of assumptions about how margins should work and how signoff should work. But when you get to the point of wanting to do better than that, then you really ought to rethink some of the flow in terms of how you build what amounts to domain-specific EDA for the domain of signals, power, clocks, etc., migrating across a multi-die system, inside a package. That’s a different animal than existing EDA solutions were developed for.”

This is par for the course with EDA. “With every new generation of product, whether it’s being used in new nodes, or being used in new applications, EDA gets extended and creates new opportunities,” says Hand. “The EDA industry today looks nothing like it did, in terms of its feature coverage. It’s no longer just a simulator and a synthesis tool and a layout tool. It’s gone well beyond that. We add a few more things on the bottom side and we add a few more on the top side, but it creates new opportunities for optimization, by using more information that is available to us.”

It has always been a combination of push and pull. “There always have been two dynamics,” says Roane. “One is where the EDA companies will try to predict and therefore push. The other dynamic is where their customers, the semiconductor companies, will create pull in demand based on what they’re doing. In a perfect world, both of those forces would be aligned. And to the extent they are, that spells success for that new tool or technology. But they’re often not aligned. Sometimes you wait for that perfect storm to occur.”

Conclusion
It is possible that more in-house EDA tools are being created today just because the industry is going in so many new directions. The slowing of Moore’s Law is causing companies to look at many new technologies, solutions, and optimizations, and it takes time for those needs to coalesce into something that can be covered by a standard flow. The industry is vibrant, and this is just one indicator of growth.


