Methodologies And Flows In A Rapidly Changing Market

The problem isn’t the tools. It’s the rate of innovation.


A growing push toward more heterogeneity and customization in chip design is creating havoc across the global supply chain, which until a couple of years ago was highly organized and extremely predictable.

While existing tools still work well enough, no one has yet figured out the most efficient way to apply them in a variety of new applications. Technology in those markets is still being developed, and that evolution (or revolution) is happening at an astounding rate. AI and machine learning were considered distant-future technologies until a few years ago, when machine-generated algorithms suddenly pushed everything into the mainstream. AI/ML/DL is now cropping up everywhere, and that trend shows no sign of abating.

The emphasis here is on systems, with chips being developed as customized engines to make those systems work. While new markets and heterogeneous chip designs are making the jobs of architects and chip companies the most interesting they have been in decades, those jobs are also in an almost constant state of change. So far, there is no agreed-upon best way to build an AI chip, or a chip with an AI component.

The challenge across multiple industry segments is that data movement needs to be understood from a larger system level, but it needs to be designed in at the chip level. This is true no matter what the end application, which is one of the reasons that systems companies are taking over chip development in some of the hottest growth markets. The starting point is the data, not the chip.

That has big ramifications for chip design, because many of these applications require custom or semi-custom designs. So what works in one market may not work in another. This is why there are an estimated 30 startups working to improve performance by 100X or more for very specific applications. There are some common problems, though. One of the big issues for all of them is utilization of compute resources. While it makes sense to split processing across specialized compute elements and memories (moving a growing volume of data to one central processor is too slow and resource-intensive), keeping the various processing elements busy and the data flowing is a system-level problem.
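The utilization issue described above can be sketched with a toy model. All numbers and names here are hypothetical, chosen only to illustrate why a fast processing element starved for data can be less effective than a slower one fed from local memory:

```python
# Toy model (hypothetical numbers): utilization of a specialized
# processing element (PE) depends on data movement as much as on
# raw compute speed. A PE can only start once its input arrives,
# so time spent waiting on transfers is time spent idle.

def pe_utilization(compute_time_us: float, transfer_time_us: float) -> float:
    """Fraction of wall-clock time a PE spends computing rather than
    waiting for data to arrive."""
    return compute_time_us / (compute_time_us + transfer_time_us)

# A fast accelerator fed over a slow link sits idle most of the time.
fast_pe_slow_link = pe_utilization(compute_time_us=10, transfer_time_us=90)

# A slower PE working out of local memory stays almost fully busy.
slow_pe_local_mem = pe_utilization(compute_time_us=80, transfer_time_us=5)

print(f"fast PE, slow link:    {fast_pe_slow_link:.0%} utilized")
print(f"slow PE, local memory: {slow_pe_local_mem:.0%} utilized")
```

The point of the sketch is that utilization cannot be fixed inside the PE itself; it is set by how the surrounding system moves data, which is why the article frames it as a system-level problem.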

But with data processing and movement now the central concern, chips are no longer the starting point. And while this is efficient from a systems company standpoint, it creates a disconnect with the semiconductor side. Chip companies now have to build chips that fit into more abstracted system-level architectures, and at that level, a minor shift in data flow can create massive changes in how chips are designed.

Flows and methodologies can cost millions of dollars in big chip companies. Silos build up around these flows in order to make the whole process more efficient. Changing them can create upheaval inside well-run chip companies, where the goal has been to cut costs while still maintaining predictability, manufacturability, and an acceptable level of yield.

The key now is how well chip companies can adapt to this changing world order, where modularization and flexibility are required—even though they run counter to every efficiency model developed for the semiconductor industry over the past half century.
