Solving Systemic Complexity

The number of possible interactions is exploding. So why aren’t tool companies complaining?


EDA and IP companies have begun branching out in new directions over the past 12 to 18 months, pouring resources into problems entirely different from electrostatic issues and routing complexity.

While they’re still focused on solving complexity at 10/7/5nm, they also recognize that enabling Moore’s Law isn’t the only opportunity. For an increasing number of new and established chip companies, it isn’t even the best opportunity. And for EDA and IP vendors, many of the issues they are facing at the most advanced nodes, while incredibly difficult, also are well understood.

For at least the past five process nodes, tools have kept pace with complexity in chips. Delays in production typically have been on the manufacturing side, not the design side. That is partly because EDA and IP vendors are now brought into the process earlier by foundries and chip companies, and partly because the issues they are dealing with are mostly evolutionary. The real challenge for tools companies has been accelerating various steps throughout the design flow, which is why they moved first to more parallelization in hardware, and more recently to the cloud, where there are essentially no physical limits on how many processor cores can be thrown at a particular problem.
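For a sense of what "throwing more cores at the problem" looks like in practice, here is a minimal sketch, assuming a hypothetical check_block() task and an arbitrary list of design blocks rather than any vendor's actual flow, of fanning independent jobs out across whatever cores are available:

```python
# Minimal sketch: fan independent design-check jobs out across available cores.
# The block names and check_block() are hypothetical placeholders, not a real EDA API.
from concurrent.futures import ProcessPoolExecutor
import os

def check_block(block_name: str) -> str:
    # Stand-in for a compute-heavy, independent task such as checking one layout block.
    return f"{block_name}: clean"

if __name__ == "__main__":
    blocks = [f"block_{i}" for i in range(64)]  # independent pieces of the design
    # On-premise, max_workers is capped by local hardware; in the cloud the same
    # pattern simply runs with a far larger worker pool.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        for result in pool.map(check_block, blocks):
            print(result)
```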

No one is abandoning this business, of course. It still puts food on the table. Complexity on-chip will continue to grow at each new node. And while there are fewer companies developing chips at the most advanced geometries, those are the chips that continue to ship in the highest volume, including data center CPUs and FPGAs, smartphone APUs, and machine learning GPUs. But the majority of design starts, as opposed to the highest-volume parts, are semi-custom chips that will ship in lower volumes as part of heterogeneous systems.

Whether those chips are developed on a single piece of silicon or packaged together in any one of a growing list of advanced packaging options doesn't matter for tools vendors. The tougher, and arguably more interesting, problems will involve the interactions of those chips or packages within a system and between systems. In an automobile, this could mean one ECU failing over to another. But that fail-over system also will need to alert other vehicles on the road that a problem has occurred and the vehicle needs to pull off the road to avoid a massive traffic jam.
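As a rough illustration of that fail-over idea, here is a minimal sketch, assuming hypothetical ECU, heartbeat_ok(), and broadcast_v2v_alert() abstractions rather than any real automotive or V2X API:

```python
# Minimal sketch of ECU failover plus a vehicle-to-vehicle alert.
# ECU, heartbeat_ok() and broadcast_v2v_alert() are hypothetical abstractions,
# not an actual automotive or V2X API.
from dataclasses import dataclass

@dataclass
class ECU:
    name: str
    healthy: bool = True

    def heartbeat_ok(self) -> bool:
        # Stand-in for a real watchdog/heartbeat check.
        return self.healthy

def broadcast_v2v_alert(message: str) -> None:
    # Stand-in for a V2X broadcast to nearby vehicles.
    print(f"V2V alert: {message}")

def supervise(primary: ECU, backup: ECU) -> ECU:
    """Return the ECU that should be in control, alerting other vehicles on failover."""
    if primary.heartbeat_ok():
        return primary
    broadcast_v2v_alert(f"{primary.name} failed; vehicle moving off the road")
    return backup

if __name__ == "__main__":
    drive_ecu, backup_ecu = ECU("drive_ecu"), ECU("backup_ecu")
    drive_ecu.healthy = False            # simulate a fault
    active = supervise(drive_ecu, backup_ecu)
    print(f"Control handed to {active.name}")
```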

The same thing increasingly needs to happen in cloud-based data centers, smart infrastructure, all forms of motorized or electrified transportation, and basically anything that is connected to a network, whether that is in a work or a home environment. And it needs to include both hardware and software, because energy is a finite resource and all of these systems need to be made as efficient as possible.

This is one of the reasons the whole tech industry has rushed into machine learning, deep learning and AI. Pattern-based tools are very effective at solving some of these problems. The big issue so far has been finding enough data to train the algorithms. But as probabilistic approaches begin showing up over the next few years, essentially raising the abstraction level for the algorithms so that less data is required, this whole effort will begin to accelerate.

The problems that need to be solved here are enormous, complex, and in a constant state of change, which is exactly the kind of challenge the EDA and IP industries are very good at tackling. But those solutions require an enormous amount of effort, brain power and focus, which explains why there has been far less sniping across typically contentious markets such as EDA. Everyone is far more focused on developing solutions to an explosion of new problems than on worrying about what their competitors are up to.


