Architects Firmly In Control

New chips show multiple levels of innovation, with AI thrown in.


Moore’s Law isn’t dead, but it certainly isn’t what it used to be. While there may be three or four more generations of node shrinks ahead, the power/performance benefits of scaling are falling off.

This is evident in the new chip architectures introduced at this year's Hot Chips conference. Originally started to show off the latest CPUs and co-processors, the conference in past years focused on how to build the fastest chips on the planet. Many of those chips ended up in supercomputers or were developed for futuristic technologies.

This year marked a significant change in focus. While there was still a speed-demon factor, the emphasis was on mainstream chips that will end up in smartphones or edge devices, or in the supporting communication chips just above the edge. The amount of data being processed is skyrocketing, but the power and thermal budgets haven't changed. The only way to solve that is through architecture.

Architects have been handed the keys to the design kingdom, and they have delivered in multiple areas. For starters, they have broken chips into functional pieces rather than partitioning them by processor type. This is a big shift, and it's a recognition that just throwing more compute cycles at a problem isn't the best path forward. Revving up processor frequency is costly on multiple fronts, including power and, ultimately, sustained performance.

A better approach is to start from the type and movement of data, and to localize processing wherever possible. That strategy includes more processing per cycle, lower precision wherever acceptable, and different ways of reading and storing data in memory. In effect, this increases the density of data. So rather than trying to process each bit as fast as possible, data can be grouped together and prioritized.
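The precision-for-density tradeoff can be pictured with a minimal sketch: quantizing 32-bit floating-point values into 8-bit integers packs four times as many values into the same memory footprint and bus bandwidth, at the cost of some accuracy. The scale factor and values here are illustrative, not taken from any particular chip.

```python
# Illustrative sketch: trading precision for data density.
# Quantizing 32-bit floats to 8-bit integers fits 4x as many
# values into the same memory and moves 4x more per transfer.

def quantize(values, scale):
    """Map floats to the int8 range [-128, 127] using a fixed step size."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def dequantize(codes, scale):
    """Recover approximate floats from the int8 codes."""
    return [c * scale for c in codes]

weights = [0.91, -0.42, 0.07, -1.25]   # illustrative values
scale = 0.01                           # illustrative step size

codes = quantize(weights, scale)       # [91, -42, 7, -125]
approx = dequantize(codes, scale)      # within one scale step of the originals
```

Each recovered value is accurate to within half a step size, which is often good enough for workloads such as neural-network inference while cutting data movement substantially.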

This leads to the second big change, which is AI oversight across many processes within a chip. Rather than some scary humanoid-like intelligence, AI can be broken down into smaller chunks with very specific jobs. The key here is to fit functionality into a statistical distribution and to control that distribution as tightly or loosely as necessary. For a critical operation, that distribution may be much tighter than for a less-critical one, and the critical operation may take out-of-order precedence over everything else.
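One way to picture this kind of statistical oversight is a monitor that checks new samples against a tolerance band around the observed distribution, where the band's width depends on how critical the operation is. This is a hypothetical sketch, not any vendor's implementation; the function name and readings are invented for illustration.

```python
import statistics

def within_band(history, sample, sigmas):
    """Return True if sample falls within +/- sigmas standard
    deviations of the mean of the observed history."""
    mean = statistics.mean(history)
    dev = statistics.pstdev(history)
    return abs(sample - mean) <= sigmas * dev

readings = [10.0, 10.2, 9.8, 10.1, 9.9]   # illustrative sensor history

# A critical operation tolerates only small excursions from the mean...
critical_ok = within_band(readings, 10.6, sigmas=2)   # flagged: out of band
# ...while a less-critical operation accepts a much looser distribution.
relaxed_ok = within_band(readings, 10.6, sigmas=6)    # accepted: in band
```

The same sample is rejected under the tight band and accepted under the loose one, which is the essence of controlling a distribution "as tightly or loosely as necessary."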

In this way, AI acts like decentralized logic pieces, all feeding into a more centralized logic scheme. And this leads to the third major area of innovation, which is a system-wide distribution of processing and memory. That includes multiple processing/memory/AI components on a single die, in a package, in a device, and between devices.

The challenge will be to keep these architectures flexible enough to absorb new features, processes and innovations as time goes on. Industry consolidation and the maturation of architectures are efficient from a cost perspective, but that efficiency tends to come at the expense of the innovation a more fragmented market allows.

The semiconductor industry is at the very beginning of a brand new cycle of innovation. The immediate problems that need to be tackled are clear enough. Much more data from many more sources needs to be processed so that people can interface with machines in more natural and predictable ways. And machines need to be able to negotiate with other machines with pre-defined behavior patterns.

The question now is what else can be done with these new architectures and approaches and what the new metrics will be for success. So now that architects have the spotlight, what other magic tricks can they perform?
