Semiconductor’s Dinosaurs

The dinosaur represents forced change and near extinction of what was once dominant. The semiconductor industry has its own dinosaurs.

Dinosaurs once ruled this planet. They existed in every shape and form, some large, others tiny, each adapted to its own specific environment. Some stayed on the land, others went to sea, and yet another group took to the skies. They looked invincible, permanently at the pinnacle of the food chain. Then a cataclysmic event happened, and dinosaurs went into a fairly rapid decline. Some evolved and survived, primarily those that had taken to the skies. The transformation was not immediate, but it led to a new order being established, one that ultimately rose to the top of the food chain.

The semiconductor industry has living dinosaurs that are on the path to near extinction, a transformation that would have seemed inconceivable a mere decade ago. I am talking about the demise of the humble Central Processing Unit, the CPU, and everything associated with it. The CPU enabled an amazing transformation of our industry. No longer did everything have to be put into dedicated hardware. Instead, functionality could be programmed in software to take on multiple personalities, allow for updates and improvements, and make products capable of fulfilling the needs of multiple applications.

The CPU was hit by two cataclysmic events. The first happened around 2005: the breakdown of Dennard scaling, which observed that power density stays roughly constant as transistors shrink, because voltage and current scale down along with the linear dimensions. That meant you got more transistors without an increase in power consumption. When that relationship no longer held, it became almost impossible to increase a CPU's processing capability by raising its clock frequency. It would have consumed too much power.
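
To put rough numbers on that, here is a back-of-the-envelope sketch in Python of the classic scaling relations. The 0.7X linear shrink per node is an illustrative assumption, not data from any particular process:

```python
# Rough sketch of why the end of Dennard scaling made frequency scaling
# untenable. The 0.7x shrink per node is an illustrative assumption.

def power_density_ratio(k, voltage_scales):
    """Relative change in power density after one process shrink by factor k.

    Dynamic power per transistor ~ C * V^2 * f.
    Capacitance C scales with k, achievable frequency f with 1/k,
    and transistor area with k^2.
    """
    v_factor = k if voltage_scales else 1.0  # classic Dennard vs. post-2005 (voltage flat)
    power_per_transistor = k * v_factor**2 * (1 / k)
    area = k**2
    return power_per_transistor / area

k = 0.7  # assumed linear shrink per process generation
print(power_density_ratio(k, voltage_scales=True))   # ~1.0: power density stays constant
print(power_density_ratio(k, voltage_scales=False))  # ~2.0: density roughly doubles per node
                                                     # if frequency keeps scaling anyway
```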

So the industry transformed by adding more, smaller CPUs. Unfortunately, the software industry did not adapt well to this change and still, to this day, has problems creating general-purpose software that can exploit multiple processors. Concurrency is used in a few small niches, and one bright spot, the GPU, is now being used as more than just a graphics processor. It is showing up in a number of environments, such as AI and ML, which require massive concurrency and a different memory architecture.
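
One way to see the scale of the problem is Amdahl's law, which is not mentioned above but puts a hard ceiling on what extra cores can do for code that is even partly serial. The 95% parallel fraction in this sketch is purely illustrative:

```python
# A minimal illustration, using Amdahl's law, of why adding more, smaller
# cores does little for software that retains a serial portion.
# The 95% parallel fraction is an assumed figure.

def amdahl_speedup(parallel_fraction, cores):
    """Upper bound on speedup when only part of a program can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

for cores in (2, 4, 16, 64):
    print(cores, round(amdahl_speedup(0.95, cores), 1))
# 2 -> 1.9x, 4 -> 3.5x, 16 -> 9.1x, 64 -> 15.4x: far short of linear scaling
```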

The inability of the CPU, running single-threaded software, to keep up has increased the pressure to migrate more capability into hardware. In addition, an increasing number of functions within chips cannot be performed by the CPU running software. For example, with Ethernet now running at 100G/400G, even with a dedicated MAC, the CPU is only capable of handling bad packets and error recovery. It is not fast enough to do header examination and handling.
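
Some back-of-the-envelope arithmetic shows why. The 3GHz core clock assumed here is illustrative:

```python
# Back-of-envelope packet-rate arithmetic for the 100G/400G point above.
# The 3 GHz core clock is an assumed, illustrative figure.

PREAMBLE_AND_GAP = 20  # bytes of preamble plus interframe gap per frame
MIN_FRAME = 64         # bytes, minimum Ethernet frame

def cycles_per_packet(link_gbps, cpu_ghz=3.0):
    bits_on_wire = (MIN_FRAME + PREAMBLE_AND_GAP) * 8
    packets_per_sec = link_gbps * 1e9 / bits_on_wire
    ns_per_packet = 1e9 / packets_per_sec
    return packets_per_sec / 1e6, cpu_ghz * ns_per_packet

for gbps in (100, 400):
    mpps, cycles = cycles_per_packet(gbps)
    print(f"{gbps}G: ~{mpps:.0f} Mpps, ~{cycles:.0f} CPU cycles per minimum-size packet")
# 100G: ~149 Mpps, ~20 cycles; 400G: ~595 Mpps, ~5 cycles.
# Nowhere near enough budget to parse and act on headers in software.
```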

The second cataclysmic event was the slowdown of Moore's Law. Technically, aspects of it continue to move forward, but not in the spirit of Gordon Moore's original observation, because you no longer get more transistors for lower cost. So adding more general-purpose cores is coming to an end, and processors must become more specialized.

RISC-V is an evolution that is trying to prevent extinction. It allows instructions to be added and architectures to be modified, which enables the processor to become more efficient at certain tasks. This is perfectly fine for smaller tasks that were perhaps not bumping up against the limits to begin with. No one will complain about a smaller, cheaper, lower power controller.

But even things like GPUs are not enough of an evolution for all situations. Taking ML algorithms, putting them onto a GPU for inferencing, and hoping to run them at the edge is not practical. Even GPUs are too general purpose for the task.

Over time, increasing numbers of customizable optimized processors will appear. GPUs, TPUs, FPGAs, and many new ones yet to come, will maintain the necessary programmability but will operate with much higher levels of efficiency. Designs will start to recognize the flow of data they have to deal with, and memory architectures will change accordingly.

The CPU will never go away, but it will no longer be right to call it a CPU, because the central aspect is wrong. It could perhaps become an EPU, the exception processing unit, because there will always be some tasks that cannot be unified enough, cannot exploit enough natural parallelism, or cannot be done in an optimized enough manner. These are the things that go beyond the primary functionality a device is meant to have: dealing with errors, failures, and security threats, managing updates to the system, and a few other ancillary functions.

Notions of a von Neumann architecture with a single contiguous memory space will disappear as new, more optimized memory architectures are combined with customized processors. Cache also will disappear, as it is already showing itself to be ineffective for many applications and may be slowing some down. It certainly is increasing power consumption.

The days of single-threaded software running on a CPU are numbered, but because of the sheer number of them, it will be some time before we have to put them on the endangered species list.



2 comments

Gil Russell says:

Brian,
So we’re restrained by “dystopically walled engineering constraints” and woe be unto us. The idea of using “brain-inspired” computing seems to be on the tip of everyone’s tongue these days, and many aren’t even referencing it as being “AI” any longer, which is an interesting change in the narrative due to the requirement of being able to explain the action of the machine (XAI, Explainable AI; a sticky wicket, that). That deep learning has run the gamut of its usefulness in being able to penetrate general intelligence now perplexes the next course of action, which is to “do the compute without moving the data” in a massively parallel fashion at unbelievably low voltage and power levels. Daunting? Of course, but what kind of fun would it be if it were not daunting?
I suggest that Pentti Kanerva may be right after all. Even the neuroscience community has begun to validate his sense of “Sparse Distributed Memory” computing in real-life living creatures, albeit at an insect level (Drosophila, the fruit fly). These are the primitive formations of neural activity patterns from which the organism’s ability to exist is derived; they are found to be both “sparse” and “distributed” in nature, and are highly related to our own brains through successful Darwinian engineering.
That brings us to Jeff Hawkins’ new “Thousand Brains” theory, which proposes that the neocortex consists of vertically packed columns of a six-layer neural construct that bind together to form long-term memory units. It suggests that we may be closer to a basic understanding of how the human neocortex works. Crude and unrefined at present, this idea seems to correlate with much of the rest of what we know about the “neuroscience” part of the brain (remembering that a little bit of the neocortex looks like the rest of the neocortex).
Hawkins and many others now sense that we are quickly approaching a “Watson & Crick Double Helix” moment wherein this basic understanding launches a whole new wave of technology expansion.
I think they just might be right.

Brian Bailey says:

Thanks for your comment, Gil. I am not sure it is only AI/deep learning that will cause the change. Von Neumann gave us a very convenient framework that allowed the industry to blossom, and Moore’s Law then gave us everything we dreamed of, until it didn’t anymore. I think there are many things that need to be re-examined to see if they still make sense, apart from being low risk because they have been used with success for so many generations of products. I think the startup market understands this, and the VCs do as well. Traditional semiconductor companies do not seem to be fully on board yet.
