Chiplets have captured the industry’s imagination, but unless they are defined with the same rigor as the 7400 logic family of the 1980s, progress will be slow.
When I was 18 and had just been accepted at Brunel University in West London to start my undergraduate degree in electrical and electronic engineering, I sent a letter to Texas Instruments telling them about the journey ahead of me and asking if they could send me a copy of their TTL Data Book. A few weeks later a package arrived, and there it was: this incredible brown/orange book, thicker than a regular paperback and with a hard cover. It was a first edition, and these go for a fair amount on eBay these days! Why was I so taken with this book? Well, it was the bible for electronics. What I didn’t fully appreciate at the time is that it was the socket definition for the PCB era.
Every part in the 7400 series performed a different function, but there was also a lot of commonality among them. The packages came in a range of fixed sizes, with pins of a defined size and pitch, and power and ground were always in the same place. In other words, they shared a well-defined set of physical characteristics.
They also had electrical similarities. All of the outputs had the same drive strength (except for a few that were designed to be different), the same timing performance, and the same ability to handle capacitive loads. The inputs were equally consistent, to the point that you didn’t really need to think about anything except fanout: each output could drive four or five inputs.
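To make that concrete, here is the arithmetic behind a fanout rule of thumb, using representative datasheet limits (8 mA low-state sink for an LS-series output, 1.6 mA low-state load for a standard-series input; illustrative figures, not quoted from the data book itself):

$$\text{fanout} = \frac{I_{OL(\max)}}{I_{IL(\max)}} = \frac{8\ \text{mA}}{1.6\ \text{mA}} = 5$$

Divide the output’s guaranteed drive by each input’s worst-case load and you get the number of inputs one output can safely drive.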
There were parallel ranges of products, such as the 5400 series, which had functions identical to their 7400 counterparts but came in ceramic packages with extended temperature ranges, necessary for MIL/Aero applications. There was also the CMOS range, which had a different set of electrical parameters. It was possible to mix the families if you were very careful, but I don’t remember the tricks now to get that to work.
Was it efficient to put together a PCB full of quad 2-input NAND gates in 14-pin DIP packages? By today’s standards, not even close, but there were no other options. You couldn’t design chips yourself, and the only alternatives were transistors, relays, or other mechanical apparatus. It was a huge enabler. Over time, more and more complicated devices appeared and levels of integration increased, making more efficient use of the space while maintaining the same physical and electrical socket definitions. By the late ’80s, they contained whole processing elements and register banks from which computers could be built.
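As a sketch of what building with those packages meant, here is a small C simulation (the helper names are mine, purely illustrative) of the classic construction of XOR from four 2-input NANDs, which consumes exactly one 7400 package:

```c
#include <stdio.h>

/* One gate of a 7400: a 2-input NAND. */
static int nand(int a, int b) { return !(a && b); }

/* The classic four-NAND XOR: one whole 7400 package per XOR bit. */
static int xor_from_nands(int a, int b) {
    int n1 = nand(a, b);
    int n2 = nand(a, n1);
    int n3 = nand(b, n1);
    return nand(n2, n3);
}

int main(void) {
    /* Truth table check: prints 0, 1, 1, 0. */
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("%d XOR %d = %d\n", a, b, xor_from_nands(a, b));
    return 0;
}
```

Scale that up to an adder or a register bank and the package count climbs fast, which is exactly the inefficiency in question.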
As semiconductor design became approachable to a growing number of people, these inefficiencies disappeared. Companies included only the logic functions they needed, with no interconnect overhead between them. That was until a new inefficiency appeared: much of each design added little differentiating value, yet took an increasing amount of time and effort to create. In the 1990s, IP vendors started to appear, selling the commodity parts of a design to larger integrators. In those early days there were no standards for connecting these building blocks, and wrappers became a necessary part of integration. Over time, the interfaces and protocols became well enough defined that a notion of plug and play became somewhat possible.
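A loose software analogy of what those wrappers did (every name here is hypothetical, invented for illustration, not any real IP library): each vendor block spoke its own ad-hoc interface, so the integrator wrote per-block glue to present a common one.

```c
#include <stdint.h>
#include <stdio.h>

/* Vendor A's IP block, with its own ad-hoc naming and call convention. */
static uint32_t vendorA_fetch_word(uint32_t byte_addr) {
    return byte_addr + 0x100;           /* stand-in for real behavior */
}

/* The common interface the integrator wants every block to present. */
typedef uint32_t (*bus_read_fn)(uint32_t address);

/* The wrapper: hand-written glue translating the common call onto
   vendor A's API. Every non-conforming block needed one of these. */
static uint32_t vendorA_read(uint32_t address) {
    return vendorA_fetch_word(address);
}

int main(void) {
    bus_read_fn target = vendorA_read;  /* "plug" the block into the bus */
    printf("0x%08X\n", (unsigned)target(0x1000));
    return 0;
}
```

Once blocks natively speak a common interface, the glue layer, and the effort of writing and verifying it, disappears; that is roughly what standardized interfaces bought the IP market.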
Today, the industry faces a new socket challenge and is somewhat undecided about the approach that should be taken. This relates to 2.5D integration, where functionality is split across multiple dies that are then integrated within the same package using some form of interposer.
One approach was taken by JEDEC, which defined the specification for High Bandwidth Memory (HBM). It took the old PCB socket approach and defined the physical, electrical, and protocol layers associated with integrating these memory devices. It took a commodity device and turned it into a much higher-performance one that was also very expensive. Initially, few companies could afford it; then the AI wave happened. Now, with greater adoption, prices are dropping as the technology matures. The manufacturers of these devices have realized that, within the confines of the standard, they do not need to restrict themselves to the bare minimum functionality that was defined. Instead, they can create custom devices that conform to the specification but add extra value.
The second approach looks more like the IP approach, where vertically integrated companies define and build multiple chiplets that they integrate themselves. Everything is done internally, and they have developed their own protocols to deal with it. Some of these have now become industry standards, such as Bunch of Wires (BoW) and Universal Chiplet Interconnect Express (UCIe). But having a protocol does not enable a third-party chiplet market to exist. A chiplet also needs a physical and electrical standard, and that problem has been given much less attention. It is a chicken-and-egg problem: who is going to define such a standard, and what value do they gain from it if there are no chiplets they can then buy? The flip side is: who is going to develop chiplets if there is nobody willing to buy them?
At the moment, the systems companies, which are the main drivers for multi-chiplet devices, are happy to develop everything themselves, just as the large semiconductor companies have done for some time. But when will they decide that this is not an economical path forward? Does every company need to develop its own high-speed I/O? Does every company need to develop its own processing cluster if that is not the focus of its product?
There are some chinks that might open the possibility of third-party chiplet standards. One example is Google’s Titan, a secure processing sub-system that Google developed, similar to those developed by Qualcomm and Apple. Google has published the full specification for Titan and has also made it separable from the rest of its functionality. I am sure Google hopes that others will build this as a chiplet it could buy, rather than continuing to invest in it itself. This would also allow cryptography experts to keep developing it over time.
The systems companies need to work out where they add value and what they should be outsourcing, and the sooner they do that, the more they will be able to innovate where others cannot. Perhaps that will come when the finance people start asking when AI is going to become economically additive rather than just a promise of the future.