Chip Design Is Getting Squishy

Restrictive design rules no longer apply, and that’s both good and bad.


So many variables, uncertainties, and new approaches are in play across the chip industry today that previous rules are looking rather dated.

In the past, a handful of large companies or organizations set the rules for the industry and established an industry roadmap. No such roadmap exists today. And while efforts are underway to create new roadmaps for different industries, interconnects, and packaging technologies, that will take time, given the many technologies in various stages of development, shifts in end markets, and uncertainties about new architectures and technology approaches. As a result, it's becoming difficult to define progress, let alone measure it.

In some cases, particularly with chips that incorporate an increasing amount of machine learning, rules seem to change every 12 months or so. Over the past three years the tech world has gone from all computing in the cloud, to everything at the edge, to some combination of end device, edge and cloud. And while chips are still the engine enabling these various compute models, the chips developed for each of these stages have shifted from GPUs to custom designs in the cloud, and to some combination of GPUs, CPUs, eFPGAs, FPGAs, and custom inferencing chips everywhere else.

Three things have changed at a fundamental level:

1. AI/ML/DL have muddied design for the foreseeable future. Identifying and processing patterns, rather than processing individual bits, has opened the door to massive improvements in processing speed. But it also has created an unprecedented level of uncertainty about how chips will behave in the context of other systems as they adapt to different use cases, how aging and drift will affect that behavior, and whether inference chips designed today will be obsolete by the time they roll out the door because the training algorithms are being constantly tweaked and updated.

This is reflected in approaches taken by some of the largest chipmakers. Intel and AMD are using modular chiplet approaches, and most of the large foundries and OSATs have announced plans for chiplets. There also is a growing market for built-in programmability, whether that’s an eFPGA or FPGA or regular software/firmware updates. And there is a big and growing market for older-node design, basically choosing the safe route until other technology matures and prices drop sufficiently.

2. End markets for technology are shifting. In the past, markets generally were limited by the technology available to them. In many cases, that was mechanical rather than electronic, where the electronics added some automation or control around the edges. Even a phone was just a phone until well into the smartphone era, and a computer was a computer regardless of whether it was tethered to a cable that ran to a mainframe or minicomputer, or whether it was wireless.

That has shifted in a couple of important ways. Electronics now can be designed for just about any purpose, and that trend will continue as more conformal and flexible sensors are used to monitor mechanical processes. And the electrification of vehicles is driving massive changes in everything from automotive and trucking to farm equipment. On top of that, designs increasingly are being customized around data rather than the underlying hardware and software, but both the hardware and software need to be co-designed and co-developed to optimize how that data is processed, stored and utilized. The result is that end markets can be sliced up much more finely than in the past, and solutions can be tailored for exactly what each of those market slices requires.

3. The cost model is changing. All of this design activity costs more money up front, which is a problem if the technology is discarded every couple of years. But if that cost can be amortized over longer periods of time, then the design and manufacturing costs can be justified. The problem is that existing models for some of these applications are based upon some assumptions that so far have not been proven, such as how a complex system will behave under different environmental conditions or use models.

This has given rise to continuous in-circuit monitoring and more in-system sensors. But how all of this technology will work over a decade or two, rather than a couple of years, remains sketchy. Can it be recalibrated in real time to account for circuit aging? Is there sufficient redundancy in case something goes wrong? And will over-the-air updates be able to limit obsolescence and maintain security long after a product is manufactured?
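To make the recalibration question concrete, the sketch below shows one way an in-circuit monitoring loop might work in principle. It is purely illustrative, not any vendor's API: it assumes an on-die ring-oscillator frequency as the aging proxy, a fresh-silicon baseline recorded at manufacturing test, and a simple policy of widening the timing guard-band as the silicon slows, falling back to redundancy past a threshold.

```python
# Illustrative sketch (hypothetical names and thresholds): a recalibration
# step that compares an on-die ring-oscillator reading against its
# fresh-silicon baseline and widens a timing margin as the circuit ages.

BASELINE_FREQ_MHZ = 500.0   # frequency recorded at manufacturing test (assumed)
MARGIN_STEP = 0.01          # extra timing margin (ns) per 1% slowdown (assumed)
FAILURE_THRESHOLD = 0.90    # below 90% of baseline, hand off to a redundant block

def recalibrate(measured_freq_mhz: float, current_margin_ns: float):
    """Return (new_margin_ns, healthy) based on observed aging drift."""
    ratio = measured_freq_mhz / BASELINE_FREQ_MHZ
    if ratio < FAILURE_THRESHOLD:
        # Degradation too severe to compensate with margin alone:
        # signal that a redundant block should take over.
        return current_margin_ns, False
    # Translate the percentage slowdown into added guard-band.
    slowdown_pct = (1.0 - ratio) * 100.0
    return current_margin_ns + slowdown_pct * MARGIN_STEP, True
```

The open question the article raises is whether policies like this, tuned against today's assumptions, remain valid over a 10- to 20-year lifetime, which is exactly why the assumptions behind the constants matter more than the loop itself.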

For an industry that for years had rigid definitions for progress in technology, and restrictive rules for migrating from one node to the next, this is both exhilarating and frightening. The ground underneath the chip industry isn't exactly shifting. Chips will still be made using conventional approaches, and the rules for design, verification and achieving yield are the same. But how the pieces go together, what materials to use, and what the best approaches are for designing chips are looking much fuzzier than in the past. The foundation is still solid, but the context is softening.



