From AI to Moore’s Law, the entire industry is deep in the throes of massive changes.
The tech industry has changed in several fundamental ways over the past year due to the massive growth in data. Individually, each of those changes is significant. Taken together, they will have a profound impact on the chip industry for the foreseeable future.
The obvious shift is the infusion of AI (and its subcategories, machine learning and deep learning) into different markets. Chips are now being built specifically to accelerate AI algorithms. In addition, AI is being used to improve and streamline the design and development of those chips and other technologies.
Looked at from a different angle, AI is largely an outgrowth of an explosion in data. There is more data being collected than ever before. While the growth in sensors continues at a fairly steady pace, so does the amount of data being generated by each of those sensors. As the volume of data goes up, so does the race to clean it, mine it and sort it more quickly through pattern recognition. AI is a means of both managing and utilizing that data more effectively.
The challenge now is how to make these algorithms more efficient, which is why there is a scramble to accelerate them by 100X or more, using techniques such as reduced-precision arithmetic and heterogeneous compute elements on a die, both to speed up existing algorithms and to enable new and better ones.
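One concrete form of the reduced-precision approach mentioned above is post-training quantization, where floating-point weights are mapped to small integers so that cheaper integer hardware can do the arithmetic. The sketch below is illustrative only; the function names and the symmetric 8-bit scheme are assumptions for the example, not a specific framework's API.

```python
# Minimal sketch of symmetric post-training quantization, one way that
# "reduced precision" trades a small amount of accuracy for speed.
# All names here are illustrative, not from any particular framework.

def quantize(weights, num_bits=8):
    """Map float weights to signed integers with a single shared scale."""
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer representation."""
    return [v * scale for v in q]

weights = [0.82, -0.41, 0.05, -0.99, 0.33]
q, scale = quantize(weights)
approx = dequantize(q, scale)

# Integer math is far cheaper in silicon; the cost is bounded rounding
# error, at most half the quantization step per weight.
max_err = max(abs(a - b) for a, b in zip(weights, approx))
```

With 8 bits the worst-case error per weight is half the scale step, which is why many inference accelerators can run int8 with little accuracy loss while cutting memory traffic and multiplier area substantially.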
Fig. 1: CMOS sensor sales growth. Source: IC Insights
Alongside all of this, a revolution is underway in chip design for entirely different reasons. Device scaling, in accordance with Moore's Law, is facing some serious technical challenges. While it is technically possible to reach 1nm or so, and maybe even beyond that, the power/performance benefits are dwindling at each new node and the cost is rising non-linearly.
GlobalFoundries’ decision to scrap its 7nm plans is the latest proof point that this approach is no longer sustainable for the entire industry, and over the next couple of nodes it may prove non-viable for anyone. Fortunately, there are alternatives, such as new materials, advanced packaging approaches, and architectural options that range from doing more per cycle in software (essentially increasing the density of the data rather than of the transistors) to new types of memory that can read data in multiple directions instead of just left to right.
Underlying all of this churn, the value of data is increasing. As more data becomes available, and as more of the physical world is digitized, all sorts of trends can be extracted from that data. Based on driving habits (real-time traffic), travel schedules, size of homes (as mapped by your robotic vacuum cleaner or sprinkler system or Google Maps drive-by photos) or your online purchase history, a vendor can determine what to market, when is the best time, and which family member, friend or pet you’re buying for.
This almost certainly will have huge repercussions in terms of privacy and security, and it will create persistent political fallout. But it also will provide opportunities for hardware innovation the likes of which haven’t been seen since the introduction of the PC. Data needs to be cleaned and accelerated using massive processing power at the lowest power possible, and it needs to happen everywhere. But it’s no longer a two-horse race between Apple and Samsung. Processing needs to happen everywhere because access is expected to be instantaneous and ubiquitous.
Next year will mark the starting point of the race to the edge. Companies spent the better part of 2018 trying to digest all of these changes, experimenting with new chip architectures integrated with advances in computer science. But in 2019 the battle will begin in earnest, and that should spark some interesting fireworks.