Arms Race In Chip Performance

Processing speed is now a geopolitical issue, which could help solve one of the thorniest problems in computing.

An AI arms race is taking shape across continents. While this is perilous on many fronts, it could provide a massive boost for chip technology—and help to solve a long-simmering problem in computing, as well as a host of lesser ones.

The U.S. government this week announced its AI Initiative, joining an international scramble for the fastest way to do multiply/accumulate operations and come up with good-enough results. Behind the geopolitical rhetoric, all of these efforts are working to process enormous amounts of data at blazing speeds, often with limited power budgets because some of these systems need to operate in the field using batteries. And all are competing for the most effective way to process that data in the shortest amount of time.

There are two key pieces in AI. One is the software—algorithms, operating systems, and low-level functions to control the flow of data, regulate heat, and handle other on-chip/near-chip housekeeping chores. The challenge there is how to balance precision and speed while still providing enough accuracy to achieve a given goal. That requires enough training data to make good decisions, and it almost certainly will lead to all sorts of battles over privacy.
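
To make that precision-versus-speed tradeoff concrete, here is a minimal sketch in Python with NumPy. It quantizes a dot product (the multiply/accumulate workhorse of neural networks) to 8-bit integers and compares the result against the full-precision version; the vector size and 8-bit choice are illustrative assumptions, not tied to any particular AI framework.

    import numpy as np

    rng = np.random.default_rng(0)
    weights = rng.standard_normal(1024).astype(np.float32)
    activations = rng.standard_normal(1024).astype(np.float32)

    def quantize(x, bits=8):
        """Symmetric linear quantization to signed integers."""
        scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1)
        return np.round(x / scale).astype(np.int32), scale

    qw, sw = quantize(weights)
    qa, sa = quantize(activations)

    # Integer multiply/accumulate, then rescale back to real values.
    int8_result = np.dot(qw, qa) * sw * sa
    fp32_result = np.dot(weights, activations)

    print(f"fp32: {fp32_result:.4f}  int8: {int8_result:.4f}  "
          f"error: {abs(fp32_result - int8_result):.4f}")

Dropping precision buys speed and power only as long as the error stays small enough for the task at hand, which is exactly the balance described above.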

The other piece is the hardware, which includes memories, storage, various types of processors, on-chip networks, and a variety of other components and IP. There are two challenges there. One is to keep data moving through a chip at a consistent pace while speeding up every component of that process and prioritizing certain parts as needed. The second is to be able to adapt to changes in algorithms, which will require constant updating, without significantly impacting performance.
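
As a rough software analogy for that prioritization (real chips handle this with arbiters and quality-of-service logic in the on-chip network; the traffic names and priority levels below are purely illustrative), a priority queue shows the basic idea in Python:

    import heapq

    # (priority, sequence, payload): lower numbers are serviced first; the
    # sequence counter keeps ordering stable for items with equal priority.
    queue = []
    for seq, (payload, priority) in enumerate([("sensor frame", 1),
                                               ("telemetry", 3),
                                               ("weight update", 2),
                                               ("sensor frame", 1)]):
        heapq.heappush(queue, (priority, seq, payload))

    while queue:
        priority, _, payload = heapq.heappop(queue)
        print(f"processing {payload!r} at priority {priority}")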

This is all understood well enough, even though improvements are needed on all fronts. The next problem to solve is the hardest one, and this is where a breakthrough is long overdue. As with all computing that involves parsing of data, the final results need to be brought together quickly, and in a meaningful way. One of the biggest problems in parallel processing over the years has been combining the output of many processors into something coherent. This typically is the slowest part of the whole data flow, and the hardest to get right. AI systems will be dealing with more data types and more heterogeneous architectures, which means that data may not be flowing out at consistent rates.
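
A minimal sketch of that combining step, assuming a generic partial-sum workload in Python (the worker count, random delays, and thread pool are illustrative stand-ins for heterogeneous processing elements), shows how results that arrive at inconsistent rates can still be merged as they complete:

    import random
    import time
    from concurrent.futures import ThreadPoolExecutor, as_completed

    def partial_sum(chunk):
        # Simulate workers that finish at unpredictable times.
        time.sleep(random.uniform(0.01, 0.1))
        return sum(chunk)

    data = list(range(1_000_000))
    chunks = [data[i::8] for i in range(8)]  # split the work across 8 workers

    total = 0
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(partial_sum, c) for c in chunks]
        # Merge each partial result as it arrives instead of waiting for all
        # workers, so the slowest one gates only the final combine.
        for fut in as_completed(futures):
            total += fut.result()

    print(total == sum(data))  # True: the merged result matches the serial sum

The combine step here is trivial; the hard part in real AI systems is that the partial results come from different data types and different kinds of processors, which is what makes this the slowest and most error-prone stage of the flow.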

The key is to make sense of that data fast enough, and that requires a better approach to parallel programming and a deeper understanding of the value of large volumes of data as they are processed. And all of this needs to be automated and happen at lightning speed, which makes it one of the most challenging computer science and hardware-software co-design problems to emerge over the past few decades.

In the past, the resources necessary to solve problems like these fell to individual companies, or to narrow academic efforts such as races to build the fastest supercomputer. But massive cash infusions from governments to both industry and academia could help propel all of this research along at an accelerated rate. For the tech industry, this could provide a game-changing way forward at a time when Moore’s Law is running out of steam—and spur all sorts of new compute models around technologies such as 5G, in-memory computing, new materials, and new data models based upon patterns rather than individual bits.


