Artificial Intelligence Chips: Past, Present and Future

It’s been an uneven path leading to the current state of AI, and there’s still a lot of work ahead.


Artificial Intelligence (AI) is much in the news these days. AI is making medical diagnoses, synthesizing new chemicals, identifying the faces of criminals in a huge crowd, driving cars, and even creating new works of art. Sometimes it seems as if there is nothing that AI cannot do and that we will all soon be out of our jobs, watching the AIs do everything for us.

To understand how we got here, this blog chronicles the origins of AI technology. It also examines the state of AI chips and what they need to make a real impact on our daily lives by enabling advanced driver assistance systems (ADAS) and autonomous cars. Let’s start at the beginning of AI’s history. As artificial intelligence evolved, it led to a more specialized technology referred to as machine learning, which relies on learning from experience rather than explicit programming to make decisions. Machine learning, in turn, laid the foundation for deep learning, which involves layering algorithms to gain a greater understanding of the data.


Figure 1. AI led to Machine Learning, which became Deep Learning. (Source: Nvidia)

AI’s technology roots
The term “artificial intelligence” was coined by scientists John McCarthy, Claude Shannon and Marvin Minsky at the Dartmouth Conference in 1956. At the end of that decade, Arthur Samuel coined the term “machine learning” for a program that could learn from its mistakes, even learning to play a better game of checkers than the person who wrote the program. The optimistic environment of this period of rapid advances in computer technology led researchers to believe that AI would be “solved” in short order. Scientists investigated whether computation based on the function of the human brain could solve real-life problems, creating the concept of “neural networks.” In 1970, Marvin Minsky told Life Magazine that “in from three to eight years we will have a machine with the general intelligence of an average human being.”

By the 1980s, AI moved out of the research labs and into commercialization, creating an investment frenzy. When the AI tech bubble eventually burst at the end of the decade, AI moved back into the research world, where scientists continued to develop its potential. Industry watchers called AI a technology ahead of its time, or the technology of tomorrow…forever. A long pause, known as the “AI Winter,” followed before commercial development kicked off once again.


Figure 2. AI Timeline (Source: https://i2.wp.com/sitn.hms.harvard.edu/wp-content/uploads/2017/08/Anyoha-SITN-Figure-2-AI-timeline-2.jpg)

In 1986, Geoffrey Hinton and his colleagues published a milestone paper describing how an algorithm called “back-propagation” could dramatically improve the performance of multi-layer, or “deep,” neural networks. In 1989, Yann LeCun and other researchers at Bell Labs demonstrated a significant real-world application of the new technology by creating a neural network that could be trained to recognize handwritten ZIP codes. Training that deep learning convolutional neural network (CNN) took them about three days. Fast forward to 2009, when Rajat Raina, Anand Madhavan and Andrew Ng at Stanford University published a paper showing how modern GPUs far surpassed the computational capabilities of multicore CPUs for deep learning. The AI party was ready to start all over again.
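
To give a concrete feel for what back-propagation actually does, here is a minimal sketch in Python/NumPy of a tiny two-layer network learning the XOR function by propagating error gradients backward and nudging its weights. It is an illustration of the idea only, not the algorithm as published in 1986, and every value in it (layer sizes, learning rate, iteration count) is an assumption chosen for this example.

```python
import numpy as np

# Minimal illustration of back-propagation: a tiny two-layer network learns XOR
# by (1) running a forward pass, (2) measuring the output error, and
# (3) propagating error gradients backward to update the weights.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights (sizes assumed)
b1 = np.zeros((1, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 2.0  # assumed learning rate
for step in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)          # hidden-layer activations
    out = sigmoid(h @ W2 + b2)        # network output

    # Backward pass: gradients of the squared error w.r.t. each layer
    d_out = (out - y) * out * (1 - out)      # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)       # error propagated back to the hidden layer

    # Gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```

Most of the arithmetic in both passes boils down to matrix multiplications, which is exactly the kind of workload that GPUs, and later dedicated AI accelerators, are built to speed up.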

Quest for real AI chips
Why are we hearing so much about AI these days? A convergence of critical factors has set the stage for dramatic advances in a technology that many believe will be able to solve more and more significant real-world problems. With the infrastructure offered by the internet today, researchers worldwide have access to the compute power, large-scale data and high-speed communications necessary to create new algorithms and solutions. The automotive industry, for instance, has shown itself willing to spend R&D dollars on AI technology because machine learning has the potential to handle highly complex tasks like autonomous driving.

One of the key challenges in AI chip design is putting it all together. We are talking about very large custom systems-on-chip (SoCs) in which deep learning is implemented using many types of hardware accelerators. Designing AI chips can be very difficult, especially given the rigorous safety and reliability demands of the automotive industry, but AI chips are still just chips, perhaps with some new solutions in terms of processing, memory, I/O and interconnect technologies. Companies like Google and Tesla, which are new to IC design, as well as AI chip upstarts such as AIMotive and Horizon Robotics, bring deep knowledge of the computational complexities of deep learning, but they may face serious challenges developing these state-of-the-art SoCs. Configurable interconnect IP can play a key role in ensuring that all of these new players get to functional silicon as quickly as possible.


Figure 3. Anatomy of an AI chip as shown in Google’s Tensor Processing Unit (TPU).

Take, for example, the AI chips with deep-learning accelerators that are being targeted at car front cameras, analyzing visual imagery for roadside object detection and classification. Each AI chip has a unique memory access profile, and the design must accommodate it to ensure maximum bandwidth. The data flow across the on-chip interconnect must be optimized to provide wide bandwidth paths where required to meet performance objectives, while allocating narrow paths where possible to save area, cost and power. Each connection must also be optimized with the higher-level AI algorithm in mind. To make it even more interesting, new AI algorithms are being developed every day. In some respects, today’s deep learning chips are like bananas: they have a short shelf life, and nobody wants rotten bananas, or stale algorithms, in their AI chips. Time to market is even more important for these leading-edge products than for many other semiconductors.
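
To make the wide-versus-narrow trade-off concrete, the back-of-the-envelope sketch below estimates how much traffic a few representative connections in such a front-camera chip might carry and picks a data-path width for each. The frame rate, feature-map size, clock frequency and utilization figures are illustrative assumptions, not numbers from any real design.

```python
# Rough, illustrative estimate of the traffic that different on-chip interconnect
# paths must carry in a hypothetical front-camera AI chip. All numbers below are
# assumptions chosen for illustration, not specifications of any real product.

def gbytes_per_sec(bytes_per_frame, fps):
    """Bandwidth in GB/s for a data stream moved once per camera frame."""
    return bytes_per_frame * fps / 1e9

def path_width_bits(required_gbps, clock_ghz=1.0, utilization=0.7):
    """Smallest power-of-two data-path width (bits) that sustains the load."""
    bytes_per_cycle_needed = required_gbps / (clock_ghz * utilization)
    width_bytes = 4  # assume a 32-bit minimum path
    while width_bytes < bytes_per_cycle_needed:
        width_bytes *= 2
    return width_bytes * 8

FPS = 60  # assumed camera frame rate

# Raw 1080p camera pixels at 2 bytes/pixel (assumed sensor format).
camera = gbytes_per_sec(1920 * 1080 * 2, FPS)

# Intermediate CNN feature maps spilled to and read back from DRAM each frame
# (assumed ~50 MB written plus 50 MB read per frame).
feature_maps = gbytes_per_sec(2 * 50e6, FPS)

# Low-rate control and status traffic (assumed).
control = 0.05  # GB/s

for name, bw in [("camera", camera), ("feature maps", feature_maps), ("control", control)]:
    print(f"{name:13s}: {bw:6.2f} GB/s -> {path_width_bits(bw)}-bit path")
```

The point of the exercise: the camera input and the control traffic get by with a narrow path, while the feature-map traffic to and from memory needs a much wider one. An interconnect that can be configured connection by connection avoids paying the wide-path cost in area and power everywhere on the chip.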

The future of AI
While deep learning and neural networks are rapidly advancing the state of AI technology, many researchers believe that fundamentally new and different approaches will still be needed if the most ambitious goals of AI are to be met. Most AI chips are being designed to implement ever-improving versions of the same ideas published by LeCun, Hinton and others more than a decade ago, but there is no reason to expect that even exponential progress along this path will lead to AI that can think like a human being. AI as we know it today cannot apply the learning it acquires with such great effort on one task to a new, different task. Neural networks also lack a good way of incorporating prior knowledge, or rules like “up vs. down” or “children have parents.” Lastly, AI based on neural networks requires huge numbers of examples in order to learn, while a human can learn not to touch a hot stove from a single, highly memorable experience. It is not clear how to apply current AI techniques to problems that don’t come with huge labeled datasets.

While AI chips aren’t particularly intelligent by human standards right now, they are definitely clever, and they are very likely to get even cleverer in the near future. These chips will continue to leverage advances in semiconductor process technology, computer architecture and SoC design to boost the processing power needed for next-generation AI algorithms. At the same time, new AI chips will continue to need advanced memory systems and on-chip interconnect architectures in order to feed new proprietary hardware accelerators with the constant stream of data that deep learning requires.


