How Far Can AI Go?

Current implementations have just scratched the surface of what this technology can do, and that creates its own set of issues.


AI is everywhere. There are AI/ML chips, and AI is being used to design and manufacture chips.

On the AI/ML chip side, large systems companies and startups are striving for orders-of-magnitude improvements in performance. To achieve that, design teams are adding in everything from CPUs, GPUs, TPUs, and DSPs to small FPGAs and eFPGAs. They also are using small memories that can be read in multiple directions by the processors, as well as larger on-chip memories and high-speed connections to off-chip HBM2 or GDDR6.

The driving force behind these chips is the ability to process massive amounts of data much more rapidly than in the past, in some cases by two or three orders of magnitude. That requires massive data throughput, and these chips are being architected so there are no bottlenecks in throughput or processing. The biggest challenge, so far, is keeping these processing elements busy enough, because idle processing elements cost money. This is easier with training data than it is with inferencing, but that may change as more of the inferencing is done across various slices of the edge.
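The utilization problem can be seen with a back-of-the-envelope roofline calculation. The Python sketch below uses hypothetical peak-compute and memory-bandwidth numbers (not tied to any real chip) to show why low-reuse inference workloads leave processing elements idle far more often than large batched training steps do.

```python
# A minimal roofline-style sketch (hypothetical numbers) showing why idle
# processing elements are the central problem: once arithmetic intensity
# drops, memory bandwidth caps utilization no matter how many PEs exist.

PEAK_TFLOPS = 100.0        # assumed peak compute of an AI accelerator, TFLOPS
HBM_BANDWIDTH_TBPS = 2.0   # assumed off-chip memory bandwidth, TB/s

def utilization(arithmetic_intensity_flops_per_byte: float) -> float:
    """Fraction of peak compute that is usable at a given arithmetic
    intensity (FLOPs performed per byte moved from memory)."""
    attainable = min(PEAK_TFLOPS,
                     HBM_BANDWIDTH_TBPS * arithmetic_intensity_flops_per_byte)
    return attainable / PEAK_TFLOPS

# Large batched training step: lots of reuse per byte fetched.
print(f"training-like (200 FLOPs/byte): {utilization(200):.0%} of peak")
# Small-batch edge inference: little reuse, so PEs starve waiting on memory.
print(f"inference-like (10 FLOPs/byte): {utilization(10):.0%} of peak")
```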

There is a whole other side of AI, as well, where the challenge is finding and reacting to patterns in data, as well as setting parameters of acceptable behaviors. This is the area that ultimately will have the biggest impact on the world in which we live, but so far researchers only have scratched the surface. The big question is what else can be done with this technology, and that has a direct bearing on the demand for those highly advanced AI architectures, including where the processing is done and how much power it can use.
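As a deliberately simple illustration of what "setting parameters of acceptable behaviors" can mean in practice, the Python sketch below (all values hypothetical) learns a baseline distribution from known-good readings and flags anything that falls outside a chosen band around it. Real systems would use far richer models, but the underlying pattern is the same.

```python
# A minimal sketch of "setting parameters of acceptable behavior":
# learn a baseline distribution from known-good readings, then flag
# anything that falls outside it. All values are illustrative.
from statistics import mean, stdev

baseline = [1.02, 0.98, 1.01, 0.99, 1.00, 1.03, 0.97]  # known-good sensor data
mu, sigma = mean(baseline), stdev(baseline)
K = 3.0  # acceptable-behavior window, in standard deviations

def within_acceptable(reading: float) -> bool:
    """True if the reading falls inside the learned +/- K-sigma window."""
    return abs(reading - mu) <= K * sigma

for r in (1.01, 1.25):
    print(r, "ok" if within_acceptable(r) else "out of bounds")
```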

AI, in many respects, is the next version of device scaling. Rather than just scaling hardware features, though, it leverages big improvements in software algorithms and networking, as well. So instead of a standard von Neumann compute architecture and a software stack with a mix of optimizations, the hardware is matched to software that seeks out and reacts to patterns in data rather than trying to make sense of individual bits. To put that in perspective, mainstream computer architectures are largely built on improvements to the old punch-card model, which dates back nearly 130 years. AI takes that model well outside the physical limits of a device by using pattern recognition and neural networks.

AI already is in use to some extent in smartphones for facial recognition, and it will be required in automotive applications for identifying and classifying objects. But it can extend much further. For semiconductors, pattern recognition can serve as a bridge between design, manufacturing, and even post-production data. Being able to identify acceptable distributions of behavior inside chips or end devices can be extremely useful for predictive analytics and self-healing systems. It's not a stretch, for example, to design an AI system that could manage data traffic flow through a system and reroute that traffic when portions of the system are damaged or no longer functioning properly.
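To make that last point concrete, here is a toy Python sketch with a hypothetical four-node topology and made-up link error rates: links whose observed behavior drifts outside an acceptable bound are dropped, and traffic is rerouted over the healthy links that remain. A production self-healing system would be far more sophisticated, but the shape of the decision is similar.

```python
# A toy sketch of the self-healing idea described above (hypothetical
# topology and health data): monitor per-link error rates, drop links whose
# behavior drifts outside an acceptable bound, and reroute traffic over
# what remains using a simple shortest-path search.
import heapq

# Links as (node_a, node_b): observed error rate. Values are illustrative.
links = {("A", "B"): 0.001, ("B", "C"): 0.002, ("A", "D"): 0.001,
         ("D", "C"): 0.001, ("B", "D"): 0.050}
ERROR_LIMIT = 0.01  # acceptable-behavior threshold for a link

def healthy_graph():
    """Adjacency list containing only links within acceptable limits."""
    graph = {}
    for (a, b), err in links.items():
        if err <= ERROR_LIMIT:
            graph.setdefault(a, []).append(b)
            graph.setdefault(b, []).append(a)
    return graph

def route(src, dst):
    """Shortest hop-count path over the healthy links only."""
    graph = healthy_graph()
    queue, seen = [(0, src, [src])], set()
    while queue:
        hops, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in graph.get(node, []):
            heapq.heappush(queue, (hops + 1, nxt, path + [nxt]))
    return None

print(route("A", "C"))  # avoids the degraded B-D link automatically
```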

What isn’t clear yet, though, is how much overhead all of this would require in terms of power, performance, and design cost. The first part of the AI race has been to get fast chips and systems up and running. The next part will be to see just how far the technology can be extended economically, what the impact will be on system design, how much reduced precision will affect the final result, and how much all of this will cost.

AI is a giant knob to turn in the tech world with all sorts of possibilities. The next step is to figure out what’s realistic from a market and cost perspective, and to start developing a roadmap to give some structure to this development effort—before the massive VC investments begin to dwindle.



1 comment

Gil Russell says:

Great article, Ed.

What might that next technology be? Could this story be a build-up to the launch of something called Hyperdimensional Computing? I recommend taking a look at the Science Robotics article by the University of Maryland team, or for that matter what’s under the hood at Vicarious AI. Justin Wong’s Ph.D. dissertation from UC Berkeley is also suggested reading if you like negative-capacitance gate transistors combined with HDC.
HDC is finally waking up from its hibernation to explode on the scene, so prepare to be extremely busy as this seed of cognitive computing envelops us all.
