An Eye For An AI

Understanding where artificial intelligence will be useful is essential.


AI comes in multiple forms and flavors. The challenge is choosing the right one for the right purpose, and recognizing that just because AI can be applied to a particular process or problem doesn’t mean it should be.

While AI has been billed as an ideal solution for just about every problem, there are three primary requirements for a successful application. First, there need to be sufficient quantities of good data. Just because data is available doesn't mean it's relevant to the problem to which it is being applied. In fact, too much irrelevant data can make it much harder to classify that data, even with the best tools. And assuming good data is available, the next problem is how much effort it will take to clean up that data and prepare it for the intended application.

This is harder than it might first appear, because when AI is applied to an AI design, it is essentially looking for patterns across a moving target. Results typically come in the form of a distribution rather than firm answers. In software, this is the norm, because software in most cases can be updated and patched. For hardware engineers, it requires an understanding of partitioning, because where hardware falls short, software needs to make the corrections. And in designs where software and hardware are tightly interdependent, that can affect everything from performance and power consumption to overall reliability.

The second requirement is that the problem itself has to be able to benefit from AI. If it takes too long to develop the algorithms or testbenches based on AI models, that won’t necessarily benefit the design. And worse, if the AI model is flawed, it’s difficult to go back and figure out what went wrong because AI is largely opaque. The whole idea behind AI is that it adapts and optimizes, and that creates another variable with unpredictable results.

At the outset, there needs to be a clear benefit to using AI versus more traditional approaches. But understanding what can benefit from AI, and how much work it will take to achieve those benefits, is a fluid equation. It involves a number of economic variables that can differ by project, by company, and by engineering team expertise. So just because something is new and works well in many places doesn't mean it works well everywhere or for every application.

The third requirement is that results need to be repeatable and conclusive, meaning they need to be benchmarked against results achieved without AI. This can be a long and tedious process. As chips become more complex, AI looks increasingly attractive as a tool for everything from layout to verification and debug. Finding and understanding patterns in large amounts of data is very useful, particularly between teams working on different blocks or subsystems. But it takes time to understand the real value of AI, and for semi-customized chips sold in quantities of hundreds of thousands or millions within a tight market window, that isn't always an option.

It's one thing to build a chip that will go into 1 billion smartphones with AI, or even across chips based on a specific process or platform. It's quite another to apply AI in a one-off chip aimed at a very specific application in relatively low volumes.

The value of AI and its various iterations isn't in question. What still needs to be proven is the application of various flavors of AI in specific designs. Proponents of this technology need to show it is at least as good at a very specific task or set of tasks as the best engineering teams, and even then it needs to be overseen by people with deep domain-specific knowledge. AI can be an effective tool in the right hands and for the right tasks, and it can even find bugs in corner cases no one expected, but not always. The challenge is knowing where and when to apply it, and so far that's still more art than science.
