Getting Realistic About AI

When can AI methods be used effectively from the user perspective?


By Olaf Enge-Rosenblatt and Andy Heinig

The topic of artificial intelligence (AI) is omnipresent today, both in the news and on popular science shows. The number of ways in which AI methods can assist people in making decisions is expanding rapidly.

There are three main reasons for this:

  1. The development of new AI methods (deep learning, reinforcement learning);
  2. The continuous improvement of hardware capabilities; and
  3. The growing availability of extensive, well-structured training data.

These factors allow AI to keep expanding into new areas of application. But how do things look from the perspective of an AI user? When can AI methods be used effectively? What are the prerequisites and what is the right way to proceed? These and other questions are of interest to anyone carefully studying their area of responsibility for possible AI use cases.

Experiences from various projects introducing AI into industrial production processes have shown that, from the user’s perspective, there are five initial challenges that must be addressed in order: the business case, team building, the algorithm, implementation and, finally, cross-company collaboration. However, it is sometimes impossible to consider these challenges separately because each influences the others. This makes it necessary to iterate through all of the challenges several times.

The very first and most important question for the introduction of AI is how to define the business case. It must be shown that the expected investment will pay off. Otherwise, it must be determined whether the planned AI is of strategic importance to the company.

The topic of team building has two parts — the core team and the involved company departments. The core team should always include the data analysts (this is AI, after all), as well as process experts and software experts. This is necessary due to the way in which AI is developed. Other important company areas include IT, marketing, and the corresponding management level.

With regard to the algorithm, it must be determined how effectively AI can be expected to solve the user’s problem (feasibility), and whether sufficient domain and process knowledge exists at the company for training an AI and evaluating the results. The available data plays the largest role in defining the algorithm. Cross-company collaboration is required to ensure that any additional data needed can be obtained from suppliers or customers.

When it comes to AI, everyone imagines large computer systems, neural networks, and intelligent autonomous systems. However, AI can be used with many other kinds of systems as well. Examples include applications in department stores (e.g., personalized advertising or product placement based on previous purchases) or in social media (to recommend additional content).

In areas where real data must be collected, there is a strong trend toward distributed AI operating partially in the cloud, partially on an edge system, and possibly (this is where it gets really interesting) on an individual chip. Distributing the work in this way minimizes the amount of data that must be communicated. It also allows time-critical decisions to be made “early” in the hardware chain, while the more time-consuming and complex data analysis generally takes place in the cloud.

From the user’s perspective, it doesn’t matter much whether the AI runs in the cloud or on an edge system. However, the specific circumstances are generally a better fit for one option or the other. If low latency is essential, there are significant advantages to systems where large parts of the AI run on an edge system. Edge solutions are also preferable with regard to data sovereignty. On the other hand, large processing demands can be handled better in the cloud, especially when it comes to distributing the processes among the cloud servers based on priority and processing capacity.
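To make this split more concrete, the following minimal Python sketch routes a time-critical limit check to the edge device while deferring a slower trend analysis to a cloud stand-in. The class and function names, the vibration threshold, and the queue used in place of a real cloud upload are assumptions made purely for this illustration, not part of any specific implementation.

    # Minimal illustration of an edge/cloud split (assumed names and thresholds).
    from dataclasses import dataclass
    from queue import Queue
    from statistics import mean

    @dataclass
    class SensorReading:
        machine_id: str
        vibration_mm_s: float   # vibration velocity in mm/s

    cloud_queue: Queue = Queue()  # stands in for an upload to a cloud service

    def edge_decision(reading: SensorReading, limit: float = 7.1) -> bool:
        """Time-critical check running next to the machine: trip on a hard limit."""
        return reading.vibration_mm_s > limit

    def cloud_analysis(batch: list) -> float:
        """Slower, data-hungry analysis that would typically run in the cloud."""
        return mean(r.vibration_mm_s for r in batch)

    readings = [SensorReading("press-01", v) for v in (2.3, 2.8, 9.4, 3.1)]
    batch = []
    for r in readings:
        if edge_decision(r):
            print(f"{r.machine_id}: immediate stop triggered at the edge")
        batch.append(r)           # only batched data leaves the edge
    cloud_queue.put(batch)        # deferred, non-time-critical upload
    print(f"cloud-side trend value: {cloud_analysis(batch):.2f} mm/s")

The point of the sketch is only the division of labor: the hard limit reacts immediately on the device, while the aggregate statistic tolerates the latency of a cloud round trip.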

This optimization must be considered when defining the solution to ensure the user receives the best possible performance. At first, users generally have little or no interest in the specific implementation details of the AI solution as long as it provides effective assistance in making decisions or performing other tasks. However, as data volumes grow ever larger and decisions must be made ever more quickly, the need to optimize for individual performance will only increase.

Andy Heinig is group leader for advanced systems integration and department head for efficient electronics at Fraunhofer IIS EAS.


