Knowledge Center

Machine Learning (ML)

An approach in which machines are trained to favor basic behaviors and outcomes rather than being explicitly programmed to perform specific tasks. The result is optimization of both hardware and software to achieve a predictable range of results.

Description

Machine learning comes in many flavors, and it often means different things to different people. In general, the idea is that algorithms can be used to change the functionality of a system to improve performance, lower power, or simply update it with new use cases. That learning can be applied to software, firmware, an IP block, a full SoC, or an integrated device with multiple SoCs.

Machine learning, like deep learning, involves two separate phases. The first is the training phase, in which an algorithm is fine-tuned to produce the desired range of results. The second is inferencing, in which the trained model is used to provide an acceptable range of reactions to new stimuli. For example, once a machine is trained to identify a person and the direction and speed at which they are moving, it can determine whether an electronic system such as an autonomous vehicle will hit them. The vehicle can then either stop and wait or move around the person, depending upon the options available in the training algorithm.
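
As a concrete illustration of the two phases, the sketch below trains a small classifier on synthetic pedestrian data and then runs inference on a new observation. The feature set, the labeling rule, and the scikit-learn model choice are illustrative assumptions, not a description of any production system.

```python
# A minimal sketch of the two machine learning phases, using a
# hypothetical pedestrian-collision example. All features, labels,
# and thresholds here are invented for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# --- Phase 1: training ---
# Synthetic samples: [distance_m, speed_mps, heading_deg] of a pedestrian
# relative to the vehicle's path. A real system would use labeled sensor data.
X_train = rng.uniform(low=[0.0, 0.0, -90.0], high=[50.0, 3.0, 90.0], size=(500, 3))
# Hypothetical labeling rule: 1 means the pedestrian will intersect the
# vehicle's path (close by and heading toward it).
y_train = ((X_train[:, 0] < 15.0) & (np.abs(X_train[:, 2]) < 30.0)).astype(int)

model = DecisionTreeClassifier(max_depth=4).fit(X_train, y_train)

# --- Phase 2: inferencing ---
# The trained model reacts to a stimulus it has never seen before.
new_person = np.array([[10.0, 1.4, 12.0]])  # 10 m away, walking toward the path
will_intersect = bool(model.predict(new_person)[0])
print("stop or steer around" if will_intersect else "proceed")
```

The key point is the separation of work: the expensive fitting happens once during training, while the deployed system only executes the comparatively cheap predict step.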

Machine learning can be used to optimize hardware and software in everything from IP to complex systems, based upon a knowledge base of what works best under which conditions. The approach ensures results fall within a predictable range, no matter how many possibilities are involved. It also can help when abnormalities do not fit an established pattern, because machine learning systems can be trained to ignore those aberrations.

The idea that machines can be taught dates back almost two decades before the introduction of Moore’s Law. Work in this area began in the late 1940s, based on early computer work on identifying patterns in data and then making predictions from them.

Machine learning applies to a wide spectrum of applications. At the low end are mundane tasks such as spam filtering. But machine learning also includes more complex programming of known use cases in a variety of industrial applications, as well as highly sophisticated image recognition systems that can distinguish one object from another.
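
Even at the mundane end of that spectrum, the same train-then-infer pattern applies. The sketch below is a toy spam filter; the four-message corpus and its labels are invented for illustration, and naive Bayes is just one common choice for this kind of text classification.

```python
# A minimal sketch of the "mundane" end of the ML spectrum: a spam filter.
# The tiny corpus and labels below are made up for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now", "limited offer claim your reward",   # spam
    "meeting moved to 3pm", "can you review the design spec",    # ham
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

# Bag-of-words features feeding a naive Bayes classifier.
clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(messages, labels)

print(clf.predict(["claim your free reward"]))  # -> [1], flagged as spam
print(clf.predict(["design review at 3pm"]))    # -> [0], kept as ham
```

A real filter would refit the same pipeline on a much larger labeled corpus, but the structure of the task does not change.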

Arthur Samuel, one of the pioneers in machine learning, began experimenting in the late 1940s with the possibility of making machines learn from experience, creating devices that could do things beyond what they were explicitly programmed to do. His best-known work was a checkers-playing program, which he developed while working at IBM. It is widely credited as the first implementation of machine learning.

Machine learning has advanced significantly since then. Checkers has been supplanted by more difficult games such as chess, Jeopardy, and Go. More recently, it has been used successfully to win complex strategy computer games with huge state spaces, such as StarCraft.


Multimedia

MCU Changes At The Edge
The Impact Of Machine Learning On Chip Design
Bridging Math & Engineering In ML
Using ML To Break Down Silos
Making Sense Of ML Metrics
ML Inferencing At The Edge
Real Applications For ML In Semi Design
How To Improve ML Power/Performance
Neural Networks (2017)
Energy-Efficient AI
AI, ML Chip Choices

