Making Machine Learning Portable

The semiconductor world is headed for some major and rapid changes.

Machine learning is everywhere, and it has taken off at a pace almost no one expected. Even a year ago, ML was more experiment than reality.

NVIDIA’s stock price (Fig. 1, below) is a good representation of just how quickly this market has grown. GPUs are the chip of choice for training machine learning systems.


Fig. 1: NVIDIA five-year stock price. Source: Google Finance

Machine learning at the start of 2016 was largely a science experiment. Now it is being applied to improving a vast number of products, and it is starting to creep into the consumer world.

The first rev of this is the result of a confluence of massive compute power, high bandwidth, cheap memory, and algorithms that are at least workable. Those algorithms are used to train models, which is largely an iterative data-mining operation. Once a set of parameters is established, data is refined over and over again, pinpointing what went wrong in manufacturing or design and where there is redundancy. This can improve operational efficiency astoundingly quickly, which in turn can be used to boost product quality, reliability, performance, power consumption and yield.
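
To make that iterative refinement concrete, here is a minimal, purely illustrative sketch of a training loop: a toy gradient-descent fit of a linear model, with every name and value invented for the example. Real training runs the same measure-and-adjust cycle over far larger models and datasets.

```python
import numpy as np

# Toy training loop: fit y = w*x + b to noisy data by iterative refinement.
# X, y, w, b and lr are all illustrative; production training is far larger,
# but the iterate-measure-adjust structure is the same.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=200)
y = 3.0 * X + 0.5 + rng.normal(0, 0.1, size=200)  # noisy ground truth

w, b = 0.0, 0.0   # parameters to be trained
lr = 0.1          # learning rate

for step in range(500):
    pred = w * X + b
    err = pred - y                    # where the model currently goes wrong
    grad_w = 2.0 * np.mean(err * X)   # gradient of mean-squared error wrt w
    grad_b = 2.0 * np.mean(err)       # gradient of mean-squared error wrt b
    w -= lr * grad_w                  # refine the parameters, over and over
    b -= lr * grad_b

print(f"learned w={w:.3f}, b={b:.3f}")  # converges toward w=3.0, b=0.5
```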

The other piece of machine learning is inferencing, which is basically refining the algorithms so that machines behave within a set of parameters. This amounts to establishing accepted behavior for devices or applications based upon known, expected or even unknown factors. When unknowns are encountered, machines can still only behave within a set of defined limits. But those unknowns are then added into the training/inferencing databases so they can be factored into future decisions, or ignored completely if they are not relevant.
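
A minimal sketch of that "behave within defined limits" idea is below. The limits, confidence threshold, and retraining queue are all assumptions made up for illustration, not any particular framework's API:

```python
# Illustrative inferencing wrapper: outputs are always clamped to accepted
# limits, and low-confidence (unknown) inputs are logged so they can be
# folded into future training. All names and thresholds here are invented.
SPEED_LIMITS = (0.0, 25.0)     # defined limits on accepted behavior
CONFIDENCE_THRESHOLD = 0.8
retraining_queue = []          # unknowns to add to the training database

def infer_speed(model_output: float, confidence: float, raw_input) -> float:
    lo, hi = SPEED_LIMITS
    if confidence < CONFIDENCE_THRESHOLD:
        retraining_queue.append(raw_input)  # save the unknown for later
        return lo                           # fall back to the safe bound
    return min(max(model_output, lo), hi)   # never exceed defined limits
```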

As you would expect, all of this requires a massive amount of compute power, which is another reason NVIDIA's stock price is so high. GPUs are relatively cheap and amenable to massive parallelism. That has been particularly important on the training side, which relies heavily on floating-point calculations. Now the race is starting on the inferencing side, which relies on fixed-point calculations. Converting from one to the other is inefficient, which is why the FPGA and DSP players are sharpening their machine learning skills. Both FPGAs and DSPs excel at fixed-point arithmetic.
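
The float-to-fixed conversion those players are targeting is, at its simplest, a quantization step. The sketch below shows one common scheme (symmetric per-tensor int8 quantization) written from first principles rather than against any specific toolkit; it is an assumption-laden illustration, not a production flow:

```python
import numpy as np

# Map float32 weights from the training side onto int8 fixed-point values
# for the inferencing side, using symmetric per-tensor scaling.
def quantize_int8(weights: np.ndarray):
    scale = np.abs(weights).max() / 127.0            # max magnitude -> 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale              # approximate originals

w = np.random.randn(4, 4).astype(np.float32)         # "trained" float weights
q, scale = quantize_int8(w)
print("max round-trip error:", np.abs(w - dequantize(q, scale)).max())
```

Doing this conversion naively costs accuracy and compute, which is exactly why hardware that handles fixed-point math natively is attractive for inferencing.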

From all indications, this will turn into a huge competitive battle with broad implications for the semiconductor industry. Unlike training, which is largely confined to data centers, inferencing will be done both centrally in data centers and locally. It will make its way into mobile devices of all sorts as the algorithms are continually refined and pruned and the rules are set for how devices can behave. That means it will be used inside billions of devices for tens of billions of applications.

This bodes well for the semiconductor industry, at least for the foreseeable future. There will be massive demand for extremely fast processing, huge amounts of memory and storage, and extremely fast on-chip interconnects. There also will be huge demand for infrastructure that can handle a surge in data moving in and out of these devices so that applications can interact with the real world and with other devices. And there will be a big bump in demand for the tools that enable these devices to be built, verified, and debugged much more quickly and effectively.

If NVIDIA’s stock price is any indication, the future is about to change rather quickly—and most likely in unexpected ways.


