
Deep Learning Spreads

Better tools, more compute power, and more efficient algorithms are pushing this technology into the mainstream.


Deep learning is gaining traction across a broad swath of applications, providing more nuanced and complex behavior than machine learning offers today.

Those attributes are particularly important for safety-critical devices, such as assisted or autonomous vehicles, as well as for natural language processing where a machine can recognize the intent of words based upon the context of a conversation.

Like AI and machine learning, deep learning has been kicking around in research for decades. What’s changing is that it is being added into many types of chips, from data centers to simple microcontrollers. And as algorithms become more efficient for both training and inferencing, this part of the machine learning/AI continuum is beginning to show up across a wide spectrum of use models, some for very narrow applications and some for much broader contextual decisions.

“Some of this is in anticipation of what will be required in chips for autonomous vehicles,” said Chris Rowen, CEO of Babblabs. “Techniques with neural networks are working, and the cost in terms of power is pretty modest. So it makes sense to add deep learning subsystems. We’re seeing a lot of startups coming into the market. There are 25 in the deep learning space alone. Some are oriented toward the cloud, where they hope to knock out Nvidia. But some are at the low end of the market, too. In both cases, there is a recognition of a very specialized idiom of computation for matrix multiply that is highly structured, but where you need low- to medium-bit precision that’s as fast as possible. And there is an almost unlimited appetite for compute power.”
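Rowen's point about a highly structured matrix-multiply idiom at low- to medium-bit precision can be illustrated with a sketch. The `quantize` and `quantized_matmul` helpers below are hypothetical, not from any cited product: weights and activations are mapped to narrow signed integers, multiplied and accumulated in wider integers, then rescaled back to floating point.

```python
import numpy as np

def quantize(x, bits=8):
    """Map float values to signed integers of the given bit width."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax
    return np.round(x / scale).astype(np.int32), scale

def quantized_matmul(a, b, bits=8):
    """Multiply two float matrices using low-precision integer arithmetic,
    accumulating in 32-bit integers as a hardware MAC array would."""
    qa, sa = quantize(a, bits)
    qb, sb = quantize(b, bits)
    return (qa @ qb).astype(np.float64) * sa * sb

a = np.random.randn(4, 8)
b = np.random.randn(8, 4)
print(np.max(np.abs(quantized_matmul(a, b) - a @ b)))  # small quantization error
```

The appeal for silicon is that 8-bit multipliers are far smaller and cheaper than floating-point units, while the accumulated error stays modest for inference workloads.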

That bodes well for a variety of tools, chips, software and expertise to make this all work.

“There is a lot of pattern recognition in the automotive market,” said Wally Rhines, president and CEO of Mentor, a Siemens Business. “You can see this in the data on growth in demand for data centers, too. There are dozens of companies doing special-purpose processors for adaptive learning.”

Deep learning represents a growing slice of all of these markets, and many others. “It’s already being used for facial recognition in the iPhone X and for natural language processing, and it’s being designed in for autonomous vehicles and to perform tasks such as recognizing whether an object is a dog or a cat,” said Mike Gianfagna, vice president of marketing at eSilicon, which currently is developing deep learning chips. “A deep learning chip, on the surface, looks a lot like a data center chip. It may use an HBM memory stack to store training data and get that to the chip quickly, and it probably uses custom memories. It’s easier to implement than a networking chip, though. A networking chip may require 50 to 90 blocks, but they’re all different. With deep learning, there is a large number of blocks, but there’s a lot of repetition of the same blocks. So it’s easier for place-and-route. The blocks are more systolic and well-behaved.”

Looked at from a commercial standpoint, deep learning, machine learning and AI could achieve economies of scale that continue to drive PC, smart phone/tablet and cloud sales.

“In the past, a few billion multiplies per second was considered a lot of capacity,” said Rowen. “But now even edge devices can do hundreds of millions of multiplies. And if you have trillions, they can be put to good use. You see this with the latest generation of chips from Nvidia. Instead of 10 teraflops, they’re now at 100 teraflops. But these also are becoming more specialized, less general-purpose designs. There are extremely compact ways to deliver computation on this scale.”


Fig. 1: Comparing deep learning with machine learning. Source: XenonStack

What is deep learning?
Despite all of this, the definition of deep learning remains a little fuzzy. It sits under the artificial intelligence umbrella, either alongside machine learning or as a subset of it. The difference is that machine learning uses algorithms developed for specific tasks. Deep learning is more of a data representation built from multiple layers, where each layer uses the output of the previous layer as its input. The approach more closely mimics the activity of the human brain, which can tell not only that a baseball is in motion, but approximately where it will land.
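The layered representation described above can be sketched in a few lines of NumPy. This is a minimal, hypothetical forward pass; the layer sizes and weights are illustrative, and each layer consumes the previous layer's output:

```python
import numpy as np

def relu(x):
    """Standard rectified-linear activation."""
    return np.maximum(0.0, x)

def forward(x, layers):
    """Feed the input through each (weights, bias) layer in turn;
    every layer's input is the previous layer's output."""
    for w, b in layers:
        x = relu(x @ w + b)
    return x

rng = np.random.default_rng(0)
# Three stacked layers: 4 inputs -> 8 -> 8 -> 2 outputs
layers = [(rng.standard_normal((4, 8)), np.zeros(8)),
          (rng.standard_normal((8, 8)), np.zeros(8)),
          (rng.standard_normal((8, 2)), np.zeros(2))]
out = forward(rng.standard_normal(4), layers)
print(out.shape)  # → (2,)
```

Training consists of adjusting the weight matrices so the final layer's output matches labeled examples; the sketch shows only inference.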

Yet behind all of this there is no consensus about exactly how deep learning works, particularly as it moves from training to inferencing. Deep learning is more of a mathematical representation of complex behavior. To achieve that representation, and to shape it, a number of architectures are being utilized. Deep neural networks and convolutional neural networks are the most common. Recurrent neural networks are being used as well, which add the dimension of time. The downside of RNNs is their immense processing, memory and storage requirements, which limit their use to large data centers.

Geoffrey Hinton, a British computer scientist and cognitive psychologist, also has been pushing the idea of capsule networks, which stack up layers inside of layers of a neural network, basically increasing the density of those layers. The result, according to Hinton, is much better because it recognizes highly overlapping digits. And this is one of the themes that runs through much of the research today: how to speed up this whole process.

The problem is so complex that it is well beyond the capability of the human brain to figure out everything, so all of this has to be modeled or theorized. For chipmakers, this is nothing new. Ever since chips reached the 1-micron process node, it has been hard to visualize all of the different pieces in a design. But in computer science, many advancements have been largely two-dimensional. Rotating or tilting objects is much more difficult to represent mathematically, and that requires a lot of compute resources. In the interest of speed and efficiency, researchers are trying to figure out ways to prune those computations. Still, it’s a big challenge, and one that is largely opaque even to deep learning experts.

“Deep learning, in part, is a black box,” said Michael Schuldenfrei, CTO at Optimal+. “It’s difficult to understand the actual decision making. You can explain the model that a machine learning algorithm came up with. We do a lot of work comparing different algorithms. The deep learning algorithm is more complicated than machine learning algorithms. But one thing we have found across all of them is the answer derived from these algorithms may be different across different products. So on Product A, Random Forest works well. On Product B, another algorithm or combination of algorithms works better.”

Deep learning algorithms date back to the late 1980s. “A lot of this work started with the U.S. Post Office, which needed to recognize hand-written digits,” said David White, distinguished engineer at Cadence. “They realized they needed a way to decrease the size of the input space, so they used additional layers to extract features. Since then, there have been a lot of advances on deep learning algorithms.”

Deep learning algorithms typically need a lot more computational horsepower than machine learning. “The architecture used in deep learning is specific,” said White. “It uses convolution, pooling, and specific activation functions. Some of the techniques are similar to machine learning, some are different.”
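The operations White names (convolution, pooling and activation functions) can be sketched minimally in NumPy. The image and kernel here are illustrative; note that, as in most deep learning frameworks, the "convolution" is really a cross-correlation:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, per DL-framework convention)."""
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling: keep the strongest response per window."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

def relu(x):
    return np.maximum(0.0, x)

# A 6x6 ramp image and a simple horizontal-gradient kernel
img = np.arange(36, dtype=float).reshape(6, 6)
edge = np.array([[-1.0, 1.0]])
features = relu(max_pool(conv2d(img, edge)))
print(features.shape)  # → (3, 2)
```

Stacking several such convolution/pool stages is what lets the network extract progressively larger-scale features, echoing the layered feature extraction White describes.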

Not everything benefits from this approach, though.

“The nature of more parameters is that you can model very complex behavior,” said Babblabs’ Rowen. “But you have to train it with a corresponding data set. If you take a simple problem, a deep neural network is not more accurate. Statistical modeling can only go so far. Humans can learn a whole bunch of things about objects and how they move when you rotate them. That’s built into human brains with far fewer examples than current machine learning or deep learning requires. Today, a machine has no notion of what happens when you rotate an object. It doesn’t know perspective. It only can learn from enough millions of examples.”

DL in more markets
While the demarcation points for deep learning and machine learning aren’t always clear, the applications of these different slices are coming into focus.

“Deep learning for embedded vision is well defined,” said Gordon Cooper, product manager for Synopsys’ Embedded Vision Processor group. “We’re also seeing it being used for radar and audio, where you apply a CNN algorithm to it. And we’re seeing a lot of requests regarding IT processes and applying deep learning and AI to those.”

What makes this technology increasingly accessible is the ready availability of technology building blocks, including off-the-shelf algorithms, a variety of off-the-shelf, custom-built and semi-custom processors and accelerators, and cheap memory. “With RNNs, you’re looking at long short-term memory (LSTM),” said Cooper. “If power isn’t an issue, you can use a GPU. There are also embedded chips where you get less performance and focus instead on power and area. Bandwidth remains a big issue, particularly how you get data to and from DRAM, so inside the chip there are memory management techniques and multiply/accumulate units.”
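The multiply/accumulate (MAC) operation Cooper mentions is the inner loop that dominates inference hardware; a bare-bones sketch of one dot product, as a MAC array would compute it:

```python
def mac_dot(weights, activations):
    """One output neuron's value: a running multiply/accumulate
    over its weights and the incoming activations."""
    acc = 0
    for w, a in zip(weights, activations):
        acc += w * a   # one MAC per weight
    return acc

print(mac_dot([1, 2, 3], [4, 5, 6]))  # → 32
```

Dedicated silicon wins here because it can run thousands of these MACs in parallel while keeping weights and activations in on-chip buffers, which is exactly the DRAM-bandwidth problem Cooper describes.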


Fig. 2: Growing uses for deep learning. Source: Semiconductor Engineering

Chip companies already are getting some firsthand experience with the technology by using it internally, as well.

“We use machine learning today to manage our compute farms for I/O throughput, which is pretty challenging,” said eSilicon’s Gianfagna. “We track all of the CPUs and calibrate them against the jobs and create predictive loads. It’s basically predictive load balancing, and much of that is done in software. Cloud companies like Google and AWS (Amazon Web Services) use deep learning for their workflow and load balancing, and they use hardware for that. Deep learning potentially is the most silicon-intensive piece of those operations.”
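As a rough illustration of the predictive load balancing Gianfagna describes (a toy sketch, not eSilicon's or Google's actual system), a dispatcher might forecast each worker's load from its recent job history and send new work to the least-loaded prediction:

```python
from collections import deque

class PredictiveBalancer:
    """Toy predictive load balancer: forecast each worker's next-job cost
    as a moving average of its recent job durations, then dispatch new
    jobs to the worker with the lowest predicted backlog."""
    def __init__(self, workers, window=5):
        self.history = {w: deque(maxlen=window) for w in workers}
        self.backlog = dict.fromkeys(workers, 0.0)

    def predicted_cost(self, worker):
        h = self.history[worker]
        avg = sum(h) / len(h) if h else 1.0   # default guess for a cold start
        return self.backlog[worker] + avg

    def dispatch(self):
        """Pick the worker with the lowest predicted load."""
        return min(self.history, key=self.predicted_cost)

    def record(self, worker, duration):
        self.history[worker].append(duration)
        self.backlog[worker] += duration

balancer = PredictiveBalancer(["cpu0", "cpu1"])
balancer.record("cpu0", 10.0)   # cpu0 just took on a slow job
print(balancer.dispatch())      # → cpu1
```

Production systems replace the moving average with a learned model of job cost, but the dispatch loop has the same shape.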

One of the newer applications of this technology involves robotics. “The key there is that these devices need to continuously learn because the task a robot does changes and different environments can change,” said Cadence’s White. “So if you’re manufacturing in the Philippines versus Europe, the software may have to adapt. It’s the same for a lot of the IoT. Sensors go into systems in very different environmental conditions. This requires adaptive systems, which will be the next big wave for machine learning, deep learning and AI. For a gas sensor monitoring different wavelengths, as the sensor degrades the signals change. So the question is whether a system can adapt to that kind of change and still do the job. You don’t want to shut a system down every time you get condensation on a camera lens.”

Deep learning is showing up in mobile phones, as well. Apple’s iPhone X uses deep learning for facial recognition. “You also can use it to improve the picture on a mobile device, applying filters based on deep neural network techniques,” said Synopsys’ Cooper. “But each of these markets has its own needs. So for cancer detection, the challenge is getting enough data points. Tens of thousands isn’t enough. The reverse is true in automotive, where you have hours of video, but what do you do with all of that data? The key there is how you use technology to find a section of the video that’s important.”

Conclusion
The semiconductor industry is just beginning to scratch the surface of how deep learning can be applied effectively, and to understand where it does and does not add value. From there it will be a scramble to figure out how to do all of the necessary training and inferencing more quickly using the least amount of power.

In the past, this technology was closely linked with mainframe computers and supercomputers. But as it gets parsed into smaller pieces for more limited applications, deep learning will have a much bigger impact across a growing swath of markets. The run-up of this market is just beginning.

Related Stories
The Next Phase Of Machine Learning
Chipmakers turn to inferencing as the next big opportunity for this technology.
The Darker Side Of Machine Learning
Machine learning needs techniques to prevent adversarial use, along with better data protection and management.
Babblabs: Deep Learning Speech Processing
Startup to apply deep learning technology to advanced speech processing.


