What’s Next In Neural Networking?

Technology begins to twist in different directions and for different markets.


Faster chips, more affordable storage, and open libraries are giving neural networking new momentum, and companies are now figuring out how to optimize it across a variety of markets.

The roots of neural networking stretch back to the late 1940s with Claude Shannon’s Information Theory, but until several years ago this technology made relatively slow progress. The rush toward autonomous vehicles — which rely on neural networks to make sense of data from many sensors — changed all of that. Work is underway by established companies, startups, and universities around the globe, and funding is pouring into neural networking, as well as related markets such as embedded vision, machine learning, and artificial intelligence.

“Mass market economics, increased processing power and improving computational vision techniques equals opportunities for new mass markets to be created,” said Tim Ramsdale, general manager of the Imaging and Vision Group at ARM. “But all of this has to be done in real time. Latency minimization is critical. Having lights turn on as soon as you appear at the door is critical. That means a minimum of 30 frames per second, and preferably 60 frames per second. To do that you have to have processing at the edge, and processing at the edge means low power.”

At the heart of this technology are several basic components: algorithms, very fast hardware, and optimization technology and methodologies to make these algorithms run faster at low power. The challenge is to lower the amount of computation required, by training these systems to be more efficient.


Fig. 1: A four-layer network. Source: Michael A. Nielsen, “Neural Networks and Deep Learning,” Determination Press, 2015.
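To make the figure concrete, the sketch below runs a forward pass through a small four-layer fully connected network of the kind shown in Fig. 1. The layer sizes, uniform weights and sigmoid activation are illustrative assumptions, not a production model.

```cpp
// Minimal forward pass through a four-layer fully connected network
// (input, two hidden layers, output), in the spirit of Fig. 1.
// Layer sizes, weights, and the sigmoid activation are illustrative only.
#include <cmath>
#include <cstdio>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;   // Mat[i][j]: weight from input j to neuron i

static Vec layer(const Mat& W, const Vec& b, const Vec& x) {
    Vec y(W.size());
    for (size_t i = 0; i < W.size(); ++i) {
        double z = b[i];
        for (size_t j = 0; j < x.size(); ++j) z += W[i][j] * x[j];
        y[i] = 1.0 / (1.0 + std::exp(-z));   // sigmoid activation
    }
    return y;
}

int main() {
    // 3 inputs -> 4 hidden -> 4 hidden -> 2 outputs (sizes are arbitrary).
    Mat W1(4, Vec(3, 0.1)), W2(4, Vec(4, 0.1)), W3(2, Vec(4, 0.1));
    Vec b1(4, 0.0), b2(4, 0.0), b3(2, 0.0);

    Vec x = {0.5, -1.0, 2.0};                 // example input
    Vec out = layer(W3, b3, layer(W2, b2, layer(W1, b1, x)));

    for (double o : out) std::printf("%f\n", o);
    return 0;
}
```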

“The key stage for neural networks is training the models, and since the training process is constantly improved, the challenge is to update trained models frequently at the target hardware and provide enough power to the algorithm execution,” said Zibi Zalewski, general manager of the hardware division at Aldec.

FPGAs can be a great solution for both problems, he said, because re-programmability allows for easy updates of trained models in the hardware, while programmable logic provides the space and parallel architecture to accelerate the algorithms. “Since deep learning-based technologies now appear in commercial, industrial and safety-critical markets, the trick is to provide an efficient process to roll those improvements into the implementation of the algorithms. This is where science meets engineering. That is why technologies like high-level synthesis are growing, to allow for a fast conversion from a C++ implementation to HDL code. The challenges, of course, are to achieve the optimal results for the hardware platform, or to pick the proper platform for the algorithm.”
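As a rough illustration of what feeds a high-level synthesis flow, the sketch below is a fixed-size multiply-accumulate kernel written in a synthesizable C++ style, with static loop bounds and no dynamic memory. The pragma comments only mark where tool-specific pipelining and unrolling directives would go; their actual syntax varies by vendor and is not shown here.

```cpp
// Sketch of a C++ kernel written in a style HLS tools can map to HDL:
// fixed loop bounds, no dynamic allocation, simple integer arithmetic.
// The "pragma"-style comments only indicate where tool-specific
// directives (pipelining, unrolling) would typically go.
#include <cstdint>
#include <cstdio>

constexpr int N = 64;   // number of taps / weights (illustrative)

int32_t mac_kernel(const int16_t in[N], const int16_t w[N]) {
    int32_t acc = 0;
    // e.g. "pipeline" / "unroll factor=8" directives would be placed here,
    // using whatever syntax the chosen HLS tool defines.
    for (int i = 0; i < N; ++i) {
        acc += static_cast<int32_t>(in[i]) * static_cast<int32_t>(w[i]);
    }
    return acc;
}

int main() {
    int16_t in[N], w[N];
    for (int i = 0; i < N; ++i) { in[i] = static_cast<int16_t>(i); w[i] = 1; }
    std::printf("acc = %d\n", static_cast<int>(mac_kernel(in, w)));   // 0+1+...+63 = 2016
    return 0;
}
```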

DSPs are starting to show up on the embedded vision side, as well. Like FPGAs, they can be programmed. But unlike GPUs, which use floating point math, a vector DSP does its computation in fixed point.

“Dynamic fixed point can take care of multiple issues,” said Tom Michiels, system architect at Synopsys. He noted that the biggest problem in running these algorithms is low power, and the best way to achieve that is by changing the order of the convolutional vectorizations. “If you organize the loops, you can maximize data use and therefore reduce the bandwidth.”
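One way to picture that is the loop nest below, a sketch of a small fixed-point convolution ordered so that each input value, once loaded, is reused across every output channel. The channel counts and the int8/int32 types are assumptions for illustration, not Synopsys’ implementation.

```cpp
// Sketch of reordering convolution loops so each input value, once loaded,
// is reused across all output channels -- trading loop order for data reuse
// and lower memory bandwidth. Sizes and the int8/int32 fixed-point types
// are illustrative assumptions.
#include <cstdint>
#include <cstdio>

constexpr int IC = 8, OC = 16, K = 3, W = 32;   // channels, taps, width
int8_t  in[IC][W + K - 1];                      // padded input
int8_t  wgt[OC][IC][K];                         // weights
int32_t out[OC][W];                             // fixed-point accumulators

void conv1d() {
    for (int x = 0; x < W; ++x) {
        int32_t acc[OC] = {0};                  // one accumulator per output channel
        for (int ic = 0; ic < IC; ++ic) {
            for (int k = 0; k < K; ++k) {
                const int32_t v = in[ic][x + k];   // loaded once...
                for (int oc = 0; oc < OC; ++oc)    // ...reused OC times
                    acc[oc] += v * wgt[oc][ic][k];
            }
        }
        for (int oc = 0; oc < OC; ++oc) out[oc][x] = acc[oc];
    }
}

int main() {
    for (int ic = 0; ic < IC; ++ic)
        for (int x = 0; x < W + K - 1; ++x) in[ic][x] = 1;
    for (int oc = 0; oc < OC; ++oc)
        for (int ic = 0; ic < IC; ++ic)
            for (int k = 0; k < K; ++k) wgt[oc][ic][k] = 1;
    conv1d();
    std::printf("out[0][0] = %d\n", static_cast<int>(out[0][0]));   // IC * K = 24
    return 0;
}
```

With this input-stationary ordering, each fetched input byte feeds OC multiply-accumulates, so external bandwidth drops by roughly that factor compared with reloading the input for every output channel.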

Work zone ahead
It’s not clear which processing architecture — CPU, GPU, FPGA or DSP — ultimately will be proven the winner. So far, the jury is out as to what works best and for what markets. But there is plenty of work going on behind the scenes to improve these systems.

This is about taking smarts to the next level, said Anush Mohandass, vice president of marketing and business development at NetSpeed Systems. “We are really burning the grass on getting machine learning to SoC architecture and SoC design — that’s essentially automation on steroids. Anything that’s a multi-variant problem is going to be really tough for any human or a single-dimensional machine to compute. You bring in something like machine learning or a self-learning algorithm and it’s going to perform so much better.”

Market economics support this, as well. “It’s at a place right now where, if you want to do something like machine learning, the concepts were already there, but now you don’t need to reinvent all the algorithms — you can get those algorithms from Google, you can run your servers on an AWS (Amazon Web Services) server,” he said. “What it needed in the ’80s and ’90s was a lot more investment, so the bar was really high. At that time, the hottest area for neural networks was the stock market — neural networks predict stock markets better. But the guys who actually took it up were the guys who had the millions and billions of dollars to invest in such a technology.”

Much has changed since then. Computing technology is now cheap enough, either for purchase or leased as a service, to support this model. “It’s a confluence of cheaper compute, more affordable storage and the fact that there are some open libraries so we don’t need to re-invent the wheel and develop things,” Mohandass said.

The tricky part for each company is knowing when to develop the algorithms internally and when to use what is readily available commercially, which can come down to company culture. “Facebook will take whatever is there, keep improving it, and not try to reinvent the wheel,” he said. “Then there are companies like IBM, where their core being is to be innovative. They pride themselves on coming up with something very new and very innovative. It’s more of a culture thing. Anybody who’s not in that billions-of-dollars category has no choice but to re-use because everybody else is doing that, and the cost of not doing it is high.”

Dealing with noise
Further, developing a neural network is a substantial challenge, and many decisions must be made up front, including how and when the data is processed.

Matthias Pollach, machine learning and AI engineer at Mentor, a Siemens Business, explained that in the design of its autonomous driving platform, Mentor’s team first looked at what other engineering teams were doing. “Having very powerful deep neural networks is what a lot of people do right now, but we looked at it and thought this is not really ideal because the sensors — especially at the level they consume data, and fuse it — were not ideal from an information technology perspective. There are object lists or target lists, everything was pre-processed, and that is exactly what we did not want to do, because the biggest goal was to have very low latency while having the maximum level of confidence.”

With a radar sensor, for example, he explained, the data obtained is usually very noisy. There are a lot of road reflections, or what are referred to as ghost targets. “You see something in the sensor that isn’t really a target. For that reason, depending on which sensor is being used — on average, the radar sensor has to see an object or target in a minimum of five frames before it actually appears in the object list as a confirmed target — you introduce latency, and that’s what we didn’t want to do.”
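The latency tradeoff he describes can be seen in a toy confirmation filter like the one below, which promotes a detection to a confirmed target only after it has been seen in five consecutive frames. The threshold and the single-track bookkeeping are assumptions for illustration, not Mentor’s tracker.

```cpp
// Illustration of the latency/noise tradeoff described above: a detection
// is only promoted to a confirmed target after it has been seen in N
// consecutive radar frames. N = 5 and the single-track bookkeeping are
// illustrative assumptions, not any particular vendor's tracker.
#include <cstdio>

struct TrackConfirmer {
    int hits = 0;
    static constexpr int kConfirmThreshold = 5;   // frames before confirmation

    // Returns true once the target is confirmed.
    bool update(bool detected_this_frame) {
        hits = detected_this_frame ? hits + 1 : 0;   // a missed frame (ghost) resets the count
        return hits >= kConfirmThreshold;
    }
};

int main() {
    TrackConfirmer track;
    const bool frames[] = {true, true, false, true, true, true, true, true};
    for (int f = 0; f < 8; ++f) {
        bool confirmed = track.update(frames[f]);
        std::printf("frame %d: %s\n", f, confirmed ? "confirmed" : "pending");
    }
    // The target only confirms at frame 7, five consecutive hits after the
    // dropout, which is exactly the latency the raw-data approach avoids.
    return 0;
}
```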

The team realized the best route was raw data processing, even if the data rate was higher, because it could be accelerated with hardware. “The benefit we get out of it is the sensor fusion, and how we pre-process the data is enabling a better way of handling or dealing with the amount of data,” Pollach said. “In a simple example, let’s say you are approaching an intersection, and the system indicates there is something in front of you. A very rough classification says it could be a pedestrian and tells you roughly where it is. The system then provides the unfiltered information, meaning the pixels, the LiDAR points, and also the raw radar data. Nothing is discarded. What this means is that the entire image doesn’t have to be analyzed like other companies do with deep learning. Just a snippet of the image, the LiDAR, and radar data that can be associated with it are fed to the network. This means a whole bunch of data doesn’t need to be analyzed. You can choose where you want to spend your resources. One of the big things is giving hints and telling the neural network it doesn’t have to look at everything in detail. It just has to look at what is really relevant for the driving mission right now.”
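As a sketch of that idea, the example below gathers only the image snippet and the projected LiDAR/radar returns that fall inside a coarse detection box before anything is handed to the network. The data structures and frame dimensions are hypothetical; the point is how little of the frame actually has to be analyzed.

```cpp
// Illustration of region-of-interest fusion as described above: only the
// image snippet and the LiDAR/radar returns that fall inside a coarse
// detection box are handed to the network. All structures here are
// hypothetical; the point is how much of the frame is never analyzed.
#include <cstdio>
#include <vector>

struct Box { int x0, y0, x1, y1; };                 // pixel-aligned region of interest
struct Point { int x, y; float range; };            // projected LiDAR/radar return

static bool inside(const Box& b, int x, int y) {
    return x >= b.x0 && x < b.x1 && y >= b.y0 && y < b.y1;
}

int main() {
    const int W = 1920, H = 1080;                   // full camera frame (assumed)
    Box roi{900, 400, 1020, 640};                   // coarse "could be a pedestrian" hint

    std::vector<Point> returns = {
        {950, 500, 21.3f}, {100, 900, 47.0f}, {1000, 610, 20.8f}
    };

    std::vector<Point> fused;                        // only data associated with the ROI
    for (const Point& p : returns)
        if (inside(roi, p.x, p.y)) fused.push_back(p);

    long roi_pixels  = long(roi.x1 - roi.x0) * (roi.y1 - roi.y0);
    long full_pixels = long(W) * H;
    std::printf("pixels sent to network: %ld of %ld (%.1f%%)\n",
                roi_pixels, full_pixels, 100.0 * roi_pixels / full_pixels);
    std::printf("sensor returns kept: %zu of %zu\n", fused.size(), returns.size());
    return 0;
}
```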

Thinking differently
In other words, researchers are starting to look at problems differently.

“If people designed digital communication systems without the presence of Shannon theory, would people have deployed it? You bet they would have, because the reason people deploy things is not because they necessarily understand it,” said Samer Hijazi, Deep Learning Group director and senior architect at Cadence. “People use things because they need it, and it works well enough. The reason people did not deploy artificial intelligence in any mainstream application in the past was not because they did not understand the theory that makes it work or how it could be made better. It was because it was not good enough for deployment. By the same token, there was no hardware capable of deploying it, and that’s why it was not being used.”

However, for certain applications machine learning is now good enough. “Specifically, deep learning has shown itself to be good enough for some applications. The ‘good enough’ threshold in lots of applications means ‘as good as humans can do, or better.’ Given that this threshold has been passed consistently in several applications, deep learning will be with us for a long time. Is it going to evolve and reincarnate itself further? Absolutely, and it will be with us for a long time to come. The caveat is that someone smart, someone who knows the theory behind it and can derive a much lower-cost solution—achieving the same performance for the applications where machine learning or deep learning is being used today—could phase out deep learning as we know it and replace it with something else. Short of that, deep learning will be deployed because it is good enough.”

Betting on the future
At the end of the day, what’s really driving neural network programming is data, and what is done with that data.

“There is a dirty little secret about all of this,” pointed out Kurt Shuler, vice president of marketing at ArterisIP. “Everyone is targeting primarily automotive with this, because people will pay for that. The reason why the semiconductor guys, the Googles, and everybody else is getting involved in this is not only because of the car market, because you’ll pay for that today. It’s because the same technology is going to go into the advertising when you walk into the shopping mall like in [the movie] ‘Minority Report.’ ‘Would you like to buy some toothpaste to make your teeth whiter?’ The stuff is going to go everywhere, so for certain companies, if they don’t do this, they’re locked out of not just cars. They’re locked out of the future.”

All indications point to neural networking sticking around for quite some time. Which specific techniques bubble to the surface as the dominant approaches will continue to be worked out in academia and industry.

Related Stories
Speeding Up Neural Networks
Adding more dimensions creates more data, all of which needs to be processed using new architectural approaches.
Convolutional Neural Networks Power Ahead
Adoption of this machine learning approach grows for image recognition; other applications require power and performance improvements.
The Great Machine Learning Race
Chip industry repositions as technology begins to take shape; no clear winners yet.
Inside AI And Deep Learning
What’s happening in AI and can today’s hardware keep up?
Inside Neuromorphic Computing
General Vision’s chief executive talks about why there is such renewed interest in this technology and how it will be used in the future.
Neuromorphic Chip Biz Heats Up
Old concept gets new attention as device scaling becomes more difficult.
Five Questions: Jeff Bier
Embedded Vision Alliance’s founder and president of BDTI talks about the creation of the alliance and the emergence of neural networks.


