Deep Learning And The Future

Why are great strides in artificial intelligence and deep learning happening now?


Following up from my last post on our deep learning event at the Computer History Museum – "ASICs Unlock Deep Learning Innovation" – I'd like to take a glimpse into the future. Like many such discussions, it's often useful to look back first to try to make sense of what is to come. That's essentially what our keynote speaker, Ty Garibay, did at the event. Ty is the CTO of Arteris IP. While Arteris IP is one of the many partners we work with in the emerging deep learning market, that wasn't the reason Ty was presenting.

Ty is a CPU architect and IC design manager who worked at Intel and Altera before joining Arteris IP. It was in that capacity that we asked Ty to comment on deep learning – where it’s been and some thoughts on where it may be going. Ty’s presentation delivered on this agenda and I’ll cover a few of the observations he made. I think Ty may have found the true beginning of AI. In 1726, Jonathan Swift published the rather famous work, Gulliver’s Travels, which included a description of a machine called “The Engine.” Briefly, this was “a Project for improving speculative Knowledge by practical and mechanical Operations.” Anyone want to find an earlier reference to AI?

There's more on the history of AI in his presentation, but let's move ahead. Ty discussed the differences between training (learning a new capability from existing data) and inference (applying that capability to new data). These differences are important for understanding the various trends in the application of deep learning. If you haven't dug into these processes, I encourage you to do so.
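To make the distinction concrete, here is a minimal sketch in PyTorch (my own illustration, not from Ty's talk; the tiny model and random data are hypothetical placeholders):

```python
import torch
import torch.nn as nn

# Hypothetical toy model, just to contrast the two phases.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Training: learn a capability from existing, labeled data.
x_train = torch.randn(32, 4)           # existing examples
y_train = torch.randint(0, 2, (32,))   # known answers
model.train()
optimizer.zero_grad()
loss = loss_fn(model(x_train), y_train)
loss.backward()    # compute gradients via the backward pass
optimizer.step()   # update the weights -- the expensive, iterative part

# Inference: apply the learned capability to new, unseen data.
model.eval()
with torch.no_grad():                  # no gradients, so far less compute
    label = model(torch.randn(1, 4)).argmax(dim=1)
```

Training is the costly phase: the backward pass and repeated weight updates dominate compute and memory bandwidth. Inference is a single forward pass over new data, which is why the two workloads push silicon designers in such different directions.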

Back to history for a moment. Ty posed the question "why is deep learning happening now?" A fair question when you consider how long these technologies have been around. He cited five forces at play: raw compute power (referring to semiconductor technology), the availability of massive training datasets (thanks to the internet), cloud computing (and the massive compute power it makes widely available), new research in AI algorithms, and the flow of money from venture investors and major corporations. As an ASIC company, eSilicon is most excited about the first point, but respectful of the importance of all the other points to create the "perfect storm."

There was a lot of discussion about neural network architectures – the fundamental technology powering much of deep learning today. Plotting performance against power puts some of these technologies into perspective.
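One useful figure of merit on such a plot is throughput per watt. As a quick illustration (the device classes and numbers below are invented for the example, not taken from the presentation), the comparison reduces to a simple ratio:

```python
# Hypothetical accelerators: (name, throughput in TOPS, power in watts).
# All numbers here are illustrative placeholders, not measured data.
devices = [
    ("general-purpose CPU", 1.0, 100.0),
    ("GPU",                 100.0, 250.0),
    ("custom ASIC",         100.0, 25.0),
]

for name, tops, watts in devices:
    print(f"{name}: {tops / watts:.2f} TOPS/W")
```

Custom silicon tends to fare well on this metric because it sheds general-purpose overhead – the thread running through the rest of the talk.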

There was a lot more discussion and analysis of how to harness neural nets and what specific semiconductor technologies are required to make it all work. Ty concluded with some thought-provoking observations. I’ll provide a couple here:

  • It is not yet possible for neural networks trained on one problem to transfer or generalize to a similar, but slightly different, problem. It is also not known how to feed deep learning algorithms rules or fundamental relationships.
  • Deep learning today is a lot like the trolls on Facebook and Twitter: neither can tell the difference between correlation and causation.

If you would like to see the entire keynote, you can access it here: http://www.arteris.com/blog/architecting-the-future-of-deep-learning-

I’ll leave you with one haunting graphic entitled “Objects In the Mirror Are Closer Than They Appear.”


