
Deep Learning Market Forces

Managing the massive amounts of data generated today won’t come cheap.


Last week, eSilicon participated in a deep learning event at the Computer History Museum – “ASICs Unlock Deep Learning Innovation.” Along with Samsung, Amkor Technology and Northwest Logic, we explored how our respective companies form an ecosystem to develop deep learning chips for the next generation of applications. We also had a keynote presentation on deep learning from Ty Garibay, CTO of Arteris IP.

Over 100 people joined us for an afternoon and evening of deep learning exploration, along with some good food, wine and beer. The audience spanned chip companies, major OEMs, emerging deep learning startups and researchers from both a hardware and a data science/algorithm point of view. We covered a lot of ground at this event, far more than can be accommodated in one post. So, this is the first in a series of discussions about the key observations and takeaways. I’ll begin with market forces.

Every speaker presented a view of the macro forces that drive the need for deep learning. Data explosion was a big one. By 2020, there will be more than one billion additional consumers online, over 30 billion connected devices and around 200 exabytes (an exabyte is 10^18 bytes) of data traffic per month. You get the picture. All this data creates the need for intelligence in the cloud to manage and optimize its storage and analysis. The slide above summarizes a deployment model.
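To put those projections in perspective, here is a quick back-of-envelope sketch (my own rough arithmetic using the figures cited above, not a number presented at the event) of what that traffic works out to per device:

    # Illustrative arithmetic based on the 2020 projections cited above.
    monthly_traffic_bytes = 200e18   # ~200 exabytes per month (1 EB = 10^18 bytes)
    connected_devices = 30e9         # ~30 billion connected devices

    bytes_per_device = monthly_traffic_bytes / connected_devices
    print(f"~{bytes_per_device / 1e9:.1f} GB of traffic per device per month")
    # Prints roughly 6.7 GB per device per month, on average.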

This trend touches everything from the edge to the cloud. The types of technology that can serve the needs of deep learning algorithms were also discussed. CPUs, GPUs, FPGAs and ASICs are all in the mix. Three things about chip technology stand out:

  • Deep learning demands extreme performance – it is the only practical way to deploy things like neural nets.
  • ASICs offer the best power and performance of all the options.
  • The systems being optimized are massive and expensive. This means the higher cost of ASIC development is easy to justify.

Being an ASIC company, we find those three points a thing of beauty. We explored several architectures and technologies that are relevant to addressing the specific needs of deep learning algorithms. 2.5D packaging, silicon interposers, HBM2 memory and the required PHYs, controllers and custom on-chip memory all got a lot of air time. Those specifics will be the subject of future posts.

Of particular interest is the technology overlay between high-performance networking and deep learning. There are similarities and there are differences. Our presentation was titled “Enabling Technology for the Cloud and AI – One Size Fits All?” so we spent some time on this topic. There is a clear opportunity for leveraged learning here. There are also new and significant challenges for silicon implementations of deep learning; these will form the basis for future innovation. The following slide captures the situation:


