What’s inside the package, what’s the goal, and how this technology is evolving.
I find myself discussing AI and deep learning a lot these days, and the conversation usually comes back to one question: what is a deep learning chip? These devices are essentially hardware implementations of neural networks.
While neural nets have been around for a while, what’s new is the performance that advanced semiconductor technology brings to the party. Applications that run in real time are now possible. But what exactly does a deep learning chip look like? For a start, designs targeted at deep learning are typically not a single chip at all, but a collection of chips in an advanced package.
If you look inside a deep learning design, you will typically find HBM2 memory stacks, along with the associated HBM PHY and controller. High-speed SerDes is also usually needed for off-chip communication. At the heart of the device sits a large number of optimized multiply-accumulate (MAC) units, and these designs need specialized on-chip memories to keep power and efficiency under control.
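To make the multiply-accumulate idea concrete, here is a minimal software sketch (Python, purely illustrative) of the operation a deep learning chip casts into hardware: each output of a neural network layer is the result of a MAC loop over weights and activations, and the chip replicates many of these loops in parallel hardware units.

```python
# Illustrative sketch: the multiply-accumulate (MAC) loop at the heart
# of neural network inference. A deep learning chip implements many of
# these in parallel hardware units rather than in software.

def mac_layer(weights, activations):
    """Compute one layer's outputs as dot products (MAC loops).

    weights:     list of rows, one per output neuron
    activations: list of input values
    """
    outputs = []
    for row in weights:
        acc = 0.0
        for w, x in zip(row, activations):
            acc += w * x  # one multiply-accumulate step
        outputs.append(acc)
    return outputs

# Example: a 2-output layer over 3 inputs
print(mac_layer([[0.5, -1.0, 2.0], [1.0, 0.0, 0.5]], [1.0, 2.0, 3.0]))
```

In hardware, this inner loop becomes a dedicated MAC unit, and the outer loops become arrays of those units fed by on-chip memory.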
To exploit advanced silicon technology, customizing the hardware to the deep learning algorithm is a good strategy. That means building ASICs, which is good news for companies like eSilicon.
Performance requirements typically push these designs into finFET technology, and that makes things more complex. Customizing memory for the multiply-accumulate array is another complex requirement, as the sketch below illustrates. Tying the HBM memory stacks to the ASIC demands very high-performance interface circuits, a skill not every team possesses. Integrating multiple components on a silicon interposer in a 2.5D package is the typical approach for these designs, and it comes with its own set of challenges: thermal and mechanical stress must be considered, and both testing the assembled device and designing the interposer require new techniques.
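As a purely illustrative sketch of why customized on-chip memory pays off: a block of weights fetched once from HBM can be reused from a small local buffer across a whole batch of activations, sharply reducing off-chip traffic and therefore power. The Python below simulates this with hypothetical sizes and a made-up read counter; it is not any vendor’s actual design flow.

```python
# Illustrative sketch of why on-chip buffering matters: a weight row
# fetched once from (simulated) HBM is reused across a whole batch of
# activations held locally, instead of being re-fetched per input.
# All sizes are hypothetical.

hbm_reads = 0

def fetch_row_from_hbm(weights, row):
    global hbm_reads
    hbm_reads += len(weights[row])  # count simulated off-chip reads
    return weights[row]

def layer_with_reuse(weights, batch):
    outputs = [[0.0] * len(weights) for _ in batch]
    for row in range(len(weights)):
        local_buffer = fetch_row_from_hbm(weights, row)  # fetch once...
        for i, activations in enumerate(batch):          # ...reuse many times
            outputs[i][row] = sum(w * x for w, x in zip(local_buffer, activations))
    return outputs

weights = [[0.1] * 64 for _ in range(8)]  # 8 outputs, 64 inputs each
batch = [[1.0] * 64 for _ in range(32)]   # 32 input vectors
layer_with_reuse(weights, batch)
print(hbm_reads)  # 512 reads with reuse vs. 16384 if re-fetched per input
```

The ratio between those two read counts is the batch size, which is why on-chip memories sized to the MAC array’s reuse pattern are such a large lever on power and efficiency.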
Getting all this done requires a network of partners. IP is usually sourced from more than one vendor, the chip itself is fabricated by a foundry, and the HBM memory stacks, interposer and 2.5D package typically come from still other suppliers. Successfully building one of these devices takes a well-coordinated team.
All this is why eSilicon is hosting an event at the Computer History Museum in Mountain View on March 14. We’re working with Samsung Memory, Amkor and Northwest Logic to show our guests how a team of partners can build deep learning ASICs. We also have a keynote address from Ty Garibay, the CTO of Arteris IP. There will be good wine and food, too. Check out more about the seminar, or register to attend. Hope to see you there.