Surprises At Hot Chips 2016

Companies are crossing lines in ways no one would ever have expected.


Who would have thought an Intel architect would be on stage talking about cutting pennies out of MCU prices? Or that Nvidia would be trumpeting an automotive SoC whose chief performance advantages come from the integration of ARM CPUs that can support up to eight virtual machines? Or that Samsung would be developing a quad-core mobile processor from scratch based on its own unique architecture?

What was surprising about this year’s conference wasn’t that better, faster chips were being introduced by established companies in their core markets. It’s that so many companies are crossing lines into new markets using new approaches, or at least ratcheting up their commitments to those spaces using bleeding-edge techniques that are still the subject of academic research.

Hot Chips has always been about the latest and greatest improvements in semiconductor performance. But there was an undertone of change this year that reflects some of the broader shifts underway across the semiconductor industry. More of the same is no longer a guarantee of success, and differentiation is required at every level.

To begin with, device scaling wasn't at the top of anyone's list this year. It got only passing mention, because it's no longer a key differentiator. The future is more about architectures and microarchitectures, including multiple pipelines, parallelization, built-in multi-level security, and throughput to memory. And it's about being able to deliver all of that at an acceptable price.

Samsung's quad-core M1 processor.

Samsung's quad-core M1 chip (above) is a case in point. It's built on 14nm finFET technology using custom ARM-compatible cores that execute both 32-bit and 64-bit code, at less than 3 watts per core. But the real advances are in areas such as multi-stream prefetch, out-of-order instruction execution, low-latency LP caches, and a neural net-based branch predictor.

This is hardly business as usual. These are brand new concepts for mobile devices that can be leveraged for a variety of other applications well outside of the mobile market.
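
Samsung didn't spell out the internals of that neural net-based predictor, but the general idea it points to, perceptron-style branch prediction, has been well studied in academic work: a small table of learned weights is combined with recent branch history to decide taken or not-taken, and the weights are nudged whenever the prediction is wrong. The C sketch below is a minimal, purely illustrative version of that idea; the table size, history length, and threshold are assumed values, not anything Samsung disclosed.

```c
/* Minimal perceptron branch predictor sketch (illustrative only).
 * All parameters below are assumptions, not Samsung's M1 design. */
#include <stdint.h>
#include <stdbool.h>
#include <stdlib.h>

#define HISTORY_LEN      16    /* bits of global branch history (assumed)   */
#define NUM_PERCEPTRONS  1024  /* number of weight vectors (assumed)        */
#define TRAIN_THRESHOLD  30    /* keep training until |output| exceeds this */

static int8_t weights[NUM_PERCEPTRONS][HISTORY_LEN + 1]; /* [0] is the bias  */
static int8_t history[HISTORY_LEN];                      /* +1 taken, -1 not */

/* Dot product of the selected weight vector with recent branch history. */
static int perceptron_output(uint32_t pc)
{
    int8_t *w = weights[(pc >> 2) % NUM_PERCEPTRONS];
    int y = w[0];
    for (int i = 0; i < HISTORY_LEN; i++)
        y += w[i + 1] * history[i];
    return y;
}

/* Predict taken when the perceptron output is non-negative. */
bool predict(uint32_t pc)
{
    return perceptron_output(pc) >= 0;
}

/* Train on the actual outcome, then update the global history register.
 * Weight saturation is omitted for brevity. */
void train(uint32_t pc, bool taken)
{
    int y = perceptron_output(pc);
    int8_t t = taken ? 1 : -1;
    int8_t *w = weights[(pc >> 2) % NUM_PERCEPTRONS];

    if (((y >= 0) != taken) || abs(y) <= TRAIN_THRESHOLD) {
        w[0] += t;
        for (int i = 0; i < HISTORY_LEN; i++)
            w[i + 1] += t * history[i];
    }
    for (int i = HISTORY_LEN - 1; i > 0; i--)
        history[i] = history[i - 1];
    history[0] = t;
}
```

The appeal of this style of predictor is that it can weigh much longer branch histories than a conventional table of two-bit counters for a comparable amount of storage, which matters when every misprediction costs energy as well as cycles in a mobile core.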

Packaging is another ongoing area of exploration and development, and it is really beginning to take off with HBM2 memory from SK Hynix and Samsung. Xilinx, Nvidia and others discussed a variety of new ways to put chips together that can greatly improve performance and lower power.

Nvidia's Pascal GPU (below) is a stacked configuration in which four-high HBM2 memory is connected to the GPU through a silicon interposer, which in turn sits on a six-layer organic package. To deal with warpage, the company thins the interposer after it is attached, and it adds a stiffener ring rather than a lid to control heat.

Nvidia’s Pascal GPU, diagram on left, actual chip on right.
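
Part of what makes the silicon interposer worth the trouble is the sheer width of the HBM2 interfaces it has to route; each stack exposes a 1024-bit data bus. As a rough back-of-the-envelope check, the snippet below multiplies out the publicly cited Tesla P100 configuration; the stack count and per-pin rate are taken as assumptions for illustration.

```c
/* Back-of-the-envelope HBM2 bandwidth estimate (illustrative numbers). */
#include <stdio.h>

int main(void)
{
    const int    stacks         = 4;     /* HBM2 stacks on the interposer (assumed) */
    const int    bits_per_stack = 1024;  /* HBM2 data bus width per stack           */
    const double gbps_per_pin   = 1.4;   /* signaling rate, Gb/s per pin (assumed)  */

    double gb_per_s = stacks * bits_per_stack * gbps_per_pin / 8.0;
    printf("Aggregate memory bandwidth: ~%.0f GB/s\n", gb_per_s);  /* ~717 GB/s */
    return 0;
}
```

Roughly 4,000 data signals at that density are practical to route in silicon but not in a standard organic substrate, which is what pushes designs like Pascal onto 2.5D interposers in the first place.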

Intel's Quark-based MCU (see diagram below) is targeted at edge devices. While the device is still x86-compatible, it is also focused heavily on extremely low power and low cost, with an average selling price of between $1 and $1.50 per unit. Intel said it chose some of the IP based on price.

Intel's Quark-based MCU.

New architectures and microarchitectures were introduced by ARM, IBM, and AMD, as well, each with its own unique approach for improving power and performance—and adding flexibility for how these chips can be used in new markets.

But the real takeaway this year doesn’t involve architectures or pipelines. It’s that established chipmakers are shifting direction and trying new things for new applications such as self-driving cars, while also jealously guarding their core markets with state-of-the-art advancements and techniques. The world is changing quickly, and that shift is starting at the chip level.


