Rethinking Processor Architectures

General-purpose metrics no longer apply as the semiconductor industry makes a fundamental shift toward application-specific solutions.


The semiconductor industry’s obsession with clock speeds, core counts and how many transistors fit on a piece of silicon may be nearing an end, for real this time. The IEEE said it will develop the International Roadmap for Devices and Systems (IRDS), effectively setting the industry agenda for future silicon benchmarking and adding metrics that are relevant to specific markets rather than to building the fastest general-purpose processing elements at the smallest process node.

The move could have a significant impact as companies from across the industry begin zeroing in on what comes next. Key players behind the International Technology Roadmap for Semiconductors, which sets the stage for semiconductor development for the next several process nodes, are now working on the IEEE’s IRDS. They are scheduled to meet in Belgium next week to begin hammering out a new set of metrics that are more relevant to various end markets. They also are forming working groups across a number of vertical market segments to handle everything from neuromorphic computing to cryptography, video processing and big data, and adding advanced packaging to each of them.

“We are going to define different application domains, and in those domains there will be very different selection criteria for architectures and devices,” said Tom Conte, co-chair of IEEE’s Rebooting Computing Initiative and a professor of computer science and electrical and computer engineering at Georgia Tech. “For machine learning, we’ve been running those as convolutional neural networks on GPUs, which is inefficient. If you implement those directly as analog they can be 1,000 or more times more efficient—maybe even more than that. And if you’re doing search optimization or simulation, you want a different architecture that does not look anything like high-performance, low-power CMOS.”
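To put Conte’s point about GPU inefficiency in perspective, here is a rough back-of-envelope sketch of how much raw arithmetic a single convolutional layer implies. The layer dimensions below are hypothetical and chosen purely for illustration; they are not taken from any benchmark or from Conte’s comments.

```python
# Back-of-envelope: multiply-accumulate (MAC) count for one convolutional layer.
# The layer dimensions are hypothetical, chosen only to illustrate the scale of
# arithmetic a CNN pushes through general-purpose hardware.

def conv_macs(out_h, out_w, out_ch, in_ch, k_h, k_w):
    """Each output activation needs in_ch * k_h * k_w multiply-accumulates."""
    return out_h * out_w * out_ch * in_ch * k_h * k_w

# Example: a 224x224 feature map, 64 input channels, 128 output channels, 3x3 kernel.
macs = conv_macs(224, 224, 128, 64, 3, 3)
print(f"MACs for one layer, one frame: {macs:,}")    # ~3.7 billion
print(f"MACs at 30 frames/s: {macs * 30:,}")
```

Multiply that by the dozens of layers in a typical network and the appeal of hardware built specifically around the MAC operation, whether digital or analog, becomes clear.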

That view is being echoed across the semiconductor industry as the smartphone market softens and fragments, with no single “next big thing” to replace it. Instead, markets are splintering into a series of connected vertical slices, generally defined as the Internet of Things but with no clear economies of scale for developing complex chips.

“The real frustration, and opportunity, is in these trillions of nodes where the requirements are ultra-low cost and ultra-low power,” said Wally Rhines, chairman and CEO of Mentor Graphics. “How is the semiconductor industry going to generate revenue from trillions of things or thousands of cents? Nearest term, the one that offers the most potential is embedded vision. They’re willing to pay a little more. The amount of signal processing at the point of sensing is vague, the transmission of the information is vague, and the data center architecture is probably the worst architecture ever conceived for image processing, pattern recognition and things like that. We’re on the edge of a revolution. If you just look at memory, 50% of all transistors were used for logic and 50% were used for memory. Today, the number is 99.8% for memory, because every day 300 million pictures get uploaded to Facebook and people are shooting video at phenomenal rates. In fact, your brain storage is half visual information. The whole computing infrastructure is not prepared for that.”
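The 300 million daily photo uploads Rhines cites translate into enormous storage demand even under conservative assumptions. The short sketch below combines that figure with a hypothetical average file size; the per-photo size is an assumption for illustration only, not a number from the article.

```python
# Rough scale of the image-storage problem Rhines describes.
# The 300 million photos/day figure comes from the quote above; the average
# file size is a hypothetical assumption for illustration only.

photos_per_day = 300_000_000
avg_photo_bytes = 2_000_000          # assume ~2 MB per uploaded photo

bytes_per_day = photos_per_day * avg_photo_bytes
print(f"~{bytes_per_day / 1e15:.1f} PB of photos per day")     # ~0.6 PB/day
print(f"~{bytes_per_day * 365 / 1e18:.2f} EB per year")        # ~0.22 EB/year
```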

This problem is being reported in other market segments, too.

“We’re seeing this in augmented reality, as well,” said Zach Shelby, vice president of IoT marketing at ARM. “There is really interesting stuff happening on gesture recognition and 3D cameras. The challenge of moving natural gesture recognition and pattern recognition and even more complex things in downloads from those sensors will get really interesting. There will be new computer architectures. How are we going to deal with deep learning at the edge? How are we going to compute efficiently? Our existing computers are really horribly suited to doing deep learning now.”

Uneven market growth
What makes the IEEE’s initiative particularly attractive to many chipmakers is that it will clarify what matters for tapping new markets. Many of these markets are in different stages of growth and maturity, and the choice of technology can have a big effect on how quickly they adopt it. One of the problems with early wearable devices, for example, was that they relied on existing chips and IP, forcing users to charge them every day or two.

Not all markets move with the same velocity, either. Medical is seen as one of the big market opportunities for the future of connected devices, but it hasn’t moved anywhere near as fast in adopting new technology as industrial markets, where there was an instant payback for connecting valves and machinery to the Internet.

“Some areas are growing faster than others,” said Kelvin Low, senior director of foundry marketing at Samsung. “Volume consumption of wafers is not too attractive because this isn’t as much as for consumer devices. There are many future use cases that hopefully will consume more silicon. One of the key responsibilities we have as a foundry is to enable design innovations from paper to silicon. It doesn’t make sense just to push 14nm for all applications. There are many IoT applications that can use different processes. Use cases are changing rapidly.”

Perhaps the biggest shift in some markets is determining the starting point for designs. While traditionally hardware has defined the software, increasingly it is the other way around.

“We’re seeing more and more hardware being defined by software use cases,” said Charlie Janac, chairman and CEO of Arteris. “It’s not always the case, but it’s happening more and more. This is software-defined hardware. The big change is that before, people would put out a chip and the programmers would write software for it. This is a big change.”

ARM’s Shelby believes this ultimately will change even more in the future because the number of embedded developers is growing while the number of hardware engineers is flat to shrinking. “We’re seeing estimates of 4.5 million developers getting involved in the IoT by 2020,” he said. “These developers are demanding a whole stack out of the box. They expect security to be there. They don’t want to write a TLS (Transport Layer Security) stack or develop a crypto device. And we’d better deliver it or they’ll go somewhere else.”
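What “expecting the stack to be there” looks like from the developer’s side is a secure connection in a handful of lines, with the TLS implementation supplied by the platform. The sketch below uses Python’s standard ssl module purely as a stand-in for whatever the silicon or OS vendor ships; the host name is a placeholder.

```python
# A minimal sketch of what developers expect: TLS provided by the platform,
# not hand-written. Python's standard ssl module stands in for the vendor's
# stack; "device-cloud.example.com" is a placeholder host.
import socket
import ssl

HOST = "device-cloud.example.com"

context = ssl.create_default_context()   # certificates, ciphers, protocol versions handled for us

with socket.create_connection((HOST, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("Negotiated:", tls_sock.version())   # e.g. 'TLSv1.3'
        tls_sock.sendall(b"HEAD / HTTP/1.0\r\nHost: " + HOST.encode() + b"\r\n\r\n")
        print(tls_sock.recv(256))
```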

But there’s a downside to that, as well, which means it doesn’t work across the board. “Software costs power,” said Jim Hogan, managing partner at Vista Ventures. “If you have software-defined hardware, you’re going to use more power.”

Thinking differently
What’s required is a rethinking of how all of the pieces in a system go together, and why architectures are the way they are in the first place. In many respects, the industry has been stuck on the same straight rail since those architectures were first created.

“In the old days, Intel said what its next processor would look like and that was it,” said Jen-Tai Hsu, vice president of engineering at Kilopass. “And until recently, a high-tech company would come out with a spec telling people what to do. But the new trend is that people are defining what they want, not a high-tech company. The whole IoT is the next big thing. It will be difficult to unify and very difficult for dominant players to fill the market. In that model, the innovation is coming from the people and the tech companies are executing their vision and competing for a spec.”

While some companies will continue to push to 5nm, and probably beyond, device scaling isn’t relevant for many solutions-based approaches. Even where it is relevant, as with FPGAs, those devices may end up replacing ASICs in some of their traditional markets.

“There is a cloud services company that does 100% of its searches right now using programmable logic,” said Brad Howe, vice president of engineering at Altera (now part of Intel). “That’s one alternative, using non-traditional architectures to accelerate the infrastructure. They need to be optimized. That’s all about narrowing the application space a little bit. You can’t have a standard product for 10,000 different applications. But if you look at a data center, you can narrow that. And if you look at power efficiency, that’s a really big deal. FPGAs are inherently power-efficient, but when you accelerate it, you can reduce power by 10X to 30X over what you’re doing in MOS. It’s inherently much more power efficient than software.”
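To put the quoted 10X-to-30X range in concrete terms, the short calculation below applies it to a hypothetical 10 kW rack of search servers. The baseline figure is an assumption for illustration only and does not come from Altera or the article.

```python
# Quick arithmetic on the quoted 10X-30X power-reduction range.
# The 10 kW rack baseline is a hypothetical assumption, not a figure from the article.
baseline_rack_watts = 10_000

for reduction in (10, 30):
    accelerated_watts = baseline_rack_watts / reduction
    saved = baseline_rack_watts - accelerated_watts
    print(f"{reduction}X: {baseline_rack_watts} W -> {accelerated_watts:.0f} W "
          f"({saved:.0f} W saved per rack)")
```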

It works the other way, too. Rather than a full-blown microprocessor, some functions can be handled more effectively by microcontrollers, particularly for edge nodes. “We’re going to have 64-bit microcontrollers that can really do some intelligent things,” said Hogan. “But the code stack is going to be pretty limited.”

This is the starting point for the IEEE’s IRDS. Conte said that instead of just mapping process nodes, materials or lithography, the metrics will match the market. For pattern recognition, for example, generic cycles per second would be replaced by features recognized per second or simulation steps per second. For cryptography, it would be codons per second. And for big data it would be access rates, taking the memory subsystem into account rather than treating it as a separate add-on. And, as Conte noted, much of this could change completely, particularly in the security realm, with the introduction of quantum cryptography.
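As a rough illustration of what such market-specific metrics could look like in practice, the sketch below defines three of the measures Conte mentions. The workload numbers are hypothetical placeholders; only the shape of the metrics comes from the article.

```python
# Sketch of domain-specific metrics in place of generic cycles per second.
# All measurement values below are hypothetical placeholders for illustration.

def features_per_second(features_recognized, elapsed_s):
    """Pattern-recognition metric: recognized features per second."""
    return features_recognized / elapsed_s

def sim_steps_per_second(steps_completed, elapsed_s):
    """Simulation metric: time steps advanced per second."""
    return steps_completed / elapsed_s

def effective_access_rate(bytes_accessed, elapsed_s):
    """Big-data metric: sustained memory-subsystem access rate (bytes/s)."""
    return bytes_accessed / elapsed_s

print(features_per_second(12_000_000, 1.0))               # 12M features/s
print(sim_steps_per_second(450_000, 60.0))                # 7,500 steps/s
print(effective_access_rate(64 * 2**30, 8.0) / 2**30, "GiB/s")   # 8 GiB/s
```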

Conclusions
Many of these changes have been underway for some time, but without any top-down direction from the semiconductor industry. The IEEE’s move to establish what amounts to best practices and recommended architectures for individual markets adds some much-needed structure. From there, economies of scale can be re-established in ways that make sense.

“Whenever a shakeout occurs, we keep improving our ability,” said Mentor’s Rhines. “Someone comes along and puts together a solution that doesn’t do everything you need, but it does enough that it captures share, builds the volume that allows you to drive down the cost further. So now you have a standard product. I expect that will happen here, too, but right now we don’t know which one that will be.”

In the meantime, though, the uncertainty has a lot of people talking about what comes next. “There is more collaboration between silicon providers,” said Samsung’s Low. “In any market there will be a lot of new use cases that are not available today. Every part of the supply chain has to come together.”




2 comments

Kev says:

“For machine learning, we’ve been running those as convolutional neural networks on GPUs, which is inefficient. If you implement those directly as analog they can be 1,000 or more times more efficient—maybe even more than that. And if you’re doing search optimization or simulation, you want a different architecture that does not look anything like high-performance, low-power CMOS.”

– in that vein: what happened to the memristor?

However, since the CS guys can’t get past 1s and 0s, the idea of programmable analog things probably isn’t on too many people’s minds – not to mention the “End Of Mixed Signal Engineering?”. Certainly, in the last few years of Silicon Valley networking, I have not heard anybody (else) suggest an analog approach to NNs.

Ed Sperling says:

There is work going on at the university level, but aside from that we haven’t heard anything, either.
