What Cognitive Computing Means For Chip Design

Computers that think for themselves will be designed differently than the average SoC; ecosystem impacts will be significant.

Cognitive computing. Artificial intelligence. Machine learning. All of these concepts aim to make human types of problems computable, whether it be a self-driving car, a health care-providing robot, or a walking and talking assistant robot for the home or office. R&D teams around the world are working to create a whole new world of machines more intelligent than humans.

Designing systems as complex as the human brain — which is still largely a mystery — is no small task. Tomorrow's bleeding-edge cars, for example, will be the ultimate in system-level design sophistication, given the complexity, integration, interdependencies, safety, convenience and comfort required on so many levels.

“It fundamentally changes the paradigm and even what we expect processors to be doing,” said Chris Rowen, a Cadence fellow and CTO of the IP Group. “But far-out models may take us three decades to realize in terms of biological computing, quantum computing or other approaches.”

He noted that the biggest benefit will be on the energy-efficiency side. This is a key aspect to making cognitive systems of the future a reality because all of the extremely sophisticated processing requires energy. Because many of these systems may be untethered, that energy will have to be carefully meted out.

And it isn't just the processing hardware. The software must be efficient, too.

“There are lots of things we can think of that would change the way software gets constructed and where the time and energy is spent in your average computer system,” Rowen said. “Just think about what we have done with the last 50 years of computing. It’s not as if we take the old applications and run them eight orders of magnitude faster. What we do is come up with new kinds of applications, which are really new levels of abstraction. We build operating systems and libraries and middleware and application environments and client-server models and cloud device models, and we put more and more layers of software in between what we are wanting to accomplish and what is actually running on the hardware.”

This represents a big change in computing and compute architectures.

“Case in point—50 years ago if you wanted to add two numbers together you literally did something where the computation consisted primarily of loading those two numbers in and doing the add, and there was an add instruction that consumed the power. Now when you want to add two numbers together, what do you do? You go to your cell phone, you discover you don’t have a calculator app, you go to the app store, you download a graphical calculator, you enter those numbers and you do an add. How much of the energy of that whole experience was spent doing the add? Essentially none of it. You spent probably 10 orders of magnitude more energy on getting the capability to do that simple 2+2 than you did in the actual calculation of 2+2. So what we have done is used that 50 years of advances in computing power to give us the convenience of being able to have these lovely levels of abstraction and user interfaces,” he said.
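
Rowen's ten-orders-of-magnitude figure is easy to sanity-check with a back-of-the-envelope calculation. The constants below are rough, assumed values chosen purely for illustration (an add at ~1 pJ, a cellular download at ~10 nJ/bit, a 50 MB app), not measured data:

```python
import math

# Rough, assumed constants for illustration only -- not measured data.
ADD_ENERGY_J = 1e-12           # ~1 pJ for one integer add on a modern CPU (assumption)
RADIO_ENERGY_PER_BIT_J = 1e-8  # ~10 nJ per bit for a cellular download (assumption)
APP_SIZE_BYTES = 50e6          # a 50 MB calculator app (assumption)

download_energy_j = APP_SIZE_BYTES * 8 * RADIO_ENERGY_PER_BIT_J
orders_of_magnitude = math.log10(download_energy_j / ADD_ENERGY_J)

print(f"Downloading the app: ~{download_energy_j:.0f} J")
print(f"Doing the add:       ~{ADD_ENERGY_J:.0e} J")
print(f"Overhead: ~10^{orders_of_magnitude:.0f} times the energy of the add itself")
```

With these assumptions the download costs about 4 J against a picojoule for the add, roughly 12 to 13 orders of magnitude, which is at least in the same ballpark as Rowen's estimate.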

But there are much more energy-efficient ways to do that. “Think about how you might replace those levels of libraries and client/server cloud device dialogs with a tool that essentially recompiles the code on the fly and comes up with the efficient way to do that, and you might be able to save orders of magnitude in terms of computing power to do the kinds of tasks that people actually spent their time doing like using a little calculator on their phone,” he said.
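
No such recompilation tool is named here, but the idea can be illustrated with a toy sketch: the same tiny expression is either re-interpreted through an abstraction layer on every call, or "compiled" once into a direct Python function. Everything in the sketch (the expression format, the `interpret` and `specialize` helpers) is invented for illustration:

```python
import timeit

def interpret(expr, env):
    """Abstraction layer: walk a nested-tuple expression tree on every call."""
    op, a, b = expr
    left = interpret(a, env) if isinstance(a, tuple) else env.get(a, a)
    right = interpret(b, env) if isinstance(b, tuple) else env.get(b, b)
    return left + right if op == "+" else left * right

def specialize(expr):
    """'Recompile on the fly': turn the tree into a direct function once."""
    def emit(e):
        if not isinstance(e, tuple):
            return str(e)
        op, a, b = e
        return f"({emit(a)} {op} {emit(b)})"
    return eval(f"lambda x, y: {emit(expr)}")

expr = ("+", ("*", "x", "x"), "y")  # x*x + y
layered = timeit.timeit(lambda: interpret(expr, {"x": 3, "y": 4}), number=100_000)
direct_fn = specialize(expr)
direct = timeit.timeit(lambda: direct_fn(3, 4), number=100_000)
print(f"interpreted: {layered:.3f}s  specialized: {direct:.3f}s")
```

Even in this toy case the specialized path is typically several times cheaper per call; stripping real software stacks of their layers is where the orders of magnitude Rowen describes would come from.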

At the design level, this translates to the fundamental hardware not changing very much; the shift is mostly in software. But given the enormous complexity of the systems running today, chipmakers and systems companies need to start measuring where the energy is being spent in the computation. Adding that monitoring and measurement capability would affect the hardware, and it would give the software developer a much better picture of where the time is really being spent. That, in turn, can be fed back into the software development process.
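
Some of that measurement is already possible on commodity hardware. Here is a minimal sketch, assuming a Linux machine with an Intel CPU (which exposes RAPL energy counters through the powercap sysfs interface) and permission to read the counter:

```python
import time
from pathlib import Path

# Minimal sketch: measure package energy around a workload using Intel RAPL,
# exposed on Linux via the powercap sysfs interface. Assumes an Intel CPU,
# a kernel with powercap, and read permission on the counter.
COUNTER = Path("/sys/class/powercap/intel-rapl:0/energy_uj")

def read_uj():
    return int(COUNTER.read_text())

def workload():
    return sum(i * i for i in range(10_000_000))  # stand-in computation

before, t0 = read_uj(), time.time()
workload()
after, t1 = read_uj(), time.time()

joules = (after - before) / 1e6  # counter is in microjoules (may wrap on long runs)
print(f"~{joules:.2f} J in {t1 - t0:.2f} s (~{joules / (t1 - t0):.1f} W)")
```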

Cognitive systems also will require sophisticated models. Those are still in the research phase.

“Cyberphysical systems that combine digital behaviors and software, with physical dynamics for things like aircraft or self-driving cars — there we are babes in the woods,” said Professor Edward Lee of UC Berkeley. “We have no idea what we are doing, and a big part of the problem is that we don’t have good models where we know how to construct physical realizations that are faithful to those models.”

In this space, learning will take time.

“If you look at the number of microcontrollers in a car today, designed largely through autocode generation and still largely single-core, more than half of the microcontrollers you’ll find in any standard vehicle are 8- and 16-bit, most of the rest are 32-bit, and a small number are 64-bit,” said Andy Macleod, director of automotive marketing at Mentor Graphics. “The industry is still learning dual-core and quad-core, especially for things like engine management. Then you look at custom cognitive learning processors that have something like four or five thousand cores. You put that in the context of an ASIL-D environment and it looks like an exponential amount of complexity. Cognitive computing looks great, and it’s absolutely valid for automotive. We expect there’s going to be some kind of redefinition of how cars are designed to bring all of that into the vehicle.”

That could happen more quickly than you might expect, too. While the auto industry used to be a lumbering giant, competitive pressure to add new features has resulted in changes across the automotive ecosystem.

“If you look 20 years ago, carmakers and the big Tier 1 suppliers had a lot of R&D in things like sensor body controllers for simple network control,” Macleod said. “Now that is all outsourced way down in the supply chain. Today, electrical and electronic architecture and software platforms are considered non-differentiating but highly complex activity. We see electrical and electronic architecture and software platform development heading the same way as body controllers did 15 or 20 years ago. As such, the big OEMs and Tier 1s are diverting resources to things like cognitive computing and big data, and outsourcing the platform development. Probably in 10 years’ time, machine vision and other such advanced technologies will be outsourced way down in the supply chain as well, so this exponential growth of complexity is redefining who does what in the supply chain. More and more complexity is being outsourced to the systems engineering houses so the OEMs can focus on the functional software, which is where the differentiation is, because putting together an OS for something like cognitive computing is very complex, and not necessarily differentiating. But the functional software that sits on top of that absolutely is, and that’s where the car OEMs want to play.”

Big picture, Drew Wingard, CTO of Sonics, likes the term cognitive computing because it readily covers the various flavors of architectures derived from biological systems, such as convolutional neural networks (CNNs), as well as architectures based on far more traditional hardware (GPUs/DSPs/FPGAs) simply used in new ways.

The biggest difference he sees in architectures truly optimized for the mimicry of biological systems — like CNNs — is the move to highly distributed architectures without conventional notions of a stored program or traditional reliance on shared memory for communication.

“These systems put far higher stress on distributed communications, rather than the intense pressure on external shared DRAM that dominates existing unified-memory, von Neumann-derived computing architectures. In cognitive computing, there is frequently a separation between the architecture chosen for the ‘learning’ phase, where the coefficients that describe the strength of the relationship between different components are determined to maximize the likelihood of matching the desired patterns, and the ‘execution’ phase, where new inputs are matched based on the determined coefficients,” Wingard explained.
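
The split Wingard describes is the familiar training/inference separation. A minimal sketch using a single artificial neuron (the data, learning rate, and function names are invented for illustration) makes the two phases, and the coefficients that pass between them, concrete:

```python
def learn(samples, epochs=100, lr=0.1):
    """Learning phase: determine coefficients (weights) that match the patterns."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1.0 if w[0] * x0 + w[1] * x1 + b > 0 else 0.0
            err = target - out
            w[0] += lr * err * x0
            w[1] += lr * err * x1
            b += lr * err
    return w, b

def execute(w, b, x):
    """Execution phase: match new inputs against the fixed coefficients."""
    return 1.0 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0.0

# Learn a simple AND-like pattern, then run inference with frozen coefficients.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = learn(data)
print(execute(w, b, (1, 1)))  # -> 1.0
```

The learning phase needs flexible, all-to-all style communication while the coefficients are in flux; the execution phase only needs the data paths the frozen coefficients actually use, which is what makes the two architectures diverge.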

In the learning phase, he noted, communication patterns are still being determined by the optimization process, so high connectivity and flexibility are critical. For the execution phase, a mapping process can then use the coefficients to drive an optimization that clusters highly connected components together. “This can both reduce the required communication bandwidth (saving energy) and enable higher parallelism. Both factors allow the execution architecture to reach higher matching throughput at lower energy.”
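
That mapping step can be sketched roughly. Assuming the learned coefficients arrive as a symmetric weight matrix, a greedy merge of the most strongly coupled nodes (a stand-in for the real partitioning and placement algorithms such a flow would use) keeps most traffic local:

```python
def cluster(weights, k):
    """Greedily group nodes into k clusters, strongest coefficients first.
    weights[i][j] = learned coefficient strength between nodes i and j."""
    n = len(weights)
    parent = list(range(n))

    def find(i):  # union-find root with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    edges = sorted(((weights[i][j], i, j)
                    for i in range(n) for j in range(i + 1, n)), reverse=True)
    groups = n
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj and groups > k:
            parent[ri] = rj  # merge the two most strongly coupled groups
            groups -= 1
    out = {}
    for i in range(n):
        out.setdefault(find(i), []).append(i)
    return list(out.values())

W = [[0, 9, 1, 1],
     [9, 0, 1, 1],
     [1, 1, 0, 8],
     [1, 1, 8, 0]]
print(cluster(W, 2))  # -> [[0, 1], [2, 3]]: strongly coupled pairs stay together
```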

“For the SoC designer, both phases lead to SoCs that look very different. They tend to be far more homogeneous, but built from different building blocks. They tend to have highly connected interconnection network architectures—think hypercubes. As such, they are both simpler (due to their step and repeat nature) and more complex (total transistor count, worries about protocol-level deadlocks) than traditional SoCs. Most exciting of all is that there is no obvious set of architectural choices. It is still a green field, and will keep SoC developers and their ecosystems busy for the foreseeable future,” Wingard concluded.
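
The hypercube topology Wingard mentions is easy to state in code: node IDs are d-bit integers, neighbors differ in exactly one bit, and a message can be routed by fixing one differing address bit per hop (classic dimension-order, or "e-cube", routing). A small sketch:

```python
def neighbors(node, d):
    """All nodes one bit-flip away from `node` in a d-dimensional hypercube."""
    return [node ^ (1 << bit) for bit in range(d)]

def route(src, dst, d):
    """Dimension-order route; path length equals the Hamming distance."""
    path, cur = [src], src
    for bit in range(d):
        if (cur ^ dst) & (1 << bit):  # this address bit still differs
            cur ^= 1 << bit           # traverse the link in that dimension
            path.append(cur)
    return path

d = 4  # 16-node hypercube
print(neighbors(0b0000, d))      # -> [1, 2, 4, 8]
print(route(0b0000, 0b1011, d))  # -> [0, 1, 3, 11], three hops
```

The step-and-repeat regularity Wingard points to is visible even here: every node runs the same neighbor and routing logic, while the deadlock concerns he raises come from how many such messages are in flight at once.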
