What’s Next For AI, Quantum Chips

Leaders of three R&D organizations, Imec, Leti and SRC, discuss the latest chip trends in AI, packaging and quantum computing.


Semiconductor Engineering sat down to discuss the latest R&D trends with Luc Van den hove, president and chief executive of Imec; Emmanuel Sabonnadière, chief executive of Leti; and An Chen, executive director for the Nanoelectronics Research Initiative at the Semiconductor Research Corp. (SRC). Chen is on assignment from IBM. What follows are excerpts of those conversations, which took place as a series of one-on-one interviews.

Left to right:  Luc Van den hove, president and chief executive of Imec; Emmanuel Sabonnadière, chief executive of Leti; and An Chen, executive director for the Nanoelectronics Research Initiative at the Semiconductor Research Corp. (SRC).

 

SE: What are some of the big challenges on the R&D front?

Van den hove: In terms of the technology challenges, it’s clear that the node-to-node transition is getting more and more difficult. To a large extent in our discussions, it comes down to lithography. We know EUV has been slow in moving into production. As a result, the technology becomes more complex. With all of these multi-patterning options, the cycle times become longer. So it has become hard to scale. Having said that, I’m convinced that we are not giving up on scaling. With EUV coming online in early 2019, we are going to see a re-acceleration of scaling because of the lithography capability. At the same time, it’s clear that it’s not the same story as it was 10 years ago. We need to link in the systems view to steer the technology development. It’s not a one-dimensional roadmap anymore. We will see a diversification of technologies, depending on which part of the system we are looking at and which part of the system we want to scale. So we need a portfolio of technologies, which we need to combine. One device is not ideal for everything in a system.

Sabonnadière: We are facing some big things, and for big things you need collaboration. In the past, the idea was, I do it on my own and hide what I’m working on because I want to be on top. Now that’s over. With the new geopolitical situation, we have to rethink how we collaborate at different levels. That’s what we are pushing today. The big challenge is to collaborate more.

Chen: People continue to push scaling, but it’s obvious that we really can’t push that much further. For many, scaling is not really a key driver. Many are now looking at new computing paradigms and new functions like AI and quantum computing.

SE: Artificial intelligence (AI) is a hot topic. One part of AI is called machine learning. Machine learning makes use of a neural network in a system. In a neural network, the system crunches data and identifies patterns. It matches certain patterns and learns which of those attributes are important. How are AI and machine learning changing the way we look at chips and systems?

Van den hove: In machine learning, you really need to understand the systems part. But the solutions also have to come from the technology side. We have to come up with new architectures for these AI processors or accelerators, which, for example, have a lot of embedded memory. So we are developing specific MRAM solutions that allow us to store the weights right in the processor. These are technology optimizations, but you need to understand what the system demands are. And they are different for these applications compared to others. We are also working on neuromorphic computing engines, where we will use some of these new emerging memories, like phase-change or FeFET-based memories.

Sabonnadière: I see an explosion of data generation. AI is a solution for that. It will become much more powerful than what we can imagine today. We see that AI will resolve difficult problems. This is a big story at Leti. We have reassigned a lot of experts to AI. It’s a multi-disciplinary story. We are working on different layers of the device. Spiking memories are one part. We’ve known about this technology for several years. We are also equipped to push and create more momentum around edge AI. Everyone is talking about this edge AI story.

Chen: At SRC, we focus a lot on neuromorphic computing and AI. There is also a lot of memory-centric computing. People have used logic devices for computing for a long time. But there are a lot of tasks that are data-intensive, so utilizing memory functions for logic and computing capabilities is another area that’s popular today. So there are two things I see. One is machine learning, AI and neuromorphic, which are terms that people use interchangeably. The other area is utilizing memory for computing or data-intensive processing.

SE: What are the big challenges here?

Van den hove: When we talk about AI today, most of AI runs as software or algorithms in the cloud, typically on GPU-like processors that consume a lot of energy. It’s clear that the next big thing is going to be bringing AI close to the sensor nodes, close to the edge or far edge. There is a reason for doing this. In many applications, we don’t want to generate all this data at the sensors and then send it to the cloud, which consumes energy. Then you process it in the cloud and send the result back to the sensor node, and a lot of energy is wasted along the way. There are many reasons to bring the intelligence to the sensor node. But in order to do so, we need AI engines that operate at a much lower energy.

Chen: There is nothing fancy about machine learning. The algorithms have been there for many years. It’s the realization that a GPU can run those algorithms efficiently that changed everything. Still, that is not as efficient as it could be, and it’s far from the brain. People are always pointing to the brain as the model. That’s not because the brain does better than the machine. The brain is much more efficient. GPUs consume a lot of power. The idea is, can we do the same function, but more efficiently? At IBM, we have a phase-change memory group. They use phase-change memory to imitate the synaptic functions. That can be a more native implementation of neural algorithms than a GPU version. GPUs are already better than CPUs. Beyond GPUs, there are also ASICs. All of those things, CPUs, GPUs and ASICs, are based on CMOS technologies. They are not designed to be neural components. So finding a device that can become more of a neural component is the next shift. That’s what people are working on.

SE: Any other challenges?

Chen: It’s easy to describe the requirements. For example, you need many layers. A neural network pretty much comes down to weights. The weights have to be modulated with very high precision, and the device has to remember those states. The weights go up and come down, and that has to be symmetric. Those are simple requirements. But it’s hard to find a device that can fulfill them. For example, a floating gate is a device that has been used for more than three decades just for that purpose. As you charge it more, the conductance changes. But the conductance change is not linear and not purely symmetric. Now, look at ReRAM. ReRAM is a filament technology. You can make the filament stronger or weaker. But it’s never symmetric, and it’s never gradual. It’s an abrupt change. So once you deviate from that symmetric, gradual change curve, you no longer have the accuracy you are looking for. So people are talking about trying to make a device that fulfills that promise. It’s actually hard to do.
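To make the linearity and symmetry point concrete, here is a minimal Python sketch. It is an editorial illustration rather than anything from the interview, and the pulse count and nonlinearity factor are arbitrary assumptions: it compares an idealized device whose conductance moves in equal steps with a filament-like device whose steps shrink as it saturates, and shows why the second one misses its target weight.

import numpy as np

PULSES = 100  # total programming pulses available in either direction (arbitrary)

def ideal_update(w, direction):
    # Idealized synaptic device: every pulse moves the weight by the same step.
    return float(np.clip(w + direction * (1.0 / PULSES), 0.0, 1.0))

def filament_update(w, direction, alpha=4.0):
    # Filament-like device: the step shrinks as the weight saturates, and the
    # up and down curves are not mirror images of each other (asymmetric).
    if direction > 0:
        step = np.exp(-alpha * w) / PULSES
    else:
        step = -np.exp(-alpha * (1.0 - w)) / PULSES
    return float(np.clip(w + step, 0.0, 1.0))

# Try to program both devices to a target weight of 0.5 with 50 "up" pulses.
w_ideal, w_filament = 0.0, 0.0
for _ in range(PULSES // 2):
    w_ideal = ideal_update(w_ideal, +1)
    w_filament = filament_update(w_filament, +1)

print(f"ideal device:    {w_ideal:.3f}")     # lands on the 0.500 target
print(f"filament device: {w_filament:.3f}")  # stops well short of the target

Under these assumptions, the idealized device lands exactly on 0.5 after 50 pulses, while the saturating device stops well short of it, which is the accuracy loss Chen describes.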

SE: We hear a lot about in-memory computing or processing in memory. In-memory has various meanings, but in some circles, the idea is to perform the computational tasks within a memory, right?

Van den hove: We need specific accelerators, as we call them. It’s either memory in compute or compute in memory. There are two options. The first version, I would say, is to bring the memory into the processor. One of the big challenges with these AI engines is that you have a lot of multiply-accumulate operations to execute. To do so, you have data and you need to multiply it with the weights. In a classical von Neumann architecture, the weights are stored in the memory. You always have to fetch those weights, bring them into the processor, do the multiplication, and store the weights again. This creates a lot of traffic. So what you want to do is bring the weights into memory that sits right in the processor. That’s what you call memory in compute. This involves a lot of embedded memory, so you can have millions of weights. These are huge amounts. They are held in that embedded memory, so you don’t have to go off-chip to access those weights. That is memory in compute. Then, compute in memory is the next version. That’s what we call neuromorphic processors, where you have some of the concepts of these crossbar memories, like phase-change memory. You also can do it in an FeFET technology. When you have a matrix of memory cells, you can store the weight, for example, in the resistive element of a memristor. So you have a weight stored right at each crosspoint of the matrix. Then, you can do the accumulate operations right in the memory. So you embed the algorithms in the memory; you do the computations in the memory.
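As a rough illustration of the multiply-accumulate argument, the following Python sketch (an editorial example, not an Imec implementation, with placeholder values) contrasts the von Neumann pattern of fetching every weight with the crossbar view, where the stored conductance matrix turns the whole operation into one matrix-vector product over data that never leaves the array.

import numpy as np

rng = np.random.default_rng(0)

# Conductances stored at the crosspoints of a 4-row x 3-column array,
# standing in for the weights; the values here are random placeholders.
weights = rng.uniform(0.0, 1.0, size=(4, 3))
inputs = rng.uniform(0.0, 1.0, size=4)       # voltages driven onto the 4 row lines

# Von Neumann pattern: fetch each weight, multiply, accumulate, one at a time.
outputs_fetch = np.zeros(3)
for col in range(3):
    for row in range(4):
        outputs_fetch[col] += inputs[row] * weights[row, col]

# Crossbar pattern: each column line sums voltage * conductance over all rows,
# so the whole multiply-accumulate is one matrix-vector product over the
# weights sitting in place in the array.
outputs_crossbar = inputs @ weights

print(np.allclose(outputs_fetch, outputs_crossbar))   # True

Both versions produce the same numbers; the difference is that in the crossbar view the weights stay where they are stored, which is the traffic Van den hove wants to eliminate.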

Chen: In a traditional CPU architecture, you have DRAM, SRAM and different data storage devices. At the core, closest to the computation, you have the logic, and those logic gates or circuits operate in the nanosecond to picosecond range. But in data storage, where you have DRAM, the access time is in the nanosecond to microsecond range, and the data transfer takes even longer. It’s like you write a letter in a day, but you take a year to send it out. So you need to bring the data next to, or into, the same place where the processing happens. So we have a large memory array. The memory not only stores data, but it can also process data. That’s the idea. You don’t have to move the data back and forth. You use the data that’s already in the memory, you process it, and it stays in the memory. You avoid the data movement. That’s one idea. The other idea is that a memory array, unlike a logic gate, has some intrinsic parallelism built into it. DRAM or SRAM have a lot of repeating structures, and those can do access functions in parallel. So utilizing the intrinsic parallelism in the memory array to make the processing faster is another idea in this so-called in-memory computing. In-memory computing is a very broad term. In a sense, neuromorphic is a particular example of in-memory computing. In our brain, the synapses store the weights and also process the weights. But brain-inspired in-memory computing has to meet certain algorithm requirements as well as certain weight and device requirements. Some other in-memory computing applications are not like that. It could be a binary operation done in memory, which is simpler to implement than neuromorphic. So there is overlap as well as a difference between these two terms.
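Chen’s two ideas, avoiding data movement and exploiting row-level parallelism, can be sketched with a toy bitwise example. This Python snippet is only an analogy with made-up contents, not a description of any real DRAM or SRAM design: it ANDs two rows bit by bit the processor-centric way, then produces the AND of every column at once inside the array.

import numpy as np

# A toy "memory array": three data rows plus one spare result row, 16 bit cells wide.
memory = np.array([
    [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
    [1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],  # spare row for the result
], dtype=np.uint8)

# Processor-centric pattern: read row 0 and row 1 out of the array and AND
# them bit by bit; every bit makes a round trip to the logic and back.
result_cpu = np.zeros(16, dtype=np.uint8)
for bit in range(16):
    result_cpu[bit] = memory[0, bit] & memory[1, bit]

# In-memory pattern: the array produces the AND of every column at once and
# deposits it in the spare row, so the data never leaves the memory.
memory[3] = memory[0] & memory[1]

print(np.array_equal(result_cpu, memory[3]))   # True

The binary operation here is the kind of simpler in-memory computing Chen contrasts with neuromorphic approaches, which additionally impose the weight and device requirements discussed above.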

SE: There is a perception that Europe is behind in AI, compared to the U.S. and China. China is pouring billions of dollars into AI. Recently, Imec and Leti announced a partnership in the development of AI and quantum computing. Is the Imec-Leti partnership a way to speed up the development of AI in Europe?

Van den hove: In terms of generic AI in Europe, we don’t have a Google. We don’t have a Facebook, Tencent, Alibaba or Baidu. It’s clear that this is not the strength of Europe. In Europe, we do have a lot of applications that need AI, especially the more decentralized AI. This is for the automotive and healthcare industries. By joining forces with Leti on some of these technologies, and especially also on the next big thing like quantum computing, we can connect some of the strengths we have in Europe. It makes no sense to have competing research organizations looking after the same things. We had better join forces and make sure we have more critical mass. For me, it’s not so much a matter of who is ahead. We are global. We work with everyone in the world. I am interested in how we can contribute.

Sabonnadière: Your question is correct. Money is not everything here. It helps, but it’s not everything. We see China and the U.S. They are extremely strong in AI with the learning part in the cloud. But then, you have AI integrated on the chip itself. Working on that part, we are probably all on the same starting line. We still believe that we have momentum for that.

SE: What about quantum computing?

Van den hove: Quantum computing is still far away. Considerable progress has been made, mostly at the universities. But there is a big difference between demonstrating a couple of qubits working together and doing that in a reliable way with the right process control. That’s where Imec comes in. We are working mostly on two concepts. One is superconducting qubits. The other is silicon-based qubits. We also do some work on photonics-based qubits. We are putting most of our focus on silicon-based qubits, because we think they are the most scalable. With silicon-based qubits, there is an opportunity to scale them to smaller dimensions. In the end, we will need something like a million qubits working together.

Sabonnadière: You’ve seen a lot of teams that started very early with superconducting and cryogenics. You are using the power of quantum technology, but at some point it’s not practical. We want to do it differently. We started with silicon qubits. As a whole, the industry is late in terms of the number of qubits produced, but the speed of integration will be much faster. The need for quantum computing is probably on the horizon within the next five to ten years. So we have to prove within the next three to six years that we can develop those numbers of qubits. That’s the reason for the collaboration with Imec. At some point, we will put the teams together. This will be the right time to accelerate on the hardware part. And in parallel, we have to solve the software story.

SE: Where does advanced packaging fit into the equation?

Van den hove: As I said earlier about the diversification of multiple technologies for specific parts of the system, you have to bring it all together, and 3D heterogeneous integration is the way to do that. Also, with all of the miniaturized devices you have today, like a smartwatch, advanced packaging technology becomes very important for system scaling. And we’re increasing our activities in that arena. We have several integration schemes, like through-silicon vias, which we have known about for many years. Stacking memories is another technology. Then, you have memory on logic.

Sabonnadière: For packaging, we started an alliance with Fraunhofer in June of 2018. We need strong packaging technology, and we need to implement it in the automotive industry and in IoT. So we have various players that will work with us on 3D packaging.

SE: Briefly, what else is interesting in the R&D lab?

Van den hove: Life sciences is one of the topics where I see the biggest growth opportunity. This is a huge market.

Sabonnadière: We have demonstrated that FD-SOI can move down to 10nm. 10nm has been reshaped, and they call it 7nm. There is still a need to develop some boosters to move it from 12nm down to 10nm/7nm.

Chen: One thing is about workforce development and student education. That’s very important, because we are looking at a lot of paradigm shifts. Are we educating our young engineers and students in the right direction? That’s a big question for policy makers and top management to think about.

 

Related stories:

The Next Semiconductor Revolution

 

AI Architectures Must Change

Processing In Memory

Quantum Issues And Progress

More 2.5D/3D, Fan-Out Packages Ahead

 


