The Next Semiconductor Revolution

Four industry experts talk about what's changing with AI, how quickly, and where the limits are.

What will drive the next semiconductor revolution?

When you ask people with decades of experience in semiconductor manufacturing and software development, the answers include everything from AI and materials to neuromorphic architectures.

Federico Faggin, designer of the world’s first microprocessor; Terry Brewer, president and CEO of Brewer Science; Sanjay Natarajan, corporate vice president at Applied Materials; and Michael Mayberry, CTO of Intel, sat down to talk about what’s next as part of SEMI’s Industry Strategy Symposium, in a panel moderated by Dan Hutcheson, president of VLSI Research.


L-R: Federico Faggin, Terry Brewer, Sanjay Natarajan, Michael Mayberry.

Q: Do you see similarities between AI and Intel’s 4004 development?

Faggin: The 4004 showed that you could put a CPU on a chip. To do that, silicon gate technology was fundamental, because it provided at least a factor of 10 in speed and a 2X factor in density. It was just right for introducing the first microprocessor. But it wasn’t until improved lithography and the 8080 that the performance was at a sufficient level, and there were a sufficient number of applications, to really show the world that the microprocessor had arrived. That was also one of my projects and my ideas at Intel. Then I worked on neural networks at Synaptics, the company I founded with Carver Mead, where we did computation using analog rather than digital circuits, because analog used less power. You can do a multiply at every one of the processing elements, so you get tremendously more functionality per unit of power. I’ve seen these things growing, and now there is incredible progress. The CPUs provide zettaflops galore. But this technology is just at the beginning of our understanding of what is really required to create cost-effective and power-effective solutions.

Q: You were doing TCAD transistors, and scaling the MPU provided incremental gains. Now, we’re trying to emulate the brain, and if you look at modern AI, there are a lot of problems.

Faggin: The fundamental problem is that, because of the recent successes, people are extrapolating and promising things that are just not possible. AI is still based on neural networks. In fact, it is based on a simulation in a computer, which obeys a mathematical model that tries to capture what our brain does, and the brain does it completely differently from how we do computation. So it’s two steps removed from the actual operation that happens in the brain. But most important, we are conscious beings, and robots and computers are not conscious. And I have a good theory of why that will never occur. Yet we are promised conscious robots in four years, conscious computers in four years. That’s not in the cards. Not even close. No neural network can comprehend anything. The moment you move away from the set of data it has learned, the patterns and the correlations in that data, that mechanical structure is completely clueless as to what to do. Even self-driving cars require this sort of low-level, common-sense comprehension.

Q: When you look at where the industry is going, where do you see problems that need to be addressed?

Brewer: I’m a real fan of AI. We’re going to be a user and also an enabler of AI. And in both cases, the value that we’ve brought is continuous improvement. We’ve done Moore’s Law. We’ve done more transistors, and we’re doing 3D. It’s more expensive. And the problem is, we’ve been on a continuous-improvement pathway in this industry for a very, very, very long time. We need a disrupter, and AI can serve as that disrupter. For our industry, I see it as very important and very valuable that we move off the path of continuous improvement.

Where does the future lie? One is in the use of it. Right now we’re a materials supplier in the industry. In the old days, the customer wanted 5 to 6 parts per million of sodium in our material. That is a recognizable entity. As time goes on, it becomes five parts per billion. Okay, I can see five parts per billion. It probably does exist. But we’re getting into trouble, because we’re describing quality as a pathway of single points, and we’re getting closer and closer to where those points aren’t there. If you keep on with continuous improvement, it’s going to be five parts per trillion. Well, are we talking about reality or probability? Are we talking about a curve or a surface? It’s time to get off the continuous-improvement bandwagon and get on to the disruptive difference. We have to do our role, but we can’t do it by looking at 120 or 200 points. You have to start integrating it by way of fingerprints. So we fingerprint the data, and then we have to analyze the fingerprints. We’re really close to wanting AI, in its simplest forms, to do the analysis so we can provide a product that is reliable. We’re getting outside of point definitions for quality and for defects.

We also are enablers for AI, because for the last 10 or 20 years we have been developing carbon sensor technology. Carbon sensors represent quite a big change in capability from other types of sensors. There are a lot of benefits. Carbon is supposed to be easy to work with. It isn’t easy, but it’s workable. You can make parts and devices with it. You can put it on flexible substrates. And so we’re building sensor arrays that can measure five or six different properties, saving time.
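To make the fingerprint idea concrete, here is a minimal, purely hypothetical sketch; it is not Brewer Science’s method or code. It assumes a “fingerprint” is simply a vector of measured properties for a material lot, and it compares that vector to a reference fingerprint with cosine similarity instead of checking one number against one spec limit.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two measurement fingerprints (1.0 = identical shape)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical reference fingerprint: several measured properties per lot
# (viscosity, film thickness, trace-metal counts, particle counts, ...),
# rather than a single pass/fail number such as "ppb of sodium".
reference = [1.02, 0.98, 3.4, 0.07, 12.1]
new_lot   = [1.05, 0.97, 3.6, 0.08, 12.0]

score = cosine_similarity(reference, new_lot)
print(f"fingerprint similarity: {score:.4f}")
# A lot would be flagged for review when similarity drops below a threshold
# chosen from historical data, not from a single-point specification.
```

In practice the reference fingerprint and the flagging threshold would come from historical process data, and a trained model rather than a fixed similarity metric could do the flagging, which is the “AI in its simplest forms” Brewer describes.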

Q: We’re getting away from thinking about incremental change to going to a completely new way of thinking.

Brewer: And it’s disruptive. We are at a spot where we can move past seeing the wafer as an incremental piece. In fact, it’s worse than that. Some people make microprocessors. Some people make memories. We’re going to look back on that someday and see what a simplistic, archaic view of the world it is. I hope AI is the disruptive piece that will help change our industry.

Q: Moore’s Law is slowing. What implications does this have?

Natarajan: For 40 years we took for granted what Moore’s Law brought us. In 1970, a chip had about a thousand transistors. By 2010, the state of the art had about 1 billion transistors. You got a factor of a million in 40 years, and if you do the math, we got 1.997 times the transistors every two years. You also got a corollary benefit of about 15% more energy-efficient computing every year. Most of the world took it for granted, and they took the benefit that came with it for granted in terms of what it enabled in AI.

AI started in 1956. That was the first use of the term, at Dartmouth College, which coined ‘artificial intelligence.’ So it’s been around a long time. Even in the ’70s and ’80s, you had self-driving cars that could kind of, sort of drive to work, but not really. The concept for Alexa was around 40 years ago. It’s just that computing wasn’t powerful enough. But if you extend that from 2010 up to now, you see a pretty ominous situation. Computing power has significantly stalled in the past seven or eight years. That is going to have severe consequences for the future of things like AI, which are really just getting started. Alexa is more or less just annoying me at the moment. Hopefully, it’ll do something truly useful in 10 or 15 years. But the key to doing something useful is to continue driving Moore’s Law, or some version of Moore’s Law, where you reliably get more efficient computing every year and more dense computation per square inch. We’re in trouble on both counts.
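As a quick check on the arithmetic in the quote above, here is a minimal sketch assuming the round numbers of roughly a thousand transistors in 1970 and a billion in 2010; slightly different endpoints give slightly different factors, which is why a figure like 1.997 appears in the quote.

```python
# Growth factor per two-year period implied by ~1,000 transistors (1970)
# and ~1 billion transistors (2010); round numbers assumed for illustration.
start, end = 1e3, 1e9
years = 2010 - 1970
periods = years / 2                      # number of two-year periods
factor = (end / start) ** (1 / periods)  # per-period growth factor
print(f"{factor:.3f}x transistors every two years")  # ~1.995x, i.e. roughly 2x

# The corollary benefit: ~15% better energy efficiency per year compounds to
efficiency_gain = 1.15 ** years
print(f"~{efficiency_gain:,.0f}x more energy-efficient computing over {years} years")
```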

Q: There are changes at the equipment level in what Applied calls Integrated Materials Platforms. How is that different?

Natarajan: If you look at the whole stack of AI, and you accept the premise that things are sort of in trouble, then you have to question assumptions up and down the stack. Where I’m operating right now is at the bottom. For most of the past 60 years, AI was done on a traditional CPU. The original versions were implemented in computers. It’s what we had at the time—take what’s off-the-shelf and write the code. We got a little smarter at that highest architectural level at one point. Today, we’re just doing matrix math, so why not use a graphics processor, because that’s faster? That was an evolutionary step toward a more power-efficient way to implement AI. Today we’re seeing the next step, where we are building more purposeful computers. Looking ahead, a traditional von Neumann machine, where you separate computation and memory, doesn’t make a whole lot of sense. That’s not how our brain works. To the extent we can understand it, instead of simulating neural networks, maybe we should really be implementing them. Those paradigms are getting smartly challenged at the top of the pyramid for AI.

If you go down to the bottom of the pyramid, where the fabrication is being done at the equipment level and at the integration of the equipment in a fab, the assumption has not been challenged sufficiently. We still fundamentally build those chips the same way. Think about depositing, modifying, removing and measuring materials. We still think about those as discrete functions that happen in discrete tools. That’s broken moving forward. There’s a fundamental need to continue making progress and to rethink that assumption. That’s where the concept and the tone of integrated materials solutions comes from. Let’s stop thinking about the things we do as different things, because we can unlock a lot of capability if we are willing to be much more intellectually creative at that level.
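To make the “we’re just doing matrix math” point concrete, here is a minimal, hypothetical sketch: the core of a neural-network layer is a matrix multiply plus a pointwise nonlinearity, which is why hardware optimized for dense linear algebra (GPUs, and now purpose-built accelerators) pays off.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully connected layer: 512 inputs -> 256 outputs for a batch of 64.
x = rng.standard_normal((64, 512))   # activations from the previous layer
w = rng.standard_normal((512, 256))  # learned weights
b = rng.standard_normal(256)         # learned biases

# Inference for this layer is one big matrix multiply plus a pointwise op.
y = np.maximum(x @ w + b, 0.0)       # ReLU(x W + b)
print(y.shape)                       # (64, 256)
```

Stacking many such layers means the workload is dominated by multiply-accumulate operations, which is the assumption that GPUs, and more recently in-memory and neuromorphic approaches, exploit.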

Q: So how will computing change?

Mayberry: Six decades ago when AI kicked off, it was around symbols, rules and logic, which turns out to be really good for a math person. We can construct things at the discrete level. There’s certainty in it. There’s predictability. We can hide the variance, so to speak. But that turned out not to scale particularly well, and I would argue that’s because it’s very hard to extract the expertise of an expert. Anybody who’s done a press interview should appreciate that. You ask some questions, and if you don’t ask the right questions you don’t get the information that you need. I’ll give a real-world example that actually happened with some of our test autonomous vehicles. The vehicle said, ‘The traffic has moved to the right and there’s an opening. I’m going to merge into traffic.’ But maybe there was an ambulance coming down the road. The concept of an ambulance was not something that had been encapsulated in the rules and the symbols. It’s not, per se, a failure of the system. It’s a failure of the imaginers of the system to comprehend all this. So expert systems failed mainly from an inability to extract data. We sold a lot of CPUs that went into those workloads, and we continue to sell a lot of CPUs that do things like that—even though we mostly don’t call that AI.

The second area is really around learning from data—pattern matching, statistical learning, a bunch of things. That’s not new to us either, because fabs are an enormous generator of data. So we’ve been doing machine learning, modeling and all that for decades now. We didn’t call it AI. What really triggered the recent wave is the rise of labeled data. Without the Internet there wouldn’t have been enough data to create those useful models. Human experts labeled the data, allowing people to then build those models. And yes, you need faster hardware and the right statistical algorithms.

So where do we go from there? Pattern matching is more or less the bottom-up part, and symbolism is the top part. There is a vast gulf in between the two, and that’s an interesting area for research—an interesting area for what the data needs to look like. It’s an interesting area for what the physical hardware needs to look like…Humans are good at transferring knowledge from one domain and saying, ‘Hey, this looks similar to something that I’ve seen before.’ Again, there are interesting things there. But part of the answer to that is how you represent knowledge. If you represent knowledge in a very static structure, then you don’t have that cross-domain ability to transfer, to map one problem onto another area. And then finally, humans are good at creating knowledge out of nothing. Yes, they’re building on the stuff they’ve done already, but so far machines are completely unable to do that unless you bound the problem so much that it’s like a game, where you have perfect knowledge of the current conditions and perfect knowledge of the rules. Then you can create new knowledge out of that. So that piece is the furthest away. It’s the hardest to do. But it’s the most interesting of them all. And that’s, of course, what scares people. If machines do create knowledge, are they going to create knowledge that we would consider good or bad?

Q: At the device level, Intel has introduced interesting new technologies that incorporate spiking neural networks. How do those change the game compared with taking a traditional CPU and running a neural network on it?

Mayberry: Let me clarify a little bit. The first analog neural network chip was in 1989. It had 65 neurons and about 10,000 synapses, and it was based on EEPROM technology. It didn’t go anywhere. It was too early. People didn’t know how to use it. What we’re building today is like that, but we actually have a digital implementation of the spiking neural net. It is not a clocked system where everything is synchronized. It is a free-running, programmable recurrent neural network, and so it brings time into today’s neural network technology. That opens up new avenues, because we don’t train it with backpropagation. You don’t have to be able to take mathematical derivatives on it. Now we have to figure out the best way to train it, but it’s showing very promising results that are in some ways unexpected. And we’re finding that you can use it for optimization across lots of interacting variables.
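As background for the spiking idea, here is a minimal, hypothetical sketch of a leaky integrate-and-fire neuron in plain Python; it is a textbook model, not Intel’s implementation. The membrane potential leaks over time, integrates incoming current, and fires a spike when it crosses a threshold, so information is carried in when spikes occur rather than in a clocked, synchronized matrix operation.

```python
def lif_neuron(input_current, v_thresh=1.0, v_reset=0.0, leak=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential decays each
    time step, accumulates the input, and emits a spike (1) when it crosses
    the threshold, then resets."""
    v = 0.0
    spikes = []
    for i_t in input_current:
        v = leak * v + i_t      # leak, then integrate this step's input
        if v >= v_thresh:       # threshold crossing -> spike
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

# A constant drive produces a regular spike train whose rate reflects the
# input magnitude; timing, not a synchronous clock, carries the signal.
print(lif_neuron([0.4] * 20))
```

The spike nonlinearity is not differentiable, which is why such networks are not trained with ordinary backpropagation and why, as Mayberry notes, the best way to train them is still an open question.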

Q: A lot of what you’re all talking about is raising the level of abstraction in system design. How will that be done, and what sort of problems are you anticipating?

Natarajan: There are technical challenges and non-technical challenges. Non-technically, the success that got us here is not going to help us move forward. We settled into a pyramid that frankly has worked extraordinarily well for this industry. We start with the materials, the equipment, the fabrication, and move up the stack. It’s hard to disrupt that, because it has been so wonderfully successful. I can’t imagine another industry that has evolved this far, this fast, in such a short period of time. But I believe, especially in the context of AI, it’s not going to get us to the next plateau we need to get to. There are a lot of very human challenges in breaking down those paradigms and coming up with a completely different way to work on this problem. And you cannot minimize the technical challenges of inventing new ways to manipulate materials at the atomic level. A simple example is a different kind of switch. I can draw one by the end of the day in PowerPoint, but actually making it is going to take thousands and thousands of human-years of blood, sweat and tears.

Faggin: Living systems are the example that we need to learn from. How do they process information? Living systems are exquisite information processing systems. These are not classical systems. They are systems of a kind we are still trying to figure out. So to me, the next step is not going to come from silicon. It’s moving from our way of understanding systems to an organism, a living thing where material flows in and out of a cell. Two days later, the atoms and molecules of that cell are not the atoms and molecules that were there. Is that what happens in a computer? No. The atoms and molecules of a computer are the ones it was built with. You use that as a scaffold and run signals over it. Life doesn’t work that way. Why are we smart? Because we are based on the stuff of life, and that’s the next frontier. AI forces you to understand the difference between a living system and a mechanical system. We have convinced ourselves that our minds are like computers. That’s not even close. We have emotions. Where are the emotions in computers? We have thoughts. We have sensations. When I smell something, I don’t have a simulator to tell me it’s a rose. I’m actually smelling the rose. It is a sensation. That’s what consciousness does. I urge you to look beyond the way we do things. The real stuff is living systems.

Brewer: I agree, but we’ve got to take it in stages, and I hope AI is in that first step. I’m not sure about duplicating biological systems in the near future. But AI could be that first step, and if we can make it, we should be pretty excited. Today we process one, two, or three pieces of data at a time, while our brain processes thousands of pieces of data and very quickly comes to a conclusion. AI is just starting on that pathway.

Mayberry: The more ignorant you are, the more sure you are of your opinions. And as I learn more about AI, I learn the bounds of my ignorance more and more. The original question was, ‘How are you going to solve all these problems?’ I admit I don’t know. But I’m starting to know what I don’t know, and therefore we can go and work on those pieces. If you have a blind spot and don’t realize it, you’ll never work on those particular things. So as we put together our roadmap, we’ll delineate the boxes—what is well understood, what’s in between, and what are the things that we can begin working on that are not well understood?

 
