Betting Big On Discontinuity

Mentor’s CEO looks at the impact of AI and machine learning, what’s after Moore’s Law, and the surge in EDA and semiconductors.

Wally Rhines, president and CEO of Mentor, a Siemens Business, sat down with Semiconductor Engineering to talk about the booming chip industry, what’s driving it, how long it will last and what changes are ahead in EDA and chip architectures. What follows are excerpts of that conversation.

SE: The EDA and semiconductor industries are doing well right now. What’s driving that growth?

Rhines: It really is an amazing period. If you look at the semiconductor industry, it has grown about 3% per year since around the turn of the century. In 2015 and 2016, it grew less than 0.5%. And yet, in 2017 all of a sudden it grew 22%. That’s a big jump. And it looks like a big growth year in 2018. All of that is passing through EDA. On the semiconductor side, several things came together. Memory shortages drove memory prices to more than 2.5 times what they were 18 months ago. So while the unit volume for memory did grow, pricing showed a bigger increase. But the interesting part is the growth that occurred because of the whole dynamic of what’s being designed and what it’s being used for. There is a wealth of new startups doing new processors for AI. That’s helping the EDA industry. There’s an automotive boom for driverless and electric cars, and all these companies are doing electronic design. In addition, the automotive industry is buying more semiconductors. And what helps the semiconductor industry passes through to EDA.

Photo credit: Paul Cohen/ESD Alliance

SE: The nature of the companies is shifting, though. In the past, we were dealing with chip companies. Now we’re dealing with system companies. Is this just the beginning of what will be more chip companies, or is it a shift that’s being driven by more complexity at the system level?

Rhines: We’re going through a wave. We do this about every 20 years. In the 1980s, every computer company started building a wafer fab because they thought they could use that to differentiate. Then it went back the other way. This time it’s a little different. The information technology world is very much focused on information collection, processing and optimization for a whole variety of things. And in many cases, they want to design their own chips. The general system companies have found they can get differentiation and reduce costs by doing their own chips. The Apples of the world have gone from negligible foundry purchases 10 years ago to the point where 13% of total foundry purchases now go to systems companies. They have become designers and users of semiconductors. That’s a big transition. The sales of foundries to fabless semiconductor companies have stayed at a fairly flat 70%.

SE: Three years ago the big story was that the number of EDA and chip companies was shrinking. Has that changed? Are we going to see a period of more startups as well as consolidation?

Rhines: The startup piece is the most amazing statistic. After declining for most of the last 15 years, the amount of money in venture-funded fabless semiconductor startups in 2017 was $900 million. In the fourth quarter of 2017, it was almost $500 million. If you look at what they were designing, more than half were various AI processors or blockchain processors or new processor architectures. There will be a shakeout over time, of course, but it may not shake out for quite a while because most of these are for specialized applications. The same is true on the systems company side. I don’t see a time when Apple will go back to buying generic application processors or even turning over the design process to ASIC design companies.

SE: One acquisition that got a lot of attention was Siemens’ acquisition of Mentor Graphics. How has that gone?

Rhines: You’d think when the announcement was made in November 2016 that customers would worry about what’s going to happen and how the strategy would change. That year turned out to be a record year. We’ve completed 2017 and it was even bigger. It was an all-time record, and most of our businesses set all-time records. When Siemens announced the acquisition, they said they were doing it to invest. They have been true to their word. The investment in R&D has caused us to grow our headcount from 15% to 35%. Our turnover has been very close to the historical rate. But our acquisitions have accelerated. We’ve done more acquisitions since we’ve been part of Siemens than we have in any year since 2002, and three out of four were IC design. We acquired Austemper, Sarokal for 5G test, Solido for IC simulation and machine learning, and Infolytica for system-oriented modeling. It’s great to be part of an organization that can afford to do this. We had a long history of losing out in bidding wars over acquisitions. We don’t have to do that anymore. Where we see opportunity, we can now go after it.

SE: Now that you’ve got a fatter checkbook, what do you do that you couldn’t do before? Does this allow you to go deeper into the system world?

Rhines: We’ve talked a lot about chips, but a significant portion of Mentor’s revenue has come from system design as well. That goes back to the start of the EDA industry in the early 1980s. The Daisy-Mentor-Valid wars were more about system companies than IC companies. The system companies—particularly military/aerospace, defense and automotive—made the standardization decisions. They wanted a common infrastructure, common tools and common libraries. That has always been about a third of our revenue. It includes all types of system design: wiring, computational fluid dynamics, thermal analysis and printed circuit boards. What’s new today is the boom in the automotive industry and all of its tentacles into the design of electronics for complex systems. There is a need for systems tools in addition to IC design tools, and there is a need for system design tools that can be well integrated with the rest of the system. You can’t design the wiring for a car in two dimensions. You have to have the mechanical design data because it’s a three-dimensional problem. You have to know the wire bundle will fit through the hole in the door. Wiring is an extremely complex place-and-route problem with thousands of constraints.

SE: The engine of this is the semiconductor, but at a time when we need more processing power to handle more data, scaling is slowing down. There aren’t the same kinds of performance improvements at each new node, and the gains shrink further at 5nm and 3nm. How do you see that changing?

Rhines: As Gordon Moore said, Moore’s Law is an exponential and no exponential is forever. We’ve known that for a long time. We just keep being able to execute on that. But Moore’s Law is driven by feature sizes and wafer diameters. That’s where most of the cost reduction has come from historically. Going forward, it will be other variables that allow us to go down the curve. The cost per transistor is on a learning curve, and that will be here forever. NAND flash is growing to 128 layers vertically with minimal reduction in feature sizes. That reduces the cost per transistor. All of that will continue, and we will continue down that learning curve.
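
For readers who want the learning-curve idea in numbers, here is a toy sketch of Wright’s-law-style cost scaling, where cost per transistor falls by a fixed fraction each time cumulative volume doubles. The 30% rate per doubling and the normalized starting cost are assumed, illustrative figures, not ones Rhines cites.

```cpp
// learning_curve.cpp -- toy illustration of a cost learning curve (Wright's law):
// unit cost falls by a fixed fraction each time cumulative volume doubles.
// The 30% learning rate is an assumed figure for illustration only.
#include <cmath>
#include <iostream>

int main() {
    const double initial_cost = 1.0;    // normalized cost per transistor at volume 1
    const double learning_rate = 0.30;  // assumed: cost drops 30% per volume doubling
    // cost(n) = initial_cost * n^b, where 2^b = (1 - learning_rate)
    const double b = std::log2(1.0 - learning_rate);

    for (int doublings = 0; doublings <= 5; ++doublings) {
        const double volume = std::pow(2.0, doublings);
        const double cost = initial_cost * std::pow(volume, b);
        std::cout << "cumulative volume x" << volume
                  << " -> cost per transistor " << cost << '\n';
    }
    return 0;
}
```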

SE: What does that mean for chip architectures?

Rhines: It brings in 3D stacking and all sorts of different ways to provide more functionality for a lower cost. That will continue, as well.

SE: So what happens with neuromorphic computing and other architectures like quantum computing that we never seriously considered before?

Rhines: That’s the exciting part. The von Neumann architecture, which has been with us for most of the history of the computer industry, is very good at computation. But there are things that it is not very efficient at, like pattern recognition. And pattern recognition can be graphical imaging, but it’s also sound and smell. Your brain does pattern recognition very efficiently using not very many cycles. If a friend calls you on the phone, you can tell in the first few milliseconds who it is. Computers are nowhere near that efficient. You have to do things differently. One way is to do more per cycle. In power efficiency, a computer is somewhere between six and nine orders of magnitude less energy efficient than the human brain for the kinds of computations that the brain does well. And so it means new architectures will be accepted. They will have machine learning in most cases, because your brain has the ability to learn from experience. I’ve visited 20 or more companies doing their own special-purpose AI processors. You’ll see them increasingly in specific applications, and they will complement the traditional von Neumann architecture. Neuromorphic computing will become mainstream. It’s a big piece of the next step in making computation more efficient, reducing cost, and doing things in both mobile and connected environments. Today we have to go to a big server farm to do that.

SE: How do we facilitate what amounts to a mass customization approach to design?

Rhines: Customization is very good for us because it means lots of custom designs, and custom design tools to design and verify them. Makimoto used to talk about waves of standardization and customization. We seem to be going into another customization wave. These things last 10 to 15 years. Then the efficiencies of standardization take over. For EDA this is what we do, and it’s going to grow the customer base. Every year the number of design starts is increasing. This isn’t just big companies. Not all designs are done at 7nm. There are designs being done throughout a range of nodes, and there are ways people are getting money together to do those 7nm designs.

SE: Will EDA tools we have today suffice, or will there be some discontinuities coming up?

Rhines: This is one of our major discontinuities. These only occur every few decades, and it’s a change in the level of abstraction. I remember when we went from schematic capture to RTL. That was difficult. The new college graduates and less experienced engineers jumped on HDL overnight and started doing their designs. The people in their 30s with 10 to 15 years of experience got it, but they didn’t get it as quickly. But for those over 40, a lot of them never made the transition. We’ve been a long time at that level of abstraction. Basically everything today is done at RTL. As complexity increases, you have to add abstractions. We will go to high-level synthesis. The datapaths and all of these AI algorithms will be implemented using high-level design. There have been high-level synthesis products in the market since 1993, but what’s changed is the whole infrastructure is now there. You can do the design, the verification—everything at the level of C and C++. Now the question is who will use it, and it appears to be just like the last shift. It’s the new college graduates, and the Googles and Facebooks and Amazons. Those people have already adopted it. The core functionality for data processing is being written in C and synthesized and verified at a high level. They’re looking at alternatives and power dissipation, and they’re doing tradeoffs of power, performance and area—all at the next level of abstraction. You’ll want a mix, so you’ll use the appropriate abstraction for what you’re doing. The control logic will tend to stay where it is. But RTL designers are finding that by going to C they can do a lot of the tradeoffs in the design early. This will be the next abstraction wave.
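
To make the high-level synthesis point concrete, here is a minimal sketch of the kind of fixed-bound, integer-arithmetic C++ datapath a design team might hand to an HLS tool and then explore power/performance/area tradeoffs on. The filter, function name, tap count and coefficients are illustrative assumptions, not anything Rhines describes.

```cpp
// fir8.cpp -- a minimal sketch of the kind of fixed-bound, integer C++ datapath
// a team might hand to a high-level synthesis tool. The filter, tap count and
// coefficients are illustrative assumptions, not from the interview.
#include <array>
#include <cstdint>
#include <iostream>

constexpr int TAPS = 8;

// 8-tap FIR filter in a synthesizable style: static delay line, fixed loop
// bounds, integer arithmetic. An HLS tool could unroll or pipeline these loops
// while exploring power/performance/area tradeoffs.
int32_t fir8(int16_t sample, const std::array<int16_t, TAPS>& coeff) {
    static std::array<int16_t, TAPS> shift_reg{};  // delay line
    for (int i = TAPS - 1; i > 0; --i) {           // shift in the new sample
        shift_reg[i] = shift_reg[i - 1];
    }
    shift_reg[0] = sample;

    int32_t acc = 0;                               // multiply-accumulate
    for (int i = 0; i < TAPS; ++i) {
        acc += static_cast<int32_t>(shift_reg[i]) * coeff[i];
    }
    return acc;
}

int main() {
    const std::array<int16_t, TAPS> coeff = {1, 2, 3, 4, 4, 3, 2, 1};
    for (int s : {100, 200, 300, 400}) {
        std::cout << "out = " << fir8(static_cast<int16_t>(s), coeff) << '\n';
    }
    return 0;
}
```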

SE: Why has this taken so long in hardware? In software, there are languages like Python that raise the abstraction level. Is hardware just that much more complicated?

Rhines: There are two reasons. One is that the whole infrastructure has to move, and for design that infrastructure is quite large. You have IP, libraries, and experience. Experienced people don’t want to start over. You can shift from one computer language to another to write software more easily than you can get a design team to move its whole infrastructure, retrain its people, and move its experience base. The other part is that nobody likes change. They’re busy enough without learning a whole new methodology. In the EDA world, the thinking is, ‘If I can just do one more design with the old methodology, I’ll do it.’ You eventually get to the point where the complexity grows too much, or the power problem is too great, and you have to make the change. But abstraction changes are more difficult than adding power analysis to your flow or analyzing for electromigration.

SE: How will AI and machine learning be used inside of EDA?

Rhines: There are dozens, maybe hundreds of projects around the industry that use machine learning to improve the performance of EDA tools. We have them all over the company. Solido uses machine learning so that you get 99% of the information from a dramatically smaller set of simulations. So you don’t need millions of simulations to characterize a particular library or piece of IP. The savings come from the algorithms improving themselves. Calibre is looking at large databases over time. For resolution enhancement, the entire rule set is applied to a design, and then you look at the optimal way to develop the actual code that sets the resolution. All the EDA companies are working on this.
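
The Solido description comes down to a general idea: run a small sample of expensive simulations, fit a cheap model to them, and predict the rest instead of simulating everything. Here is a deliberately tiny sketch of that idea using an assumed linear surrogate; it is not Solido’s actual algorithm, and the simulate_delay_ps function and voltage points are invented for illustration.

```cpp
// surrogate.cpp -- a tiny sketch of the general idea behind ML-assisted
// characterization: run a small sample of expensive simulations, fit a cheap
// model, predict the rest. Not Solido's algorithm; simulate_delay_ps and all
// numbers are invented for illustration.
#include <iostream>
#include <vector>

// Stand-in for an expensive circuit simulation: cell delay vs. supply voltage.
double simulate_delay_ps(double vdd) {
    return 120.0 / vdd + 15.0;  // pretend this call takes minutes of SPICE time
}

int main() {
    // Step 1: run only a handful of "simulations" at sampled supply voltages.
    const std::vector<double> vdd_samples = {0.7, 0.8, 0.9, 1.0};
    std::vector<double> x, y;  // model delay as linear in x = 1/vdd
    for (double v : vdd_samples) {
        x.push_back(1.0 / v);
        y.push_back(simulate_delay_ps(v));
    }

    // Step 2: least-squares fit of y = a*x + b to the sampled points.
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    const double n = static_cast<double>(x.size());
    for (size_t i = 0; i < x.size(); ++i) {
        sx += x[i]; sy += y[i]; sxx += x[i] * x[i]; sxy += x[i] * y[i];
    }
    const double a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    const double b = (sy - a * sx) / n;

    // Step 3: predict delays at unsimulated voltages from the cheap model.
    for (double v : {0.72, 0.85, 0.95}) {
        std::cout << "vdd=" << v << "  predicted delay=" << a / v + b << " ps\n";
    }
    return 0;
}
```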

SE: With AI and machine learning, you’re getting a distribution rather than a fixed answer. If you’re working within a nanometer or two, is that good enough? And what happens when the device actually learns something unique?

Rhines: If I verify this product and I sell it, and it changes its behavior, it’s no longer verified. What do we do about that? What does a car manufacturer do? Those problems have to be addressed, and they are being addressed. We introduced a product called Tessent Mission Mode that lets you design into your integrated circuit the ability to dynamically test any subsystem that is JTAG compatible. So when your chip is not doing anything, it tests itself against a set of criteria that the system manufacturer has established, so that it has dynamic self-test throughout its lifetime. You’ll see the same thing evolving over time as you get more and more neural networks and apply machine learning to chips, boards and systems. Then you’ll get more ways to verify that they haven’t modified themselves into a space that could be dangerous or non-functional.


