Executive Insight: Aart de Geus

Synopsys’ chairman looks at the biggest changes in the industry and how they will affect technology for the foreseeable future.


Aart de Geus, chairman and co-CEO of Synopsys, sat down with Semiconductor Engineering to discuss Moore’s Law, the IoT, inflection points and how chip design will evolve in coming years.

SE: We are in the middle of possibly one of the biggest transition points we’ve ever seen in this industry. How do you envision things shaking out?

De Geus: There is no question that there is an enormous inflection. Or is it a continuation of the same exponential evolution of technology and its impact on mankind? I always like to look at the semiconductor industry as so far only having two major phases. The first one is computation, and the killer app was the PC. After that it continued with the internet, with servers, and with the cloud. Put that all together as computation. And with big data, computation will continue massively. But it has fundamentally flattened in terms of its economics. The second one, after a fairly major semiconductor break in terms of growth rates, was mobility. The killer app clearly was the smartphone. The smartphone was the platform for enormous numbers of applications. Both of these phases have had big growth, amazing pushes on technology, and have been the platform for many other people to have impact, mostly the software world. These phases, while not ending, have flattened somewhat.

You opened with technology changes, but a much more interesting change is how the relationship between the hardware platform and the software is changing toward an opportunity—for the first time—to truly look at digital intelligence as now becoming practical, possible, cost-effective, and clearly as big a transformation as we've seen in the first two phases. The whole computational age brought productivity change for mankind. The mobility age brought globalization to communication, and access to data and information in a way that transforms everything. Digital intelligence will transform everything again. Why does it not feel like it is up to the right yet? Because multiple things are coming together at the same time, but are not here yet in large volumes from a semiconductor perspective.

SE: We characterize this as the internet of things, which is where you are going with some of this, right? It’s really a number of very discrete markets that are developing at their own pace.

De Geus: I actually don't like to characterize it as the IoT. While the IoT is unbelievably promising in terms of connecting everything in the world to the internet in some way, it is ridiculously small silicon. Now, it does create an enormous amount of data. The question is, what do you do with it? There are two ways to do something with it. One way is with big data, so the computational angle, the cloud. The other way is local utilization of the data and some degree of smarts. I purposely don't use the term 'artificial intelligence' because that relies on, 'Well, we are going to be like the human.' There are some pieces that are like the human, but there are some pieces that are not at all like the human. Take the simple example that humans lack X-ray vision. The fact is that cars do have infrared vision. The digital intelligence will be broader and narrower than the human, but very fundamental. The automotive industry is very interesting because it is a slow-moving industry that is now suddenly running at turbo speed in adopting smarts in a fashion that is unbelievable. Cars are actually already driving around with fewer accidents than humans. That is pretty cool.

SE: Backing up a bit, when you are talking about IoT, the initial version was dumb little devices scattered around. What is showing up as reality is much more purpose-built, much more sophisticated devices for processing at the edge. It’s just too expensive to send this data, right?

De Geus: I think that is the positive interpretation. The opposite of dumb means more silicon, faster silicon and also low-power silicon, as it still needs to be close to the actual sensor device. And the minute there are a couple of killer apps, this thing will be moving very fast. It is also clear we are looking at a few more years before the economics are felt in the semiconductor industry.

SE: We are collecting and managing huge quantities of data that we’ve never managed before. Is that one of the big changes that is coming?

De Geus: This is where we need to think exponentials. It is true that we’ve never seen so much data, except that same statement has been true ever since the beginning of Moore’s Law. The essence of Moore’s Law is exponential change. As a human we can understand it, but we don’t feel it. Everything we feel tends to be more linear. If you suddenly see twice as much data or 10 times as much data, that feels like a lot. Maybe the astounding part is that, notwithstanding many predictions, another 10X is still possible. I’d argue it is still possible a few more times.
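As a back-of-the-envelope illustration of that exponential point (the doubling cadence below is an assumption for the arithmetic, not a figure from the interview): another 10X is only about 3.3 doublings away, which is why exponential growth keeps outrunning linear intuition.

```python
import math

# Moore's-Law-style growth: capacity doubles every `cadence_years`.
# How far away is another 10X? (Illustrative numbers only.)
cadence_years = 2.0                      # assumed doubling period
doublings_for_10x = math.log2(10)        # ~3.32 doublings
years_for_10x = doublings_for_10x * cadence_years

print(f"10X requires {doublings_for_10x:.2f} doublings "
      f"(~{years_for_10x:.1f} years at a {cadence_years}-year cadence)")

# Linear intuition vs. exponential reality over 20 years:
for year in range(0, 21, 4):
    linear = 1 + year * 0.5                       # a 'feels-linear' baseline
    exponential = 2 ** (year / cadence_years)     # the Moore's Law curve
    print(f"year {year:2d}: linear {linear:5.1f}x   exponential {exponential:8.1f}x")
```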

SE: Let's drill down into Moore's Law. How much life does it have left, in terms of the number of companies that are pushing down to the next nodes, as well as the sheer cost of dealing with physics at those nodes?

De Geus: The end of Moore's Law has been predicted many, many times, and it has found ways to circumvent death in ways that were completely inconceivable just a few years earlier. The first massive breakthrough was when it became possible to do photolithography below the wavelength of light. That had been unimaginable because, with photolithography, the wavelength of light was assumed to set the limit. Yet, we do this massively. EUV is just a reprieve from multi-patterning, but I would argue that seven to nine years ago it was completely 'proven' that finFETs would never work. They were too expensive. But here we are at 16/14/10nm, with a massive push on 7 and development on 5 in the works. From a production point of view, I have no doubt we have another 10 years that we can already see today.

SE: The time between nodes seems to be slowing a bit, though. There are more things that you have to do with each node that you didn't have to deal with in the past.

De Geus: Let’s not forget that the definition of a node is a marketing event. If you say node and you don’t say yield, you’ve stated half of the description. There is no question that the complexity of what is being done, and therefore the investment around that, has grown with the complexity or smaller size of the node.

SE: People are doing a lot more with established nodes, as well. Even at 28nm there are multiple flavors of that node, in terms of process, new materials coming in, FD-SOI, new ways of architecting some of the packaging. There are lots of different pieces and ways to go. It's not just a straight linear path that we've been following.

De Geus: That is the interesting part of ‘techonomics,’ which is that any technology has its own economic cycle. A lot of people started to bank on 28nm, partially out of reasonable fear that finFET would be pretty wobbly for a while, and pushed on 28nm in many different dimensions—all the way to having specialty areas that may apply to automotive and other emerging markets. Squeezing out the maximum of a node is a good thing. Driving the state of the art is a good thing. The balance between the two is economics.

SE: Hardware engineers have to deal with software a lot more these days. Software now has to be done at the time the chip is done, and is probably developed internally. The new people coming into the market are learning different things, like Python and Java, and it's not the same way of looking at a chip. Does the software now drive the hardware, or will it in the future?

De Geus: That's a super interesting question. In the first generations, once you had basic generic hardware, a lot of people figured out how to do something in software. There's a lot of folks pushing on advanced nodes in remarkable ways as we speak. As a side note, it's remarkable how the automotive industry has discovered that finFETs are viable transistors. They are putting a lot of effort into a variety of data processors, all aimed at driving the car better. The push will come from both sides, but the intersection of hardware and software is going to be at a premium in terms of how well you do it and how well you assure it. The notion of yield sort of applies to software, too. Except yield is not volume-dependent. It's directly related to the number of lines of code that you have.
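That yield analogy maps onto the classic defect-density model, in which expected latent bugs scale with code size rather than with shipped volume. A minimal sketch of that idea; the defect rates and code sizes below are illustrative assumptions, not industry data:

```python
# Toy defect-density model: expected latent bugs scale with lines of code,
# not with how many units ship -- de Geus's point that software 'yield'
# is a function of code size. All rates here are illustrative assumptions.

def expected_defects(kloc: float, defects_per_kloc: float = 0.5) -> float:
    """Expected latent defects for a codebase of `kloc` thousand lines."""
    return kloc * defects_per_kloc

# Hypothetical codebase sizes, from a small controller up to a
# car-scale software stack (the figures are made up for illustration).
for kloc in (100, 10_000, 100_000):
    print(f"{kloc:>7} KLOC -> ~{expected_defects(kloc):,.0f} latent defects "
          f"at an assumed 0.5 defects/KLOC")
```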

SE: What about hardware and software co-design? Are we actually getting there?

De Geus: We are absolutely getting there. We can see that in our verification business, because verification capabilities in the last three to four years have made unbelievable steps forward, mostly because many different techniques that were at different levels of abstraction have come together in a much more integrated, platform-based way. For example, you can compile directly into the simulator or the emulator, or into the virtual-prototyping environment, combine those, and now bring up software on hardware that you don't have. This is perfect, as it allows the software folks to move forward while the hardware folks are slaving over their part, and hopefully everything gets ready at the same time. The ability to mimic the hardware/software system has advanced by leaps and bounds. At the same time, the amount of code has grown non-linearly, and of course with that comes all the quality control challenges.
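As a rough illustration of bringing up software on hardware that doesn't exist yet, here is a deliberately simplified virtual-prototype sketch. The peripheral, register names, and bit layout are all hypothetical, and real virtual prototypes model buses and timing in far more detail:

```python
# Minimal sketch of virtual prototyping: firmware-style code runs against
# a Python model of a not-yet-fabricated peripheral. The device, its
# registers, and the status bit are hypothetical.

class VirtualUart:
    """A toy register-level model of a UART peripheral."""
    def __init__(self):
        self.regs = {"CTRL": 0, "STATUS": 0x1, "TXDATA": 0}  # STATUS bit0 = TX ready
        self.transmitted = []

    def write(self, reg: str, value: int) -> None:
        self.regs[reg] = value
        if reg == "TXDATA":
            self.transmitted.append(chr(value))   # model the hardware side effect

    def read(self, reg: str) -> int:
        return self.regs[reg]

def firmware_puts(dev: VirtualUart, text: str) -> None:
    """'Driver' code written exactly as it would be for real silicon."""
    dev.write("CTRL", 1)                          # enable the peripheral
    for ch in text:
        while not dev.read("STATUS") & 0x1:       # poll the TX-ready bit
            pass
        dev.write("TXDATA", ord(ch))

uart = VirtualUart()
firmware_puts(uart, "boot ok\n")
print("".join(uart.transmitted), end="")          # firmware ran before tape-out
```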

SE: Quality control is one of the big issues here. It's not as much of a problem with smartphones. But it's a whole different problem when we are dealing with automotive embedded code, which has to last for 10 to 15 years.

De Geus: You are putting your finger on something that is very important, which is that many of these systems are close to the personal human destiny. A car is a very different device than a cell phone in terms of the sheer number of regulations that apply to making it viable. The automotive industry is a perfect example of an industry that for 30-plus years has legislated itself in terms of the number of standards that you had to meet. Reliability is an important one, but it is one of the later ones compared to first making sure you have mechanisms to find and deal with faults in the chips. You are familiar with ISO 26262 and a zillion others that we have to live up to, and it is only recently that this has suddenly yielded a number of alerts in the software world. We should also add security. A Jeep was hacked last summer despite the fact that automotive has been unbelievably safe from a technology point of view. Security became the weak link in safety. That is a continuum of engineering. Therefore, these same techniques need to apply.

SE: Some of those faults can be hacking vulnerabilities in the software. How can we determine whether it is hardware or software that is causing the problem?

De Geus: Everything we do in high tech has long been a team sport. Great companies try to find ways that don't result in finger pointing as the mechanism to diagnose failures or do something about them. Having said that, there is more and more software. The ratio of software to hardware engineers in semiconductor companies is easily three to one today in favor of software engineers. The semiconductor folks always say, 'It is hard to get money for it.' The fact is they are selling functionality. Therefore, you own some of the responsibility to ensure you deliver against the spec that you advertised. It is also true that, especially with security, there is another angle. In all the other situations, where you are fighting with physics and good design, at least you are in control of your destiny. With hackers, you have proactive forces that push back and try to find holes. They use the holes, in some cases, for really bad intent. That is a dimension of an ongoing battle that is not going away quickly and will demand a high degree of being on the ball to continually deal with it.

SE: Are the tools that exist today capable of dealing with these problems? Is it a methodology issue or a question of new tooling?

De Geus: Yes, yes, and yes. These capabilities can find an enormous number of things. Methodology is mostly the part that orchestrates human discipline. Ultimately, a good methodology requires less discipline because it automates it, but discipline is needed to put methodology in place. This is evolving rapidly against a backdrop where intrusions, and the creativity applied to breaking things, can have very negative impact.

SE: Synopsys bought a couple of high-profile software companies. Were there any surprises?

De Geus: We are still at the early stage of structuring those methodologies for software development. In some ways it's strange, in some ways not, because it can be explained back to economics. In the early days of Synopsys, Carnegie Mellon had ratings of good software development, and that should have continued for 30 years. Economics pushed that back. If you look at the difference between checking out hardware versus software, we all use the term 'sign-off' with a very clear understanding that it is the moment that we give it to a foundry. In software, if there is an issue, you ship out a patch. If the patch creates a problem, you ship out another patch. That's why we are getting patches every other day. If it is not a life-critical thing, you may live with it, as the economics are beneficial. That is now completely changing because it is starting to touch more life-essential devices. The other thing is that it is starting to touch things that touch things that touch life-essential devices. The notion of the internet of everything says that things are no longer in isolation. I used to make the joke a number of years ago that people who have installed these systems at home need to watch out, because sooner or later the perp would come in through the toaster. It was a joke. Now you read about the Target break-in, where the perp came in through the air-conditioning system. With the Jeep break-in, the perp came in through the radio, an innocuous device.

SE: When you are dealing with verification of software, you are dealing with coverage metrics. But there are a couple of key differences. First, you have more users. And second, they change over time as capabilities change.

De Geus: Yes, and actually that is going to be one of the most difficult problems. For argument's sake, even if we have a methodology that truly should capture all security issues, after a product is shipped a new hack may be discovered. We have a capability today that allows us to help our customers find the fingerprint of open source software in binary code. There is a registry of open source software with the vulnerabilities, which is updated all the time. If you are diligent and ship your product, and then a new vulnerability is discovered, we can inform you. Do you want to know? Do you want your customer to know? These are moving targets. That will bring about a set of interesting challenges of how we deal with it.
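Schematically, the fingerprint-and-registry workflow he describes could be sketched as below. The signatures, component names, and vulnerability identifiers are invented for illustration; this is not the actual Synopsys tooling:

```python
# Schematic of binary fingerprinting against a vulnerability registry.
# Signatures, components, and vulnerability IDs are invented for illustration.

KNOWN_FINGERPRINTS = {
    b"zlib deflate 1.2.8": "zlib-1.2.8",
    b"OpenSSL 1.0.1e":     "openssl-1.0.1e",
}

VULNERABILITY_REGISTRY = {               # would be refreshed continuously
    "openssl-1.0.1e": ["CVE-XXXX-FAKE1 (hypothetical)"],
    "zlib-1.2.8":     [],
}

def scan_binary(blob: bytes) -> dict:
    """Report embedded open-source components and their known issues."""
    findings = {}
    for signature, component in KNOWN_FINGERPRINTS.items():
        if signature in blob:            # the 'fingerprint' match
            findings[component] = VULNERABILITY_REGISTRY.get(component, [])
    return findings

firmware_image = b"\x7fELF...OpenSSL 1.0.1e...padding..."
for component, issues in scan_binary(firmware_image).items():
    status = ", ".join(issues) if issues else "no known issues (today)"
    print(f"{component}: {status}")
```

Re-running the scan whenever the registry updates is what turns a one-time sign-off into the moving target he describes.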

SE: Like a software download to your pacemaker?

De Geus: A pacemaker is a good example. I remember a large company in my earlier days that had acquired a small pacemaker company. They divested it because the large company could not take the insurance risk of the pacemaker killing someone. We have the privilege of being in the middle of an industry that changes the very nature of mankind and its opportunity space. Let’s embrace the challenges, ethical and otherwise, that come with it. That doesn’t mean we have the answers, but this is among the brightest audiences in the world, so we should deal with it.

SE: We are now looking at markets that really do start tracking reliability over time. As an industry, how do we evaluate whether a fully integrated SoC in a life-critical device will last for 18 years?

De Geus: This is one of those areas where IoT or sensor devices bring great opportunities. For example, you can analyze the sound of a mechanical device and understand that if there is a new frequency, you know it is going to break within a week. That’s preventive maintenance and it’s a form of reliability. And let’s say there is a device you have on your body, like a next-gen Fitbit-type device, that can measure body events and it predicts you are going to have a heart attack or stroke. How much would you pay to predict it a day, a week, or a year ahead of time?
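The preventive-maintenance example amounts to baseline spectral comparison: flag a frequency that is present today but absent from the healthy signature. A minimal numpy sketch under simplified assumptions (synthetic signals, hand-picked thresholds):

```python
import numpy as np

# Preventive maintenance via spectral comparison: flag frequencies that
# appear in today's recording but not in the healthy baseline.
# Signals, the 237 Hz fault tone, and the thresholds are illustrative.

fs = 1000                                  # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)

healthy = np.sin(2 * np.pi * 60 * t)                    # normal 60 Hz hum
failing = healthy + 0.4 * np.sin(2 * np.pi * 237 * t)   # new 237 Hz component

def spectrum(x):
    """Normalized magnitude spectrum of a real signal."""
    return np.abs(np.fft.rfft(x)) / len(x)

freqs = np.fft.rfftfreq(len(t), 1 / fs)
baseline, current = spectrum(healthy), spectrum(failing)

# Flag components that are strong now but absent from the baseline.
new_peaks = freqs[(current > 0.05) & (baseline < 0.01)]
print("new frequencies (Hz):", new_peaks)   # -> [237.], an early-failure cue
```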

SE: Would they pay more for the EDA tools to prevent it?

De Geus: We are just the enabler. It doesn't matter, as we are a fabulous industry that has the privilege of driving half of Moore's Law, and we should give full credit to the technology and manufacturing. We've been able to harness Moore's Law toward creating devices that our customers, and their customers, have put functionality around and changed the destiny of mankind, hopefully for the right things. If we can be players in that as individuals, that is its own reward.

SE: You're bringing up an interesting point. How much more are people willing to pay for a service, and for gradations of service? If you don't own your car and you are calling up a car to pick you up, would you pay more if you can get it in a minute versus 10 minutes from now? If you want an extra clean car versus a car where someone may have put gum on the seat, would you pay more for that?

De Geus: That’s a judgment of value. My example was more about destiny. If your life is coming to an end, you have more time to do something about it. That extra minute could mean one more phone call. One day means making it to a hospital. One year means changing your lifestyle. When you can measure physics, in the deep sense of the word, in every which way, ultimately we will be able to model everything. How well we can model it is the question. With massive amounts of computation and measurement, we can model weather. We can model games. This is a combination of understanding and grabbing physics data and doing something more useful with it. This can mean local and mobile smarts all the way to big data analysis.

SE: This is taking what we have learned in EDA and moving it to the big-system picture, right? It is no longer just the chip. It now looks at how the chip applies to and interacts with the world.

De Geus: Absolutely. In the intersection of hardware and software, the center of gravity has already moved up. The center of economics has moved way up. The fact is that the Facebooks and Googles are worth way more than the system companies underneath them, and the chip companies. That is called leverage. Overall, the center of gravity for providing the next massive wave of impact and application capabilities lies just a few years in front of us.
