Executive Insight: Aart de Geus

A very candid conversation with Synopsys’ chairman about automotive technology, machine learning, software vs. hardware, and the future of EDA.


Aart de Geus, chairman and co-CEO of Synopsys, sat down with Semiconductor Engineering to discuss machine learning and big data, the race toward autonomous vehicles, systems vs. chips, software vs. hardware, and the future of EDA. What follows are excerpts of that conversation.

SE: The whole tech world is buzzing over data and how it gets used in areas such as machine learning and AI. Where is the upside for the semiconductor industry?

de Geus: The opportunity space here is interesting because you can see the race for new algorithms, and for new architectures to support the new algorithms. If the potential of those is broad enough, there’s no reason people will not create dedicated chips for a specific application. That, of course, is great for our industry because it means a lot of designs need optimizing for new characteristics again.


Photo credit: Paul Cohen, ESD Alliance

SE: So who wins and who gets left behind?

de Geus: ‘Left behind’ is only relevant if you’re not seeing the natural evolution. This industry will not be left behind. We’re in the midst of this and understand this because it’s our bread and butter. Many people in our industry have seen this coming for a long time. The challenge is always that you can see strategy and directions very easily. Pinpointing the time when something matters is actually quite difficult.

SE: Such as the right year?

de Geus: Yes, or sometimes the right decade. The end application is where the money comes from. A lot of industries have figured out that their business, product and business models could be impacted by a different utilization of the data that is somehow attached to their devices or their business models. If you can harness that in a way that finds shortcuts and efficiencies, or just completely different ways of going about business, that is high impact.

SE: We’re starting to see that with companies like Amazon, IBM, Google, where they are developing their own architectures specifically for harnessing and processing that data very quickly.

de Geus: All the people in the processing world are listening very carefully to what will be the needs, or trying to predict what the needs are. Or even one step further, they’re trying to get into the path of that data so they’re closer to where the money ultimately is made.

SE: Where does EDA fit into this picture?

de Geus: If you think traditional EDA, you would say that EDA has been the other half of Moore’s Law. Also, top credit goes to the technology and manufacturing people because they have driven down physics in ways which, over and over, have been described as impossible. Every seven-year cycle we hear Moore’s Law is dead, but it is still very much alive—except that it’s morphed. It’s still about exponential delivery of functionality and performance. That has continued, absolutely. And as it continues, our industry has evolved to become more oriented toward the systems and understanding more about the relationship between software, architectures, and implementation, while still driving down into the physics like there is no tomorrow. There’s a reason why a few years ago Synopsys changed its tagline under the logo to ‘From Silicon To Software.’ But it could have also been ‘From Physics To Function.’ It’s a continuum, and the very fact that this continuum now suddenly is clicking in—because the intersection between hardware and software is materially relevant—should be very encouraging. As an industry, we are as well trained as anyone to be part of that.

SE: The semiconductor industry has become very good at shrinking chips and solving all the power issues and physical effects that crop up. What else is required?

de Geus: As we move closer to the application, we also get closer to different segment verticals. Each profession has its own language, its own needs, its own time constant, and so on. You can see a number of system companies becoming very interested in this space because that’s their natural home. You also can see a number of those companies—such as car companies, which make really sophisticated systems—being massively disrupted if the electronics, the machine learning, the data, impacts their business model opportunities. All these people are simultaneously looking at what they should do. Our industry is well placed as a layer of the solution to those questions. Whenever there is a disruption, the first phase of the thinking process is always an opening process, and then you get to a closure. That first phase feels very much like chaos—and what we are talking about now is systemic chaos. Some people already see a great opportunity at the end, a pot of gold at the end of the rainbow. It may just be shiny metal right now, but there will be pots of gold there. That movement has fully started.

SE: This is going on in a lot of industries right now. But when you think about automotive, who owns the car in the future? Is it the car companies, the infotainment system folks, is it Google or Apple, or some service that runs all this and does it differently than what we’ve been doing?

de Geus: It’s interesting to see how the messaging has accelerated really far into broad applications. I met with the mayor of San Jose and other executives who were talking about what we do with traffic, the city, and more. Of course, they already are dreaming about, ‘Should we have autonomous driving lanes so that we can deal with traffic?’ You need the lanes because autonomous cars and human-driven cars don’t mix. It’s like cars and cattle. Human-driven cars are the danger. But all of this forward thinking is already happening, along with Uber and so on. Meanwhile, there’s a lot of technical challenges to actually make sure a car stays on the road.

SE: It sounds as if those technical challenges will be around for a while. For example, how does an autonomous car exit the road if it’s blocked in by two other lanes that have human drivers?

de Geus: The fun part is when two autonomous cars discover the same parking spot at the same time. Now you need psychological conflict-management software. I love the theme of autonomous driving because it’s such a visual representation of the entire high-tech industry. The economics around it are indicative of where we are heading. But many of the benefits of autonomous driving show up long before a car is autonomous. I talked to a key executive from one of the automotive companies. I can’t vouch that what he told me is exactly correct, but he said that for the car insurance industry the biggest cost is not a fatality. It’s when you’re backing out of a parking spot and you have $4,000 worth of damage on both rubber bumpers that are full of electronics and must be replaced. We all see this happening and know that it’s completely avoidable. Those are benefits along the way to autonomous driving that have massive economic impact. The long-term vision is well on its way. But having had the opportunity to drive an autonomous car on the highway, I can say there were moments where it was ‘interesting.’

SE: Define interesting.

de Geus: At one point the test driver innocuously grabbed the steering wheel and thought I hadn’t noticed. But I did notice.

SE: In the past everyone predicted we’d have generic hardware and all the functionality would be in the software. We seem to be shifting back. It’s not that software isn’t relevant. But it’s not necessarily the only thing that’s going to determine system functionality because it takes a performance hit and it’s much less efficient in terms of power. How does that change the dynamics? Do we even have enough hardware engineers?

de Geus: It reminds me of earlier cycles in computing, which were sort of this back and forth between generalization and specialization. Invariably it’s always driven by the same thing. Specialization is if you can get at least a temporary differentiation in something that is relevant, such as performance or low power. Then you try generalization because the economics are leveraged. We are absolutely going through a phase like that right now. It will be specialization and computation for all these algorithms. I don’t think there’s any shortage whatsoever of software engineers—except maybe in the areas that are hot. Hot and cold does change pretty quickly. So there is an enormous amount of excitement about machine learning. It has all the characteristics of a bubble—a lot of investments, some of which will go nowhere. Some of these investments are really an indication of the dynamism of the industry and the phase of research. But to answer your question, in our industry we are mostly software engineers. We now need more and more Ph.D.s who understand the relationship between multiple domains. That’s another way of saying we’ve moved from Moore’s Law scale complexity to systemic complexity. To me that’s super interesting because that leverages the varied skillset that the EDA and IP industry has.

SE: Are the tools that exist today geared toward that kind of systemic complexity?

de Geus: One of the credits that the EDA industry deserves is that we’ve moved from a relatively simple understanding of the chip, as in functionality and area, to then swiftly adding performance, reliability and increasing forms of security. These are all additional dimensions, and there is a notion that systemic complexity already has been in existence in our field for a long time. It’s been very deterministic algorithms—linear thinking—and now systemic complexity is finding a whole new realm. If you have oodles of data coming from all sorts of sources, can you find additional correlations that permit predictive analytics?

SE: Part of that is what’s driving the IoT and IIoT. How do we develop chips for those markets given that we don’t necessarily know how they are going to be connected, where the processing needs to be done—whether it’s done at the sensor level, or the edge of the network, or the cloud. We have not created architectures to run that seamlessly. Can the tools that we have today even understand all the pieces?

de Geus: I’ll be more positive. We know the answers to all of these things. We’re not quite precise about some of it, but we know the answers very well. It’s the same for, ‘Do you need local computation or do you need a mainframe? How much happens in the cloud versus at your desk? What do you do about the display, and so on?’ The same is true for IoT, meaning you can generate an enormous amount of data. The pathway of the data limits how much you pass through to some other computational entity. If you don’t have reliability through your data pathway, then you need local computation. Cars are a good example of this. If you rely on the car computation to be in the cloud, you have no chance of having a safe or secure car. That doesn’t mean there’s not a lot of computation that can be done and provided to a car coming from the car that just passed the corner. Around the corner there is a dog on the street, and that information is theoretically available and could be transferred from the previous car to the next one, or via some other source. There are a lot of people working on that. The mathematics of computation versus transfer of data are well understood. But the rate of change and the number of new opportunities are so high that it is difficult to answer for each individual case. All of these individual cases ultimately will settle on techonomic needs, meaning what is really necessary and what is the most economical way to balance this. Our industries are unbelievably adept at figuring it out very quickly—including making a bunch of mistakes.

SE: One of the big issues we wrestle with all the time these days is security. Everybody has had at least one of their credit cards breached. How do we solve this problem?

de Geus: There’s no easy answer. It’s yet another technical problem, except there is a big difference with this one because it has massive intelligence behind it to make it worse. When we dealt with cross-capacitance for the first time, there weren’t any folks saying, ‘Let’s make it worse.’ At some point in time, we nailed the physical problem. This is different. There are equally good people on the other side, some for criminal reasons, all the way to extremely disruptive people actively working on this. The problem will continue to evolve. What is clear is that remedial action is completely insufficient. We will need to put security in the same place as safety, and we will need to be increasingly correct by construction on everything we already know. We’ve invested now for a few years in the whole notion of quality and security of software. Research was published recently on how much open software there is in old and in existing products—old meaning more than three or four years. The reason that is relevant is because there are now repositories of known security issues with open software. Of course, open software is very powerful and useful. But if you have open software in your products and you don’t know that a bunch of vulnerabilities were discovered, the product is vulnerable. This raises the question of who is responsible. The lifecycle is impacted by the past, and remedial solutions are not sufficient. Therefore, correct by construction—or very proactively designing in security—will be a necessity. It applies as much to the hardware as to the software, but the software of course is the main source of vulnerabilities today.

SE: There’s a crossover here to not only security and safety, but also reliability. If your system breaks down and you update it with something that has bad code, you can affect both of those. We have to start building this in upfront on a lot of these systems, right? Do we have the tools and know-how to do that?

de Geus: This brings us straight back to the most visually complete and appealing example of the whole digital age—autonomous cars. The car industry has an unbelievable pedigree of dealing with safety, and therefore, in part, reliability, because the safety is not about when you sell the car, but over the lifetime of the car. There are many standards developed to try and orchestrate a methodology that is more redundant and more self-diagnosing of issues, and which in general guarantees a degree of safety that is much higher than if you didn’t do that. The auto industry has discovered the weakest link is the software. An advanced car has 100 million lines of code, and it’s all pretty much connected to everything else. The hacking examples we have seen have been quite amazing in their creativity. Here’s an industry that banked on safety as a cornerstone of its existence. At the same time, software is considered one of the great opportunities in cars. But it’s also the single biggest danger zone for maintaining that pedigree. So it is chaos and opportunity at the same time.

SE: We tend to think of the biggest security risks as being at the seams of technologies, where the pieces go together. How do you close those up—and close them up for the future? And how do you guarantee that for 7nm technology when there is no history of these chips in the market?

de Geus: It’s a great question because this is one of the outcomes of going from scaled complexity to systemic complexity. Systemic complexity typically means that you’re relying on expertise of various groups of people, companies, or disciplines. By definition, when you put two things together, those things tend to have been under more control because they are designed by a group that takes responsibility for it. If the integration is designed by a third group, that may not be as good. So now you need to precisely figure out where the intersections may be vulnerable. By definition, the intersections are a simplification of the physical reality of the system, so the modeling of it will be increasingly important, as well. That immediately leads to the intersection of hardware and software. One of the opportunities the EDA field is facing right now is that the center of gravity has moved to the intersection of hardware and software. It’s interesting because it means that skill sets on both sides can multiply each other. We often think that success is the sum of our efforts. It’s not. It’s the product of our efforts. A single zero and everyone gets zero. The more you have systemic complexity, the more you have multiple teams, multiple actors, and you need to count on every one of them being successful. Systemic collaboration is at the heart of the success model going forward for these types of issues.

SE: Can we use the tooling that exists today for things like debugging and verification and apply it to security?

de Geus: There are more and more capabilities that will be applied to that area. In all fairness, the silicon space has had a fairly high degree of diligence on verification for the simple reason that you wouldn’t dream of sending something to manufacturing if it hadn’t been vetted pretty thoroughly. The cost of just the mask set is substantial. In software—not in all fields but in some fields—the penalties have been a lot lighter. If something doesn’t work, you send out a patch, and another if that doesn’t quite work. All of us are getting patches all the time. With software going exponential and the systemic nature of the software being multiplicative, we’re facing more and more issues that are not solvable that way. Some of the rigorous techniques coming from hardware verification will gradually find more application in software, as well.

SE: Looking at this differently, there is a continuum in terms of having these capabilities within chips, from the PC all the way into the IoT. The flipside is that it becomes a lot harder to develop these chips. There are a lot more influences and interactions than we’ve ever dealt with in the past. So what do the EDA and IP industries look like going out several years?

de Geus: In many ways the conceptual changes are continuations. We keep driving the notion of integration as the most techonomically effective way of balancing the need for high-data-rate exchanges with a cost base that is manageable through the cost base of silicon. There are a number of situations where that becomes more difficult, especially if you start interacting with the reality of physics. The sensor technology tends to prefer different silicon capabilities than the advanced logic computation or storage. But even there, there are advances that try to bring these things continually together. Whenever that is not possible, you get other forms of integration. The oldest of those is a printed circuit board. If you look at a PCB, you can see a continuation of shrinking to silicon interposers, to a variety of mechanisms, maybe stacking of chips. They all have great opportunities with precisely the same fragility that you alluded to when you have breaks between domains. It sounds great. ‘Why don’t we stack some chips on top of each other?’ Except you really hope there’s not one of those in the middle that is really a thermal generator, because the other chips may not be designed for that. It’s compact, but it brings up all the thermal questions. How do you deal with that? You can say that’s hard or you can say it is wonderful for our industry because the benefits of doing it will be highly leveraged if we can solve some of those issues. The natural way of balancing between them is literally techonomics. Once technology is sufficient to solve something, what are the economics governing its benefits?



