Applying big data techniques and machine learning to EDA and system-level design.
John Lee, general manager and vice president of Ansys—and the former CEO of data analytics firm Gear Design Solutions, which Ansys acquired in September—sat down with Semiconductor Engineering to talk about how big data techniques can be used in semiconductor and system design. What follows are excerpts of that conversation.
SE: What’s your goal now that Gear has been acquired by Ansys?
Lee: We have a singular mission, which is to provide multi-physics-based simulation of any product. We simulate power integrity. Power is related to thermal. How hot the device is impacts power, and that impacts timing. The power and the timing and the switching on the chip also affect the electromagnetic interference. And the heating affects the package, the board, and the system it goes into. If you’re designing the next best smartphone, you care about the silicon. You care about the power integrity. But multi-physics means how that phone performs as a mechanical device you hold in your hand, how it connects up to the Internet, and the computation on there requires thermal modeling, mechanical modeling—fluid dynamics because of the cooling system, mechanical because of the package on the board—and then all the electronics on there for simulating the physics of the chips. We’re far from that vision. Rather than going to physical test of the device, you predict its behavior by simulation, whether that’s a phone or the Tesla Hyperloop or the next airliner out of Boeing. The issue is how much of that we can simulate going forward.
SE: So that’s big picture as well as everything down to 10nm and beyond, right?
Lee: Yes. We have to focus on everything from 3D meshing and matrix solutions and high-performance computing. If you look at this from the EDA side, we’re taking the best computer scientists on analog and algorithms and using them to add efficiency to place and route and timing. If you go back and look at our mission statement, in order to really compute these multi-physics at the scale of the devices that our customers are building—IoT devices, autonomous systems, equipment that goes into data centers—you probably want to have 10,000 to 100,000 machines processing simultaneously. The amount of data being generated by that processing is immense. On a single chip you have 4 billion instances or 20 billion transistors or 200 billion shapes. This is a big data problem. Just on a single chip we have more transistors than there are people in the world. Each transistor is represented by lots of shapes, and each of these shapes has temperature, voltage and power associated with it. So how do we leverage all of those great computer science ideas that are outside of EDA into a platform that can accelerate this vision? And how do we use the techniques that make Facebook or Google these buttery smooth consumer-facing apps, which are all based on big data systems?
SE: What kind of platform are you looking at here?
Lee: A purpose-built, big data platform for scientific computation—physics-based simulation—is the best way for us to build amazing EDA tools. Big data helps us answer questions about search. If you look at what Google does, they never knew you were going to ask a question but they made a lot of data available.
SE: Through pre-fetch schemes?
Lee: Yes. This is where we don’t have to re-invent the wheel. There are techniques and algorithms already in existence. There are billion-dollar startups out there like MapR Technologies, Cloudera and Hortonworks dedicated to the computer science on this. You can download Hadoop open source. Doing search on key-value pairs or across log files or contextual information is very different from telling me which critical net in my chip is near a sensitive area that might be switching at 10 nanoseconds for which there is a high IR drop problem. We want to do something similar to that. We’ve been building out that system. If you look at the big data systems out there, they’re really challenged for compute. They’re very good at certain things like MapReduce. But if you want to start doing matrix computation on that or graph computation or geometry computation, that whole set of new billion-dollar startups is innovating on that.
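The kind of contextual chip query Lee describes—critical nets near a sensitive area, switching fast, with high IR drop—combines spatial, temporal and electrical predicates, which is what makes it harder than key-value search. A minimal illustrative sketch (all names and thresholds hypothetical, not from any actual tool):

```python
from dataclasses import dataclass

@dataclass
class Net:
    name: str
    x: float           # placement coordinate (um), hypothetical
    y: float
    switch_ns: float   # switching period
    ir_drop_mv: float  # worst-case IR drop

def critical_nets_near(nets, region, max_switch_ns, min_ir_drop_mv):
    """Return nets inside a bounding box that switch fast and see high IR drop.
    A real system would use a spatial index, not a linear scan."""
    x0, y0, x1, y1 = region
    return [n for n in nets
            if x0 <= n.x <= x1 and y0 <= n.y <= y1
            and n.switch_ns <= max_switch_ns
            and n.ir_drop_mv >= min_ir_drop_mv]

nets = [
    Net("clk_root", 10.0, 12.0, 10.0, 85.0),   # in region, fast, high drop
    Net("scan_en", 200.0, 5.0, 10.0, 90.0),    # outside the region
    Net("dbus[3]", 11.5, 14.0, 50.0, 20.0),    # slow, low drop
]
hits = critical_nets_near(nets, region=(0, 0, 50, 50),
                          max_switch_ns=10.0, min_ir_drop_mv=80.0)
print([n.name for n in hits])  # → ['clk_root']
```

A key-value store answers "give me the record for key X"; this query has to intersect geometry with timing and power data, which is why generic big-data stacks alone don't cover it.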
SE: So where does EDA fit in?
Lee: We know how to do static timing analysis. That’s graph-based analysis. We know how to build DRC and RC extraction tools. That’s geometry-based analysis. And we know how to do SPICE simulation, FastSPICE simulation, IR drop simulation. That’s all matrix. EDA comes down to these essential services of matrix and graph and geometry. What if you could put that on top of a big data system?
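Static timing analysis as graph-based analysis can be illustrated concretely: arrival times are longest paths through a delay-annotated DAG. A self-contained sketch (toy graph and delays are hypothetical):

```python
from collections import defaultdict, deque

def arrival_times(edges, sources):
    """Longest-path arrival times on a timing DAG, processed in
    topological order. edges: {u: [(v, delay), ...]}."""
    indeg = defaultdict(int)
    for u, outs in edges.items():
        for v, _ in outs:
            indeg[v] += 1
    at = {s: 0.0 for s in sources}   # primary inputs arrive at t=0
    q = deque(sources)
    while q:
        u = q.popleft()
        for v, d in edges.get(u, []):
            at[v] = max(at.get(v, float("-inf")), at[u] + d)
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    return at

# Tiny timing graph: in -> g1 -> out (1.2 + 0.8 ns) vs. in -> out (0.5 ns)
edges = {"in": [("g1", 1.2), ("out", 0.5)], "g1": [("out", 0.8)]}
print(arrival_times(edges, ["in"])["out"])  # → 2.0
```

The same decomposition view applies to the other services he names: extraction and DRC reduce to geometric predicates, and SPICE-class simulation reduces to sparse matrix solves.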
SE: Isn’t that what emulation is doing for certain problems?
Lee: In EDA we’ve always been excellent at building specialized solutions. This is dedicated hardware to speed up verification, which is why you see demand for those products growing rapidly. If you look outside of EDA, the computer science student 40 years ago knew punch cards and assembly language. Today most EDA developers know C++. At companies like Facebook, they know how to program at a much higher level. They’re using Java or Python, and they’re putting together massive new applications by utilizing systems and stacks of software. The premise of this platform is that by giving developers and designers higher-level tools that are not specific to a single problem, this platform can allow us to scale to bigger and better things.
SE: If you raise the abstraction you can do much more. But you also run into problems with that, right? Security is one example. If you don’t understand the intricacies of the system you’re working with, you potentially can overlook vulnerabilities.
Lee: That’s a good point. If you’re a high-level programmer, you don’t understand the vulnerabilities. There is a challenge. So as much as we can re-use from open source, where there are 10,000 developers looking at it over the course of years, that’s a benefit. And then you have to strongly audit systems. We have all these C++ mechanics. But how are you going to simulate the world? Google’s approach is you take low-cost Linux PCs, put as many as you can into your data center, and then you use distributed data and distributed compute to do that. They don’t have every developer worried about how you distribute this to here and this to there, which is what we do in EDA. They provide that system and all the developers are on top of that. You want to provide the level of services that allow you to utilize massive distribution. You cut up the data and then you figure out how to handle communication between machines, scheduling of processing between machines. We believe that having a formal system based on the best computing systems outside of EDA is going to allow us to scale new applications and services much better and much faster.
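The "cut up the data, then combine results" pattern Lee describes is the map/reduce shape of distributed compute. An illustrative sequential sketch (each `map_power` call stands in for work a platform would schedule on a separate machine; the data and numbers are made up):

```python
from functools import reduce

def partition(instances, n_workers):
    """Shard the chip data; in a real system each shard would live
    on a different machine."""
    return [instances[i::n_workers] for i in range(n_workers)]

def map_power(shard):
    """The per-machine task: sum switching power over one shard.
    The platform, not the tool developer, handles where this runs."""
    return sum(inst["power_mw"] for inst in shard)

def reduce_power(partials):
    """Combine per-machine partial results into one answer."""
    return reduce(lambda a, b: a + b, partials, 0.0)

# 100,000 hypothetical instances at 0.25 mW each
instances = [{"name": f"u{i}", "power_mw": 0.25} for i in range(100_000)]
partials = [map_power(s) for s in partition(instances, 4)]
print(reduce_power(partials))  # → 25000.0
```

The point of the platform argument is that the developer writes only `map_power` and `reduce_power`; sharding, scheduling and inter-machine communication are the system's job.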
SE: How much does this rely on parallel processing?
Lee: It’s focused on that, but using a different approach. So you may focus on the best timing tool or the best place-and-route tool or the best synthesis tool, and then you come up with the algorithm and then figure out how you’re going to parallelize that. The parallelization used in one tool versus another is very different. And there are good reasons why those are different. But what if you had a platform where all of that was done for you? With Google search you can ask any question and it scales instantly and comes back to you. If you look at Hadoop and other systems outside of EDA, they have these machine-learning stacks on top of them. Once you can process lots of data, then you can put these machine learning algorithms on top of them. You don’t have to re-implement all of these known techniques. It will work for timing, power, thermal, mechanical—all of these different apps will be able to leverage this machine learning.
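"Machine learning on top of the data layer" can be as simple as fitting a model over features extracted at scale. A deliberately tiny sketch: ordinary least squares relating a hypothetical toggle-rate feature to measured IR drop (the data and the relationship are invented for illustration):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y ~ a*x + b, the simplest kind of
    model a platform-level ML layer could fit over extracted data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx   # slope, intercept

# Hypothetical samples: per-region toggle rate vs. measured IR drop (mV)
toggle = [0.1, 0.2, 0.3, 0.4]
ir_mv = [11.0, 21.0, 31.0, 41.0]
a, b = fit_line(toggle, ir_mv)
print(round(a), round(b))  # → 100 1
```

The claim in the interview is about reuse: because the data layer is shared, the same modeling machinery can be pointed at timing, power, thermal or mechanical data without re-implementation per tool.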
SE: Can you do more tradeoffs and analysis using this approach?
Lee: Keep in mind we’re three years into this journey. It will take many more years to do. But our first focus was to give great visibility—great search tools and great maps tools—to chip designers. Access to data is so important to designers. We have intuitions and thoughts that can do better than most automated software. You can’t scale it, though. So the first challenge was to give designers a way to look at their data in the way we can as consumers. All chip data should be instantly accessible, no matter whether the data comes from Mentor Graphics, Synopsys or Cadence. It’s all interoperable and searchable. The next step, now that a person can go on and begin searching, is to make that all useful. It’s really amazing what engineers can do even with the crudest tools. We figure out ways of setting things up and building things. These designers have great tools, but they’re not perfect. So what if you give them a big data system where they can not only search, but also start building flows and optimizations. ‘I know when I did things this way and routed things this way that I got a great result.’ So what if you gave them a high-level way of driving that and dictating that? That’s capturing designer knowledge and letting experts build really awesome flows for them to do things they can’t do today.
SE: So you’re looking at more efficient and faster systems, but also faster ways to build them, right?
Lee: Right. A lot of guard-banding, or overdesign of products, is done because you don’t know something. Competitively you see that when you uncork a chip from one semiconductor company versus another, there are differences in how they do the margining. That affects profitability, performance and time to market. So what if you didn’t have to add margin? You would make better choices. One benefit we see is a reduction in overdesign. In practical terms that means you might choose a cheaper package, use less power, have less variability in the product you deliver, or you might have more predictability in your product schedule. Maybe you slipped six months because you found a problem. Those are some of the benefits we see coming out of it.
SE: Is the end game to sell the database, or the knowledge, like IBM’s Watson?
Lee: Today we sell tools. There is opportunity. Data has more value than compute. The machine that cranks out the compute is essential, but for the end user it’s the knowledge and what they can do with the knowledge that matters. We don’t know how or where this will take us in terms of business strategy. But any opportunity that more closely aligns with the customer’s end goal is going to be a win-win. As they benefit, we benefit. But the real mission is our passion to give Google-like tools to designers so they can do multi-physics-based simulation of their products. Power, thermal and timing are now different engines and different products with different files, different looks and feels, and different R&D teams.
SE: So step No. 1 is to unify all of this?
Lee: At least to provide a framework. We need a good platform where we can slot these things in. That’s going to be a tremendous start. This is a long-term vision, but as soon as we can put all of these physics-based engines into a single platform, whether you’re using 1,000 GPUs or 1 million custom ASICs, all of that is handled by the platform. This may be revolutionary for EDA, but it’s not revolutionary for all of the processing done outside of semiconductors. This is the democratization of simulation.