Where ML Works Best

Cadence’s president talks about machine learning inside and outside of EDA tools, and how to measure the benefits.

Anirudh Devgan, president of Cadence, sat down with Semiconductor Engineering to discuss machine learning inside and outside of EDA tools and how that will affect the future of chip and system design. What follows are excerpts of that discussion.

SE: How do you see the market and use of machine learning shaping up?

Devgan: There are three main areas—machine learning inside, machine learning outside, and machine learning infrastructure. The key for all of these is new, pattern-based algorithms. This is the science of three ‘P’s. There is the science of place, which involves geometries and the underlying mathematics. That’s a couple of thousand years old. The second ‘P’ is the science of pace, which is like derivatives. That’s calculus. A lot of our recent math is based on calculus and geometry, which is EDA and semiconductor design. The third one is the science of patterns. We always did patterns anyway. The human brain does patterns. There are patterns in the stock market. CNNs and all of this related deep learning are one new way to do pattern recognition, and they have been used very successfully in vision and speech. It’s a powerful new weapon. Pace and place are already successful in a lot of industries. But pattern is new. Along with this, there is definitely an improvement in compute power, and convolution will be very powerful.
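
As an aside, the ‘science of patterns’ Devgan describes reduces, at its core, to convolution: sliding a small template across a grid and scoring how well each position matches. The toy sketch below, in plain NumPy, is purely illustrative and not from any Cadence tool:

```python
import numpy as np

def correlate2d(image, kernel):
    """Slide `kernel` over `image` and score every position (valid mode).
    This sliding dot product is the core operation inside a CNN layer."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

# Toy grid with an L-shaped motif planted at row 2, column 3.
grid = np.zeros((6, 6))
motif = np.array([[1.0, 0.0],
                  [1.0, 1.0]])
grid[2:4, 3:5] = motif

scores = correlate2d(grid, motif)
print(np.unravel_index(scores.argmax(), scores.shape))  # -> (2, 3)
```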

SE: How do those pieces fit together?

Devgan: You can’t do pattern-based methods by themselves. You can’t just do machine learning. It has to be in the context of all three. You still need calculus and geometry. But the effects will be profound, and they can go on for hundreds of years. It will be just as profound a change as the others.

SE: We’ve had pattern recognition for a long time, but it hasn’t been effective, right?

Devgan: Correct. Not better than humans. Now we can at least match and exceed that. Now we can take patterns and apply them to our algorithms.

SE: So how does this fit into your three areas for applying machine learning?

Devgan: If you look at ‘ML inside,’ routing was a very disruptive technology when it first came out. Before routing, you would optimize by estimating what would happen. That estimation gets better with machine learning. From a user perspective, there’s no change except that the tool delivers better PPA and runtime. You’re using a different algorithm, but the user still looks at it as a timing tool or a place-and-route tool.
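
A minimal sketch of that ‘ML inside’ idea, assuming hypothetical placement-time features and a hypothetical post-route target; this is not Cadence’s implementation, just the general shape of a learned estimator replacing a hand-tuned one:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical training data mined from past runs: per-net,
# placement-time features (e.g., pin count, bounding-box size, local
# congestion) and the post-route delay actually observed for that net.
rng = np.random.default_rng(0)
X = rng.random((5000, 3))                    # stand-in features
y = 2.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0.0, 0.05, 5000)

# The learned model replaces a hand-tuned estimate of what routing
# will do, so decisions made before routing track the final result.
model = GradientBoostingRegressor().fit(X, y)
predicted_delay = model.predict(rng.random((1, 3)))
print(predicted_delay)
```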

SE: So this is combining the best of data mining for all the corners, right?

Devgan: We have test cases, and through patterns we can learn better prediction methods. Those can be embedded in the tools, so we get a better result. The analogy is that you have a car that runs at 300 horsepower, and suddenly you have an engine that runs at 330 horsepower. It’s great, but your experience is still the same.

SE: Especially in Silicon Valley traffic.

Devgan: Yes, exactly. So the second part of this, which can be more transformational, is ‘ML outside.’ We have some projects to see if we can improve the way the tools are used. That goes beyond the tools themselves. It’s also the design flow. ML inside is improving PPA and runtime. ML outside is improving productivity.

SE: Is this for tools addressing chip design, or for addressing deep learning?

Devgan: That is the third area. With ML outside, you want to improve the flow. So you run a bunch of simulations, emulations, and formal, and then you look at the coverage and make the next decision. The question is whether some of that can be optimized. It’s moving toward self-driving. It’s not fully autonomous yet, but instead of one person driving one car, you can now drive multiple cars. Even in the design process, there are certain tasks we can automate at the flow level. That’s ML outside, and it will be more transformational than the tools themselves.
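
The flow-level loop Devgan sketches could look something like the following; the task names, yields, and update rule are all invented for illustration:

```python
import random

random.seed(0)

def run(task):
    """Stand-in for launching a simulation/emulation/formal job.
    Returns (coverage_gained, hours_spent); the yields are made up."""
    yields = {"simulation": 0.02, "emulation": 0.05, "formal": 0.03}
    return yields[task] * random.uniform(0.5, 1.5), 1.0

tasks = ["simulation", "emulation", "formal"]
gain_per_hour = {t: 1.0 for t in tasks}   # optimistic starting estimates
coverage, target = 0.0, 0.95

while coverage < target:
    task = max(tasks, key=gain_per_hour.get)        # greedy next step
    gained, hours = run(task)
    coverage += gained
    # Re-estimate how productive this engine currently is.
    gain_per_hour[task] = 0.7 * gain_per_hour[task] + 0.3 * gained / hours
    print(f"{task}: coverage now {coverage:.2f}")
```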

SE: How about the third piece?

Devgan: That’s the infrastructure. If you look at machine learning, the algorithms used for training are very mathematical and very close to EDA algorithms. A lot of people are spending time on hardware-software co-design for machine-learning chips. There are 50 or so startups in this space. There is a lot we can do to provide tools and methodologies to improve this whole infrastructure development.

SE: Is this a new IP opportunity?

Devgan: We already have Tensilica IP, which can be optimized for that. But when companies are doing these designs, there are some unique requirements. These machine learning chips are huge. The tools have to have the capacity to handle them. There also are some highly repeated structures, so there are PPA requirements. We are working with several of these startups and companies that are doing this infrastructure, regardless of whether they buy IP or develop their own. There is a lot of scope to improve.

SE: How about the algorithms themselves? Some of that development work can be automated and improved.

Devgan: There is a lot of opportunity there. One company I talked with said that one of the big problems they have is not just developing the machine learning algorithm, but verifying that it does what it’s supposed to. So there are design implications and verification implications of the actual machine learning. We will start providing the tools, but over time we will expand beyond that because a lot of the math is very similar.

SE: How do you verify these algorithms and make sure they’re secure?

Devgan: The verification of the algorithm, and debugging and testing, are critical. It’s the same with the design. If you look at the last big thing, which is social media, the EDA industry contributed very little directly. But machine learning is more fundamental, and we can make a huge contribution. It’s more mathematical and aligned with our core strength. This is very exciting. We have a big cross-functional task force. There is a lot of activity in our customer base, too, designing this technology.

SE: If you can shorten the time to market, that potentially can create other issues with machine learning because these algorithms are being updated so frequently.

Devgan: It has to be configurable to some extent. Either the software is configurable, or the cores are configurable. These algorithms are changing fast, and if you do a silicon representation you are stuck with it. If you have a more specific implementation it might be faster, but if the algorithm changes then what do you do with that? Some amount of reconfigurability is critical, whether that’s in the software or the middleware, between something like TensorFlow and silicon, or in the silicon itself with something like Tensilica. This whole hardware-software co-design is going to be a big problem. Right now CPUs and GPUs are general-purpose, and the other solutions are too specific. There has to be a middle ground.

SE: It’s much more efficient to do this in silicon, right? This is why we’re hearing a lot more interest in DSPs and FPGAs.

Devgan: FPGAs are another option. It’s easier to adjust the middleware with FPGAs, and some companies are doing a lot more with them. This is like the wild, wild West. We were worried about what would happen after mobile and cell phones. If you talk to data centers, they have projections that the share of compute dominated by CPUs is going down, while the amount of ML compute is going up dramatically. I’ve seen numbers as high as 40% to 50% of a big data center being devoted to ML compute.

SE: We’ve seen those numbers for cryptocurrency mining, too.

Devgan: Yes, but this is more sustainable. There are many more general-purpose applications, and it opens up huge opportunities everywhere. In our case, we can contribute to that and apply it internally.

SE: As you apply it internally, what’s the goal? Is it faster time to market, a better product, fewer wrong turns? Basically what you’re getting with ML is a distribution, not a fixed number.

Devgan: You still have to apply all three ‘P’s. Machine learning is a way to improve the pattern part of the algorithm. The other pieces have to be good, too. But it is a way to give better quality of results, better PPA and runtime.

SE: These algorithms also don’t have to be 100% accurate, either, right? They can be 95% or 98% accurate, and you get faster time to results.

Devgan: In place-and-route and verification, you always try to estimate early in the flow versus late in the flow. Now there’s a mathematical way to do that, so the result is better and faster.

SE: You also have more data points as time goes on, right?

Devgan: Yes. On top of that, it’s not that difficult for our current employees to pick up machine learning. We have a large engineering team, and a lot of them are trained in these numerical techniques. So we’re able to quickly train our own people to do machine learning. This is a strength of EDA companies.

SE: Is there going to be a way to improve the quality of the algorithms? You mentioned hardware-software co-design. Is there a way to figure out the best tradeoffs for what goes into hardware versus software?

Devgan: For ML inside, it’s possible. It’s like improving the engine. If you add machine learning, you have to ask whether it improves PPA or runtime. We’ve already deployed some of those applications, and the user doesn’t even know there’s ML inside. Those are very measurable. ML outside, where you change the flow, is still to be determined. In that case, you are replacing a more human-oriented task. But ML inside is very measurable, because you’re running the same benchmarks with ML algorithms added. It runs on standard hardware.
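
Because that measurement is a straight A/B comparison on fixed benchmarks, a harness for it can be very small. Everything below (the flag, the metrics, the numbers) is hypothetical, not a real tool interface:

```python
# Hypothetical A/B harness for measuring 'ML inside': run the same
# benchmarks with the ML path off and on, then report the deltas.

def run_flow(design, ml_inside):
    """Stand-in for a full implementation run; returns QoR metrics."""
    qor = {"power_mw": 100.0, "delay_ns": 1.00, "area_um2": 1.0e6,
           "runtime_h": 10.0}
    if ml_inside:  # pretend ML inside buys roughly 5% to 10%
        qor = {k: v * 0.93 for k, v in qor.items()}
    return qor

for design in ["cpu_core", "dsp_block"]:
    baseline = run_flow(design, ml_inside=False)
    with_ml = run_flow(design, ml_inside=True)
    for metric, base in baseline.items():
        delta = 100.0 * (with_ml[metric] - base) / base
        print(f"{design} {metric}: {delta:+.1f}%")
```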

SE: What have you found from your own internal use? What’s the upside, and where are the risks?

Devgan: We publicly say 5% to 10% improvement in PPA. But a full technology node only gives you about 20%, so 5% to 10% is very significant. With ML outside, that remains to be seen.


