Executive Insight: Lip-Bu Tan

Cadence’s CEO digs into machine learning, advanced packaging, and the shift from chip design to system design.


Semiconductor Engineering sat down with Lip-Bu Tan, president and CEO of Cadence, to discuss disruptions and changes in the semiconductor industry, from machine learning and advanced packaging to tools and business. What follows are excerpts of that conversation.

SE: What do you see as the next big thing?

Tan: Unlike mobility or cell phones, or PCs before that, there is no single next big thing. But there are a lot of little things, and if you add them all up, that is much bigger and very exciting. The IoT, autonomous driving, and machine/deep learning will be applied very broadly. I’m very interested in the whole industrial IoT (IIoT) landscape. Artificial intelligence will be a big opportunity as well. There are a lot of great opportunities with several disruptions in terms of innovation. If you look at the data center and cloud infrastructure, there is a sea change going on with big data and data analytics. This is a huge opportunity for those who own the data because they can do a lot with it and monetize it. There is a race to collect data from the IoT and from cars. Hyperscale web services will be huge, and it’s going to be very interesting for the whole semiconductor industry.

Photo credit: Paul Cohen/ESD Alliance

SE: What does it mean for EDA? The EDA industry has been functioning along the lines of, ‘Design one chip, make derivatives, and sell billions of units.’ A lot of these markets are, at least at this point, much smaller. What do you have to do in design?

Tan: There are a few implications for EDA. First, machine/deep learning will play a big role in design. At Cadence, we’re deploying it massively into all our tools, from the digital side to the verification side. This will allow you to create and deliver designs faster, and with greater accuracy. We’re applying machine/deep learning to our own tools to support this industry shift. Second, there’s a huge change in terms of the customer requirements. We’re moving from EDA to system design enablement to meet some of those requirements. Part of the whole shift from EDA to system design enablement is about optimizing for system requirements. That also includes 2.5D and 3D advanced packaging. But it’s not just silicon packaging. It’s also entire system packaging. Along with that, you need system analysis, system modeling and simulation. You cannot just wait until tapeout to find the power is too high and that you have a signal integrity issue. It’s too expensive to go back and re-spin. For advanced nodes, you really have to get it right the first time.

SE: How about IP? What’s changed there?

Tan: Silicon has become so complex that to build an SoC, IP has become much more important. The whole IP initiative is about providing building blocks that a system company requires. But the quality of that IP is critical for a customer to be successful. At the end of the day, we’re providing a solution to the customer. More than 40% of our customers are system companies and service providers. They’re all quietly building up and optimizing the solutions they need. We just have to listen very carefully to what they need and be a great partner to support them. That requires us to have a culture of innovation, because there aren’t too many companies left to acquire. Over the last three years, we introduced 23 new products. Our customers are counting on us to continue developing the next generation of tools and IP to meet their requirements.

SE: You mentioned 2.5D and some of the advanced packaging. Cadence has been talking about this for a couple decades but it only recently has started gaining traction. What’s different now?

Tan: Customers are ready for it. There is huge demand from the customer side for 2D, 2.5D and 3D. And for some of the key applications like high-speed SerDes, it is critical to have a low-power version of the IP, so silicon photonics is critical. This is music to my ears.

SE: Does IP have to be characterized differently to fit into a 2.5D or fan-out wafer-level packaging versus a 2D implementation?

Tan: Yes, and this is a good opportunity for us. Clearly the silicon photonics packaging is good for solving bottlenecks in some of the high speed 56- or 112-gigabit-per-second throughput required by hyperscale web service companies. It’s the same thing with the 2.5D/3D for some of the SerDes and other key applications. We have to calibrate and meet the customer requirements with our IP, and there has been a lot of collaboration with our customers. We’re going through a new phase in EDA where we have to work very closely and collaborate with our partners. We can really drive the next-generation requirements from the IP, packaging and verification points of view through the joint work with our partners.

SE: Five years ago, when most people thought about advanced packaging, it primarily centered around the fact that analog IP does not scale. Therefore it would make sense to leave that IP at the process node where it was developed. Most advanced packaging so far involves shrinking everything, though. Is that going to change? Will companies start mixing IP developed at different nodes?

Tan: It depends on what applications and what markets you go after. Each has different requirements. We try to form fully integrated solutions, not only on the analog side, but also on the mixed-signal, digital and verification side. This is all intertwined together. You cannot just depend on one. Back to the SerDes side, right now we are pursuing the very high-end 56 and 112Gbps SerDes, and beyond that, because this is what the customers require. We are laser-focused on high-speed SerDes and how we’re going to get there to support our customers.

SE: Can the tools and methodologies in place now handle all of this?

Tan: We continue to upgrade the tools on multiple fronts. First, we are clearly addressing the need for designing at the most advanced nodes. Second, some of the solutions require more than just a single tool. There may be packaging and key IP we need to bring on board to support that. We also have to look out for new frontiers and new technology, which involve Israel, China, or the U.S. We’re reaching out to top professors around the world. We’re looking at the next-generation requirements and figuring out how we get there. And we’re getting expert views and collaborating with engineering leaders.

SE: Are there any startups coming out in these areas? If not, does that put a bigger strain on your operating budget?

Tan: We take multiple approaches, which is really the fun part of innovation. First, we have strong capabilities to drive the next-generation tool requirements with machine learning, global optimization and all the different techniques that really drive optimized PPA and run-time improvements. We also engage early with professors and other experts to bring in new technologies that integrate with our tools based on these engagements. Sometimes the customer will point out to us what they need, and we’ll bring in other ecosystem companies to help.

SE: We’ve seen an oscillation in the industry, first where most of the real functionality was in hardware, and then to where hardware was used as a generic platform for software. That turned out to be not terribly efficient in terms of power and performance. So now we’re bridging those worlds with software-defined hardware—everything is becoming more specific. The problem is that the hardware industry has been losing engineers on a pretty regular basis, whereas the number of software engineers is exploding. Do we have the capability and manpower to make the kinds of changes the industry wants to make?

Tan: Very good question. This is an issue that is very dear to me. Clearly, a lot of professors and great universities think the semiconductor industry is a sunset industry. I tried to convince them it’s not, which is important because those professors really influence choices made by engineering students. We have a special university program that reaches out to the best students at key universities. We also engage heavily with the professors to make sure they understand that EDA is still very important and relevant. As you pointed out, everything is software-defined in devices, from the networking switch all the way to the components in an autonomous vehicle. Software becomes more and more important. For example, we have a very low-power programmable engine in Tensilica, which is perfect for AI and machine/deep learning. But where do you get the software stack on top of it, with the compiler, optimizer, thermal simulation for the different applications for automotive or genomic sequencing? We have to compete with Facebook and Google to attract top talent for the EDA/semiconductor industry, which is not easy sometimes. I have two boys majoring in computer science and I tried to convince them to do hardware, but they both decided to do software, which seems to be the future for young kids. I’ve tried to convince them to combine EE and computer science because hardware plus software is the Holy Grail. That said, we are fortunate to be surrounded by a talented pool of engineers in Silicon Valley and have had a lot of success with acquiring top talent.

SE: You’ve been talking a lot about machine learning and AI. How do you see that market playing out internally, as you are using it for your own products, as well as externally? Is it going to be a big market itself? Will it start limiting jobs?

Tan: Machine and deep learning are a very high priority for me. They have already had a very broad impact on the industry, not just in IIoT, hyperscale web services, or genomic sequencing applications. The impact of both will be huge—as big as the CPU and GPU. But there is a new class of hardware and software that will be driving machine/deep learning. That application is very cool. I keep track of 35 different startups globally in the machine/deep learning space. I spend quite a bit of time learning about what they are doing, and they’ll need a lot of our EDA tools to support their projects. Meanwhile, we’re also using machine/deep learning internally, and we apply it to our own tools to make them faster and more accurate based upon previous experience. Customers are collaborating with us, and I’m excited to see early successes.

SE: You have very specific needs for how you’re going to use machine learning. But when you go out in the general market, there are a lot of different competing architectures—TensorFlow and a few others. So how do you come up with tools that will actually work across this industry? There are so many things changing that in order to develop tools, you now have to hit a lot of different markets.

Tan: The key is linear scaling. When you have 32 cores you really need to have 32X performance, optimized for machine learning. It’s not just the silicon itself. The software stack needs to be optimized for it, too. We’re really embracing this from the tool and the IP side because we want to be partners with others in our ecosystem. There is a lot of technology and architectural change, and we’re learning a lot by working with these companies. But all of this really depends on the application. Automotive and data center/cloud are huge topics. Data analytics and data requirements are very different workloads that we’re trying to address. Applications need to have more and more intelligence so they can provision bandwidth in a very dynamic way. Our tools help to optimize the evolving software requirements.
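The "32 cores, 32X performance" goal above is the ideal case; in practice any serial fraction of the workload limits speedup, which is why the software stack matters as much as the silicon. A minimal sketch of this gap (illustrative only, not a Cadence tool; the serial-fraction value is an assumption) using Amdahl's law:

```python
# Ideal "linear scaling" vs. Amdahl's-law speedup for a workload
# in which a fraction of the work cannot be parallelized.
def amdahl_speedup(cores: int, serial_fraction: float) -> float:
    """Speedup on `cores` cores when `serial_fraction` of the
    total work must run serially (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Hypothetical workload: 5% of the runtime is serial.
for cores in (1, 8, 32):
    ideal = cores  # linear scaling: 32 cores -> 32x
    actual = amdahl_speedup(cores, serial_fraction=0.05)
    print(f"{cores:2d} cores: ideal {ideal}x, achievable {actual:.1f}x")
```

Even a 5% serial fraction caps 32 cores at roughly 12.5X, which illustrates why achieving near-linear scaling requires optimizing the whole stack, not just adding cores.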

SE: Is there any consistency that allows you to say, ‘Here is the big opportunity for this type of tool?’ Or is it still in an embryonic stage that we haven’t quite gotten to it yet?

Tan: It’s not a case of one tool fixing everything. That’s why we’re working in collaboration with customers, which is really important. As part of the Cadence culture, learning from and working with the customer is key.

SE: The term we’ve heard is “mass customization.” That’s really what we are trying to get to as there are so many implementations of everything. But can we do that with the kinds of tools we have today, or do we have to start changing the EDA strategy?

Tan: That’s why we have new organically built tools. The emphasis is first on massive parallelism and scale, and the second is on driving machine/deep learning from the experiences we have in driving faster results and runtime. Today, we’re able to utilize massive cloud capacity to drive our engineering requirements so that we can do these massive architectural changes.

SE: Automotive is a heavy user of machine learning. When you’re driving down the road, your car has to be able to identify what’s in front of it, among other things. Will this roll out the way people think it will and in the right timeframe?

Tan: That question is near to my heart as I drive my Tesla to work and back home every day. Currently, autonomous driving is more like Level 2 (the highest is Level 5). Your hands still have to be around the steering wheel to make sure you don’t crash, and LiDAR is still in the early stages. I’ve been evaluating Level 4/5 sensors and found one that I really like. It’s the same with the software. We have to really build the foundation on the software. But moving to Level 4/5 is nearer than we thought.

SE: How near?

Tan: It’s hard to tell because it depends on the number of test cases. There needs to be enough data to certify the safety of automotive components, the insurers must feel comfortable and the consumers must feel safe enough to drive an autonomous vehicle. I’m excited about the progress we’ve made. Some of the new startups are very interesting, and the major automotive companies—Tesla, GM, Ford—are all heavily investing.

SE: Where does EDA fit in?

Tan: The automotive industry has to adhere to functional safety requirements that are in accordance with various standards such as ISO 26262, and EDA plays a significant role in safety standards. The other part where EDA plays a role is with sensor requirements, which address the materials and frequency of the sensors. We are looking at every opportunity, including high-speed connectivity, which is critical to provide the performance automotive designs need. We are supporting all the key component players and reaching out to CTO thought leaders on the automotive side frequently to ensure that we are doing everything on the EDA side to enable them to meet their safety requirements.

SE: This is potentially a huge market for EDA, as we have to verify these chips will last for 10 to 20 years. But can companies do that with 5nm or 7nm chips if there has never been any history of use in any market?

Tan: So far, they haven’t asked about 5nm/7nm. They’re mostly focused on mixed-signal. But we’re pushing hard on 7nm with our key customers and foundries. Now we’re embarking heavily on the 5nm and 3nm processes. I always evaluate the financial side too. If there’s a real customer willing to pay for something and there’s enough volume, then we want to support it. But back to your automotive question, this industry provides a great opportunity, and we don’t take it lightly. But its evolution is not going to happen overnight; we have to be patient. I’m a big believer in data center hyperscale. They require a big CapEx. Industrial IoT is real; from the GE point of view and the Siemens point of view, it’s real. EDA has to transform itself into a system design enablement model.

SE: Looking out at the industry, is it really EDA these days or something else?

Tan: EDA is transforming more into a system-design enablement model, providing solutions to the system service provider—and, of course, customer enablement.


realjjj says:

In auto the change is much more significant.

You don’t need it to last 10-20 years, you need it to last a few years 24/7, with car as a service. It’s not about decades anymore, it’s about millions of miles.
Car as a service does for transportation what the smartphone is doing for personal computing.

Folks need to remember that the car has never been affordable, at a global level, not even close. Car as a service can do that and the miles traveled by car will go to 50-100 trillion per year.

You’ve got the cars, the servers that back the fleet, the in-car services, the robots that clean and repair, the infrastructure to power these EVs, and more.

Ed Sperling says:

That’s a really important point. As we evolve toward less ownership and more efficient services, this will radically change how we model, simulate and verify reliability.
