The Data Deluge

Cadence’s CEO talks about why the explosion in data is now driving chip and system design and what changes that is creating.


Lip-Bu Tan, president and CEO of Cadence, sat down with Semiconductor Engineering to discuss the intersection of big data and technology, from the data center to the edge and vertical markets such as automotive. What follows are excerpts of that conversation.

SE: What are the biggest changes you’ve seen over the past year?

Tan: We are moving quickly toward data-driven economics. There are a lot of sensors collecting data. That’s driving the intelligent edge with things like Industry 4.0, where you collect data using very low power and then use that data to improve productivity and efficiency. There’s also another side of this, because not all of that data can stay at the edge. Much of it needs to be processed in the cloud, and with that approach you really don’t need to build infrastructure to drive an application. In some ways, this is similar to what happened with the fabless market, where companies stopped manufacturing their own chips. It’s a more cost-effective and efficient approach. With Microsoft, Google, Amazon and Alibaba, you can process data very fast, and this is just the beginning. Over time, we’re going to see massive scaling of the cloud. That will change a lot of infrastructure requirements.

SE: How do you see the intelligent edge playing with this proliferation of the cloud?

Tan: We’re going to see a lot of ‘vertical cloud’ implementations. There will be a mobile cloud, an auto cloud, and some other vertical markets. This will be very different from what we have today, and it will change a lot of design. We see a tremendous amount of design activity going on in AI, machine learning, automotive and bio-sensors. It’s great for the industry. The semiconductor industry has just passed $400 billion in sales. Over the next five years, the industry will grow at more than double the rate of the past five years.

SE: How far along this path are we?

Tan: This is just the beginning, and there is still a lot of growth ahead. That will drive a lot of new design activity in AI, machine learning, quantum computing, and there will be a lot of new ways to change the computing paradigm. That will drive a lot of innovation. It will change the world.

SE: There is a lot of buzz around AI and machine learning, and a lot of design activity. But the industry can’t support all of the companies competing in this space. How do you see this unfolding?

Tan: AI and machine learning are very broadly impacting the industry. If you look at the various verticals and add all of those up, it totals about $43 trillion. It’s huge. This goes from the data center all the way to the power-efficient processor. The general-purpose CPU will trend down over the next several years, and it will be replaced by GPUs, FPGAs and ASICs. The workload has changed for everything, and that will continue. This is a paradigm shift, and it’s all about data. When you combine that with changes in the supply chain, software and services, and new applications in everything from medical and genomic sequencing to automotive, we’re looking at very big changes. With any new paradigm shift, there will be a lot of startups. They all have different versions for specific applications. There are upwards of 200 of these companies. Some will be acquired by the big guys.

SE: Some of these companies are growing very quickly, too. Unlike in the past, the trajectory from startup to mega-company can be very fast, right?

Tan: It’s definitely possible for that to happen. It depends on the vertical.

SE: Alongside that, the rate of change is increasing. Companies used to be able to develop chips and recoup their design costs over 1 billion units. That doesn’t work anymore as markets splinter and designs change rapidly. What happens to the design world as a result?

Tan: This is where the excitement comes in. We have an inside and an outside approach. On the inside, we’re using machine learning to increase productivity. We’re also providing that in the cloud, so customers can improve runtime and PPA much faster. They can shorten the design and verification cycles and come to market much faster. We’re also trying to optimize our tools for deep learning and machine learning on the processor side. At the heart of it, there is an algorithm compiler to scale acceleration quickly. If you look at GPUs, the functionality is great but the power is very high. How do you modify the design to make it power-efficient and really optimized? Some parts of a design are functionally redundant. You can eliminate those. We also can optimize the training or the inferencing. So if it’s for an industrial IoT application, it wakes up, sends something, and then goes back to sleep. But it has to wake up quickly. That’s the key. Many startups are doing that, and some of the big guys are doing that, as well.

SE: There is much more emphasis on time-to-market and cost reduction, but at the same time the cost of development is rising. How do we get ahead of this trend as an industry?

Tan: As we move down to 7nm, we worry about how we’re going to get down to 5nm and 3nm. One piece is cost and how anyone is going to be able to afford it. The other pieces are how you can get first-pass silicon, and how you can do that with lower power. Our job is helping customers design sophisticated silicon, and now it’s becoming more and more complex. And on top of that, how do you do the packaging? You really have to look at this from a system point of view and then work backwards to figure out how to reduce the cost. We have to work very closely with our customers, our foundry partners and our IP partners, to make sure all the pieces work together, and then really push to optimize everything.

SE: Cadence was one of the early proponents of system-in-package and advanced packaging, in general. In fact, the company was probably overly optimistic about how fast the market would take off, because it’s really just taking off now. Do you see that as a strategy for building platforms and interchangeable pieces, like chiplets?

Tan: Absolutely. That’s one of the reasons you need to look at this holistically, from a system point of view. Besides the silicon development, you need to look at the power envelope and the packaging, and even how it all works on a board. This includes everything from ensuring signal integrity to system modeling and simulation. It requires a total system solution.

SE: Let’s swap topics here. As we move into assisted driving and ultimately autonomous vehicles, who’s going to be driving this? Will it be automotive OEMs, or companies like Apple and Google?

Tan: I drive a Tesla, and I consider it an extension of my computer. It has my calendar, my favorite music and my phone. When we get to level 4 and 5, hopefully, I will be able to do even more. That’s the race. Everyone, from OEMs to Tier-1s to some new people coming in from the IC side, is trying to drive this market. We still have a long way to go. We’re at level 2. And all the sensors, from LiDAR to radar, mean you will need a supercomputer in the trunk. Over time, this will get smaller and more cost-effective. I’m confident we’ll get there. A lot of people are working on it. We’re making a lot of progress.

SE: Everything is up for grabs in this market, though, right? You aren’t assured of a place in it, even if you’re in a leadership position today.

Tan: Yes, but that’s pretty much like any industry, and it’s what’s exciting about innovation. The business models will change.

SE: One of the issues in automotive is the electrification of cars. Just saying everything is going to be electric doesn’t seem like a rational approach, because you can’t just run a 440-volt line everywhere there’s a gas station today. So how do we get there? Do you see a mix of fuels?

Tan: There will be both. There will be a lot of charging stations for quick charging. But in some other places you can’t have that many charging stations, so that’s going to be a challenge. If you have a hybrid car and an electric car, you will probably count the miles to the next charging station. If you can’t get there, you take the hybrid car. But hopefully the battery range will be extended, so that instead of 250 miles you’ll be able to go 350 miles on a charge.

SE: To make this autonomous, we’re going to need very sophisticated electronics. Most supercomputers are very expensive. Can we shrink that much capability into an affordable device?

Tan: Yes. A cell phone today was a supercomputer a few years ago. There are a lot of startup companies looking for funding that are developing innovative, power-efficient designs.

SE: In another vertical slot—or in this case, a number of verticals—the industrial IoT is thriving, but not with any consistency. There are a lot of one-off solutions. How does that fit into the chip design world?

Tan: Right now there is a lot of focus on vertical industries, so you can collect data and process it in a very cost-effective way. This is where machine learning comes into play. It plays across a lot of markets, and a lot of machine learning is about the compiler technology. Whoever has the best software will win, which is why companies like Google are so aggressive in this area.

SE: How does EDA fit in? Can you take the tooling and IP and apply it?

Tan: Yes. We are using machine learning and deep learning on the inferencing side to help our customers. That’s showing the best way to implement technology, and our customers are embracing it—particularly the Tier 1 and leading customers. They embrace it because we can show them how quickly EDA will help them develop technology for their applications.



