CEO Outlook: Rising Costs, Chiplets, And A Trade War

Experts at the Table, Part 2: Opinions vary on China’s technology independence and its ability to develop key technology internally.


Semiconductor Engineering sat down to discuss what’s changing across the semiconductor industry with Wally Rhines, CEO emeritus at Mentor, a Siemens Business; Jack Harding, president and CEO of eSilicon; John Kibarian, president and CEO of PDF Solutions; and John Chong, vice president of product and business development for Kionix. What follows are excerpts of that discussion, which was held in front of a live audience at the ESD Alliance. To view part one, click here.


L-R: Wally Rhines, John Chong, John Kibarian, Jack Harding. Photo: Paul Cohen/ESD Alliance

SE: Will costs continue to rise at leading-edge process nodes, or will they subside over time to the point where more companies can utilize those nodes?

Rhines: We’ve always had forecasts of enormously escalating costs where we expected only a half-dozen companies would fill a reticle for 7nm design. And then we ended up with hundreds. So how did this happen? One thing is that it doesn’t cost as much as we thought it would. Another reason is that a lot of the cost increases are costs that used to be in other places. So now you’re asking the chip design team to do the system engineering, verify the embedded software, maybe even write the embedded software and do all the things you used to do for the whole system. And yes, that is more costly than laying out some transistors and simulating it. So it’s not an apples-to-apples comparison. But despite the cost of doing new computer architectures at leading-edge designs, in 2017 and 2018 the amount of venture funding going into fabless startups has soared. We now have $2 billion a year just going into artificial intelligence chips, and we have people doing very complex architectures. And they’re doing it on a shoestring. These companies are financed well, but they’re not financed in hundreds of millions of dollars. Up through the first few rounds, it’s $10 million to $20 million. We just had an explosion of new companies, and one of the reasons is that we’re automating more and more things. The people doing those AI chips are using high-level synthesis for the datapath. They can’t possibly go in and tweak RTL to get an algorithm to work. They’re writing in C++ and they compile it. And you can look at the publications from Google, Amazon, and everything Nvidia does. The differentiation in the datapath is done in a high-level language. You can do 500 to 1,000 times as much simulation as you could with RTL. So it’s a lot cheaper, you find bugs more easily, and that will continue to happen. We’ll continue to find ways to design more complex chips with less and less effort.
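The high-level synthesis workflow Rhines describes can be sketched in a few lines. This is a hypothetical illustration (the `mac` kernel, its names, and its bit widths are invented for the example, not taken from any company mentioned): the datapath is written as ordinary C++, simulated natively far faster than RTL, and then handed to an HLS tool that compiles it to hardware.

```cpp
#include <array>
#include <cstdint>

// Hypothetical HLS-style datapath kernel: a multiply-accumulate
// (the core operation of many AI accelerators), written in plain C++.
// An HLS tool would typically unroll or pipeline the loop when
// generating hardware; in simulation it simply runs as software.
template <std::size_t N>
int32_t mac(const std::array<int8_t, N>& a, const std::array<int8_t, N>& b) {
    int32_t acc = 0;
    for (std::size_t i = 0; i < N; ++i) {
        // Widen to 32 bits before multiplying so the accumulator
        // cannot overflow for realistic vector lengths.
        acc += static_cast<int32_t>(a[i]) * static_cast<int32_t>(b[i]);
    }
    return acc;
}
```

Because the same C++ source serves as both the simulation model and the synthesis input, a bug found in fast software simulation never has to be re-found in slow RTL simulation, which is where the claimed 500x to 1,000x productivity gain comes from.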

Chong: Tools and capabilities are making it more accessible to a wider range of people. But also, there’s increasing demand because people want custom stuff. Just taking AI and machine learning, that was done with microprocessors and graphics chips. But now people are not getting the performance they need, so they’re opening up the constraints to design something custom. So there’s a big influx of people due to both the accessibility and the capability. It’s being made available to a wider range of people, but there’s also increasing demand to do stuff that is different from what is generally available in the mainstream market.

Rhines: If you’re telling your people, ‘You can’t afford to do the most expensive process, find something else,’ the something else you can do is find a new architecture so that you do domain-specific chips. The chip may do a limited number of things very, very well, and as a result you can build a system that is higher-performance, lower-cost, and has all the benefits you would have had trying to shrink a general-purpose processor to do the same thing.

Chong: The business model is a little different, too. Five years ago, hardware was out. No one wanted to invest in it. It took too long, and everything was about software. Now, the pendulum has swung back. Hardware is in, and it’s not because you’re making money on the hardware. The hardware is the enabler.

Harding: The fundamental trend that drives that phenomenon is specialization versus general-purpose. Using a GPU from Nvidia for an ML application is about 84% inefficient. You waste 84% of that part. If you’re deploying millions and millions of graphics processors at Google, you’ve got a pretty big incentive to go build a TPU instead of buying a GPU from Nvidia. That’s true across the board. If you’re a Cisco, you can buy a Tomahawk 3 or 4 network processor for your router or switch and then use the same processor that Broadcom will sell to your white-box competitors at a 20% gross margin. But if you’re Cisco, you want 50% gross margin. The only way to have that advantage in the market is to customize hardware. You can argue it’s a long time to revenue and it’s expensive, but if you’re concerned about differentiation or power management in the data center, or just the efficiency of how many machine learning parts you can deploy, you literally have no choice but to make your own chip. That’s why we see Facebook now has their own development teams. The payback is enormous. We’re going to continue to see that. From an ASIC perspective, we saw a gradual softening in the business from 2014 to 2017, but now it’s billions of dollars up and to the right. It has rejuvenated the entire marketplace. Everyone wants their own chip, but they want to do it under the ASIC model.

SE: Where does the chiplet model fit in?

Harding: Every chip we make has a high-performance SerDes in it. Development of SerDes is very expensive. If you have a 56 Gbps SerDes, you really don’t care what process it was developed in, and there’s no reason to chew up a bunch of 7nm or 5nm area with something that is process-indifferent. If it works, it works. So in our case, I see us moving to chiplets for anything that is heavily analog or which doesn’t scale particularly well, or where there is no application benefit to integrating it with RTL at the smaller process node.

Kibarian: The reason that will happen is that scaling isn’t very effective as you go from 7nm to 5nm to 3nm. You want to get more compute for that CPU, and you’re limited to an 8 cm² reticle. So you start having to kick everything else off the silicon just to give yourself a way to double the number of processing elements. What do you not need to move forward? What’s easier to leave just where it is? The chiplet will happen for a couple of reasons. There is the cost reason. But there is a risk reason, too. So you don’t move the SerDes forward. You’re also delivering more benefit at the architecture and system level, because it gives you more area on the leading edge that can be used to differentiate that hardware. That is the biggest driver of it all. If you’re trying to get something that’s unique, you want every piece of that leading-edge silicon for differentiation.

Harding: The development and bring-up and reliability of the SerDes is probably the longest pole in the networking chip. If you can use the same one you had in the previous generation and not have to tend to that, 15% to 20% of your cost would go away, but it probably would be 50% of your risk. There’s a lot of pressure to develop a chiplet strategy, and once people start to get comfortable with it you’ll see it for lesser technologies with lower ROI. But the process for the integration will be so well known that you’ll ask, ‘Why not?’

Kibarian: The number of customers doing a full 8 cm² chip and using it all used to be the network processing guys and the highest-end server guys. That was basically it. Now we have startups doing that for their first chip. There’s a lot of innovation there and they can make it work. But if you don’t put everything else on a chiplet as you go from 7nm to 5nm, you’re not going to get a doubling.

Harding: There’s also a growing percentage of SRAM in this class of chip. If you’re doing a machine learning application, the available memory is critical to the application. So if you have a choice of off-chip communication between a chip and a chiplet, regardless of whatever degradation problems you might experience, you’ll make it up 10 times over by having more available memory on the die to run the machine learning application.

Chong: This mirrors a lot of what we’ve been going through (with MEMS sensors) to make things more modular. We can mix and match some MEMS with ASICs to create new specialized products by changing only one part of a design. We have chiplets both because some of the technologies are incompatible and because the modularity helps us bring products out more quickly with fewer architectures.

Harding: Your architectures are perfect for that. You’ve been making chiplets for your entire existence.

Chong: It’s a Holy Grail for us to bring the two together, because a monolithic system would be much lower cost, but the tradeoffs in design freedom—you’re trying to cram two different outcomes into the same technology—never made it worth it. Most companies today use separate sensor elements and compute elements.

SE: Let’s shift direction. What’s going on with China? What is the potential impact of an ongoing trade war?

Rhines: There are a number of facts about China that we need to accept. They have more people than we do. It takes a large population to cost-effectively support 5G. You really need cities of 10 million people. We don’t have many of those in the rest of the world, and they have a dozen of them. They’re going to do a bigger 5G rollout sooner than we are. If you’re doing artificial intelligence for medical diagnostics, and you have 1.4 billion people who are forced to provide complete medical data to your database, you’re going to come up with second- and third-order effects we’re never going to see. So there are certain things where there was more data flow and technology flow back and forth. There are certainly other issues here, too. One of the issues we’re not addressing is the investment being made in China. They had a $20 billion government investment in 2014, matched 5:1 by private equity. They just had another injection of $47 billion, that’s going to be matched. This is far, far beyond what any governmental investment in the U.S. will provide. One of the reasons is that the ZTE problems really brought to light the dependency of China on U.S. technology, and more recent actions have reinforced that. So that train is not going to turn around. The Chinese are going to do everything they possibly can to be autonomous and independent of the United States as a supplier of chips. We’re going to be the vendor of last resort, and that’s a problem. That’s something that we may be able to overcome with better trade relations.

Harding: The semiconductor industry is about $500 billion a year, which is about half the size of WalMart. About 10% to 15% of the revenue is derived by shipping to China. ZTE is our largest customer, so last summer when the ban was in place it was very painful for us. We were working on a very large chip for them, and decided to keep working in the dark all summer until the ban was lifted. I went over to China and met the new board of directors and the 35 new managers and they looked at me and said, ‘Who are you?’ I said I’m the guy who has your new 5G chip ready to tape out. It was strange, and all of that was an artifact of the anger they felt for the U.S. government threatening the very existence of their corporation. Fortunately, there were enough people who still had their jobs and knew us. In the newspaper they called it their Sputnik moment. They had this awakening that they were completely vulnerable and that would never happen again. We’re on a slippery slope toward being designed out of the China marketplace, notwithstanding the huge demand and their technical reliance upon us. I was at a Morgan Stanley conference and I heard two extremes. One is we’re going to get designed out and we’re in trouble. The other extreme is, notwithstanding anything they want to do, they’ll be reliant on us for 15 to 20 years. If you look at Huawei’s router and switch technology, they’re excellent because they go to the carriers and say, ‘What do you want?’ The carriers give them the spec. But they do almost no business at the enterprise level because it’s incumbent upon them to come up with the spec to sell to all the companies, and they don’t have the innovation skills. Their culture and corporate bureaucracy does not allow for that kind of R&D.

Kibarian: We do a lot of business with China. We saw the same thing with investment in semiconductors, and if you had polled them a year ago, a lot of money was going into factories and not a lot was coming out. When you talked to the ZTEs and Huaweis of the world, they were not super excited about the potential of using in-country manufacturing compared with what is available in Taiwan or Korea. ZTE was a Sputnik moment. They realize they are exposed and can’t let this happen again, so now they need to invest. But on the other side, we are an industry that has had an integrated supply chain dating back at least to the 1980s. It was Japan first, and then Taiwan and Korea. It’s set up around an open market where we all interact. I don’t think any country will ever get to be independent from the rest of the supply chain, but you can get to a level of parity across the supply chain. Now there is the ability to shut people off without a lot of recourse. Eventually you get to the economic equivalent of mutually assured destruction. It will have to get there, because people can’t unwind. It will never go back to the way it was before, but it also will never get to the point where everyone is walled off and doing their own thing.

Chong: When you look at software, with Alibaba compared to Amazon, or WeChat, they have been able to duplicate what the rest of the world and the U.S. have brought out and globalized.

Kibarian: But no one can do 5nm without an ASML EUV scanner. There is only one place in the world for that technology. They couldn’t even make that thing work for 10 to 15 years. It’s been late for at least 10 years. It’s not a short path.

Related Stories
CEO Outlook: It Gets Much Harder From Here
Part 1: As power/performance benefits shrink at each new node, engineers are turning to different chip architectures and new materials.
New Design Approaches At 7/5nm
Smaller features and AI are creating system-level issues, but traditional ways of solving these problems don’t always work.
Chiplet Momentum Builds, Despite Tradeoffs
Pre-characterized tiles can move Moore’s Law forward, but it’s not as easy as it looks.
Getting Down To Business On Chiplets
Consortiums seek ways to ensure interoperability of hardened IP as way of cutting costs, time-to-market, but it’s not going to be easy.


