From Physics To Applications

eSilicon’s CEO zeroes in on the impact of AI and advanced packaging on ASIC design.

Jack Harding, president and CEO of eSilicon, sat down with Semiconductor Engineering to talk about the shift toward AI and advanced packaging, and the growing opportunities at 7nm at a time when Moore’s Law has begun slowing down. What follows are excerpts of that conversation.

SE: Over the past year, the industry has changed its focus from shrinking features and consolidation to all sorts of new applications. How has that affected your world?

Harding: Yes, it’s been a healthy shift from physics to applications. We used to talk about going from 16nm to 7nm and all the way down to 1nm. Now we’re talking about AI chips, which are more regular than networking chips. There are fewer unique blocks, so they’re easier to build. The power requirements are there, but they’re a little more forgiving. We’re seeing a class of chips on the horizon that are difficult to architect, but easier to lay out and manufacture and test. You still need state-of-the-art technology, but the discussion is more about the data architect and the neural network architect and their needs versus just taking a network chip and grinding it down into another version of Moore’s Law.

SE: Are there more elements scattered around a chip, as well? So instead of one type of processor, you may have multiple processors.

Harding: We haven’t seen a lot of that yet. We do see one very large die with three or four flavors of processors repeated. These are reticle-size die, surrounded by four HBM stacks eight-high. I saw an architecture a few weeks ago that was one die with six stacks of HBM across the top and six stacks of HBM across the bottom for a total of 96 gigabytes of memory on one module.
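(For reference, the arithmetic on that second example: six HBM stacks across the top plus six across the bottom is 12 stacks, so 96 gigabytes works out to 8 GB per stack, which would be consistent with eight-high HBM2 stacks.)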

SE: Is this for the server side?

Harding: Yes, these are bound for the data center. The insatiable appetite for machine learning performance is driving packaging to limits that are well beyond the complexity of the die.

SE: Where else are you seeing market shifts?

Harding: We’re getting a lot of traction around 7nm IP. Seven or eight years ago we de-emphasized internal IP, and our value-add was letting the customer select the IP. As we moved into the finFET world, we saw that integrating third-party IP was becoming more of a challenge, so we realized we had to swing back in the other direction. A lot of people are integrating third-party IP like HBM, TCAMs and even SerDes. We now have all of that built in-house, on the same metal stack and with the same test structures. The number of design starts is shrinking, and in particular the number we want to work on is shrinking, because the volume is so low and many of these are science experiments. People are looking at a $10 million mask set and $150 million tied up in the RTL. If you have something that will save a re-spin, or six months of testing on the back end, that generates a huge amount of interest. The number of ASIC suppliers is shrinking, and the number of companies with a full IP portfolio that is predictable and testable is down to about two.

SE: What is the market looking like for 7nm?

Harding: There are lots of 7nm tapeouts, but there is no 7nm volume other than the obvious guys. You can count on one hand the volume players at 7nm. There are another 100 companies that will tape out, but at least half of those are science experiments.

SE: Any idea what those chips are being used for? Is it AI and machine learning?

Harding: Yes, and the informal number of AI startups is pushing 70. The vast majority of them intend to go to 7nm because they need that density to handle the data flow, but the vast majority of those will disappear or be acquired. This reminds me of the early 1980s when there were about 200 EDA companies, and 10 years later there were 50, and 10 years after that there were 5.

SE: There’s also a big push into advanced packaging. What’s happening there?

Harding: The class of chip we’re making now is impossible without an R&D-level knowledge of the packaging technology. We got into 2.5D packaging about five years ago, when one of our customers asked us to do joint R&D with them.

SE: And everyone thought it was going to be simple, but it didn’t turn out that way, right?

Harding: Correct. It’s never been simple. We’ve done about 10 test chips in 2.5D. We have three in production and two more about to tape out.

SE: Where were the surprises?

Harding: Until you make one, you don’t realize the challenges. With some of the modules we’ve made in the past, we’ve had to work through things like warpage issues. The diagonal distance is such that the physics change due to thermal considerations and other factors across the module itself.

SE: Are these mostly for the networking space?

Harding: Yes, using a broader definition of networking. We have 2.5D working in multiple generations. We’ve solved issues like warpage, signal integrity, testability, yield, issues around the interposer and other parts of the module. Making the die is more straightforward than getting it into the module and making sure you can make tens of thousands at a time.

SE: Any plans to move into chiplets?

Harding: We’re not making chiplets today, but we have looked at it. My personal view is that it’s an inevitable part of module development and, more broadly, chip development.

SE: This is basically subsystems and hardened IP, right?

Harding: The obvious thing would be to use a SerDes, which is one of the higher-risk parts of the module. If you understand it and it’s predictable, you don’t necessarily have to take it to the next process node. There’s nothing wrong with having the SerDes chiplet on the module running at whatever performance you need. A 28G SerDes is a 28G SerDes, whether you do it in 3nm or 300nm. There are a large number of candidates where chiplets will make sense. Once you decide you’re going to put multiple things on one module, it’s not any harder to add another thing. It’s just a question of area, power and performance. There is a lot of reuse opportunity in chiplets that will become more evident, rather than going to 7nm and 5nm. That’s especially true for analog, which doesn’t scale.

SE: The first versions of advanced packaging were all done at the same node, even though the initial driver was the ability to mix and match die from different nodes. Is that going to change?

Harding: You’ll see a lot of IP hardened into different form factors that will reduce the amount of expensive area at smaller nodes.

SE: How about full 3D stacked die? Will it happen?

Harding: We believe it will. We have initiatives underway right now to do that. We don’t make one thing to sell to many people. Whenever someone needs an ASIC, by definition they’re giving us a set of requirements that causes us to stretch down one axis or another. We’ve seen several architectures from our customers that are ideal for 3D. We’re committed to delivering that capability.

SE: Is this a result of the slowdown in Moore’s Law?

Harding: In the last 36 months we’ve heard from many executives who say they are tired of funding Moore’s Law evolution. They’ve told us they’re going to put more pressure on their designers to improve those designs through architecture. That’s why the chiplet has a future. People are saying they’re not going to spend $30 million on a mask set and then do two re-spins for another $30 million. The next few nodes will be for a handful of chips that can drive the volume. If you believe the numbers around machine learning, that might be a market for the next technology node.

SE: Machine learning, deep learning and AI are interesting because they can be applied horizontally across a number of vertical markets.

Harding: Yes, and that’s the bigger point. Machine learning will grow across multiple areas, including areas we can’t anticipate today. The breakthroughs on the horizon in that space will outpace the expectations we had for the IoT, where there was going to be a chip in every toothbrush and every other device. We’ll start to see an insatiable appetite for artificial intelligence, particularly once it gets rolled into robotics or autonomous anything, not just cars.

SE: How does that change design?

Harding: In the past, if you were developing a networking chip, the architect of that chip had a very good working knowledge of semiconductors. Today, we’re talking to data scientists who know little to nothing about semiconductors. Oftentimes, the architectures they hand us are unrealistic. We go back to them and say, ‘Do you realize this chip will be five inches square and 1,000 watts?’ The data scientists get frustrated because they know what they need to solve their system problem, but it can’t be done in semiconductors. They’re also constantly changing their minds about what they want. In the past couple of quarters they’ve come to an understanding that there’s a box they have to operate within. The physics have limitations. Our response was to announce the neuASIC platform. We’re now building mega-cells that can map into all of the elements you find within the Nvidia software environment, and we’re hardening them. We have a library where you can call them up, and they will perform convolutions and multiply/accumulate functions. So the architects can build a chip that maps into the Tensor software they’ve been using. We have machine learning predictive software that tells us the power, performance and area, and shows these architects where they’ve exceeded the process technology and where they have headroom.
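As a purely illustrative sketch of the kind of flow Harding describes, the snippet below shows how an architect might map network layers onto a catalog of hardened mega-cells and check the resulting power, performance and area against a process budget. The library, cell names and numbers are hypothetical and do not represent eSilicon’s actual neuASIC API.

```python
# Hypothetical sketch only; the catalog, cell names and PPA numbers are
# illustrative and do not represent eSilicon's actual neuASIC platform.

from dataclasses import dataclass

@dataclass
class Ppa:
    area_mm2: float
    power_w: float
    freq_ghz: float

# A toy catalog of hardened mega-cells, keyed by the function they implement.
MEGA_CELLS = {
    "conv2d_3x3": Ppa(area_mm2=0.8, power_w=0.5, freq_ghz=1.2),
    "mac_array_256": Ppa(area_mm2=1.5, power_w=1.1, freq_ghz=1.0),
}

def map_layers(plan: dict[str, int]) -> Ppa:
    """Aggregate the PPA estimate for a plan of {mega-cell name: instance count}."""
    area = sum(MEGA_CELLS[name].area_mm2 * n for name, n in plan.items())
    power = sum(MEGA_CELLS[name].power_w * n for name, n in plan.items())
    freq = min(MEGA_CELLS[name].freq_ghz for name in plan)  # slowest tile limits clock
    return Ppa(area_mm2=area, power_w=power, freq_ghz=freq)

# An architect's candidate mapping of a network onto hardened tiles.
design = map_layers({"conv2d_3x3": 128, "mac_array_256": 64})

# Illustrative reticle-size and thermal budgets for the module.
AREA_BUDGET_MM2, POWER_BUDGET_W = 800.0, 300.0

print(f"Estimated: {design.area_mm2:.0f} mm^2, {design.power_w:.0f} W at {design.freq_ghz} GHz")
if design.area_mm2 > AREA_BUDGET_MM2 or design.power_w > POWER_BUDGET_W:
    print("Exceeds process/package headroom; revisit the architecture.")
else:
    print("Within budget; headroom remains.")
```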

SE: There has been a lot of talk about replacing GPUs and FPGAs with ASICs, but how do you keep up with the changes in the algorithms?

Harding: You need programmability in these mega-cells or tiles, which allows people to adjust their architectures as needed. We’re not trying to compete head-on with Nvidia or AMD. But we are saying that they’re using things in software that are in the public domain, and we can make those 100 times more efficient in terms of performance or area. We’re getting a lot of good feedback from the data science community. They do want to get away from off-the-shelf GPUs, but they still want the flexibility of open-source software. We’re trying to navigate between those two worlds, and we’re getting some good traction there.

SE: Let’s swap direction here. Where does eSilicon play in the automotive world?

Harding: We supply a couple of large companies that serve the automotive world. It takes a long time to get in, but once you’re in, you’re there forever. We’re still selling chips we built 10 years ago.

SE: And you have to support them for a very long time, right?

Harding: Yes, our agreements are for 20 years of support. You also need to meet a higher standard of automotive testing, which we did about a decade ago, so we’re a qualified automotive shop. We’ve largely steered away from this market because of the long lead times and significant competition, but we do see machine learning opportunities. The kind of company that makes automotive components today has not been making 3 billion networking chips. When those modules show up in the car, the business shifts away from the guys who have been making a million of those chips a year, successfully and competently, to those of us who can make the one-off game-changer for a parts supplier or an automotive company.

SE: What do you see as your big opportunity over the next 12 months?

Harding: The 7nm development platform, and in parallel with that, our machine learning ASIC platform. The machine learning platform uses everything in the 7nm platform, plus all of these mega-cells for machine learning. More broadly, we believe it’s going to tremendously improve time to market and reduce risk. We will allow companies to use pieces of it and put it together themselves, or we can put it together for them. That’s a more realistic way to go to market these days. There are 100,000 variables, but it only takes one to mess things up. Realistically, a lot of companies want to get comfortable with the technology before they bet on us for the functional design. Think about a platform that includes 20 years of institutional knowledge from hundreds of test chips.

SE: One last question. Are you getting to the point where you see a way forward for mass customization?

Harding: Yes. We’re talking to a company now that needs HBM technology that is 20% faster than what we have off the shelf, and they want to change the orientation from north-south to east-west and add some other bells and whistles. We’ve been customizing IP here for a long time, and we consider that core to our strategy. Most IP companies are not willing to customize. For jelly-bean stuff, that strategy makes sense. But for IP that differentiates you, customization is more difficult, and that’s where the big payback is.


