Bespoke Silicon Redefines Custom ASICs

A fundamental shift in the economics of processing and new use cases are making ASICs cool again.


Semiconductor Engineering sat down to discuss bespoke silicon and what’s driving that customization with Kam Kittrell, vice president of product management in the Digital & Signoff group at Cadence; Rupert Baines, chief marketing officer at Codasip; Kevin McDermott, vice president of marketing at Imperas; Mo Faisal, CEO of Movellus; Ankur Gupta, vice president and general manager of Siemens EDA’s Tessent Division; and Marc Swinnen, director of product marketing at Ansys.

(Top left to right) Kam Kittrell, Cadence; Rupert Baines, Codasip; Kevin McDermott, Imperas; (lower, left to right) Mo Faisal, Movellus; Ankur Gupta, Siemens EDA; and Marc Swinnen, Ansys.

SE: The term ‘bespoke silicon’ is becoming more popular. Is there an agreed-upon definition?

McDermott: The idea of bespoke silicon started off with ASICs. When my career started, we were designing custom silicon because we just wanted the right thing to do the right job at the right time. And we wanted it our way, so doing an ASIC was a natural choice. At that time, 100,000 gates was a complex design, the NRE fee was not too bad, and volume was modest.

As process costs changed, ASICs seemed to fall out of favor. If you’d asked this same question five years ago, no one was doing custom designs. Then along came RISC-V. You can adapt processing, you can do things your way. It’s just the right fit. At the same time, Moore’s Law seems to be tapering off, which means processes don’t give you the advantage they once did. It costs a lot more and you get less for the buck. People are starting to realize that a one-size-fits-all approach doesn’t quite work. You’re giving up margin in multiple areas. These designs have hundreds of cores. Why aren’t they all doing just what they want in the right space, at the right time, for the right thing? Doing the right job, power efficiently, area efficiently, on time is a no-brainer. Custom silicon is back and hardware is cool.

Kittrell: My definition of bespoke silicon is an application-specific integrated circuit, up-marketed as the Neiman Marcus of ASICs. From what I can tell, it’s a renaming of custom ASICs. Bespoke silicon has brought back the ASIC business. We had a lot of friends doing ASIC back-end place-and-route. We saw them nearly go down, like the IBM team. They were sold to GF, and then they were sold to Marvell. Now they’ve got so many projects that they don’t know what to do. A lot of what’s driving this is hyperscaler computing. They’re making their own silicon, or customizing their own silicon, because they want to replicate these cards into the machine, put 10,000 machines in a data center, and put 10,000 data centers on the planet, so they can deliver value-added software to their customers. It’s a change in economics that has brought the ASIC back.

Baines: I agree with all those definitions. All things are cycles, and the pendulum swings. Twenty years ago, ASICs were very ‘in,’ and systems companies were vertically integrated and had their own silicon teams. Then there’s this dynamic called Makimoto’s Wave. The wave swings the other way, everyone moves to standard silicon, and the buzzword is not ASIC. It’s COTS (commercial off-the-shelf), and everyone uses standard silicon. Now the pendulum has swung back, and hyperscalers, data centers, automotive companies, and 5G base station companies are all looking to do their own custom circuits. It’s the nature of the economics.

Swinnen: My take is a little different. Companies have made ASICs for a long time. But there’s a qualitative difference, too, when we speak about bespoke silicon. We feel there’s something different, and we’re struggling to pin down what that is. It’s not so much the technology as the perspective of the companies. Silicon used to be a chip somewhere on the board, somewhere in the machine. It wasn’t central to the company’s business. But silicon today has become so powerful, so big, and so important that the nature and quality of that silicon moves the needle for entire business divisions. It’s at that point that customers say they need to control the silicon to be able to control their business. That is what’s driving bespoke silicon. It’s less a technology change than a business one, and it’s driven by things like 3D-IC and AI/ML, which have made these systems so powerful that entire business units stand or fall on the quality of the silicon. And hence, they take control of it.

Faisal: Today, billions of people are watching YouTube, WhatsApping, Instagramming, and everything else. These workloads are very different from the things we were doing 10 years ago. With that in mind, companies like Google and Netflix realized that even a 1% improvement in silicon directly impacts their cost. One percent of Google’s cost is hundreds and hundreds of millions of dollars. You can build many teams to save that 1%, and that’s what’s going on. It’s full optimization, all the way to the end-user behavior, then bringing it back down to how much energy is being consumed. You want to optimize the whole thing. Silicon is a really, really important piece of that.
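Faisal’s point is easy to check with back-of-the-envelope arithmetic. In the sketch below, the annual fleet cost is an invented placeholder rather than a reported figure; only the scaling argument matters.

```python
# A back-of-the-envelope version of the "1% matters" argument.
# Every figure here is an illustrative assumption, not a reported number.

annual_fleet_cost_usd = 30e9      # assumed yearly spend on compute infrastructure
silicon_improvement = 0.01        # a 1% efficiency gain from better silicon

annual_savings = annual_fleet_cost_usd * silicon_improvement
print(f"Savings from a 1% gain: ${annual_savings / 1e6:,.0f}M per year")
# With these assumed numbers: $300M per year, which comfortably funds several design teams.
```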

Gupta: Tying back to the ASIC, everybody’s already defined bespoke silicon as custom hardware for specific, repetitive workloads. One example is the TPU. We all know about the Tensor Processing Unit from Google. Why did they decide to build bespoke silicon for the TPU? They were faced with a challenge, which is that they had a lot of machine learning going on in their data centers. They looked at the volume of, let’s say, YouTube uploads. What’s going to happen in the next five years? If you do nothing, and just use general-purpose GPUs as accelerators, what does it mean to Google’s business? It means multiple data centers, many times over and above today’s capacity. How much is that going to cost? How much waste is in those data centers based on today’s general-purpose GPUs? When you start doing the math that way, I’m going to get perf per watt for general-purpose GPUs, but my perf per dollar doesn’t look so good anymore. Now, they put on their thinking cap and say, ‘I know these workloads. I can design better silicon — an accelerator that is designed for this specific workload.’ And in comes the TPU. Now you’re talking economics, back to the point that some of my fellow panelists made. Microsoft’s holographic processing unit for the HoloLens is the same exact idea. General-purpose GPUs are not cost-effective.
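The perf-per-watt versus perf-per-dollar math Gupta describes can be sketched in a few lines. Every throughput, power, and cost number below is an assumed placeholder, not real GPU or TPU data; the point is only the shape of the comparison.

```python
# Illustrative perf/W vs. perf/$ comparison between a general-purpose GPU
# and a hypothetical workload-specific accelerator. All numbers are made up.

def perf_per_watt(throughput, power_w):
    return throughput / power_w

def perf_per_dollar(throughput, unit_cost, power_w,
                    energy_cost_per_kwh=0.08, lifetime_hours=3 * 365 * 24):
    # Total cost of ownership = purchase price + lifetime energy cost.
    energy_cost = power_w / 1000 * lifetime_hours * energy_cost_per_kwh
    return throughput / (unit_cost + energy_cost)

gpu  = dict(throughput=1.0, power_w=300, unit_cost=10_000)  # normalized throughput
asic = dict(throughput=1.5, power_w=150, unit_cost=6_000)   # assumed bespoke part

for name, p in [("General-purpose GPU", gpu), ("Bespoke accelerator", asic)]:
    print(f"{name}: "
          f"perf/W = {perf_per_watt(p['throughput'], p['power_w']):.4f}, "
          f"perf/$ = {perf_per_dollar(p['throughput'], p['unit_cost'], p['power_w']):.6f}")
```

With these placeholder figures the accelerator wins on both metrics, but the decisive one at fleet scale is perf per dollar, which folds in the energy bill over the hardware’s deployed lifetime.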

SE: In the chip industry, we’ve talked for quite some time about software driving silicon, and driving all the applications. Now we’re talking about focusing on the hardware to enable the software. How do you see the balance shifting between hardware and software?

Baines: Marc Andreessen was famously quoted as saying software will eat the world. What is less appreciated is that it’s only a partial quote. He actually said for the next decade, software will eat the world. And he said that in 2011. So to your point, yes, the software is essential and critical. But as others were saying, the workloads have changed. The tasks have changed. Hardware is changing to enable software to run more efficiently.

Faisal: For the whole of the electronics industry, thanks to software, our thought processes are changing. We’re designing fewer chips for consumers of chips. It’s more about designing chips that are going to make an impact on human behavior. Therefore, when you start talking about human behavior and intelligence, and now AI chips, how many AI chips can we make? Can I build an AI chip that’s going to do ‘YouTube Live’ versus ‘YouTube just upload’? Those latency differences are important because they’re directly connected to humans now. So it’s actually human behavior and intelligence and AI — all of that pressure is coming back to silicon right now, and that’s how we’re experiencing it. I can’t think of a better time to be in this industry.

SE: There are a lot of overlapping interdependencies that we haven’t really seen before. Things are coming together in different ways. How does the bespoke approach, with different demands from the customer base, change how we do EDA development and the interaction with the customers? Who are your customers today and who are they going to be in 5 and 10 years?

Swinnen: A lot of bespoke silicon is coming from traditional systems companies, or even software companies like Google and Facebook, and they have a few common characteristics. First, they’re new to EDA, so they buy complete suites and start from scratch, and they’re often cloud-native from the get-go rather than as an add-on afterward. Second, they have a lot of money, and their businesses are so dependent on this, as I mentioned, that they’re willing to throw a lot of money at solving this problem. They want the best, the fastest, the biggest. They’re somewhat hampered by the fact that they don’t necessarily have large, experienced teams to implement this, but they’re ramping up. We’ve always known in EDA that the top 20% of your customers drive the business, and they’re often the high-end users with big volume. This sort of increases that. Bespoke silicon people have a lot of money, a lot of demands, a lot of needs, and they’re driving EDA to bigger capacities. We were talking about chips or 3D-IC systems. A lot of them are huge, and the reason they’re so important is because they’re so big and powerful. So it all feeds back on itself. They know they need high capacity to run this. They have high demands, with real-time analysis, and they are driving speed, capacity, and capabilities in full. They want to use the latest process technologies. You see the foundries catering to these large customers by making customized versions of a silicon process specifically for them. So yes, they’re definitely having an impact across the industry, driving more of what they need.

Gupta: I agree 100%. For the systems customers, who are the users? Today, they’re more systems architects than just chip designers. Five years ago, we were looking at chip architects who were faced with designing chips at reticle limits. How do you do that? It’s actually general-purpose GPUs and whatnot. Today what we see is systems architects who have a slightly different problem statement. Somebody is designing the hardware at the Googles and Amazons of the world. The systems architect’s job is to make sure the hardware lands on the fleet. If the hardware is just sitting there in the server rack but is not used, does it even exist? Now the systems architect is in charge of the software, too. So they are the ones who are driving today’s EDA in the sense that they’re designing the software stack — drivers, firmware. And then, within the next five years, they also will be looking at tools for things like debug, trace, and monitoring.

Baines: The point is that there are two sides to it. There’s the reason these new architectures exist, which is because there are new approaches and new software stacks, and that’s why you need a new architecture. But you can turn that around and say, ‘Because we’ve got a new architecture, we can therefore do this software, and do these analytics.’ And it’s not just because we use the chip to do the analytics for something else. We can say we can use analytics on the chip, and so there are ways of doing better debugging, ways of doing modeling. There’s AI in the design process, there’s AI in the verification, and it’s almost a circular economy. With these new chips, you need the better tools to design them. And when you’ve got the new chips, they enable better tools.

Swinnen: The GPU is an example of that. It originally was designed for graphics processing, and then they realized they could use it for other things.

McDermott: That’s one of the tradeoffs of this hardware/software co-development. You can’t design two things simultaneously because they’re interdependent. So there’s been this natural sequence where the best test case you’ve got for the latest-generation chip, the best benchmark you’ve got, is last year’s software. There’s this sort of evolution, where one innovates the other. The change that’s happened now with AI is that these system companies have these free resources in the cloud. They’ve already bridged into multicore. We talked about multicore 10 or 15 years ago, going from two to hundreds of cores. So now they’ve got these algorithms running on a virtual platform in the cloud. They are fine-tuning with real datasets. They’ve got real use cases, but no silicon. So now this is the architect’s dream, to go and engineer a chip to solve that problem. We’ve never had that before. Whenever we were coming up with an architecture for a new chip, we’d mock up these virtual platforms. Ask them to give us a benchmark? Are you kidding me? It wasn’t going to work. Now we’ve got real datasets. They actually can run them on these virtual platforms, and it’s like hand in glove, where you can do this fine-tuning optimization. This systems view is changing the way we’re approaching silicon design.

Kittrell: The idea is that they’re taking what was running in software, maybe power-hungry, maybe slower, and pushing it down into hardware, where it’s faster and lower power. This has created a boom in the emulation and FPGA prototyping market. These guys are buying huge systems, because once they get their architecture together they want to run the real software to see what the power profile is going to be, and get that fine-tuned before they start doing the implementation. Emulation always has been a big part of doing advanced chips, but now it’s an entry-point must-have for one of these teams.

SE: Are the hyperscaler companies driving the growth of prototyping and emulation?

Kittrell: In the past people would buy an emulation box and then guard it, and everybody would try to get onto it. Now they’re buying huge arrays at one time.

Faisal: Hardcore software engineers who are developing for the cloud and for these system companies, when they see these tools, are like, ‘Why is my licensing per processor?’ or this and that. They want everything on the cloud: ‘Just give me a user ID. I’ll do the rest.’ So Cadence has the cloud partnership with AWS. That’s going to expand across all of EDA.

SE: How are different types of customers using cloud differently?

Kittrell: Cloud has been an interesting journey. For the hyperscalers, whenever they’re using a computer, they’re on the cloud. It’s their native environment. The enticement for going on the cloud is the elastic compute: I can add more compute as needed. A hybrid cloud solution is difficult for a lot of customers because there’s no getting around the ‘lift and shift’ of replicating your environment into the cloud and somehow keeping the two in sync. There are a few applications that can spawn off jobs, or send self-contained packets of workloads into the cloud and then return data back to the mothership. That’s an area that’s going to evolve. And some customers with a classic data center are intentionally scaling back its use, not buying new machines, and investing in new cloud resources so they will be 100% cloud in the end. They put in the work up front to do the lift and shift, and the replication. But it’s just like opening a data center in India when you already have a data center in the U.S. You need to connect them and have them interoperate. It can be done, but it has to be intentional at a high level.

SE: Isn’t it true that for a lot of smaller companies it makes no sense to go to the cloud because it’s too expensive? It may be cheaper for them to maintain their own data centers.

Kittrell: There’s a tradeoff. If you’re using 70% of a machine’s time, it’s probably better to buy and to own. But then how much are you workload-limited? You’ve got only so much space.
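That rule of thumb can be expressed as a simple breakeven calculation. The hourly prices in the sketch below are assumptions chosen only to illustrate the crossover.

```python
# Rough breakeven sketch for the buy-vs-cloud tradeoff described above.
# Both hourly rates are illustrative assumptions.

own_cost_per_hour = 2.00    # amortized hardware + power + ops for an owned machine
cloud_cost_per_hour = 3.00  # on-demand price for a comparable cloud instance

# Owning costs the same whether the machine is busy or idle; the cloud only
# charges for the hours you actually use. Owning wins once utilization
# exceeds the ratio of the two rates.
breakeven_utilization = own_cost_per_hour / cloud_cost_per_hour
print(f"Owning wins above roughly {breakeven_utilization:.0%} utilization")
# With these assumed prices the crossover is about 67%, in line with the
# ~70% figure mentioned above.
```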


