Chiplet IP Standards Are Just The Beginning

Data and protocol interoperability standards are needed for EDA tools, and there are more hurdles ahead. Customized chiplets will be required for AI applications.


Experts at the Table: Semiconductor Engineering sat down to talk about chiplet standards, interoperability, and the need for highly customized AI chiplets, with Frank Schirrmeister, vice president solutions and business development at Arteris; Mayank Bhatnagar, product marketing director in the Silicon Solutions Group at Cadence; Paul Karazuba, vice president of marketing at Expedera; Stephen Slater, EDA product management/integrating manager at Keysight; Kevin Rinebold, account technology manager for advanced packaging solutions at Siemens EDA; and Mick Posner, vice president of product management for high-performance computing IP solutions at Synopsys. What follows are excerpts of that discussion. To view part one of this discussion, click here.

L-R: Arteris’ Schirrmeister; Cadence’s Bhatnagar; Expedera’s Karazuba; Keysight’s Slater; Siemens EDA’s Rinebold; and Synopsys’ Posner.

SE: Are there any chiplet standards for simulation and verification?

Posner: In the EDA tool space, there’s a lot that is being done and lots that could be done. For example, 3Dblox from TSMC is one way to start standardizing connectivity and sharing of designs. However, in an open ecosystem it needs to be more than just TSMC. It’s got to adapt to Samsung and the Intel Foundry, so there is absolutely more work needed in the area of standardization when you’re thinking about mixing and matching chiplets from multiple foundries.
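To make the standardization problem concrete, below is a minimal sketch, in Python, of the kind of cross-foundry connectivity description Posner is pointing at. The structures and names here (Chiplet, Link, validate_links) are invented for illustration — this is not actual 3Dblox syntax — but any foundry-agnostic format would need to carry roughly this information: which dies exist, who fabricates them, and which interfaces their ports speak.

    # Minimal sketch of a cross-foundry chiplet connectivity description.
    # Hypothetical structures for illustration only -- not actual 3Dblox syntax.
    from dataclasses import dataclass

    @dataclass
    class Chiplet:
        name: str
        foundry: str      # e.g., "TSMC", "Samsung", "Intel Foundry"
        process: str      # node the die is fabricated on
        interfaces: dict  # port name -> interface type, e.g., "UCIe"

    @dataclass
    class Link:
        src: tuple  # (chiplet name, port)
        dst: tuple  # (chiplet name, port)

    def validate_links(chiplets, links):
        """Check that every link connects two ports of the same interface type."""
        ports = {(c.name, p): t for c in chiplets for p, t in c.interfaces.items()}
        for link in links:
            if ports[link.src] != ports[link.dst]:
                raise ValueError(f"Interface mismatch on {link.src} -> {link.dst}")

    cpu = Chiplet("cpu0", "TSMC", "N3", {"d2d0": "UCIe"})
    mem = Chiplet("hbm0", "Samsung", "1b", {"d2d0": "UCIe"})
    validate_links([cpu, mem], [Link(("cpu0", "d2d0"), ("hbm0", "d2d0"))])

The mix-and-match scenario Posner describes is exactly where a check like validate_links earns its keep: once the dies come from different foundries, no single vendor's flow guarantees the ports line up.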

Schirrmeister: There are tools to enable that from a standardization point of view. But in the front end, if I remember my emulation days, we had people dealing with the chiplet aspects in emulation. When memory was disaggregated or was connected in a 3D-IC fashion, you would simulate such things as the impact on delays, and that extends further to things like power. When you talk to companies in what we refer to as a proprietary ecosystem — whether it's AMD, or Intel, or whoever controls all sides of the equation — then suddenly the software aspects become interesting issues in the chiplet world, as well. Let's say you create nine different power versions, where you have different speed grades for the chiplets as they were binned. Now you have these systems of chiplets, and you have to make sure they don't burn up, based on your power control — that you actually have the right chiplets brought up in the right order and with the right power settings. That's purely in the hands of software now. I had a user say that's a totally new use case for virtual prototyping, for instance, because they have not done it like that before. So there is room for new tools. And that's why, when you look into things like the imec initiative that is driving an automotive micro-ecosystem, you have everybody there — from technology vendors, to EDA vendors in the classic tool sense, to EDA vendors in the IP sense as IP providers — and the end users, the Tier Ones, and the OEMs who need to drive that. Those micro-ecosystems will be very important to standardize and provide recipes and reference implementations, for instance. This will be an interesting path to adoption.
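Schirrmeister's binning scenario translates into a small software rule that a virtual prototype could exercise. The sketch below is hypothetical — the bin table, wattages, and package budget are invented numbers — but it illustrates the "don't burn up" check now living purely in software: bring chiplets up in order and reject any combination of speed grades that exceeds the package power budget.

    # Hypothetical power-sequencing check for a system of binned chiplets.
    # Bin names and wattages are invented for illustration.
    POWER_BY_BIN = {"fast": 18.0, "typical": 14.0, "slow": 11.0}  # watts per chiplet
    PACKAGE_BUDGET_W = 60.0

    def power_up_sequence(chiplet_bins):
        """Bring chiplets up one at a time, enforcing the package power budget.

        chiplet_bins: list of (chiplet name, speed-grade bin), in power-up order.
        Returns the cumulative power after each step, or raises if the budget
        would be exceeded -- the 'don't burn up' rule handled by software.
        """
        total = 0.0
        trace = []
        for name, bin_ in chiplet_bins:
            total += POWER_BY_BIN[bin_]
            if total > PACKAGE_BUDGET_W:
                raise RuntimeError(f"Budget exceeded powering {name}: {total:.1f} W")
            trace.append((name, total))
        return trace

    print(power_up_sequence([("cpu0", "fast"), ("cpu1", "typical"), ("io0", "slow")]))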

Rinebold: 3Dblox is going to be compelling. It’s a TSMC initiative being pushed out there right now. The other one that’s getting a little bit of traction is the CDXML format being put forward by the CDX working group, which is part of Open Compute. The idea here is that you could have different views of the data sets. You’re going to have a physical view, an electrical view, a thermal view, a behavioral view, so whoever’s consuming this information can consume the data in a standardized format that ideally has some broad market support. What’s encouraging about this is there are other EDA vendors involved here. We’re not the only ones. There are semiconductor companies involved in this, including Intel, and there are larger system companies, so it does seem to be getting some fairly broad support and a higher level of interest.
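The multi-view idea behind CDXML can be sketched as follows. The element and attribute names here are invented for illustration — they are not the actual CDXML schema from the CDX working group — but they show how physical, electrical, and thermal views of the same chiplet could be carried in one machine-readable record, with each consumer reading only the view it needs.

    # Illustrative multi-view chiplet record in XML. The tags below are
    # hypothetical -- not the real CDXML schema.
    import xml.etree.ElementTree as ET

    chiplet = ET.Element("chiplet", name="hbm0")

    physical = ET.SubElement(chiplet, "physical")
    ET.SubElement(physical, "footprint", width_mm="11.0", height_mm="10.0")

    electrical = ET.SubElement(chiplet, "electrical")
    ET.SubElement(electrical, "interface", type="UCIe", lanes="64")

    thermal = ET.SubElement(chiplet, "thermal")
    ET.SubElement(thermal, "tdp", watts="14.0")
    ET.SubElement(thermal, "max_junction", celsius="95")

    # A consumer (floorplanner, signoff tool, system integrator) reads only
    # the view it needs from the same standardized record.
    print(ET.tostring(chiplet, encoding="unicode"))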

SE: Will chiplet interoperability vary across different market segments, both from the technology and the business side?

Slater: There’s one part of the industry that maybe is not jumping after this quite as quickly as the others, and that’s RF and microwave. If you’re talking about an RF IC in CMOS today, then it’s possible that could be moved inside the package. But RF and microwave tend to generate a lot of heat, particularly power amplifiers, which may be a reason not to bring it inside. Most of the EDA tools support RF, microwave, and millimeter wave. It’s all III-V materials and specialized processes, and the package itself and all its parasitics become something the circuits are tuned around. I don’t see that changing anytime soon. Those things that are going to be high-power or high-efficiency communications chips still will be outside in a traditional multi-chip module.

Bhatnagar: The interoperability also depends on who is implementing the chiplet, or who is using it. For example, the Tier One customers — the really big ones with deep pockets — care less about interoperability. They have big engineering teams to handle issues. But smaller companies that are maybe making one or two chiplets, which is all their capacity allows, are very much interested in interoperability for two reasons. One is that whatever they make, they want to be able to sell to a large number of people. Interoperability essentially guarantees that. Second, they do not have the bandwidth to be debugging and figuring out issues as they get the chiplets back. So interoperability is definitely important. Without it, no ecosystem will exist. But it also very much depends on the power a company has, in terms of both manpower and financial power. I’ve seen more interest in interoperability from smaller players than from bigger players.

Rinebold: With respect to the co-design piece of this, interoperability matters in the early stages. When we’re formulating the floorplan for some type of heterogeneous chiplet system, being able to consume data from different sources — the silicon tools, the packaging tools, maybe even the board tools — has real influence. So being able to consume that data in industry-standard formats and formulate a floorplan that takes into account thermal and power characteristics is important. For example, if you have a processor that’s running at 100°C and you have memory that’s at 80°C, we don’t want those two next to one another because of thermal coupling or thermal shadowing. We want to rectify that at the early stages, where it’s easy and relatively effective to make those types of changes. This is why co-design is one of the key pieces in the broader discussion of interoperability.
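Rinebold's thermal example maps directly to a simple early-floorplan screen. The sketch below is hypothetical — the coordinates, temperatures, and the 2 mm keep-apart rule are invented — but it shows the kind of check a co-design flow can run long before detailed analysis: flag any two hot components placed close enough to couple thermally.

    # Hypothetical early-floorplan screen for thermal coupling between chiplets.
    # Positions, temperatures, and thresholds are invented for illustration.
    import math

    # (name, x_mm, y_mm, steady-state temperature in Celsius)
    placement = [
        ("cpu0", 0.0, 0.0, 100.0),
        ("mem0", 1.5, 0.0, 80.0),   # too close to the hot processor
        ("io0", 12.0, 0.0, 55.0),
    ]

    HOT_C = 75.0          # components above this couple strongly
    MIN_SPACING_MM = 2.0  # keep-apart rule for two hot components

    def thermal_coupling_flags(parts):
        """Return pairs of hot components closer than the keep-apart distance."""
        hot = [p for p in parts if p[3] >= HOT_C]
        flags = []
        for i, (na, xa, ya, _) in enumerate(hot):
            for nb, xb, yb, _ in hot[i + 1:]:
                if math.dist((xa, ya), (xb, yb)) < MIN_SPACING_MM:
                    flags.append((na, nb))
        return flags

    print(thermal_coupling_flags(placement))  # [('cpu0', 'mem0')]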

Schirrmeister: Going back to the application domains, it depends on the volume. The basic economics of semiconductors haven’t gone away, and if your primary end device is ‘only’ 100 million units a year, like it is in automotive right now, then you really need to get the economies of scale. You need to do some sharing there between people, because that’s a whole market in terms of the actual end devices your design goes into. That’s also why some proprietary developments of chiplets happened. They had enough volume to actually make it happen. That will drive some of the aspects around the interoperability, and that’s why you’d be seeing these micro-ecosystems forming, like the imec thing, or some of the bigger vendors trying to push their own thing. Even there, there’s a level of co-opetition going on. ‘Of course I will compete with other automotive OEMs on this, but I also understand that I don’t get to economies of scale if we don’t all agree on some basic interactions, like UCIe, like the back protocols, coherent hub interface (CHI), chip to chip (C2C), to transmit the data.’ That’s a very important driver for all of this.
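The economics Schirrmeister describes reduce to simple arithmetic: fixed design cost (NRE) amortized over volume. The dollar figures below are invented for illustration, but they show why a market capped at roughly 100 million units a year pushes OEMs toward shared chiplets and common interfaces like UCIe, rather than each carrying the full NRE alone.

    # Back-of-envelope NRE amortization. All dollar figures are invented.
    NRE_USD = 150e6  # assumed design + mask cost for an advanced-node chiplet

    def nre_per_unit(annual_volume, years=3, sharers=1):
        """NRE cost folded into each unit, optionally shared across OEMs."""
        return NRE_USD / sharers / (annual_volume * years)

    solo = nre_per_unit(annual_volume=5e6)                 # one OEM's slice of the market
    shared = nre_per_unit(annual_volume=100e6, sharers=5)  # ecosystem-wide reuse
    print(f"solo: ${solo:.2f}/unit, shared: ${shared:.3f}/unit")
    # solo: $10.00/unit, shared: $0.100/unit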

SE: Where is the interest today in terms of adoption of AI chiplets?

Karazuba: Interest in AI chiplets is robust, specifically because general-purpose AI processors are not ideal for most implementations. The goal is being able to take a chiplet that is perhaps more geared toward an RNN or a CNN or an LLM, or any of the other three-letter acronyms we like to use in the AI world, because if you have a chip that is more geared toward your application, you’re going to get better utilization. You’re going to get a better power profile. You’re going to get better performance. These are the advantages of chiplets that we can all repeat over and over. In AI, there’s a specific desire for that because of the delta in performance between a general-purpose engine and a more optimized one.
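Karazuba's utilization argument can be made concrete with a worked example. The figures below are invented for illustration, but the arithmetic shows why a workload-tuned chiplet can out-deliver a general-purpose one with a higher peak rating: delivered throughput is peak compute multiplied by achieved utilization.

    # Illustrative effective-throughput comparison. All numbers are invented.
    def effective_tops(peak_tops, utilization):
        """Delivered throughput = peak compute x fraction of cycles doing useful work."""
        return peak_tops * utilization

    general = effective_tops(peak_tops=100.0, utilization=0.30)  # general-purpose AI engine
    tuned = effective_tops(peak_tops=60.0, utilization=0.80)     # chiplet tuned to the model
    print(f"general-purpose: {general:.0f} effective TOPS")  # 30
    print(f"workload-tuned:  {tuned:.0f} effective TOPS")    # 48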

Posner: We should remind everybody that interoperability is a journey. It is not a milestone. And there’s a very broad scope of what that means — virtual, simulated, silicon. When you think about interoperability, think about the key risks. What are the key risks to be solved immediately? That’s why you see a huge focus on silicon. To adopt this new technology, you want silicon proof points. So we are seeing more of a push in the space of interoperability coming out of the silicon. It seems kind of backwards, but if you think about it from a risk perspective, what is the most important thing? The most important thing is that once I’ve put this die down and connected it to a second die, I want it to work. Silicon interoperability seems to be getting the most airtime, but that’s not to say you can forget that you’re going to need verification interoperability and virtual interoperability, as well as packaging and manufacturing interoperability. It’s a journey that’s just started, with many milestones along the way. Interoperability depends very much on who’s looking for it. And building on what Paul said, because AI is so optimized, and every chiplet is so customized, you will not be looking for AI chiplets in a set of generic chiplets. So in that sense, interoperability definitely has a role to play, but it may be less important, since you’re actually designing every chip in your SiP for the AI.


Fig. 1: Conceptual 3D-IC using chiplets. Source: Synopsys

Related Stories
Commercial Chiplet Ecosystem May Be A Decade Away
Technology and business hurdles must be addressed before widespread adoption.
Chiplets: 2023 (EBook)
What chiplets are, what they are being used for today, and what they will be used for in the future.
Proprietary Vs. Commercial Chiplets
Who wins, who loses, and where are the big challenges for multi-vendor heterogeneous integration.



Comments

Dr. Dev Gupta says:

We have been assembling systems out of individually packaged chips for ages. The I/O protocols have conformed with one another across a wide range of chip vendors. What is so different about integrating/assembling chiplets, at least as far as electrical design is concerned? Can the EDA companies unite to establish a foundry-agnostic set of generic standards?

Erik Jan Marinissen says:

There is also IEEE Std P3405, a new standard under development for chiplet interconnect test and repair.

Kind regards,
Erik Jan Marinissen
