Chiplet Momentum Builds, Despite Tradeoffs

Pre-characterized tiles can move Moore’s Law forward, but it’s not as easy as it looks.


Chip design is a series of tradeoffs. Some are technical, others are related to cost, competitive features or legal restrictions. But with the nascent ‘chiplet’ market, many of the established balance points are significantly altered, depending on market segments and ecosystem readiness.

Chiplets provide an alternative mechanism for integrating intellectual property (IP) blocks into a semiconductor device. An IP block contains a prepackaged function that has been through several stages of design and verification. Traditionally, IP blocks were designed or purchased, and then integrated onto a single monolithic chip, which was then mounted into a package. With chiplets, multiple discrete die, each containing a purchased piece of IP, are integrated within a package using package-level interconnect. This process often is referred to as 2.5D or 3D integration, heterogeneous integration, or systems-in-package.

There are several reasons why chip disintegration is being considered. “Chiplets make a lot of sense for semiconductor companies doing multiple tape-outs per year,” says Chris Jones, vice president of marketing for Codasip. “It is an effective method of reuse. Designing at a higher level of abstraction is a concept that chip architects have desired forever. Chiplets minimize risk and reduce time to market.”

Attempts have been made in the past, but success has been limited. “Around 2010, one of the first practical efforts to create a multi-chip system was an integration between application processors and digital baseband modems,” says Kurt Shuler, vice president of marketing for Arteris IP. “Companies that were getting rid of their digital baseband units wanted to have an interface to be able to connect the baseband through the application processor chip and share the DRAM. It was an all-digital interface, sharing the pins that already existed for the digital baseband to connect to the application processor so it could connect directly to the DRAM. Later, they used the MIPI low latency interface (LLI). This added MIPI MPHY plus a controller. One of the big problems with these low-level interfaces is that these two chips still have to be designed in context with each other.”

Today, there are new drivers, each of which may push the market in a different direction.

Chips getting too large
With Moore’s Law slowing for most applications, chips are running into size limits.

“Customers in certain applications, such as networking, computing and AI, want to build such massive chips that they have to partition their design because if they were to put everything onto one chip, it is bigger than the size of the reticle,” says Hugh Durdan, vice president for strategy and products at eSilicon. “If you are building a processor and the key value add is how many processor cores I can fit onto a chip, then anything that is on the chip that is not a processor core is detracting from that value. They can take other functions, such as interfaces, and move them to a separate chip and increase the area available in the reticle for their core function.”

The IP industry always has claimed that if something is not your core competence, you should not be wasting your time on it. “Only about 60% of the area of a domain-specific accelerator is actually domain-specific,” says Bapi Vinnakota, director for silicon architecture program management at Netronome, and Open Domain-Specific Architecture (ODSA) subproject lead for Open Compute Project (OCP). “40% of the die area and development cost goes to logic that is not particular to the domain, but is needed to make the product work. It goes into internal memory, into host and network interfaces. It goes into general-purpose CPUs, etc.”

Companies are looking to take risk out of their design. “The cost and time taken to get these chips done is a problem,” says Rishi Chugh, senior product management group director in the Cadence IP Group. “By splitting the die into two functions, if for any reason there is a design flaw or a fabrication flaw, you do not have to throw the whole die away. There have been several proof points for this approach. AMD used chiplets where they had multiple dies that were stitched together into a single package. They could differentiate their product line with 4-core, 8-core, and could address the PC to server markets just by using chiplets. Intel has adopted the same approach using their advanced interface bus (AIB). They took the FPGA and stitched that into their standard product lines using AIB. There was also a joint effort between Intel and AMD. They used the Intel mobile CPU and the AMD graphic module.”

There have been examples where adding memory in package provides significant value. “Domain-specific accelerators can often benefit from going to the advanced nodes, but they are also serving smaller markets, which means the economic justification is more challenging,” says Vinnakota. “Many designs have a mismatch between processing logic speed and I/O speed, making them put more memory on-chip. You end up with larger die, and those tend to be unviable economically.”

A reduction in design time can help with that. “Companies have added value by speeding the design cycle,” says Mick Posner, director of product marketing for DesignWare IP Subsystems at Synopsys. “This is because they are able to integrate a mostly proven chiplet. The proven side could be the SerDes interface to memory, but there are always tradeoffs. Chiplets add new interfaces and the final limitation is that you are sacrificing flexibility. They have always tailored their IP usage exactly to their needs, be it power, area, or performance. With a chiplet you sacrifice that flexibility, so you may end up with a chiplet that has way more features than you could possibly ever need, or you are forced to fit a square peg into a round hole and use one that doesn’t quite have the capabilities that you would like.”

Low volume
Another push for the introduction of chiplets is to support low-volume designs. “DARPA has kicked off the Common Heterogeneous Integration and Intellectual Property Reuse Strategies (CHIPS) program to expressly explore chiplets and drive ecosystem development,” says David Harold, vice president of communications at Imagination Technologies.


Fig. 1: Chiplet model. Source: DARPA

This program aims to provide a portfolio of chiplets that can be used in a variety of designs. “Then low volume military applications do not have to deal with high NREs associated with advanced technology products,” says eSilicon’s Durdan. “Develop one product and sell into multiple applications. That is how suppliers get their return.”

Ironically, there could be a hidden downside to this program. “The chiplet also offers more opportunities for both security vulnerabilities and hidden hardware Trojans,” cautions Sven Beyer, product manager for design verification at OneSpin Solutions. “Integrators will expect the vendor to verify these aspects of integrity. The SoC team may wish to re-run some aspects of standalone IP verification as part of screening vendors and evaluating chiplets.”

New IP market
There is no market for off-the-shelf chiplets today. Nobody has created a portfolio of chiplets that can be sold as standard products, so customers currently have no choice but to design chiplets themselves.

“A chiplet market would enable companies to provide a small, but unique, capability,” says Cadence’s Chugh. “Manufacturing costs are not high because you are making small dies. This will enable a lot of startups. This is an extension of the current IP market, where you are now selling your IP as a chip.”

“If customers could purchase a chiplet off the shelf that had already been proven and hardened, that met their requirements, that would be a preferable situation to having to design the chiplet themselves,” says eSilicon’s Durdan. “That requires a level of expertise that is not their core competence. They would rather buy it if they could.”

That can make technologies available to companies at much lower risk. “Chiplets can help less-sophisticated customers access leading-edge IP in a form that they can more easily digest,” says Imagination’s Harold. “This reduces the skills and experience required, which could be a barrier to entry in advanced SoC development.”

But there are some problems with this approach. “For a chiplet business to be profitable, the provider must be able to design a single chiplet that can be licensed multiple times to several customers,” says Codasip’s Jones. “Failing that, they must figure out a way to automate the process of reconfiguring chiplets to minimize engineering efforts. Otherwise, your chiplet business is simply a service that is difficult to scale. In addition, a customer is unlikely to pay more for an IP subsystem than he is for the individual components. They will not pay for the added value of the integration and testing, but instead will expect a discount since they are giving more IP business to a single supplier.”

Consider an example of a USB controller. “A chiplet is not just a piece of hard IP,” says Synopsys’ Posner. “It is a controller and PHY – it is a subsystem. Normally, they would tailor a USB interface for their needs. Would it be a USB device, a host, or a dual-role device (DRD)? They can optimize the soft IP themselves without having to come to the IP vendor. They can tailor the size of the internal memory so it exactly fits their needs. If we were to create a USB chiplet, we would harden a USB DRD because it covers all markets. That may be good enough, but there is an area impact to that chiplet, and it may not be the most optimal choice. It solves the problem, but it may not be optimal.”

Posner sees even more problems with memory interfaces. “You hardly ever see two customers with the same requirements. What is the bandwidth, what is the bus width, what devices are you connecting to – there is no one-size-fits-all. The final architecture is dictated by what you are going to connect to. So you would be locking them down.”

In addition, there are non-technical issues that also need to be resolved. When someone buys IP, there are responsibilities that have been figured out between supplier and user. “We need to work out some of the mechanics and also a legal framework,” says Arteris’ Shuler. “When you deal with die from two different customers, and someone else is going to be responsible for putting them together in a package, who is responsible when a die gets broken? Who pays?”

While many of these issues may finish up being similar to the PCB industry, the risk levels are currently higher. “It is fragile, it takes work,” adds Shuler. “It is risky. People are risk averse. With the OSATs, some of the legal and logistical issues are being resolved.”

The migration of IP into chiplets may not help all vendors equally. “It enables us to differentiate based on our advanced capabilities,” admits Posner. “We get to leverage our significant R&D that we put into advanced nodes and advanced protocols and monetize it at all levels and not just in a handful of customers who adopt the advanced technologies. This extends the market for EDA companies.”

This provides benefits to all chiplet users. “Chiplets could be a smart way to dis-integrate conventional monolithic semiconductor chips,” says Harold. “For instance, a high-performance GPU could be implemented as a chiplet in 7nm, while a connectivity subsystem could be implemented with the digital IP on a 28nm chiplet, and the analog and RF IP on a 40nm chiplet.”

Posner agrees. “It provides access to advanced technologies in a reasonably priced node. An example may be a 5nm or 7nm advanced SerDes but the customer does not have the overhead of building a 5nm or 7nm SoC.”

Enabling a market
While chiplets enable dis-integration, there is also an integration justification for chiplets. “Traditionally, there has been a four-orders-of-magnitude difference in power and performance efficiency between on-die and off-package interconnect,” says Netronome’s Vinnakota. “Recently, because of the proliferation of short-reach SerDes technology and of interfaces such as AIB or HBM, that gap has shrunk from four orders to less than one order of magnitude. If you stay on package, off-die communication is only 5X to 10X less efficient than on-die communication. That makes a huge difference. Now you have a reasonable shot at partitioning the functionality of a large product into a number of die and not paying a huge penalty in terms of power or performance.”
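To put those ratios in concrete terms, consider a back-of-the-envelope comparison. The energy-per-bit values below are illustrative assumptions, not measured figures for any particular interface; only the ratios between them matter.

# Illustrative energy-per-bit figures (assumed values, chosen only to show the ratios)
on_die_pj_per_bit = 0.1        # on-die wire (assumed)
off_package_pj_per_bit = 1000  # traditional off-package link (assumed)
in_package_pj_per_bit = 0.5    # short-reach, in-package link, AIB/HBM-class (assumed)

print(off_package_pj_per_bit / on_die_pj_per_bit)  # 10,000X -- roughly four orders of magnitude
print(in_package_pj_per_bit / on_die_pj_per_bit)   # 5X -- within one order of magnitude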

Costs are an important factor. “From a development cost point of view, if you are just buying a standard product, the development cost is near zero,” says Durdan. “If you are developing that chiplet yourself, you have to license all of the IP and you have to pay for a mask set, which is very expensive. It is not an easy or inexpensive problem for the customer.”

But there is a large cost increase in packaging. “People are working on it,” notes Durdan. “Intel has their proprietary EMIB (embedded multi-die interconnect bridge) technology for connecting die together. From a cost-structure perspective that is more attractive than using a traditional silicon interposer. If something like that were available to the broader market, and the cost impact of putting multiple die in a package using high-density interconnect came down, it would be a big key to unlock the market.”

Some things have to happen before this becomes a reality. “Three things have to take place,” says Chugh. “First, the interface between the two organizations should be an industry standard. Second, how do you stitch them in terms of protocol for your design? There are various NoC companies that have the tools and the technologies to stitch these dies together and provide flow control. Third, the fabs need packaging technologies that are agnostic toward process and design.”

There are also some interesting possibilities to increase yield. “If I want to send data from one die to the other, and I know there is a probability there might be some damage during fabrication, then I can have redundant paths that I can provision so that if one doesn’t work, I have a backup,” says Chugh. “This can help bring the yield up. The penalty paid for yield is higher than the penalty for the extra area taken up by the redundant path.”
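A minimal sketch of that tradeoff, assuming a purely hypothetical per-link survival probability:

# Assume each die-to-die link survives fabrication and assembly with probability 0.99 (illustrative)
p_link_ok = 0.99

p_fail_single = 1 - p_link_ok            # 1% of packages lost to this one link
p_fail_with_spare = p_fail_single ** 2   # with a redundant path, both links must fail
print(p_fail_single, p_fail_with_spare)  # 0.01 vs. 0.0001

# If the spare path costs only a small amount of extra area, that area is usually
# cheaper than scrapping 1% of assembled multi-die packages.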

Yield is so important that for some companies it may tip the balance. AMD claims that producing a four-die solution is cheaper than a single large monolithic die, largely because smaller die yield better.
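The reasoning behind that claim can be sketched with a simple exponential yield model, Y = exp(-A * D0). The defect density and die areas below are assumptions chosen for illustration, not AMD’s actual figures, and assembly yield is ignored for simplicity.

import math

d0 = 0.1             # defects per cm^2 (assumed)
mono_area = 6.0      # one monolithic die, in cm^2 (assumed)
chiplet_area = 1.5   # each of four chiplets, in cm^2 (assumed)

y_mono = math.exp(-mono_area * d0)        # ~55% of monolithic die are good
y_chiplet = math.exp(-chiplet_area * d0)  # ~86% of chiplets are good

# Silicon fabricated per good product, assuming chiplets are tested
# before assembly (known good die):
print(mono_area / y_mono)            # ~10.9 cm^2 per good monolithic part
print(4 * chiplet_area / y_chiplet)  # ~7.0 cm^2 per good four-chiplet part

Under these assumed numbers, the four-chiplet version consumes roughly a third less fabricated silicon per good unit, which is where the cost advantage would come from.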

Industry readiness
The industry believes that tooling is already in place to make this a possibility. “Most of the issues have already been solved with HBM and everything is in place,” says Durdan. “Similarly, from a package-design perspective, the design flow already comprehends multiple die on an interposer and making sure the connections are all correct and connected to the package substrate. We also have tools to do the thermal and mechanical modeling of the package. HBM pipe-cleaned the needs from an EDA point of view. Now other chiplets can leverage that.”

Others agree. “There is nothing fundamentally different about verifying chiplets than other forms of IP, but their larger size and increased complexity amplifies the need for verification of design integrity in all its dimensions—functional correctness, safety, security, and trust,” says OneSpin’s Beyer. “The SoC team integrating a chiplet is placing a greater percentage of its chip in the hands of the IP vendor, so will have even higher expectations of thorough verification to all functional specifications.”

Tools are evolving to make this easier. “We keep adding new features to make things easier, but all of it can be done with what exists in the tools today,” says John Ferguson, marketing director for Calibre DRC at Mentor, a Siemens Business. “I do not see any big changes. The infrastructure to support the industry has improved but there is still more to be done before they get confident that they can design and build these things without failure. You need to have some noted successes where someone has done it and can design something that the rest of the industry can’t. Then everyone will jump on it.”

So how much is happening today? “It will probably happen first with companies that agree to collaborate in joint programs, probably using existing designs so that they can openly share the necessary information,” says Shuler. “Intel and AMD lawyers probably had to spend a lot of quality time together before they started on their venture.”

There is movement behind closed doors, too. “We have seen a lot of movement in the past 12 months,” says Ferguson. “A lot more people are designing in this space, a shift from previous work where people were kicking the tires and they were designing stuff, but never manufacturing. Now it is becoming more prevalent in real designs. I have not seen them adopt the concept of bringing in pre-characterized dies from a third party yet, but that is just a natural evolution.”


