IP Becoming More Complex, More Costly

The IP industry is undergoing several transformations that will make it difficult for new companies to enter the market, and more expensive for those that remain.

Success in the semiconductor intellectual property (IP) market requires more than a good bit of RTL. New advances mandate a complete design, implementation, and verification team, which limits the number of companies competing in this market.

What constitutes an IP block has changed significantly since the concept was first introduced in the 1990s. What was initially just a piece of register-transfer-level (RTL) code, delivered as-is, has evolved to include complex functionality, often spanning hardware and software, analog and digital, along with verification suites, synthesis scripts, and much more. What many didn’t realize at first was that IP never delivered the promised design-once/use-everywhere paradigm, because each of the ‘everywheres’ was slightly different.

As the industry migrates to the next level of complexity with chiplets, even more models, deliverables, and collateral will be required, especially as the IP and chiplets become more opaque. Vendors almost certainly will be required to manufacture those chiplets and make them available for evaluation, which in turn will require designing some form of substrate or interposer.

It takes increasing amounts of trust between developer and integrator. “The behavior of early IPs was very focused, a single fixed function,” says Raymond Nijssen, vice president of technology at Achronix. “That was relatively simple and relatively well-understood. Fast forward to where we are now. You get these huge deliverables. The trend is that IP blocks continue to get larger and more complex, and also more black box-ish. Black box means that as the thing grows, your interaction with it is limited to the outer shell, plus maybe a little bit into it, but you have less and less knowledge about what’s going on deep inside that IP. That is a big paradigm shift. I’m facing a situation where I know less and less of what’s inside, and I have to be more and more reliant on my IP vendor to verify it.”

In the past, it could be argued that the integrator could have developed the IP themselves. “One of the primary reasons a customer would buy an IP, in cases where they have the expertise to develop the same functionality, is the promise of accelerated time-to-market,” says Arif Khan, product marketing group director for interface IP at Cadence. “In order to do that, they expect good solutions that make integration and verification of the IP as easy as possible.”

That is not as straightforward as it sounds. “Successfully designing and delivering an IP block requires a purposeful approach from the start with a plan to deliver the same exact IP core to dozens of customers,” says Dhanendra Jani, vice president of engineering at Quadric. “The core tenet of the IP industry is reuse. That means repeatedly delivering the same block over and over again, which means planning for a design element to be used in different system architectures, in different market segments, under different conditions.”

That is where some companies go wrong. “An old adage in the IP business is, ‘It’s not really IP until you’ve delivered it 10 times,’” says Steve Roddy, CMO at Quadric. “That colloquial saying embodies the reality that robust verification, modeling, packaging, documentation, and design-in support of commercial grade IP is far more complex than getting a design block ready for tape-out in one SoC design within your own team. Too often we see semiconductor startups that fail with their initial chip design, and which suddenly ‘pivot’ to IP licensing. Those companies then struggle to deliver and support a licensing customer, because that initial failed chip design never contemplated a different SoC use case, or process technology, or system environment. The corollary to the 10X rule is that the profits to the IP vendor only start flowing after that tenth licensee.”

But it is starting to become more difficult. “As engineering will be engineering, you can’t wait and sit back for the IPs to mature,” says Achronix’s Nijssen. “By that time you would not be able to sell your products at a premium. If you were to wait for the latest version of PCIe to mature, the best you could do is use PCIe Gen 3 or Gen 4. Then your customers wouldn’t be buying your products anymore. You have to be at the bleeding edge and you have to accept that there will be bugs in that IP, or errata, or spec changes, or maybe there’s an integration problem where something wasn’t well understood when you integrate it with some other IP.”

More than functionality
As IP blocks get bigger and more opaque, it becomes less reasonable to expect the integrator to perform back-end tasks for those blocks. One example involves test interfaces (see Fig. 1), which now need to be integrated into the system.

“Customers expect a fully integrated controller and PHY subsystems, with support for testability and high-volume manufacturing test,” says Cadence’s Khan. “With growth in IP complexity and associated physical size growth, IP users want a delivery that simplifies integration. Even though PHY development and protocol controller development are traditionally done by different teams, because the expertise required for each discipline is different, a combined delivery demonstrates all ingredients working together in sync, eliminating areas for errors during integration at the user site. Production quality test vectors are expected by default to guarantee efficient high-volume manufacturing.”

Fig. 1: Advanced Integration and Test. Source: Cadence

This extends to other interfaces, such as monitoring and debug interfaces. Monitoring may include thermal sensors necessary to keep a die operating within defined parameters. As the industry migrates toward chiplets, some of the standards also must advance.

“There is a new standard, IEEE 1838, that piggybacks on 1149.1, the TAP interface, which is a serial interface,” says Vidya Neerkundar, product manager for Tessent products at Siemens EDA. “It defines a primary TAP and a secondary TAP. If you want to stack dies on top of each other, then the secondary TAP will talk to the primary TAP on the next die.”
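
To make the idea concrete, here is a minimal conceptual sketch (an illustration of the chained-TAP idea, not the IEEE 1838 architecture itself, with hypothetical die names): each die exposes a serial test path, and stacking dies feeds one die's secondary port into the primary port of the die above it, so a single bit stream can traverse the whole stack.

```python
# Conceptual sketch only: per-die serial test registers chained through a stack.
# Die and register names are hypothetical; real 1838 wrappers are far richer.

class Die:
    def __init__(self, name, reg_bits):
        self.name = name
        self.shift_reg = [0] * reg_bits   # per-die test data register

    def shift(self, tdi):
        """Shift one bit in on the primary port; the bit that falls out goes
        to the secondary port, toward the die stacked above."""
        tdo = self.shift_reg[-1]
        self.shift_reg = [tdi] + self.shift_reg[:-1]
        return tdo

class DieStack:
    def __init__(self, dies):
        self.dies = dies                  # bottom die first; it faces the tester

    def shift(self, tdi):
        bit = tdi
        for die in self.dies:             # each secondary port feeds the next primary port
            bit = die.shift(bit)
        return bit                        # emerges at the top of the stack

stack = DieStack([Die("logic_die", 4), Die("memory_die", 4)])
for b in [1, 0, 1, 1]:
    stack.shift(b)
print([die.shift_reg for die in stack.dies])  # pattern now sits in the bottom die
```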

Even IP that has not been hardened can become intimately involved in the entire development flow. “A NoC configurator looks at what you need to connect and what level of performance you’re expecting, plus a high-level floor plan defining where your initiators and targets are going to be placed in your subsystem or your SoC,” says Guillaume Boillet, senior director of product management and strategic marketing at Arteris IP. “We have to elevate the flow so the architecture can concisely express the need and explore different alternatives. It is expanding cross-domain and expanding toward the back-end, as well. At the latest technology node, you cannot just hope your topology is going to be fine.”

Embeddable FPGAs and hardened blocks have similar problems. “It’s very difficult to have something on the shelf to satisfy multiple users, because each of them may require a different sizing because of the floor plan,” says Nijssen. “Or they may want more resources, or less, or the way they use it is different. There are many use models, and that will translate to the power modeling. If I am running them at different frequencies, there are different things that I have to worry about from a power and thermal point of view.”

Increasingly, design and implementation are becoming workload-specific. “We provide an environment where the customer can express, in a very concise format, the characteristic of the traffic from the different initiators, and along with a SystemC model, we are able to simulate and show where the bottlenecks may be,” says Arteris’ Boillet. “The customer can augment that with their own workloads. This is even more important when you start looking at a non-coherent NoC, where the expectation is to have enough bandwidth for the things to talk, and also when you’re using coherent interconnect to assess the right configuration of your caches, of your dimensioning, of the different snooping capabilities, etc. For this, you need to have a very accurate view of your workload.”
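
A rough illustration of why that workload view matters, using hypothetical initiators, routes, and link capacities rather than any vendor's tool: simply summing each initiator's average demand over the links its traffic crosses is often enough to expose an undersized link.

```python
# Back-of-envelope sketch with made-up numbers: aggregate each initiator's
# average bandwidth demand onto the NoC links its traffic traverses, then
# flag any link whose demand exceeds its capacity.

# Hypothetical workload: initiator -> list of (target, average GB/s)
traffic = {
    "cpu": [("ddr", 12.0), ("sram", 4.0)],
    "gpu": [("ddr", 25.0)],
    "isp": [("ddr", 6.0), ("sram", 2.0)],
}

# Hypothetical routing: (initiator, target) -> links used by that path
routes = {
    ("cpu", "ddr"):  ["l_cpu", "l_mem"],
    ("cpu", "sram"): ["l_cpu", "l_sram"],
    ("gpu", "ddr"):  ["l_gpu", "l_mem"],
    ("isp", "ddr"):  ["l_isp", "l_mem"],
    ("isp", "sram"): ["l_isp", "l_sram"],
}

capacity = {"l_cpu": 20.0, "l_gpu": 32.0, "l_isp": 10.0, "l_mem": 38.4, "l_sram": 16.0}

load = {link: 0.0 for link in capacity}
for initiator, flows in traffic.items():
    for target, gbps in flows:
        for link in routes[(initiator, target)]:
            load[link] += gbps

for link, used in sorted(load.items()):
    status = "BOTTLENECK" if used > capacity[link] else "ok"
    print(f"{link}: {used:.1f}/{capacity[link]:.1f} GB/s {status}")
```

With these made-up numbers the shared memory link comes out oversubscribed, which is exactly the kind of result that would push an architect toward a wider link or a different topology before committing to RTL.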

Models are the way internal characteristics get exposed to the outside world. “Models have been around for a long time, but increasingly models are multi-physics models and address novel physical effects,” says Marc Swinnen, director of product marketing at Ansys. “It’s not just timing and performance, but thermal effects, power effects, signal integrity effects due to low-frequency interactions through the power supply, security aspects, and so on. Signal integrity models are electromagnetic in nature, especially for high-frequency interconnect, so if you’re going to do heterogeneous integration with RF chips along with digital, you’re going to need electromagnetic modeling for those lines. Even the digital lines on an interposer have to be modeled electromagnetically because they are very long compared to chip lines. Though just a few millimeters long, they are effectively transmission lines and need to be modeled as such.”
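
To see why even short interposer traces fall into that category, here is a worked back-of-envelope example using rule-of-thumb values (the dielectric constant, trace length, and edge rate are assumptions, not sign-off numbers): a trace needs distributed, transmission-line treatment once its propagation delay is no longer small compared with the signal's rise time.

```python
# Rule-of-thumb check (assumed values): treat a trace as a transmission line
# when its propagation delay exceeds roughly one sixth of the signal rise time.

c = 3.0e8                     # speed of light, m/s
eps_r = 4.0                   # assumed effective dielectric constant (oxide-like)
v = c / eps_r ** 0.5          # propagation velocity, ~1.5e8 m/s

length = 5e-3                 # 5 mm interposer trace
t_rise = 50e-12               # assumed 50 ps edge rate for a multi-Gb/s link

delay = length / v            # one-way propagation delay, ~33 ps here
threshold = t_rise / 6        # ~8 ps with these numbers

print(f"delay = {delay*1e12:.1f} ps, threshold = {threshold*1e12:.1f} ps")
if delay > threshold:
    print("electrically long: needs a distributed (transmission-line) model")
```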

Extending to chiplets
Everything from the IP world carries through to chiplets, but with a whole bunch of additions. “Just like you had to plan for different chips that were mounted on a board, now you have to plan on integrating different chiplets,” says Siemens’ Neerkundar. “Today the industry is only seeing vertical integration, meaning the chiplets are all created within a single company. When a single company owns all these chiplets, they can communicate with each other, and they can figure out a handshake mechanism so the specs are well-defined. In the future, where one chiplet can be bought from Vendor A and another from Vendor B, as an integrator you need to have a common protocol. There are some standards that are emerging. UCIe talks about both protocol and test, and it integrates the interconnect between them. It also integrates how you train the system for transmit and receive that goes between the chiplets.”

Those standards are still taking shape. “We need to keep an eye on the new protocols that are emerging, and we need to satisfy the need for compliance with the new protocols,” says Boillet. “Whether it’s CXL, or CHI, because that’s what we’re going to present to the PHY controllers and the PHY in the end. That’s the extent of what we need to do at the first level. But it can become a lot more complicated when you start dealing with a symmetric multiprocessor system, where the expectation is that the different chiplets work hand-in-hand and have a full integration of coherency. In this case our IP needs to evolve so that we have a notion that we call hierarchical coherency, where you can configure and do snooping from one chiplet to the next, and vice versa. When you push the envelope and you want to enable customers to do this kind of thing, there are some expectations.”

But the standards do not cover all of the needs. “UCIe is a first step in that direction, in that it defines the physical interface,” says Ansys’ Swinnen. “What is also required are things like thermal and physical models. It will have to include a chip power model (CPM). For example, the industry is familiar with high-frequency voltage drop, which is due to the local switching, but you also have low-frequency voltage drop. If one block, or chiplet, becomes active and draws a lot of power, and then it turns off and another chiplet becomes active, you can set up resonances between these blocks. We’re talking about 100 hertz — low-frequency resonances, where the voltage goes up and down in a slow wave. That is not captured by high-frequency analysis and has to be done by more globally looking at the chip. We can model each of these chips, and if you place them together, see if the frequencies or resonances occur.”

Nijssen concurs. “Maybe someone wants information that allows them to investigate package resonance. You need to have CPM models, and this is very context-dependent. You cannot provide one model that has all the details that are required to answer the questions, because you need to know the use model. What frequencies are you going to be running at? How many channels are you running?”
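
As a simple illustration of why such models are context-dependent, the sketch below computes the resonant frequency of an idealized LC power-delivery network. The inductance and capacitance values are assumptions chosen only to show how strongly the answer moves with them, which is exactly why a single context-free model cannot answer every integrator's question.

```python
# Idealized PDN resonance estimate (assumed L and C values, for illustration only).
import math

def lc_resonance_hz(l_henry, c_farad):
    """Resonant frequency of a simple LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

pkg_inductance = 50e-12    # assumed 50 pH of package/bump inductance
on_die_cap = 200e-9        # assumed 200 nF of on-die decoupling capacitance

print(f"resonance ~ {lc_resonance_hz(pkg_inductance, on_die_cap)/1e6:.0f} MHz")
# Swap in board-level inductance and bulk capacitance instead, and the
# resonance drops by orders of magnitude, because the answer depends entirely
# on which level of the hierarchy, and which use model, is being analyzed.
```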

Verification challenge
The biggest hurdle to the IP paradigm always has been verification. “Comprehensive verification, taking into consideration all possible system use cases, is the IP vendor’s responsibility,” says Quadric’s Jani. “The integrator should need only to validate the proper interconnection of the IP within the system, not re-verify the IP in total. To enable that, the IP provider should deliver integration tests and assertion checkers that can be reused in the customer’s SoC testbench. The IP provider should also provide a reference testbench that demonstrates typical use models in RTL simulations. Supporting gate and power simulations in this testbench can allow the customers to quickly take the IP through physical implementation with their choice of tool flows, third-party libraries, and operating conditions — thereby enabling quick productization.”
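
Conceptually, those integration tests boil down to connectivity and configuration checks rather than functional re-verification. The sketch below, with hypothetical port and net names, shows the flavor of such a check, which is the kind of collateral an IP vendor can ship alongside the RTL.

```python
# Conceptual integration check (hypothetical port and net names): verify that
# every port of the delivered IP is tied to a suitably sized net in the SoC,
# without re-verifying the IP's internal behavior.

# Port list shipped with the IP: name -> (direction, width)
ip_ports = {
    "clk":        ("in", 1),
    "rst_n":      ("in", 1),
    "axi_awaddr": ("in", 32),
    "axi_rdata":  ("out", 32),
    "irq":        ("out", 1),
}

# SoC-level hookup: IP port -> (net name, net width)
soc_connections = {
    "clk":        ("sys_clk", 1),
    "rst_n":      ("sys_rst_n", 1),
    "axi_awaddr": ("noc_m0_awaddr", 32),
    "axi_rdata":  ("noc_m0_rdata", 32),
    # "irq" deliberately left unconnected so the check fires
}

def check_integration(ports, connections):
    errors = []
    for port, (direction, width) in ports.items():
        if port not in connections:
            errors.append(f"{port}: unconnected {direction} port")
            continue
        net, net_width = connections[port]
        if net_width != width:
            errors.append(f"{port}: width {width} tied to {net}[{net_width - 1}:0]")
    return errors

for err in check_integration(ip_ports, soc_connections):
    print("INTEGRATION ERROR:", err)
```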

IP companies must keep innovating in their verification flows. “Verification IP is developed in parallel with actual design collateral,” says Cadence’s Khan. “To build confidence in the design, we have augmented our design flows to include newer approaches in both pre- and post-silicon phases of the development. This includes increased usage of formal verification methodologies, emulation platforms and co-simulation, and real-world silicon testing. We now develop test chips that contain entire subsystems, and build platforms that allow customers to evaluate the IP in real-world situations under traffic stress, while exercising boundary conditions repeatedly to ensure that the IP will perform as expected. We have systems labs that perform these real-world tests, and we make our evaluation platforms available to customers.”

Testbench integration also has to be considered. “Typically, an IP block comes with its own standalone verification environment that needs to be integrated into the SoC environment,” says Ravi Thummarukudy, CEO of Mobiveil. “It’s possible that different IP vendors will use different verification IP, although it is common to use UVM for IP-level verification and Python or C++ for SoC-level verification. Porting a subset of the IP-level UVM environment to the C level is quite tedious.”

For highly configurable IP, all necessary testbenches have to be generated automatically. “Our methodology leads to the generation of a testbench that corresponds to the NoC as you configured it,” says Boillet. “The output of our generator is not just RTL. It’s modeling, it’s the verification environment, it’s documentation, it’s pieces of software — everything we can provide that we can derive from the configuration of the NoC.”
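
The underlying principle, sketched generically below (this is an illustration of configuration-driven generation, not the Arteris tool), is that one machine-readable configuration drives every deliverable, so the testbench, models, and documentation always describe the same NoC as the generated RTL.

```python
# Generic illustration of configuration-driven generation (hypothetical names):
# every deliverable is derived from the same configuration object.

noc_config = {
    "name": "soc_noc",
    "initiators": ["cpu", "gpu", "isp"],
    "targets": ["ddr", "sram"],
    "data_width": 128,
}

def generate_deliverables(cfg):
    name = cfg["name"]
    return {
        f"{name}.v":       f"// RTL top for {name}, {cfg['data_width']}-bit datapath",
        f"{name}_tb.sv":   f"// testbench driving {len(cfg['initiators'])} initiators",
        f"{name}_perf.py": "# performance model stub derived from the same config",
        f"{name}.md":      "# integration notes for " + name,
    }

for filename, stub in generate_deliverables(noc_config).items():
    print(filename, "->", stub)
```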

But what happens if the IP is modified after delivery? “In RISC-V, almost every customer wants to customize something or extend it,” says Simon Davidmann, founder and CEO of Imperas Software. “If you’re licensing IP from an Andes, or Codasip, or SiFive, and then change it, you really have to re-verify it. That creates a new problem, and means they need to ship a really sophisticated verification environment with it. How do you verify something you modify? The nature of extensibility changes the business models. A verification environment has to be a fundamental part of the IP delivery.”
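
One common way to handle that, sketched generically below rather than as Imperas' actual flow or API, is lockstep comparison: run the modified core and a golden reference model on the same program and compare architectural state after every instruction. A custom instruction immediately shows where the reference model, and therefore the shipped verification environment, must be extended as well.

```python
# Generic lockstep-compare sketch (hypothetical instruction encoding, not a
# real ISS API): the reference model and the customer's extended core execute
# the same program, and any divergence in architectural state is flagged.

def reference_step(state, instr):
    """Golden model: understands only the standard instruction here."""
    op, rd, a, b = instr
    if op == "add":
        state[rd] = (state[a] + state[b]) & 0xFFFFFFFF
    return state

def extended_step(state, instr):
    """Customer's core: adds a hypothetical custom multiply-accumulate."""
    op, rd, a, b = instr
    if op == "add":
        state[rd] = (state[a] + state[b]) & 0xFFFFFFFF
    elif op == "cust_mac":                      # custom extension
        state[rd] = (state[rd] + state[a] * state[b]) & 0xFFFFFFFF
    return state

program = [("add", "x1", "x2", "x3"), ("add", "x4", "x1", "x1")]
ref = {"x1": 0, "x2": 5, "x3": 7, "x4": 0}
dut = dict(ref)

for pc, instr in enumerate(program):
    ref = reference_step(ref, instr)
    dut = extended_step(dut, instr)
    assert ref == dut, f"divergence at instruction {pc}: {instr}"
print("lockstep compare passed on the standard-instruction program")
# As soon as the program uses cust_mac, the reference model must be extended
# too, which is why a modified IP needs its verification environment
# re-shipped and re-run, not just the RTL.
```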

Conclusion
Success in the IP world is not just about having the best design. It is about having the best design that is easy to integrate and test. That requires an increasing array of tools and models, and increasingly requires the IP developer to become a chip company, even if they never sell their hardened IP directly in the marketplace.

As IP blocks become bigger and more complex, some degree of opaqueness is unavoidable. But to make that work, increasing levels of trust are required between supplier and integrator. That will make it a lot more difficult for new IP companies to establish themselves.


