IP Industry Transformation

Design IP has played a pivotal role in the creation of today’s complex SoC, but that role keeps changing. Each change places new demands on IP suppliers.

The design IP industry is developing an assortment of new options and licensing schemes that could affect everything from how semiconductor companies collaborate to how ICs are designed, packaged, and brought to market.

The IP market already has witnessed a sweeping shift from a “design once, use everywhere” approach, to an “architect once, customize everywhere” model, in which IP is highly configurable and customizable and the focus is on domain-specific optimization. But as chips become increasingly complex, and as new types of IP and licensing models continue to gain ground — especially on the processor side with RISC-V — more changes are coming. The big question now is which of those changes will make a lasting impact, and who will be the winners and losers as a result.

Underlying all of this activity in the IP space is the fact that an increasing number of chip designs either have hit, or will hit, the reticle limit. So instead of packing more features and functions into a single SoC, chipmakers are breaking them apart into smaller components. The focus is now on disaggregation, and that makes the notion of standalone chiplets a real possibility. But whether that creates a new market for pre-fabricated, off-the-shelf IP, or whether it conflicts with the requirement for increasing levels of optimization, is yet to be determined.

“IP providers are incentivized to create more and more value for their customers, and not just continue in the old mold of delivering verified RTL or hard macros at the latest process geometry,” said Richard Oxland, product manager for the Tessent Division of Siemens EDA.

That demands increasing levels of innovation. “Whenever there’s either great competition, or a little bit of market uncertainty, people try to engineer their way out through innovation,” says Michal Siwinski, CMO at Arteris IP. “Everybody wants cheaper, faster, better, and IP plays a huge role in that. You have very configurable building blocks that are ready to go and are able to save customers amazing amounts of time, money, and effort, so they can utilize their precious resources on their unique secret sauce.”

IP completeness and configurability
When the IP market was first conceived, the blocks were fairly limited, with a small amount of domain knowledge required to build them. That has grown to the point where the emergence of a new standard often means so much complexity that it is impossible to deliver fully capable IP on day one. In addition, the demands on the IP block differ, depending on the end-market in which it will be used.

Consider the evolution of the PCI Express standard. “PCI Express gen 6 is magnitudes more complex than the previous generations,” says Mick Posner, director of product marketing at Synopsys. “We have become very focused on when these protocols will be adopted and in what applications they will be used. We have teams dedicated to looking through the entire IP portfolio to determine when we need to start ramping effort to ensure delivery in a particular timeframe. That generates a set of key features required to target those first adopters without having to over-build the IP. While that does spread out our development timeline, the team sizes are very similar, because the development is broken up into chunks. Customization is a resource challenge and has required us to grow our IP service team to meet the needs of these demanding customers, either on the controller side or, more predominantly, for customizations to the PHY.”

This can create significant organizational challenges. “The cutting edge is to borrow best practices from another industry, and in this case it’s the manufacturing industry,” says Siemens’ Oxland. “The concept of product lifecycle management is well understood in the development of highly complex, highly customizable products, such as vehicles or industrial machinery. In the context of silicon IP, providers must track market requirements through to specification and verification requirements, as well as managing updates and end-of-life. Compliance with functional safety standards adds extra tracking and documentation requirements, and the emerging field of trusted electronics supply chain will also impose requirements — initially for government and mil-aero applications, but over time this will expand into other applications. All of these must be documented and easily accessible for reference.”

IP developers have tried to restrict the number of product variants that they need to maintain, and that concept is also being adopted in open-source development. “In the RISC-V reference manual, there are hundreds of options and choices,” says Simon Davidmann, founder and CEO of Imperas Software. “Some of the options are simple, such as, has it got hardware math on it? Has it got vector processing? Has it got bit manipulation? Then there are other things, such as determining priorities during an interrupt, which is a little more subtle. RISC-V is still working to try and narrow down the options so the spec can have everything. But you can adhere to a certain set of options, like a profile for embedded. In the IP space, there are so many options you can have around these processors that we need to narrow them down so they can be verified in a measurable way, and people can feel confident that the configuration they’re choosing has been verified. We will see the definition of profiles come out of RISC-V International later this year. This is one of the big challenges, and it is being tackled publicly within OpenHW.”
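The combinatorial pressure Davidmann describes is easy to quantify: every independent option doubles the number of configurations that would need verification, and a profile tames that by pinning most options to fixed values. A minimal sketch of the arithmetic, with an invented, simplified option list rather than the actual RISC-V extension set:

```python
from itertools import product

# Illustrative, simplified option set -- the real RISC-V spec has far more.
OPTIONS = ["M (hardware multiply)", "F (float)", "V (vector)",
           "B (bit manipulation)", "C (compressed)"]

# With every option independently on/off, there are 2^n configurations
# that would each need verification.
all_configs = list(product([False, True], repeat=len(OPTIONS)))
print(len(all_configs))  # 32 configurations from just 5 binary options

# A "profile" pins most options, leaving a small, verifiable subset.
embedded_profile = {"M (hardware multiply)": True, "C (compressed)": True,
                    "F (float)": False, "V (vector)": False}
free = [o for o in OPTIONS if o not in embedded_profile]
print(2 ** len(free))  # only the unpinned options remain to cross-verify
```

With even five binary options the space is 32 configurations; the real spec's hundreds of options make exhaustive verification impossible, which is why profiles matter.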

Posner agrees with this approach. “We have what we call prime profiles. That creates a starting point for the customer. It allows us to roll out a set of configurations of the IP, more specifically, with a cross-combination of features enabled and disabled. Those prime profiles fundamentally act as a jumpstart for the customer. We offer a selection of prime profiles, the customer selects one of those, and they are ready to go. There are no configuration steps, but there is flexibility. The customer still gets to change FIFO depth or turn off capabilities. What they don’t get to do is turn on capabilities that may not have been cross-verified. This fundamentally is a reflection of our internal release and versioning. We can control exactly what the customers see. In a scenario where you have multiple customers asking for the same set of capabilities, they can be very easily packaged as a prime profile. That may not be deployed across all of the customers.”

There are other ways to deal with this issue. “Our IP is designed with pre-configured, pre-validated building blocks,” says Arteris’ Siwinski. “I’m not going to go as far as saying this is correct by design, because it’s hard to say if that’s really true. Obviously, everything has to be verified and validated. And we take steps from the beginning to ensure that we don’t have to generate so many different versions that everything has to be tested, because that would be really difficult for our customers. Each of the building blocks that make up a more complex piece of a configured system is very thoroughly validated.”

Analog circuitry adds other challenges. “Architects want a high level of configurability,” says Aakash Jani, marketing and brand growth at Movellus. “More IP companies, whether you’re buying anything from a PLL to an ARM core, need to start delivering soft IP that is optimizable and configurable. Analog IP is special in that it requires a lot of hardening services, and this is going to force the IP market to start looking at more digital solutions for those traditionally analog blocks so they can deliver that soft level of configurability.”

Infrastructure IP
A new level of IP is being created that sits underneath the design. Often called infrastructure IP, it does not add to the functionality. But it may help with building a more optimized implementation, or perform some other function that is independent of that required for the use case. “Many of the new design companies lack the depth and wealth of experience in their physical design teams that would come with being a mega corporation,” says Movellus’ Jani. “This is where a secondary market of infrastructure IP could really start to blossom.”

One of the earliest examples was the network on chip (NoC). “Over time, the number of agents in a system increased,” says Michael Frank, fellow and system architect at Arteris IP. “At some point you realize that if they all need to communicate, it creates a lot of wiring. This is especially true if you have a centralized arbiter that allows one agent to access the bus at a time, because it creates too many cycle conflicts, and latencies go through the roof. It means you need more communication paths, but in a more intelligent manner. You may need point-to-point communication between certain agents, or one to many for others, or many to one for memory. So having this paradigm of a network provides an optimized implementation that has the necessary flexibility.”

There are potentially multiple levels with that. “The NoC solves the data flow issue,” says Jani. “But a NoC is only as good as the clock network that supplies it. Every time you hit a repeater flop, or a synchronizer flop, it adds at least a cycle of latency. If you can move data without those points of inefficiency, then you can improve your data movement, you can improve your multicore performance, your throughput, and do it with less area and power.”
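Jani's point about repeater and synchronizer flops can be made concrete with back-of-the-envelope arithmetic: each such pipeline stage on a NoC path adds at least one clock cycle, so path latency grows directly with the number of stages. A sketch with illustrative numbers (the cycle counts and clock frequency are invented, not vendor data):

```python
def path_latency_ns(base_cycles, repeater_flops, sync_flops, clock_ghz):
    """Latency of one NoC traversal: base router/protocol cycles plus
    one extra cycle per repeater flop and per synchronizer flop."""
    cycles = base_cycles + repeater_flops + sync_flops
    return cycles / clock_ghz  # cycles * clock period (1/f gives ns at GHz)

# Illustrative comparison at 2 GHz: a path with 6 repeater flops and
# 2 clock-domain crossings vs. the same path with those stages removed.
with_flops = path_latency_ns(base_cycles=10, repeater_flops=6,
                             sync_flops=2, clock_ghz=2.0)
without = path_latency_ns(base_cycles=10, repeater_flops=0,
                          sync_flops=0, clock_ghz=2.0)
print(with_flops, without)  # 9.0 ns vs. 5.0 ns
```

Under these assumed numbers nearly half the traversal time is plumbing rather than useful work, which is the inefficiency an intelligent clock network aims to remove.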

This may quickly become important in the AI space where each company is attempting to stand out from the others. “As we start to see AI companies clump together in terms of inferences per second per watt, differentiation will become harder and harder,” adds Jani. “We believe that gains can be made by looking at the core infrastructure underneath the optimization, such as the clock network, or the interconnect, when you are stitching together multiple domain specific accelerators. The optimization of their architecture through things like intelligent clock networks, or new clock architectures, or new NoC architectures, can provide an extra 10% to 20% inferences per second per watt.”

One area that has been seeing increasing adoption is on-chip monitoring. “This is a new class of IP, broadly intended to provide visibility into the operation of chips at different levels of complexity,” says Oxland. “It doesn’t contribute to the intended function of the system. Instead, it monitors key operational metrics, from structural integrity, to voltage and temperature, to path delays, and all the way up to higher level metrics such as bus latency and bandwidth statistics. As system complexity increases, this class of IP will become a staple in every IC, and we will see more and more innovation around silicon lifecycle management.”

IP as a service
In the software industry, open source led to a rapid increase in the number of software-as-a-service (SaaS) companies. This may be happening in the open-source hardware space, as well. “Open-source processor IP has brought revolutionary change across the IP industry,” says Bipul Talukdar, director of application engineering for SmartDV. “However, it is not as straightforward as it seems to pick an available RISC-V processor IP and integrate that into a design flow. There are challenges when it comes to taking care of the necessary plumbing, such as firmware, toolchains, and various integration aspects. This has brought forward a new business model based on open-source processor productization. This can be called ‘open-source processor productization as a service,’ which helps with productizing and benchmarking available processor cores for specific applications, and also extends the service to offer any co-processor or custom processor-based acceleration needs for specialized applications.”

The industry is trying a number of approaches. “Within the OpenHW Group, members gel around a core type of architecture,” says Imperas’ Davidmann. “The first was a 32-bit RISC-V with integer math and compressed instructions. It had no floating point and no virtual memory. It was basically a four-stage pipelined embedded core. Today, there are about five other cores that have evolved from that. Each has different features and capabilities, and each of them has different companies contributing to the work. One of the very interesting things is they took the core and added an accelerator interface on the back of it, so it allows you to add your own external RTL. That sits on the bus of the processor through a well-defined interface. It means you can take IP whose RTL took 10 person-years of collective effort to build and develop. You can download it and use it, and add your own secret sauce with accelerators for your different application areas. If you were to do that by adding instructions, you break all the verification. You don’t really want to touch the core. You add your own proprietary stuff around the edge in a non-damaging way.”

Some companies may need more flexibility. “The addition of new instructions can have a dramatic impact on PPA,” says Zdeněk Přikryl, CTO for Codasip. “It can improve performance, or it may reduce memory footprint. But you need tools to help you do this. You describe the processor, including its ISA and microarchitecture, in a high-level language, which is fed into the tool suite. It generates the SDK, RTL and verification tools. The tools also help you find the right place for the optimization. The profiler gives you hints, such as instruction fusion. It identifies hotspots that can, or perhaps should be, optimized. A designer can do fast design space iterations, because all they need to do is to change the processor description, and the outputs are regenerated and ready for the next round of performance evaluation.”
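The instruction fusion Přikryl mentions replaces a frequently occurring pair of adjacent instructions with a single fused operation, and the profiler's job is to find how often such pairs occur in real workloads. A toy sketch of that counting step; the trace contents and instruction names are invented for illustration:

```python
# Toy instruction trace; a profiler would collect this from real workloads.
trace = ["load", "mul", "add", "mul", "add", "store", "mul", "add"]

def count_fusion_candidates(trace, pair):
    """Count non-overlapping occurrences of an adjacent instruction pair."""
    count, i = 0, 0
    while i < len(trace) - 1:
        if (trace[i], trace[i + 1]) == pair:
            count += 1
            i += 2  # the pair would become one fused instruction
        else:
            i += 1
    return count

fusions = count_fusion_candidates(trace, ("mul", "add"))
# Each fusion saves one issue slot: the 8-instruction trace shrinks
# to 8 - fusions instructions if a fused mul+add is added to the ISA.
print(fusions, len(trace) - fusions)
```

A multiply immediately followed by a dependent add is a classic fusion target (a multiply-accumulate); the profiler surfaces the hotspot, and the designer decides whether the PPA gain justifies the new instruction.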

If the open-source concept works for the processor, it could spread to more of the IP on the chip. “We have yet to see major open-source activities in other functional components of the IC, but the same forces will apply beyond the processing unit,” says Oxland. “It is worth noting that RISC-V International, the foundation that develops and maintains the RISC-V standards, has expanded the scope of its work beyond the core and into the system as a whole.”

Heterogeneous integration
An even bigger change in the IP market is coming from the need for heterogeneous integration of multiple dies within a package. This change is creating some potentially new classes of IP. “If the chiplet model becomes widely adopted, and more affordable, it will reinvigorate the IP business,” says Ashraf Takla, founder and CEO of Mixel. “Chiplets give us more freedom to choose the best process for our IP instead of having to port our IP to a particular process technology that is used by the rest of the SoC. If a single chiplet standard becomes the clear winner, some IPs will be ripe for productization as chiplets.”

There are many differing views on this subject, along with the progress of standards for heterogeneous integration, which will be explored in future articles.


