IP Requires System Context At 6/5/3nm

At each new process node, gates are free. That opens the door to a lot more IP blocks, and a lot of new challenges.


Driven by each successive generation of semiconductor manufacturing technology, complexity has reached dizzying levels. Every part of design, verification and manufacturing becomes more complicated and intense as more transistors are packed onto a die. For these reasons, the entire system must be considered as a whole, not just as a collection of individual building blocks, as was possible in the past.

Design IP, either as a subsystem or individual component, still must be verified and validated on its own. It also must be simulated, prototyped, and emulated with knowledge of the system it will operate in.

“Semiconductor process technology advancement is moving at a rapid pace. We are now seeing a new node being introduced almost every year,” noted Tom Wong, director of marketing, design IP at Cadence. “The pace of Moore’s Law is relentless. Just when we thought 7nm was the end of Moore’s Law, the industry surprised us yet again with 6nm and 5nm, and now 3nm gate-all-around. With every step toward a finer geometry, challenges due to semiconductor physics and lithography get more difficult. Mask sets get more costly, and lithography problems are magnified. This is why we have immersion and multi-patterning and EUV. The cost of design is so expensive now that first-time success is mandatory. You cannot afford the cost of a second spin, and missing your market window is unimaginable. This puts a huge burden on designers, EDA companies, IP companies, foundries, etc.”

Just the fact that there is more IP to contend with is reason enough to consider IP within the context of a system.

“Chips routinely include many hundreds of IP blocks, and managing that becomes a challenge,” said Rupert Baines, CEO of UltraSoC. “That challenge grows exponentially, creating the ‘systemic complexity’ effect and making it the main task of the chip development team – throughout the flow from the architect to the engineers doing bring-up and customer engineering, and then on to the customers of the chip doing their integration.”
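A rough illustration (not from the article) of why that challenge grows faster than the block count itself: if any two of *n* IP blocks can potentially interact, the number of pairwise interfaces to reason about grows quadratically with *n*.

```python
# Illustrative sketch: with n IP blocks, the number of distinct
# block-to-block pairings that could interact is n * (n - 1) / 2.

def pairwise_interfaces(n_blocks: int) -> int:
    """Number of distinct pairs among n_blocks."""
    return n_blocks * (n_blocks - 1) // 2

for n in (10, 100, 500):
    print(f"{n} blocks -> {pairwise_interfaces(n)} potential pairings")
```

Going from 10 blocks to 500 multiplies the block count by 50, but the potential pairings by more than 2,700, which is why interaction management, rather than any single block, dominates the team's effort.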

At the same time, the process encompasses both hardware and software development, and it is hard to manage that complexity, he continued. “To understand how those IP blocks are going to interact between themselves, and with the software that’s going to run on them – that is critically important, but incredibly hard.”

With advanced nodes, gates are basically free, which makes it easy to add more and more IP blocks.

“What’s driving IP growth has a lot to do with technology and people moving to advanced process nodes, because they’re outsourcing more and more of a design,” said Chris Jones, vice president of marketing at Codasip. “This translates to more processors, and given the expense of advanced nodes, anything that you can move to the software realm to ensure that you don’t have to do a silicon respin is great.”

That, in turn, drives the need for more microprocessors and the need to account for them within the overall system, Jones said. “If you’re going to advanced process nodes, these are big, big gambles. These are $50 million, $100 million projects, and it can be catastrophic for a company to have a chip come back with dark silicon. Because of that, there are many more requirements and constraints put on processor providers for quality, simulation and emulation.”

Power constraints and packaging options
As more IP blocks are included, though, power-related concerns rise proportionately. Noise from the power delivery system, switching near mixed-signal blocks, and power dissipation all cause problems. And with large and complex SoCs, such as those used for AI and deep learning, the number of blocks that are always on is much different from designs in the past.

This has an impact on the chip, but it also has an impact on thermal management in hyperscale data centers, where powering and cooling racks of servers is expensive.

“If you want to minimize power, one of the most important decisions that needs to be made is whether to embed and do everything on a single SoC and a single package, or do partitioning and do a multi-chip solution,” said Farzad Zarrinfar, managing director of the IP Division at Mentor, a Siemens Business. “In a single-SoC embedded approach, the power consumption can be minimized. You don’t need to drive a high capacitive load of I/Os in the packages. This reduces heat and power consumption, and also has an impact at the system level. At the same time, there are other important derivative benefits that come with it, such as security. When you’re dealing with chip-to-chip communications, there is always a possibility of tampering with and accessing pin-to-pin and chip-to-chip communications, using probes and monitors to intercept the communication and its security content. But when it is embedded, monitoring and tampering are virtually impossible, or at least a much, much harder task.”
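The I/O-load argument above can be sketched with the standard dynamic switching power relation, P = α·C·V²·f. The capacitance, voltage and frequency figures below are illustrative assumptions, not values from the article; they simply show the rough magnitude of the gap between driving an off-package I/O and an on-die wire.

```python
# Back-of-envelope sketch (assumed values, not from the article) of why
# driving package I/Os costs far more power than an on-die connection.
# Dynamic switching power: P = activity * C * V^2 * f.

def switching_power_w(c_farads: float, v_volts: float, f_hz: float,
                      activity: float = 0.5) -> float:
    """Dynamic power dissipated toggling a capacitive load."""
    return activity * c_farads * v_volts ** 2 * f_hz

# Assumptions: ~5 pF pad-plus-trace load at 1.8 V I/O voltage vs.
# ~50 fF on-die wire at 0.8 V core voltage, 500 MHz toggle rate.
off_chip = switching_power_w(5e-12, 1.8, 500e6)
on_chip = switching_power_w(50e-15, 0.8, 500e6)
print(f"off-chip: {off_chip * 1e3:.2f} mW per signal line")
print(f"on-chip:  {on_chip * 1e6:.2f} uW per signal line")
```

Under these assumed numbers the off-chip line burns roughly 500x the power of the on-die one, before accounting for SerDes or termination, which is the system-level tradeoff Zarrinfar describes between single-SoC integration and multi-chip partitioning.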

Complex 2.5D designs contain IP from many sources. “It used to be enough to ensure you have a complement of best-in-class IP for a successful design,” said Mike Gianfagna, vice president of marketing at eSilicon. “Beyond this, you would need to ensure the appropriate standards were supported for interoperability and connectivity. Those days are gone.”

Integration issues
Once the quality and standards support of the individual IP are ascertained, the integration challenges begin. Things like metal stack, design for test, control interfaces, operating range and reliability all matter, and inconsistencies create challenges. To hit power, performance and area (PPA) targets, the IP also needs to be adaptable to the specific application. In the system context, software and firmware also need to be considered. Many chip bring-up programs for complex systems are plagued by firmware errors or inconsistencies rather than hardware ones, Gianfagna said.

Ironically, IP always was seen as a way to accelerate development.

“IP reuse may help to grow and scale system designs much faster,” said Zibi Zalewski, R&D director and general manager for Aldec Hardware Products Division. “There is more space for integrators who re-use available IP, which also allows them to focus development on new features and algorithms instead of common elements of the system. That actually puts complicated projects within the range of smaller design teams, through the reuse and customization of available systems. Design engineers may re-use processor and GPU IP and develop only the functions they need. Such an approach specializes the engineering market.”

But it also changes the dynamics of the design process. “There are IP development companies, integration teams and new hardware developers,” Zalewski said. “The system level is still doing fine, but the complexity and cost of such projects has grown significantly, which makes it less accessible for smaller engineering teams. With all the new technologies booming, like machine learning, data centers and automotive, we will need new chips designed that consider all of those requirements, and that means system-level design will still be growing, architecting the systems based on IP reuse and dedicated modules development.”

But not all IP is the same, including standard IP. In fact, in complex chips IP increasingly is customized for competitive reasons. So while it still offers a faster time to market, it also adds some unique challenges.

“You do not have the time to develop a standard interface, a memory, a sub-block, so IP was seen as a way of accelerating the development cycle by using predefined blocks,” said Mick Posner, director of marketing for IP accelerated at Synopsys. “If you look at the history, I can go back all the way to the Design Compiler foundation library, the building blocks, smaller FIFOs and things like that. Those are generally overlooked as the first bits of IP because they were synthesizable logic that was just inferred in the code. I remember dealing with a plus sign in VHDL or Verilog. That was the first piece of IP. It’s a synthetic operator, and we would map that to a type of adder or subtractor, depending on the constraints. Back then it was all about area and performance as the constraints. You wanted to get the best design. But now you look forward, you still need to meet the PPA, but the project cycle is the biggest constraint and, hence, complex IP.”

Customer requirements have evolved over the years, too, said Codasip’s Jones. “Initially, it started off as RTL blocks and USB 2.0, the original PCIx, and things like that. But as those protocols evolved and new protocols came out, like MIPI and HBM2E, the advanced protocols and interfaces require more than just RTL. It’s a combination of mixed-signal blocks, timing closure of those analog components and, of course, an RTL function. Now, in the context of an SoC, customers want more, and that’s where the concept of subsystems comes from. What sets a subsystem apart from traditional IP is that it really only has value in the configuration the end design requires.”

Mitigating the challenges
To highlight the areas that can move the needle on the challenge of IP within the system, Cadence’s Wong pointed to various facets of design and verification to consider in seeking first-pass success, including better TCAD, SPICE model and RF design tools. Add to that list IP subsystem validation, chiplets and 2.5D, hardware emulation, C modeling and chip-board co-design.

“When you look at the IP ecosystem, you have foundation IP (standard cell libraries), memory compilers and general-purpose I/Os,” Wong said. “Then you have CPU cores, DSP cores and graphics cores. For complex heterogeneous SoCs, you will find a fabric (NoC) IP. These are all the basic building blocks of a modern SoC.”

Foundation IP is usually provided by the foundries, while interface IP is available from third-party IP vendors. The interface IP can be subdivided into hard PHYs and controllers (RTL code). Hard PHYs are process-dependent, while controllers are RTL code that can be synthesized into a physical form using a particular library. Examples of interface IP include LPDDR4 PHYs and controllers, PCIe PHYs and controllers, etc.

The PHYs and controllers work in concert with each other. To ensure interoperability, a lot of testing and verification needs to be done between the PHY and controller. To ensure first-pass success in SoC implementation, designers must decide if this burden should be borne by the IP suppliers or by themselves. Procuring the PHY from one supplier and the controller from another supplier is a really bad idea, especially for automotive applications where you also have to qualify the IP for functional safety.

For dedicated DSPs used in audio, vision and communications, the issues are quite similar. “You want to rely on the IP provider to prepare the system and do all the validation that is possible to perform upstream,” Wong said. “Then you can have confidence that you can rely on some form of correct-by-construction by burden sharing. On top of that, audio, vision and communications solutions are not hardware solutions only. They are, in fact, system solutions where you must make sure the operating system, debugger, tool chain and application layers are all validated together. A piecemeal solution will introduce too much risk. Beyond IP subsystem validation, hardware emulation, C modeling and chip/board co-design will all play a major role in early validation to ensure successful deployment of complex SoC designs.”

Back in the days of custom ASICs and custom designs, Kevin McDermott, vice president of marketing at Imperas Software, recalled people talking about VLSI design (now the SoC). “Then, people talked about these Frankenstein chips, which are multiple subsystems all merged and integrated. You can draw the analogy with Moore’s Law. As technology advances, each node allows you to absorb more and more functionality. And the classic statement is that it saves an extra chip on the board, so it reduces the power systemwide. It reduces the cost, and also the interconnect. Having very smart functional standalone chips means that with a PCB you have to deal with that level of interface. Putting them on a chip, you suddenly have really wide buses and really nice communication.”

This has made system design relatively straightforward, but it’s getting much more difficult at 7nm and below.

“We’ve absorbed and integrated things and felt quite good about ourselves having all these great cores, but we haven’t really looked at it from a correct, system-level view,” McDermott said. “If you’ve got the mentality or the approach of one-size-fits-all, life’s going to be tough. And what we’re seeing in these new markets — the IoT and the ML and AI — there’s no de facto solution. There’s no reference single hardware configuration everyone has bought into. But also on the software side, there’s no one OS structure or protocol. There’s no one way of addressing these algorithms. And there are so many nuances within each of these big markets. It’s never really happened before. We’ve got unknown software and unknown hardware trying to solve a problem.”

With unceasing complexity, and chip architectures changing to account for a slowdown in device scaling, the role of IP within the context of system design is becoming much more important, and much less structured.

The good news is that technologies abound to address the issue from many angles. The bad news is there is a lot more IP to integrate, and much of it is anything but standard.

