IP Integration Challenges Rising

Need for more support from IP providers grows with complexity.

It isn’t just one thing that is putting a crimp in sub-28nm designs. As more functions, features, transistors and software are added onto chips, the pressure to get chips out the door has forced chipmakers to lean more heavily on third-party IP providers.

Results, as you might expect, have been mixed. The number of blocks has mushroomed, creating its own web of complexity. So while IP can and does speed up the design process, managing the complexity of the IP itself and the interactions between growing numbers of IP blocks is a challenge by itself—something that is complicated further by IP reuse and a mix of externally developed and internally developed IP in an effort to amortize costs and improve ROI.

“Typically there are 200 to 300 IP blocks and a large number of SRAMs,” said Hem Hingarth, vice president of engineering at Synapse Design. “IP has to meet requirements for software drivers, RTL design, verification and physical design views.”

Hingarth noted that design teams are using larger IP blocks and subsystems to help mitigate these problems, but the challenge keeps growing along with complexity. This is particularly evident in the physical IP world, where 28nm looks very different from 20nm.
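With hundreds of blocks, even confirming that each delivery contains the views Hingarth lists becomes a chore. The sketch below is purely illustrative, not any vendor's actual flow: it walks a hypothetical directory of IP packages and reports which checklist categories (RTL, software driver, verification collateral, physical views) have no matching files. The directory name, layout and file suffixes are assumptions.

```python
# Illustrative sketch: verify that each third-party IP package delivers the
# views an integration team needs. Checklist and suffixes are hypothetical.
from pathlib import Path

REQUIRED_VIEWS = {
    "rtl": [".v", ".sv"],                   # RTL design
    "driver": [".c", ".h"],                 # software driver sources
    "verif": ["_tb.sv"],                    # verification collateral
    "physical": [".lef", ".gds", ".lib"],   # physical design views
}

def missing_views(ip_dir: Path) -> list[str]:
    """Return the checklist categories with no matching file in ip_dir."""
    files = [p.name for p in ip_dir.rglob("*") if p.is_file()]
    missing = []
    for view, suffixes in REQUIRED_VIEWS.items():
        if not any(name.endswith(s) for s in suffixes for name in files):
            missing.append(view)
    return missing

def audit(ip_root: Path) -> None:
    """Print an integration-readiness report for every IP under ip_root."""
    for ip_dir in sorted(d for d in ip_root.iterdir() if d.is_dir()):
        gaps = missing_views(ip_dir)
        status = "OK" if not gaps else f"missing: {', '.join(gaps)}"
        print(f"{ip_dir.name:30s} {status}")

if __name__ == "__main__":
    audit(Path("ip_packages"))  # hypothetical directory of IP deliverables
```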

A new ballgame
“20nm is really a node that has a fundamental difference for IP for everything – for IP integration for sure, the coloring, the double patterning,” said Jean-Marie Brunet, product marketing director for DFM and place-and-route integration at Mentor Graphics. “20/16/14nm and below is basically the era of multi-patterning design. Anything before this, for example, 28nm, I would say that there were two domains of challenge for IP integration. One was the overall density checking.”

Density checking is often viewed from the standpoint of metal filling. The problem is that IP vendors often don’t know how IP is going to be used in a design. Typically the IP can be fixed relatively easily on its own. But when multiple IP blocks are integrated and design teams run a full-chip density check, issues emerge.
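To make the full-chip density issue concrete, here is a minimal, purely illustrative sketch of a windowed density check. The window size, density limits and block geometry are invented rather than any foundry's rules; the point is that two blocks that pass in isolation can still leave an out-of-spec seam once they are placed side by side.

```python
# Illustrative windowed metal-density check; thresholds and shapes invented.
from dataclasses import dataclass

@dataclass
class Rect:
    x0: float
    y0: float
    x1: float
    y1: float

def clipped_area(r: Rect, wx0: float, wy0: float, wx1: float, wy1: float) -> float:
    """Area of rectangle r that falls inside the window [wx0, wx1] x [wy0, wy1]."""
    dx = min(r.x1, wx1) - max(r.x0, wx0)
    dy = min(r.y1, wy1) - max(r.y0, wy0)
    return max(dx, 0.0) * max(dy, 0.0)

def density_violations(shapes, chip_w, chip_h, win=50.0, dmin=0.25, dmax=0.75):
    """Yield (x, y, density) for every win x win window outside [dmin, dmax]."""
    y = 0.0
    while y < chip_h:
        x = 0.0
        while x < chip_w:
            metal = sum(clipped_area(s, x, y, x + win, y + win) for s in shapes)
            density = metal / (win * win)
            if not (dmin <= density <= dmax):
                yield (x, y, density)
            x += win
        y += win

# Two blocks that are each density-clean in isolation still leave a sparse
# seam between them; the full-chip sweep is what catches it.
ip_a = [Rect(i * 2.0, j * 2.0, i * 2.0 + 1.0, j * 2.0 + 1.0)
        for i in range(25) for j in range(25)]                 # 25%-dense block
ip_b = [Rect(100.0 + i * 2.0, j * 2.0, 101.0 + i * 2.0, j * 2.0 + 1.0)
        for i in range(25) for j in range(25)]                 # same block, shifted
for x, y, d in density_violations(ip_a + ip_b, chip_w=150.0, chip_h=50.0):
    print(f"window at ({x:.0f}, {y:.0f}) has density {d:.2f}")  # the empty seam
```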

“That was really visible at 28nm, and it continued to be a dramatic problem. And now we’re looking at 10 and 7nm and it’s even more complicated than 28,” Brunet said.

The second challenge involves lithography and how to make sure certain patterns will print correctly for a process window. The issue of context applies here, as well. “We can verify that an IP is clean by itself or within a certain assumption of the context that is going to be around it,” said Brunet. But he added, “When you do that you need to exhaustively look at all of the domain dimensions to be sure that in that contextual situation, you cover everything.”
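A toy example of the context problem Brunet describes: a spacing check that is clean for the IP in isolation can fail once a particular neighbor is placed next to it. The rule value, coordinates and edge representation below are invented for illustration only.

```python
# Toy context-dependent check: edges are simplified to (y_low, y_high, x).
MIN_SPACE = 0.064  # hypothetical same-layer spacing rule, in microns

def boundary_spacing_ok(ip_edges, neighbor_edges):
    """True if every IP edge keeps MIN_SPACE to every overlapping neighbor edge."""
    for ya0, ya1, xa in ip_edges:
        for yb0, yb1, xb in neighbor_edges:
            overlap = min(ya1, yb1) - max(ya0, yb0)
            if overlap > 0 and (xb - xa) < MIN_SPACE:
                return False
    return True

ip_edges = [(0.0, 1.0, 4.999)]    # IP shape flush against its own boundary
friendly = [(0.0, 1.0, 5.080)]    # neighbor leaves roughly 81nm of space
hostile  = [(0.0, 1.0, 5.040)]    # neighbor leaves roughly 41nm of space

print(boundary_spacing_ok(ip_edges, friendly))  # True: clean in this context
print(boundary_spacing_ok(ip_edges, hostile))   # False: fails in this context
```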

Jim Lipman, director of marketing at IP provider Sidense, agreed, noting that one of the problems with shrinking features is process variability and trying to account for it. “Tie into that design for manufacturability, as well, and how do you keep your yields up when you’ve got parameters that have a much broader spread percentagewise than they used to at, say, 65nm and 45nm. Now you’re going down to 28 and 20, finFETs and beyond. It’s a problem.”
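A toy Monte Carlo run makes the point about percentage spread concrete: with fixed spec limits, doubling a parameter's relative sigma removes a visible chunk of yield. The distribution, limits, spread values and node labels below are illustrative assumptions, not real process data.

```python
# Toy Monte Carlo: wider percentage spread vs. fixed spec limits hurts yield.
import random

def yield_estimate(nominal, sigma_pct, lo, hi, n=100_000, seed=1):
    """Fraction of samples of a normally distributed parameter within [lo, hi]."""
    rng = random.Random(seed)
    sigma = nominal * sigma_pct / 100.0
    inside = sum(lo <= rng.gauss(nominal, sigma) <= hi for _ in range(n))
    return inside / n

# Hypothetical threshold-voltage-like parameter with +/-10% spec limits.
nominal, lo, hi = 0.40, 0.36, 0.44
print(f"older-node-like spread (3%): {yield_estimate(nominal, 3.0, lo, hi):.3f}")
print(f"advanced-node-like spread (6%): {yield_estimate(nominal, 6.0, lo, hi):.3f}")
```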

Advanced node requirements for physical IP
For physical IP at advanced nodes, a purchase typically comes with a checklist of views and files that accompany the IP. Brunet noted that traditionally the GDS view of the hard macro IP is provided, which is the full-blown, detailed view of what that IP is about.

“The GDS is the ultimate reference,” he said. “You cannot have more detail than the GDS. But when you utilize and integrate that IP, you really don’t manipulate the GDS, you manipulate abstract views. That’s what most place and route systems and the chip integration guys manipulate, because it’s smaller in terms of footprint, size of the data and it’s an abstract. The problem we see is that very often now with advanced nodes, particularly with coloring and density requirements, the abstract view is not appropriate anymore. You can’t really efficiently manipulate an IP by just relying on the abstract, so what we see is a compromise. People say, ‘I don’t want to manipulate the abstract, I’ll manipulate the full GDS.’ There is a lot of implication with this. Your system has to be able to manipulate that GDS completely, so we see an explosion in file sizes in many different domains.”

He stressed that abstract models cannot be relied on as much for physical representation as had been done in the past due to density, lithography contextuality, and coloring. As a result, the challenge for the IP design house is to enable that IP to be easily integrated within many different contexts. “On the IP integrator side, relying only on very abstract models is very complex, very challenging – almost impossible.”
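One way to picture the gap Brunet describes is to compare what the two views actually carry. The sketch below is a simplified data model, not a real LEF or GDS parser, and the checks-to-views rule of thumb in it is an assumption made purely for illustration.

```python
# Simplified data model of abstract vs. full layout views (illustrative only).
from dataclasses import dataclass, field

@dataclass
class AbstractView:            # LEF-like abstract of a hard macro
    name: str
    width: float
    height: float
    pins: dict = field(default_factory=dict)       # pin name -> (x, y, layer)
    blockages: list = field(default_factory=list)  # coarse keep-out rectangles

@dataclass
class LayoutShape:             # one polygon in the full layout view
    layer: str
    mask: int                  # double-patterning color assignment (1 or 2)
    points: list               # polygon vertices

@dataclass
class FullLayoutView:          # GDS-like detail view of the same macro
    name: str
    shapes: list

macro_abs = AbstractView("sram_4kb", width=40.0, height=60.0,
                         pins={"CLK": (0.0, 30.0, "M3")})
macro_full = FullLayoutView("sram_4kb", shapes=[
    LayoutShape("M1", mask=1, points=[(0, 0), (1, 0), (1, 1), (0, 1)]),
    LayoutShape("M1", mask=2, points=[(2, 0), (3, 0), (3, 1), (2, 1)]),
])
# Only the full view knows each shape's mask color; the abstract has nothing
# to say about coloring or local density, which is why those checks need GDS.

def needs_full_view(check: str) -> bool:
    """Coarse rule of thumb, invented for illustration."""
    return check in {"coloring", "density", "litho_context"}

for check in ("pin_access", "coloring", "density"):
    view = "full layout" if needs_full_view(check) else "abstract"
    print(f"{check:12s} -> requires {view} view")
```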

Some IP is fully synthesizable, which simplifies the move from one process node to the next. “The only real gating factor is the EDA tools and how good they are,” said Kurt Shuler, vice president of marketing at Arteris.

To be, or not to be…software
Still, it is no surprise that on the soft side of IP, much comes down to the software. Shabtay Matalon, ESL market development manager for Mentor Graphics’ Design Creation Business Unit, puts IP into two categories: IP that contains software, and IP that is controlled by software.

There are implications for both, he said. “One of the largest IP providers we know is one that provides IP that runs software, and that’s ARM. In terms of peripherals and memories, those are IPs that do not embody software but do require software. To build an SoC is almost like connecting elements—IPs of hardware, and IPs of software—in a way that addresses heterogeneous or homogeneous multicores with different stacks of software. Some of that work involves bare metal, some goes on top of an operating system. And then you assemble everything together. The assembly of all the complexity and building the things that are missing, either custom blocks or configuring the hardware/software or adding software, is really the challenge that the integrator has to deal with. At the end of the day, they want to differentiate along several axes: functionality, performance, and in many cases, low power.”

To enable design teams to accomplish these goals, the IP provider needs to provide more than RTL or even RTL with a SystemVerilog testbench that can stimulate the interfaces of the IP. “What needs to be provided, whether it is by the IP providers or EDA vendors, is a model that is abstracted – basically a transaction-level model that, as a bare minimum, should contain the functionality so you can exercise the model with the exact software that would run on the final SoC and the final board,” said Matalon. “We are almost there in this regard that users are expecting a peripheral or any IP will have an abstracted model such that you can run software fast and drive it fast on a virtual prototype.”
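A minimal sketch of that idea, written in plain Python rather than SystemC/TLM-2.0: a functional, register-level model of a hypothetical peripheral that driver-style code can exercise today and, in principle, reuse unchanged against the eventual RTL or silicon. The register map and behavior are invented.

```python
# Plain-Python sketch of a transaction-level peripheral model (not TLM-2.0).
class UartTLM:
    """Functional (untimed) model of a hypothetical UART-like peripheral."""
    CTRL, STATUS, TXDATA = 0x00, 0x04, 0x08   # register offsets (invented)
    STATUS_TX_READY = 0x1

    def __init__(self):
        self.enabled = False
        self.tx_log = []

    def write(self, offset: int, value: int) -> None:
        """Bus write transaction."""
        if offset == self.CTRL:
            self.enabled = bool(value & 0x1)
        elif offset == self.TXDATA and self.enabled:
            self.tx_log.append(value & 0xFF)

    def read(self, offset: int) -> int:
        """Bus read transaction."""
        if offset == self.STATUS:
            return self.STATUS_TX_READY if self.enabled else 0
        return 0

# "Driver" code written against the register map, runnable on the model now
# and intended to stay unchanged when retargeted to RTL simulation or silicon.
def uart_send(dev: UartTLM, data: bytes) -> None:
    dev.write(dev.CTRL, 0x1)                      # enable the peripheral
    for byte in data:
        while not (dev.read(dev.STATUS) & dev.STATUS_TX_READY):
            pass                                  # poll for transmit-ready
        dev.write(dev.TXDATA, byte)

uart = UartTLM()
uart_send(uart, b"hi")
print([hex(b) for b in uart.tx_log])              # ['0x68', '0x69']
```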

Another aspect of IP integration at this higher level is the ASIC design abstraction, which Drew Wingard, CTO at Sonics, said has been preserved through 14nm. Despite difficulties, and predictions about the end of Moore’s Law, there will still be companies driving to advanced nodes. “Many people have reported that the cost per transistor doesn’t really go down. You go there because there is a benefit that you can’t get any other way. I can make my chip fit on a die that I can manufacture at reasonable yield, or I get access to the threshold voltage shaping and control that I get by going to a finFET technology, and therefore I can do a better job of engineering the leakage current to be an acceptably low value. There are reasons like that driving people there. It basically boils down to being able to put more stuff on a chip.”

Both sides of the coin
As an ASIC supplier, eSilicon relies heavily on IP providers, with sometimes as much as three-quarters of a chip coming from outside IP, noted Patrick Soheili, vice president and general manager for IP Solutions at eSilicon. “Whether it’s a Cadence or a Synopsys or this or that or the other, we are relying on a lot of blocks from the outside to come in, and when you’re talking about $5 million, $6 million, $7 million or $8 million mask costs in finFET-type technologies, as opposed to the $2.5 million or $3 million or $4 million mask costs in 28nm, the cost of failure is huge—and that doesn’t even consider the time to market that you will lose. Just the fixed-cost aspect of it is incredible, let alone the opportunity cost that usually goes along with it. The amount of validation and characterization, and the amount of time it takes to take these IPs from the IP suppliers, and on top of that add the complexities of the new processes and technologies – the double patterning, etc. – just adds longer delays to get all this work done and still be prepared to sell those pieces of IP to guys like us so we can prepare our customers’ chips.”

At the same time, he said several other trends are interlacing these problems. First, the cost of experimentation is rising, so fewer companies are able to do it. Second, the total IP ecosystem is shrinking, so the number of choices is going down. And third, all of the challenges that ASIC vendors have in dealing with IP suppliers are also challenging the IP providers.

“We had to effectively double the number of resources roughly that we’ve had in layout to accommodate for the double patterning, and some of the complexities and coloring in the 3D world as opposed to the planar world,” Soheili said. “We had to buy more sophisticated tools and we had to do bigger, badder simulations and verifications. We now have server farms much, much larger than they used to be to do simulations and validations than in the past. The type of Monte Carlos that you have to run, and margining that you have to do, are far more complex. And depending on whose technology you’re working with – whether it’s a TSMC, or a GlobalFoundries or a UMC or a SMIC, or anybody for that matter – there are also some on-the-fly changes that are coming from their PDKs because they’re not quite mature yet, depending on what stage you get started with building those.”

Making IP integration easier
The users of advanced nodes, according to Johannes Stahl, director of product marketing for system-level solutions at Synopsys, are still predominantly in fast-moving, high-volume markets, notably consumer. “Those companies, if they are in the SoC business, have no time. They have two requirements: they have no time, and the semis in these markets have to look upwards. They have to spend more time thinking about how the chip can meet the requirements of the OEM rather than spending time on how to actually get it done. The getting-it-done part has to be more and more automated, and more and more outsourced – and the IP is of course part of it.”

What this means for IP providers is that IP must be delivered far more ready to use than in the past. “On the digital side, it means if traditionally we shipped a piece of RTL to the customer, you shipped documentation that goes along with it, the customer is going to figure out how to use this IP,” said Stahl. “That was true when the semis still had design teams that would actually understand the IP. Today, if you look into a USB standard, for instance, and you look into the evolution, the semiconductor company’s design teams have no time to track the evolution of the standard. They just don’t know what the features are in the new version of the standard, so they ask us, ‘Hey, you ship us this IP, you ship us documentation, but we have no time to read this. Help us more.’”

This is why Synopsys created its IP Accelerated Initiative, which includes a much more complete view of what the customer needs for the IP, such as a preconfigured reference design.

Steve Roddy, Xtensa product line group director at Cadence, has seen the same trend. “We’re seeing people looking for more complete packaging or complete implementations. The guy looking to buy, say, a PCI controller and a MIPI controller certainly wants to have consistency of programming API. ‘I want drivers and I want them to look the same so that my software development side of the house can have some consistency depending on which set of I/Os I want to populate in a given chip. I don’t want to have to dramatically change my middleware to deal with different drivers for different types of controllers.’ People want more comprehensive software support to go with the complexity of the cores and they want consistency across those.”
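A sketch of the kind of consistency Roddy describes, with invented class and method names: middleware is written once against a common driver interface, and each controller's driver plugs in behind it without middleware changes.

```python
# Illustrative common driver API; driver names and behavior are invented.
from abc import ABC, abstractmethod

class IoDriver(ABC):
    """Common driver interface the middleware layer is written against."""
    @abstractmethod
    def init(self) -> None: ...
    @abstractmethod
    def send(self, payload: bytes) -> int: ...

class PcieDriver(IoDriver):
    def init(self) -> None:
        print("pcie: link trained")
    def send(self, payload: bytes) -> int:
        print(f"pcie: sent {len(payload)} bytes")
        return len(payload)

class MipiCsiDriver(IoDriver):
    def init(self) -> None:
        print("mipi: lanes configured")
    def send(self, payload: bytes) -> int:
        print(f"mipi: sent {len(payload)} bytes")
        return len(payload)

def middleware_transfer(driver: IoDriver, payload: bytes) -> None:
    """Middleware stays the same regardless of which controller is populated."""
    driver.init()
    driver.send(payload)

for drv in (PcieDriver(), MipiCsiDriver()):
    middleware_transfer(drv, b"\x00" * 64)
```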

At the end of the day, the biggest single challenge facing IP vendors and customers is improving IP quality and model consistency. Both sides must agree on the quality requirements of IP blocks, Hingarth said. “The move to fully supported subsystem IP adds more complexity, so the methodology has to be in place to manage this. Hardware-software co-design support is needed with links to implementation. A key part is having models that allow software development. What is needed is a continuum from a virtual prototype model to simulation, emulation and rapid prototyping. In this way, the models used by software developers are directly tied to implementation.”

Finally, better support is needed for integrating IP subsystems with a standard methodology for controlling system-level functions.
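One hedged sketch of how such a continuum could be anchored: a single machine-readable register map feeds both the fast model that software developers use and a consistency check against the implementation collateral. The map, field names and check below are illustrative assumptions, not a standard methodology.

```python
# Illustrative single-source register map driving model and implementation check.
REGISTER_MAP = {                   # hypothetical source of truth for one IP block
    "CTRL":   {"offset": 0x00, "reset": 0x0},
    "STATUS": {"offset": 0x04, "reset": 0x1},
    "DATA":   {"offset": 0x08, "reset": 0x0},
}

def build_virtual_prototype(regmap):
    """Software-visible register file for the virtual prototype."""
    return {info["offset"]: info["reset"] for info in regmap.values()}

def check_against_implementation(regmap, implemented_offsets):
    """Flag registers the implementation (RTL/emulation build) is missing."""
    wanted = {info["offset"] for info in regmap.values()}
    return sorted(wanted - set(implemented_offsets))

vp = build_virtual_prototype(REGISTER_MAP)
print(vp)                                                        # {0: 0, 4: 1, 8: 0}
print(check_against_implementation(REGISTER_MAP, [0x00, 0x04]))  # [8]
```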


