ML, Edge Drive IP To Outperform Broader Chip Market

New applications, architectures and customer base provide continuous stream of opportunities.


The market for third-party semiconductor IP is surging, spurred by the need for more specific capabilities across a wide variety of markets.

While the IP industry is not immune to steep declines in the semiconductor industry, it has more built-in resilience than other parts of the industry. Case in point: The top 15 semiconductor suppliers were hit with an 18% decline in 2019 first-half sales compared with the same period in 2018, but most IP providers barely felt the squeeze. The increasing need for large, complex SoCs and a growing reliance on IP to provide critical functions in the fastest-growing markets helped smooth out a rough patch for IP providers during the first half of 2019. As a result, the IP market showed only a slight decrease in growth.

Semiconductor IP sales generated $835 million in Q2 of 2019, up 19.7% compared with the same period in 2018. The four-quarter moving average was 8.5% higher than the previous period, according to a Sept. 17 report from the Electronic System Design (ESD) Alliance Market Statistics Service (MSS). IP sales for Q1 were 14.8% higher than in Q1 2018. (Sales in Q4 of 2018 were 3.5% lower than the previous year due to changes in accounting and reporting by IP providers during 2018, according to the April ESD Alliance report.)

The IP market could reach $10 billion per year by 2022, compared to licensing revenue of $2.7 billion in 2018, according to a study last year by Semico Research. According to projections, memory IP will make up 13.3% of the total market, while the market for eFPGA IP will grow at 42% per year, which is faster than any other part of the IP market.
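As a rough sanity check, the growth rate implied by those projections can be computed directly. The figures below come from the article itself, though note the $10 billion total-market number and the $2.7 billion licensing figure are not strictly like-for-like, so treat this as an illustration only:

```python
# Rough check of the growth implied by Semico's projection.
# Figures from the article: ~$2.7B licensing revenue in 2018,
# ~$10B total IP market projected for 2022.
start, end, years = 2.7e9, 10e9, 4

# Compound annual growth rate needed to get from start to end.
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 39% per year

# The eFPGA segment, projected at 42%/yr, compounds even faster.
efpga_multiple = 1.42 ** years
print(f"eFPGA growth over {years} years: {efpga_multiple:.1f}x")
```

Even allowing for the mismatch between the two baseline figures, the implied growth rate is in the same range as the eFPGA projection, which is why the segment stands out.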

The big driver of growth will be the continuing need to integrate more functions into fewer devices at the system level, in ever-smaller footprints, with artificial-intelligence-related technologies as the most recognizable factor, according to Rich Wawrzyniak, Semico’s principal analyst for ASIC and SoC.

These kinds of growth numbers are long overdue for the IP industry. While IP growth has been steady, consolidation across the industry in the early part of the decade sharply limited the number of chipmakers. That has changed significantly with the rollout of AI/machine learning, the cloud, the edge, and the surge in automotive electronics due to both vehicle electrification and assisted/autonomous driving. In addition, there are many newcomers to the market, such as Google, Amazon, Facebook and Alibaba, which are now developing their own chips and which view third-party IP as a faster way to get to market with proven technology.

“The overall size of the IP market is around $3.6 billion, according to IP Nest, and that sounds about right to us,” according to John Koeter, vice president of marketing for Synopsys’ Solutions Group. “We’ve actually been outperforming the market with growth in the 8% to 10% range, and there are a couple of major drivers for that. First are the new market segments — things like AI accelerators, which drive a significant amount of business for us. Automotive has been a small percentage of the business historically, but now it’s pretty significant and growing very, very rapidly.”

The IP market continues to grow for many of the same reasons analysts expect sales in the rest of the industry to rebound to positive growth before the end of the year: fast-rising demand for efficient, often custom-designed inference accelerators for machine-learning applications in everything from data center servers to autonomous vehicles, smartphones, and IoT devices, according to Patrick Soheili, vice president of business and corporate development for eSilicon.

Accelerators for machine learning and other capabilities in high demand are expanding the total addressable market for IP, but they also are fueling growth for IP that is able to deliver a relatively rare function or level of performance rather than parts of the market that are larger but more generic, Soheili said.

“Another thing [driving the market] is the increasing complexity of IP management and integration, and the ability to build power efficiency into extremely high-performance SoCs for customers who want very high performance that doesn’t use much battery,” Koeter said. “The trend is for high levels of integration in an SoC. But we’re seeing the architects for some of the leading customers looking at things like combining one architecture in 7nm and another in 28nm, for cost reasons maybe, because it may be better to have a heterogeneous architecture rather than immediately just go to the next node. And especially with machine learning, you see a lot of different architectures because you can implement a specific algorithm in different ways. So the performance is different, depending on the hardware. If you implement a targeted algorithm in a dedicated hardware design, then you know it’s optimized so you get a big boost on both cost and power.”

Shifting direction
ASICs always have been the best way to achieve maximum performance with the lowest power and area, but that approach is much more difficult to justify at the leading edge. One reason is that many of these chips are being designed for AI/ML applications, and the algorithms are in an almost constant state of flux. In addition, many of these devices are being designed for specific market applications, which don’t require billions of units of the same design. As a result, chipmakers are beginning to modify their design strategies, and IP plays an increasingly important role in this.

“As we went from 65 nanometers to 40 to 20 to 16, 14 and now 7 nanometers, the cost of doing ASICs has increased astronomically,” Soheili said. “So what used to be maybe a good $5 million to $10 million for an ASIC [design and verification] is now between $30 million and $50 million just to get to netlist sign-off.”

Buying components as pre-verified IP can save design time and reduce the risk of errors. It also helps produce a workable design that requires much less effort to optimize for a specific set of requirements, particularly when that IP is customized for a specific application.

“For example, if you just need a memory core, you go to TSMC, download the compiler, generate your memory instance, and you’re done,” said Carlos Maciàn, eSilicon’s senior director, innovation and competitive strategy. “But customers at the cutting edge of data center applications don’t need the full range of sizes and frequencies memory comes in. So you can customize the memory by focusing on the higher frequencies and just the sizes that make sense, and by optimizing for a very narrow set of performance requirements to improve performance or reduce power.”

That can have a big impact on the power, performance and area equation.

“You do that for the three, four, five instances that really matter, and customize around the periphery of the memory to strip out physical implementations you don’t need, for example,” Maciàn said. “You don’t touch the details at a certain level that is sensitive and needs to be approved and characterized by the foundry. But there are lots of things you can do on the periphery of the memory that will use 30% or 40% or 50% less power or area, or add 30% more performance. And if you’re using thousands and thousands of instances of that memory, that gain at the cell level can be the difference between a competitive and a non-competitive product.”

Plenty of IP components don’t need much customization, even when they’re combined with building blocks assembled from code re-used from other SoC designs. But that is becoming less true as requirements for data handling and inference increase, according to Farzad Zarrinfar, managing director of the IP Division at Mentor, a Siemens Business.

“If you look at the whole SoC, the percentage of memory is increasing, so 50% of the die sometimes is memory,” Zarrinfar said. “With convolutional neural networks you have a distributed computing model that can take up 70% of the die because they need to do a lot of efficient computation but can’t rely on having two or three large processors to do the job the way SoCs traditionally could. Instead, with convolutional neural networks you have distributed computing models where you apply near-memory or in-memory computing, use a multiply-accumulate function, and have everything sit close to memory to minimize data movement, minimize power consumption and maximize speed.”
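The multiply-accumulate (MAC) operation Zarrinfar describes is the core kernel of CNN inference, repeated millions of times per frame, which is why designers want it sitting next to memory. A minimal sketch of the operation itself, with plain Python standing in for the hardware and all names and shapes illustrative:

```python
# Illustrative sketch of the multiply-accumulate (MAC) loop at the
# heart of CNN inference. In a near-memory design, weights and
# activations would sit in SRAM adjacent to arrays of MAC units.

def mac_dot(weights, activations):
    """Multiply-accumulate: the per-output kernel of a CNN layer."""
    acc = 0
    for w, a in zip(weights, activations):
        acc += w * a  # one MAC per weight/activation pair
    return acc

def conv1d(signal, kernel):
    """A tiny 1-D convolution: slide the kernel across the input."""
    k = len(kernel)
    return [mac_dot(kernel, signal[i:i + k])
            for i in range(len(signal) - k + 1)]

print(conv1d([1, 2, 3, 4, 5], [1, 0, -1]))  # [-2, -2, -2]
```

Because every output element is just a stream of MACs over nearby data, spreading many small MAC units across the die next to memory banks beats funneling the whole workload through a few large processors.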

The design still needs good routability to eliminate roadblocks, along with good power efficiency and thermal management, especially in edge devices where microcontrollers can turn on parts of the SoC, execute a function, and go back to sleep, Zarrinfar noted.

Programmable IP
As with most segments of the IP market, there are multiple options for inferencing accelerators. ASICs, however, have become the default option because they’re the best fit for high-volume mobile devices like smartphones, and they are what Google used when it designed an accelerator for its own data centers, according to Steve Mensor, vice president of marketing at Achronix.

This is where embedded FPGAs make sense, because they add programmability to ASICs. That programmability is essential in new markets, where protocols and algorithms are still being developed or changing, because it extends the lifespan of an expensive design.

“There are two classes of companies that talk to us about this,” Mensor said. “One is adding some kind of function, but they run into a challenge where the ASIC is defined and built but doesn’t meet the requirements because some algorithm changed or they needed a different I/O adaptation or co-processing function. They’ll say it would be great to have an eFPGA for the universal programmability and the capabilities of the bigger die area, but they’d have to get their arms around a fundamental change to the cost structure of what you’re trying to offer. The other class of company is one that’s not already in the business of offering a certain product in a certain price range. They need to figure out if they can use an eFPGA as an integral part of something they want to create, and have the time to reconcile the SoC cost structures and implementation details just as they would with any other product.”

Both ASICs and eFPGAs are well understood, and there are plenty of tools available. But it may take some time before volumes ramp up again in the ASIC world because the number of markets that support the development costs is limited.

“Other than Google, from our perspective, nobody is shipping in very large volume,” Soheili said. “Everybody else is much smaller. Their vision of what they want to do is often bigger, and there is a lot of activity among the Super Seven (Google, Amazon, Facebook, Microsoft, Alibaba, Baidu, Tencent) and other companies that want to provide chips — but not in big volume so far.”

He noted that not all of those companies are even sticking with ASICs. Microsoft, for example, went with a mix of ASICs and FPGA-based SoCs to get the best balance of flexibility and speed.

There are other types of programmable logic, as well. Digital signal processors, for example, have been customized across a wide range of applications, and they are showing up in a variety of new ones where they never played a significant role, such as in automotive and industrial applications.

“One of the advantages of this approach is that you can set a maximum frequency,” said Lazaar Louis, senior director and head of marketing and business development for Tensilica products at Cadence. “You can put bounds around it, so even though the IP is capable of doing more, you can put it in a box where it meets those requirements. That way you can take into account the number of hours it will be used, the conditions it will be used under, and the years of expected life. Then you run it at a lower voltage and design the product to operate in the best way possible over that range of conditions and the expected lifetime.”

Focusing on startups
The increasingly stiff requirements for inference acceleration in edge devices — and the huge number of startups pitching new approaches to making AR/VR as well as ML inference possible in those devices — are key reasons why major IP providers are working unusually hard to attract startups, according to Mike Demler, analyst for The Linley Group.

“Arm is trying to make its terms a little more accessible to startups, so they don’t have to pay a big upfront fee before accessing the IP and then pay the rest when they finish their chip,” Demler said.

Cadence, Synopsys, and other IP providers also are courting startups in order to avoid missing connections with one that eventually becomes a huge success, and to stave off open-source challengers to the status quo, especially RISC-V. The open instruction-set architecture, which was developed with the support of many big chipmakers and OEMs, can be implemented alone or alongside commercial IP components.

Related Stories

IP’s Growing Impact On Yield And Reliability

Why IP Quality Is So Difficult To Determine
