The Real Numbers: Redefining NRE

Costs are increasing per node, but not as much as you’d expect in some areas—and lots more than you’d expect in others.


Developing ICs at the most advanced nodes is getting more expensive, but exactly how much more expensive is the subject of debate across the semiconductor industry.

There are a number of reasons for this discrepancy. Among them:

  1. As design flows shift from serial to parallel, it’s hard to determine which groups within companies should be saddled with different portions of the bill. The result is that some groups are slammed with increased costs, while others barely see any increase.
  2. There is a big difference in costs from one market to the next, and sometimes from one design group to the next. Just as no two chips are alike, no two organizations share the same capabilities or the same challenges.
  3. Mask costs and time spent on processing masks continue to rise, but not all metal layers require double or multi-patterning.
  4. Companies that get it right the first time can shave significant amounts of money off the overall cost, while companies that continue to optimize that design ultimately can win the market.

While initial estimates for new chips at 16/14nm have ranged as high as $300 million, the reality is that some companies are developing chips at these process nodes as cheaply as $12 million to $15 million, and more often for $25 million to $35 million. That’s not just NRE, either. It’s production silicon.

What’s easy to confuse with NRE, however, is how internal costs are assigned by companies to amortize them across departments. The cost of moving to a new process node can be huge. Qualcomm, which reported revenue of $24.9 billion in fiscal 2013, said the price tag is $2 billion. Smaller companies have put the number at between $500 million and $1 billion, which is comparable based on their revenues. And that number also is being spread out across more design starts, which sources say are on the rise as big companies seek ways to recoup their increasing investments. This is especially important because at advanced nodes, IP does not necessarily work the same from one node to the next, and designs cannot be moved from one foundry to the next, or even from one flavor of a process to another within the same foundry. Moreover, design times are stretching further than market windows, requiring development at multiple nodes simultaneously.
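The claim that smaller companies’ transition costs are “comparable based on their revenues” can be sketched as a quick ratio. The Qualcomm figures come from the numbers above; the smaller vendor’s revenue here is a hypothetical round number chosen purely for illustration:

```python
# Back-of-envelope: node-transition cost as a share of annual revenue.
# Qualcomm's figures ($2B transition cost, $24.9B fiscal 2013 revenue)
# come from the article; the smaller vendor's revenue is hypothetical.

def cost_share(transition_cost_usd: float, revenue_usd: float) -> float:
    """Return the node-transition cost as a percentage of revenue."""
    return 100.0 * transition_cost_usd / revenue_usd

qualcomm = cost_share(2.0e9, 24.9e9)   # roughly 8% of revenue
smaller = cost_share(0.75e9, 9.0e9)    # midpoint of $500M-$1B vs. a hypothetical $9B

print(f"Qualcomm: {qualcomm:.1f}% of revenue")
print(f"Smaller vendor (hypothetical): {smaller:.1f}% of revenue")
```

On these assumptions both companies end up spending a similar high-single-digit percentage of annual revenue on a node transition, which is one way to read the “comparable” claim.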

Internalizing the shifts
A big change now underway inside chipmakers involves redefining how to categorize in-house expertise from a business cost/benefit standpoint.

“Back in 2005, silos were built around subsystems and integration of hard IP,” said Charles Janac, chairman and CEO of Arteris. “At 40nm, architecting chips at the system level required more floor planning, so you had to give feedback to different groups. That works reasonably well—some companies are more effective at it than others—but more recently we’ve seen a trend toward parallelization. You do the same pieces of a task in parallel. We’ve seen that with the NoC, where teams work in parallel and then create a feature that brings them all together. There’s a much greater use of IP and more orientation toward a platform. Then they prepare to build derivatives, with less or more IP for different market segments.”

The amount of third-party IP now being used by chipmakers has exploded. Estimates range from 35% to 50% of a total design, depending on the market and who is building the chip, and it’s one of the areas where chip designs can go quickly awry. In some cases, chips have more than 100 IP blocks, most of them black boxes to the design teams.

“There’s a price to be paid in mixing and matching IP,” said Taher Madraswala, president and CEO of Open-Silicon. “If you’re putting together an SoC with, say, 30 pieces of IP, maybe 20 are compatible from a metal layer standpoint or oxide thickness. Functionally they look okay, but electrically they might not be. Knowing that isn’t obvious, and companies get torn between conflicting needs of trying to control everything themselves and the need to get it right the first time and in the shortest time.”

That has a direct bearing on NRE. Mistakes at advanced nodes obviously are expensive. But the biggest issue isn’t the IP or the hardware. It’s the software to manage the IP, which multiple industry sources say accounts for as much as 70% of the NRE at some of the largest fabless chip companies.

“NRE is huge from an ASIC point of view,” said Mike Gianfagna, vice president of marketing at eSilicon. “But once you invest that, you can own the market and basically print money because you can amortize that cost over the back end. What happens, though, is it forces the little guys out of business and thins out the pack. If you’re building a custom chip, the NRE is amortized, absorbed, and capitalized over the life of that business and across multiple variants.”
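Gianfagna’s point about amortizing NRE over the life of a custom-chip business can be illustrated with a toy calculation. All numbers below are hypothetical, loosely anchored to the $25 million to $35 million development figures cited earlier:

```python
# Sketch of NRE amortization over a chip's production life.
# All figures are hypothetical: a $30M development cost (within the
# $25M-$35M range cited in the article) and an assumed $12 per-die
# production cost.

def unit_cost(nre_usd: float, per_unit_cost_usd: float, units_shipped: float) -> float:
    """Total cost per unit once NRE is spread across the full shipped volume."""
    return per_unit_cost_usd + nre_usd / units_shipped

nre = 30e6       # hypothetical one-time development (NRE) cost
silicon = 12.0   # hypothetical per-die production cost

for volume in (1e6, 10e6, 100e6):
    print(f"{volume:>12,.0f} units -> ${unit_cost(nre, silicon, volume):.2f}/unit")
```

At one million units the NRE dominates the per-unit cost; at a hundred million units it is nearly invisible, which is the sense in which a high-volume winner can “basically print money” while smaller players are forced out.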

Divide and conquer, with a difference
Managing that NRE effectively, though, is becoming tougher, particularly as chips get more complex and time to market pressures increase.

“Silos are now centers of excellence,” said Randy Smith, vice president of marketing at Sonics. “They’re for everything from circuit design to failure analysis. But they also need to be interconnected because you want experts, but you want their involvement to be interactive. With power and security, there are multiple ways to get at problems, and they both require collaboration.”

That appears to be the general consensus throughout the EDA and IP worlds. Collaboration, integration and flexibility are required, and expertise has to be integrated across all areas where it makes sense.

“Divide and conquer helps manage complexity,” said Steve Carlson, group director of marketing at Cadence. “But maybe it would be better if the dividing lines were redrawn. With power, the current way of getting an accurate picture is average worst-case scenario. That includes cell libraries, I/O, mitigating IR drop, power switches. But you really need to manage power up and down the solution stack because while 80% of the potential power savings are set in the architecture, 100% can be lost downstream.”

Carlson said that what’s required is a methodology change, so that information flows in multiple directions, from design to emulation to debug. “This is a big deal for folks to swallow. If you get software with a clean build and a clean trace, you need to be able to run that as the current version on hardware. These are not insignificant touch points in the design environment.”

Predictions of these kinds of problems aren’t new. Gary Smith sounded the alarm bells as early as 1997 when he coined the term electronic system-level design, but system-level tools have been slow to catch on. As NRE costs increase and problems span more of the design flow, that’s finally starting to change.

“I do see a significant shift in the willingness of organizations to invest in ESL design,” said Jon McDonald, technical marketing engineer for the design and creation business at Mentor Graphics. “Generally the shift is driven by the organization coming to understand what it has cost them in the past to proceed with a poor system design. Often they have had failed projects or missed on key product specs due to architectural choices that could not be compensated for in software or RTL. Once an organization understands the cost of this type of miss and that they have had these problems on previous projects, it is much easier to justify the upfront investment in ESL design.”
