What Will That Chip Cost?

Establishing the true cost to develop an advanced chip is complicated, but headline numbers appear to be significantly inflated.


In the past, analysts, consultants, and many other experts attempted to estimate the cost of a new chip implemented in the latest process technology. They concluded that by the 3nm node, only a few companies would be able to afford them — and by the time they got into the angstrom range, probably nobody would.

Much has changed over the past few process nodes. A growing number of startups are successfully building advanced-node chips for far less money than those widely quoted figures. Behind the numbers are some broad-based changes in chip design and manufacturing. Among them:

  • Many advanced-node chips are highly replicated arrays of multiply-accumulate processing elements used for AI/ML. Those are relatively simple compared with SoCs that integrate many different components onto a single die, where each component must be characterized for thermal issues, noise, and a variety of use cases and applications.
  • Advanced packaging, which has gone mainstream since those early estimates were generated, allows chipmakers to combine chips or chiplets developed at different process nodes, rather than trying to push analog functions to 5nm and beyond, which is costly and offers little benefit.
  • In the past, moving to the latest node ensured market leadership for performance and power. That is no longer the case. Improvements at mature nodes, and architectural changes involving hardware and software, allow many chipmakers to delay migrating to the newest nodes at least until those processes are mature enough to be cost-effective.

One of the big problems with the early estimates is that they were extrapolations of the best data available at the time. The predominant source was the International Technology Roadmap for Semiconductors (ITRS), which was phased out in 2016. In the years since, the fundamentals of chip design and manufacturing have changed dramatically.

For example, many assumed that all new chips would fill a reticle, and that the size and complexity of designs would continue to grow. In some cases, the complexity did increase — well beyond the point where all of the latest features fit on a single reticle — but many of those new features are developed using a mix of the latest process geometries and established process nodes. In others, the number of processing elements in a package increased, but the complexity actually went down.

Software is another defining element. Not all software needs to be developed from scratch. There is a plentiful supply of pre-existing tools and ecosystems for Arm, NVIDIA, and increasingly, RISC-V designs. And nearly all of the big EDA companies are investing heavily in AI/ML to both shorten and improve the design process, particularly for software debug and for leveraging expertise across a company more effectively through techniques such as reinforcement learning.

The numbers
Back in 2018, the last time anyone published such an estimate, IBS produced the chart shown in figure 1. It pegged the cost of developing a 5nm chip at $542.2M. If that were true, only a handful of 5nm chips would be in production today, and probably nobody would look beyond 3nm.

Fig. 1: Costs to produce a new chip. Source: IBS 2018

If we go back a few years and compare this to the chart that IBS produced in 2014 (see figure 2), we can see how those estimates change over time.

Fig. 2: Costs to produce a new chip. Source: IBS 2014

Note that the estimated cost for 16nm/14nm went from about $310M to $106M. Going back further, 28nm went from about $85M to $51M. Whether this is an overshoot in the original estimates, or reflects a very steep decline in costs once a new node matures, is a matter of debate. But if the latest figures are discounted by a similar amount, that would put the cost of a 5nm chip somewhere in the range of $280M, with a 7nm chip at about $160M.
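As a quick back-of-the-envelope check, those two discount ratios can be applied directly to the 2018 figure of $542.2M. The short sketch below uses only the numbers quoted above; the resulting range of roughly $185M to $325M brackets the ~$280M ballpark.

```python
# Back-of-the-envelope check using only the figures quoted above.
# The two IBS snapshots imply per-node "discounts" between the 2014 and 2018 estimates:
ratio_16nm = 106 / 310   # ~0.34  (16nm/14nm: $310M -> $106M)
ratio_28nm = 51 / 85     # ~0.60  (28nm: $85M -> $51M)

# Applying that same range of discounts to the 2018 estimate of $542.2M for 5nm:
cost_5nm_2018 = 542.2
low = cost_5nm_2018 * ratio_16nm    # ~ $185M
high = cost_5nm_2018 * ratio_28nm   # ~ $325M

print(f"Discounted 5nm estimate: ${low:.0f}M to ${high:.0f}M")
```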

“Consider Qualcomm or NVIDIA,” says Isadore Katz, senior director for marketing and business development for Siemens Digital Industries Software. “If it really did cost $542 million to build a new chip, they plus a few others may be the only ones who could actually afford to go off and do it. But they’re not going to build a chip at 5nm. They’re going to take one architecture, make some innovations in that architecture as part of transitioning into the new process node, and then they’re going to develop a family of parts that operate at that process node.”

Few companies publish their actual costs, but venture funding offers a crude proxy: how much money a startup had burned through by the time its first chip was released. “Innovium was built with $150 million for their initial chip, and then they received another round at $100 million, which funded multiple generations,” says Nick Ilyadis, senior director of product planning for Achronix. “Since being founded in 2014, Innovium received a total of $402M in funding over 10 rounds, and still had $145M cash on hand when it sold to Marvell in 2021 for $1B. Their third generation of chips was manufactured using a 7nm process.”

A significant portion of the cost is a first-mover penalty. “The expenses associated with large digital chips have exploded,” says Marc Swinnen, director of product marketing at Ansys. “That’s where those big headline numbers come from. If you look at what it takes for Apple to create a new chip, that’s 18 months, hundreds of designers, licenses, a whole new mask set, advanced processes. That’s when the costs ramp up. But if you can use an older node, those costs are much less now.”

There are several costs that could be hidden in those numbers, as well. “It does take a massive investment to re-characterize the new transistor functionality, to get the mask-making capabilities in place, to understand the manufacturing issues, to create the extraction models,” says Siemens’ Katz. “But we are taking advantage of lessons learned on prior nodes. Once we’ve got those building blocks done (the BSIM-CMG model, the extraction model, the chip variations and metallization), we are then able to take advantage of the parameterized, or process-independent, technology that we have on the upper layers.”

The numbers have made others curious. “This is a chart (see figure 3) I created 12 years ago,” says Frank Schirrmeister, vice president of solutions and business development at Arteris. “I had received four or five sets of data from IBS, but could not publish the numbers, so the chart I created averaged the expenditure categories. It shows the major steps in chip development, with a timeline from RTL development to tape-out along the x-axis and the percentage of the overall project effort on the y-axis.”

Fig. 3: Time and effort to create a chip. Source: Frank Schirrmeister

Based on figure 3, you can then consider whether any of these categories change over time, or scale with design size or production node. For example, it is often claimed that the cost of verification rises quadratically with size, even though historically that has not proven true. “Verification costs do rise because the bigger the design, the more time it takes to simulate, and the more test cases you have to generate,” says Ilyadis. “There are baseline tests that you can use from previous generations, and you continue to run them. Then there are the new tests associated with the additional functionality that is being added. That requires more servers, bigger servers, more disks. It ripples through the infrastructure as additional costs.”
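To see why test reuse tempers that growth, consider a minimal, purely illustrative model (every figure in it is a hypothetical assumption, not data from this article): if each test's runtime scales with design size but most tests carry over from the previous generation, doubling the gate count raises the simulation bill well short of the 4x a quadratic model would predict.

```python
# Purely illustrative model of the verification-cost argument above.
# Every number here is a hypothetical assumption.

def sim_cost(gates: float, tests: int, cost_per_gate_per_test: float = 1e-9) -> float:
    """Total simulation cost if each test's runtime scales with design size."""
    return tests * gates * cost_per_gate_per_test

# Previous generation: 100M gates, 5,000 existing tests.
prev_cost = sim_cost(gates=100e6, tests=5_000)

# Next generation: 2x the gates; the old 5,000 tests are reused,
# and 1,500 new tests cover the added functionality.
next_cost = sim_cost(gates=200e6, tests=5_000 + 1_500)

print(f"cost growth: {next_cost / prev_cost:.1f}x")  # ~2.6x, versus 4x for a quadratic model
```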

Is infrastructure included in the published costs? “The devil is in the detail of understanding what’s in those numbers,” says Arteris’ Schirrmeister. “Is all of the software included in that? How much new RTL development is in there? How much verification? Do you need to buy an emulator? When you look into the cost for masks, that at least touches the order of magnitude that these chips are.”

Some costs do decline over time. “When you consider the cost of IP, you either have to develop it, which uses in-house engineering resources, or license, which means you pay the vendor,” says Ilyadis. “Typically, licenses come with support and maintenance – that’s the cash outlays. Then there’s the tool costs. Every generation requires a new set of tools, because the routing gets more complex or there are additional things to consider. There is the headcount of the team that is developing the chip. Plus, you have to build test fixtures, or even a product that will demonstrate your chip. Now we’re going outside of that chip itself, but it’s all things that are related to the actual chip development and what you need to take it to market. Then there’s that gift that keeps giving — software. Most of these chips have some sort of programmability. On top of that there is manufacturing, including the testers, test fixtures, and burn-in fixtures for doing accelerated life tests.”

Even IP costs can be a significant variable, especially when you consider the time saved by buying in IP, or the indirect costs associated with developing IP in-house. “The increase in cost and complexity of SoC designs is putting more pressure on the computing infrastructure,” says Brian Jeff, senior director of product management for Arm's Infrastructure Line of Business. “This is driving a trend toward custom silicon in order to provide specialized processing for specific workloads, and obtain at-scale efficiency savings. Developing IP with a customizable foundation enables the IP provider to take on many of the common integration, verification, and validation tasks that partners have had to repeat design after design. This frees partners to concentrate their resources on the features that will help them differentiate and shape a full chip design to their workload. In one example, a partner reduced the cost of their high-end infrastructure SoC development by 80 engineering years.”

Many of the costs are incremental. “We don’t re-learn everything between process nodes,” says Katz. “We remember the things we have to do. We’ve invested a lot in the parameterization or in the representation of design artifacts from the very top, the testbench, the way we describe the IP, and the way we articulate the custom logic and accelerators all the way down to how we lay out the cells. We understand where we have to make adjustments, and have dials and knobs where we can correct for that. No one starts from zero between nodes. Even if we change the way the transistor surfaces work, or we reorganize the way the first level personalization metal is going to operate, we do need to spend extra time characterizing for that. We need to spend extra time understanding how to extract that, and we may have to make small and modest tweaks to our cell designs to accommodate it. But the basic topology is there.”

Well-developed IP is reusable across multiple generations of chips. Companies like Intel, AMD, Marvell, Broadcom, NVIDIA, and Qualcomm develop much of their IP in-house. Some of this is in the form of chiplets, which can be fully characterized and re-used in pre-determined architectures. The tradeoff is the in-house expertise required, but there are also fewer surprises in the field, and no licensing costs.

Cost of EDA
Every node creates new problems and challenges, and that often requires significant investment by EDA vendors in new tools or new flows. When a node is new, many of those tools are crude, and solutions are cobbled together with whatever technologies can be brought to bear on the problem.

Over time, the industry learns what works and what doesn't, the flows improve, and eventually they are automated. “Many challenges are overcome with brute force,” says Ansys’ Swinnen. “They took the available tools, and with a sufficient mass of people they made it work. That required close cooperation with the vendors. It is not a flow that you could give to regular mainstream chip designers. Over time, we learn from them, and they learn from us. The tools get better and more automated, the rough edges are ironed out, and the manual steps in between are reduced. That makes productivity much higher.”

What works today may not work in the future, though. “There is a portfolio of things that you have to plan for,” says Katz. “I have been involved in timing, process variation, and ground bounce. When you reduce your voltage thresholds down below 1V, many of these things become issues. That was unknown back when we were entering 14nm. Today it is understood. People understand what can go wrong in the timing or the layout of the design. They understand the factors that you have to watch out for with respect to the metals’ contribution to delay and timing, and they’ve also become more and more aware of some of the physical side effects, sensitivity to glitch noise, sensitivity to leakage. These all add to the playbook. And the playbook walks you through each of the gotchas from the past 10 or 15 years. How do you address those? How do you automate those? Or how do you design those out?”

Another old chart worth revisiting is shown in figure 4. Andrew Kahng and Gary Smith did an analysis of design costs in 2001 to show how new EDA developments were affecting productivity. It was published by the ITRS in 2002.

Fig. 4: A new design cost model for the 2001 ITRS. Source: Proceedings International Symposium on Quality Electronic Design 2002

While this chart includes future technologies that never came about, such as ESL, others did materialize. Subsequent publications from the ITRS show that development costs remain fairly static, rising only incrementally over time. Figure 5 (below) is an updated chart from 2013.

Fig. 5: EDA Impact on IC Design Cost. Source: Andrew Kahng, 2013

Development costs do go up, especially for new nodes. “The tools keep getting more sophisticated and have to scale with design size,” says Ilyadis. “Typically, those updated tools may add 25% cost from generation to generation, and that’s where the tool companies make money. They have to develop, they have to put work into their tools to make them compatible with next-generation IP and whatever new challenges arise, so they’re going to pass that development cost on to you as an increased licensing fee.”
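Taken at face value, a 25% increase per generation compounds quickly. A minimal sketch of that compounding (the starting license figure is an arbitrary placeholder, not a real price):

```python
# Compounding effect of a ~25% tool-cost increase per generation.
# The starting figure is an arbitrary placeholder, not a real license price.
base_tool_cost = 10.0          # in $M, hypothetical
growth_per_generation = 1.25   # +25% per node transition

for gen in range(5):
    print(f"generation {gen}: ${base_tool_cost * growth_per_generation**gen:.1f}M")

# After four node transitions the tool bill is ~2.4x the starting point (1.25**4 ≈ 2.44).
```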

But that is not the case for mainstream developers. “In the economics of semiconductor design, the cost of the EDA tools is never a key consideration,” says Swinnen. “It’s a cost element somebody has to worry about, but in the overall economics of chip design, EDA is never the deciding factor. It’s manufacturing. Where EDA does impact the cost of the design is more in the productivity.”

What we are seeing, however, is rapidly increasing infrastructure costs associated with EDA tools. “With AI being introduced into tool suites, it is easy to start doing more exploration of the design space,” says Schirrmeister. “Every data point in those charts means additional capacity and cycles run in the cloud. In order to get the best implementation, you now spend more compute effort. What used to be people multiplied by time, plus some infrastructure cost, is now becoming a re-distribution of costs, where the compute cost itself takes on a much higher role in the overall cost equation.”
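That redistribution can be sketched with a toy cost model. All of the rates and run counts below are hypothetical assumptions; the point is only that once AI-driven exploration multiplies the number of implementation runs, the compute term can grow from a rounding error into a figure that rivals the engineering payroll.

```python
# Toy cost model illustrating the shift described above.
# All rates and counts are illustrative assumptions, not published figures.

def project_cost(engineers, months, loaded_cost_per_eng_month,
                 exploration_runs, compute_hours_per_run, cloud_rate_per_hour):
    people = engineers * months * loaded_cost_per_eng_month
    compute = exploration_runs * compute_hours_per_run * cloud_rate_per_hour
    return people, compute

# Traditional flow: a handful of implementation runs.
people, compute = project_cost(50, 18, 25_000, exploration_runs=20,
                               compute_hours_per_run=5_000, cloud_rate_per_hour=3.0)
print(f"traditional:      people ${people/1e6:.1f}M, compute ${compute/1e6:.1f}M")

# AI-assisted exploration: orders of magnitude more runs for the same team.
people, compute = project_cost(50, 18, 25_000, exploration_runs=2_000,
                               compute_hours_per_run=5_000, cloud_rate_per_hour=3.0)
print(f"with exploration: people ${people/1e6:.1f}M, compute ${compute/1e6:.1f}M")
```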

Conclusion
No chip ever developed has cost as much as the published numbers suggest, simply because no chip truly starts with a blank sheet of paper. Everything in this industry is based on the reuse of intellectual property, some of it tied up in IP blocks, some in BSIM models, and some in the heads of the engineers who start a new company. The same, if not more, can be said of the software industry, which is always building on massive libraries of code.

But those numbers are the right order of magnitude for leading-edge designs. It is important to understand the total costs associated with development, and not just focus on getting to silicon tape-out.


