As the success rate for the semiconductor industry declines, it may be time to rethink our priorities.
The headline numbers from the new Wilson Research/Siemens functional verification survey are out, and they show a dramatic decline in the number of designs that are functionally correct and manufacturable. In the past year, that figure has dropped from 24% to just 14%. Along with that, the share of designs that are behind schedule has risen sharply, from 67% to 75%. Over the next few months, a lot more data will be released, and I expect it to show a systemic problem in the industry.
It is easy to blame it all on the leading-edge designs. They get all the attention. But there just aren’t enough of them to account for the depth of the problems on display. The problem is more fundamental. It is related to AI, even though many think of AI as the savior of the industry: the new technology driver, the next frontier.
AI is demanding increases in compute power significantly in excess of the traditional semiconductor progress rate — beyond even the gains we have seen architecturally. At the same time, there have been no significant breakthroughs in development or verification productivity, meaning that teams are expected to deliver a lot more, with the same tools, in the same or less time. That is a setup for failure.
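To put rough numbers on that gap, here is a minimal sketch of the compound growth involved. The doubling times are illustrative assumptions (a Moore’s-law-style two-year density doubling versus the often-quoted six-month doubling for AI training compute), not figures from the survey.

```python
# Compound-growth comparison: hardware progress vs. AI compute demand.
# Doubling times below are illustrative assumptions, not measured values.

HW_DOUBLING_YEARS = 2.0   # rough Moore's-law-style density doubling
AI_DOUBLING_YEARS = 0.5   # often-quoted pace for AI training compute demand

def growth(years: float, doubling_years: float) -> float:
    """Multiplicative growth over `years` given a doubling period."""
    return 2.0 ** (years / doubling_years)

years = 4.0
hw = growth(years, HW_DOUBLING_YEARS)
ai = growth(years, AI_DOUBLING_YEARS)
print(f"Over {years:.0f} years: hardware x{hw:.0f}, AI demand x{ai:.0f}, "
      f"gap x{ai / hw:.0f}")
# Over 4 years: hardware x4, AI demand x256, gap x64
```

Even if the assumed doubling times are off by a wide margin, the shape of the result is the same: the gap compounds, and tooling productivity has to absorb whatever architecture cannot.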
It is naive to think that a virtuous cycle will emerge in which AI helps to design better computers, which in turn speed up AI and make it more powerful, then rinse and repeat. There is no way that AI can come up with the architectural innovations necessary to support that. At best, it can optimize designs and implementations, and perhaps improve the efficiency of verification.
Silicon Valley was born on the notion of go fast, fail fast, then evolve. Leading-edge designs have had to take on a lot of new technology to get to where they are today: reticle limits forcing migration to multi-chip designs, new memories and interfaces, and new compute architectures. The problem is that the software side is evolving faster than hardware. Much faster. Hardware is unable to keep up, and this is leading to almost reckless stretches that are also a setup for failure.
That might explain why some of the leading-edge designs are having problems. But what about the rest? They too are feeling the pressure from AI, and every company is being asked about their AI strategy. They perhaps don’t know exactly how they will use it, or the long-term impact that it may have, but they know they have to do something, and do it quickly, which leads to mistakes. This problem is compounded by a lack of stable third-party IP to help minimize their knowledge gap and risk.
The problem extends into the EDA space, as well. Here, the immediate answer is to bolt on AI, expending a lot more compute power to make a minor improvement in implementation. Another emerging use of AI is to paper over process inefficiencies that would be better addressed by fundamentally fixing the underlying problem. That is what is happening in functional verification.
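As one illustration of the kind of efficiency work being pointed at verification, here is a minimal sketch of regression test prioritization: ordering tests by how many bugs they have historically found per minute of runtime. The test names, data, and scoring heuristic are all invented for illustration, and this is not a description of any particular EDA vendor’s method.

```python
# Hypothetical regression prioritization: run the tests most likely to
# find bugs per unit of runtime first. All data below is invented.

tests = {
    # name: (historical_failures_found, runtime_minutes)
    "axi_burst_stress":   (9, 30),
    "cache_coherency":    (4, 120),
    "reset_sequencing":   (7, 10),
    "pcie_link_training": (1, 45),
}

def priority(stats: tuple[int, int]) -> float:
    """Failures found per minute of runtime (a crude value metric)."""
    failures, minutes = stats
    return failures / minutes

ordered = sorted(tests, key=lambda name: priority(tests[name]), reverse=True)
print(ordered)
# ['reset_sequencing', 'axi_burst_stress', 'cache_coherency', 'pcie_link_training']
```

The point of the sketch is the critique in the paragraph above: a heuristic like this treats the symptom, reordering an inefficient regression suite, rather than asking why the suite is so inefficient in the first place.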
This recklessness, which is associated with all aspects of AI, extends way beyond semiconductor development, where the new maxim appears to be: make big, bold statements, wait for someone to point out the error, then modify. Rinse and repeat. Nobody will be remembered for making realistic statements or pointing out negatives. The canary will never be awarded a medal.
As an example, consider this recent quote from a respected executive, who argued that environmental considerations should not get in the way of winning the AI race. “We need energy in all forms. Renewable, nonrenewable, whatever. It needs to be there, and it needs to be there quickly,” he said, suggesting that AI will solve the climate crisis once the U.S. beats China in developing superintelligence. I will not name the individual, the reporter, or the publication, because there could be many errors that crept in or important context that was omitted. But the statement is almost laughable.
First, statements should never imply dependencies between unrelated things. I learned that when I had to sit for countless hours in depositions as an expert witness. Never answer a compound question.
Second, saying that we can ignore the very problem AI is supposedly going to solve is like saying the industrial revolution did not impact our climate, or that its supposed efficiencies have since resolved the situation. At least in Victorian times, they did not know what would happen. We know better. We cannot go on allowing AI to consume exponentially more power just so that we can have a better chatbot. I realize this is just part of learning how to make AI more powerful, but we must also consider the cost side of the equation.
As one example, many utility districts are running out of power distribution capacity because of the additional data centers being built. They are warning that they will have to increase their investment in infrastructure, and that will in turn cause utility rates to rise. That is not acceptable. If it is AI forcing the construction of new data centers, then AI should pay for it, not the general public. In other words, the full cost associated with AI should be loaded onto AI, not onto everyone else.
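To make the cost-allocation argument concrete, here is a worked sketch with entirely hypothetical numbers: a grid upgrade triggered by new data-center load, amortized either across all ratepayers or charged to the load that caused it. None of these figures come from any real utility district.

```python
# Hypothetical cost allocation for a grid upgrade driven by data centers.
# Every figure here is invented purely to illustrate the argument.

upgrade_cost = 500e6      # $500M distribution upgrade (assumed)
total_mwh = 10e6          # annual MWh served by the district (assumed)
datacenter_mwh = 2e6      # annual MWh of the new data-center load (assumed)
amortize_years = 20

annual_cost = upgrade_cost / amortize_years

# Option 1: spread the cost across every ratepayer.
socialized = annual_cost / total_mwh
# Option 2: load the cost onto the demand that triggered the build-out.
causer_pays = annual_cost / datacenter_mwh

print(f"Socialized:  ${socialized:.2f}/MWh on everyone's bill")
print(f"Causer pays: ${causer_pays:.2f}/MWh on data-center load only")
# Socialized:  $2.50/MWh on everyone's bill
# Causer pays: $12.50/MWh on data-center load only
```

Whatever the real numbers turn out to be, the structure of the choice is the same: either the cost is visible to the party creating the demand, or it is quietly socialized onto everyone else’s bill.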
If anyone thinks the only goal is to develop superintelligence, at any cost, and somehow make sure that nobody else has it, they have learned nothing from history. Getting to an arbitrary goal without considering the consequences is irresponsible at best, and most likely unethical. And what is superintelligence? That is a moving target, because nobody can define what it means.
There is an entire chain of responsibility here. AI companies are complicit. Data centers are complicit. Semiconductor companies are complicit. Even engineers are complicit if they do not consider the environmental impacts of what they are doing and ask if it is providing real value.
In my opinion, it is time to slow down and talk about some real solutions to the problems that add true value. We should be thinking about hardware and software architecture as a single problem. We should be considering the power they will consume and how that power is to be generated and distributed. We should be thinking about the true value AI has to offer and not waste it on trivial needs. We should be re-evaluating our development methodologies to be more effective and more efficient. We should be asking how AI is going to improve the world.
Need some references for these statements: “In the past year, that has dropped from 24% to just 14%. Along with that, there is a dramatic increase in the number of designs that are behind schedule, increasing from 67% to 75%.”
I don’t think you can blame AI for this drop in manufacturable designs.
This is a great article, Brian. I hope there is more innovation in this area, especially UVM-based verification. There is a startup called MooresLab that helps with AI-driven verification. It looks very promising. The domain is ripe for disruption.
Compared with human designers, AI’s capabilities are definitely limited, which is one of the reasons for the failures.
We are certainly waiting with bated breath for more information from the survey, but it does not identify the cause of any of the trends. I understand that correlation does not mean causation, but we can certainly look at what changed during the period and see if it passes the sniff test. Another change during this period is a slowdown in the semiconductor industry. Could companies be trying to cut costs? Possibly.
Thank you for your thoughtful and timely article, “Tape-Out Failures Are the Tip of the Iceberg.” I appreciated your clear articulation of the growing pressures facing semiconductor design and verification, especially in the context of escalating AI-driven demands.
One point I’d like to offer for consideration: while it’s true that large foundation models are resource-intensive, not all AI technologies are power-hungry. Many of the AI techniques increasingly embedded within EDA flows — such as predictive analytics, optimization guidance, and anomaly detection — are lightweight and domain-specific. Rather than adding to compute burdens, these solutions often help reduce resource demands and improve engineering efficiency.
Also, while the article suggests that AI provides only marginal gains, in some cases these domain-targeted AI applications are already contributing more than minor improvements — particularly in areas like coverage closure, bug triage, and regression optimization, where even modest percentage gains translate into significant time-to-market and cost advantages.
Again, thank you for raising such important topics. As always, I look forward to reading more of your insights as the industry continues to evolve.