Predictions should not reflect innovation or breakthroughs. They should be based on pain.
At this point everyone has made their predictions for the year, but there is one thing many people get wrong. Predictions are not about innovation. They are about pain and what is causing it.
This industry is risk-averse, and everyone wants to continue doing what they are doing. But there comes a point when it’s so painful to continue that something has to change.
Having something that is a 10X, or even 20X, improvement over something considered non-essential isn’t worth talking about. It is like Amdahl’s Law. No matter how much you speed one thing up, the total speedup is still constrained by the bit you can’t improve.
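To make that concrete, here is a minimal sketch of Amdahl’s Law in Python. The 10% fraction and the 20X local speedup are hypothetical numbers, chosen only to show how little a large local gain moves the overall result when the improved step is a small part of the whole flow.

```python
# Amdahl's Law: overall speedup when only a fraction of the work is accelerated.
# The fraction (0.10) and local speedup (20X) below are hypothetical values,
# used purely for illustration.

def overall_speedup(fraction_improved: float, local_speedup: float) -> float:
    """Total speedup of a flow when `fraction_improved` of its runtime
    is accelerated by `local_speedup`."""
    return 1.0 / ((1.0 - fraction_improved) + fraction_improved / local_speedup)

if __name__ == "__main__":
    # A 20X improvement to a step that is only 10% of the total runtime...
    print(f"{overall_speedup(0.10, 20.0):.2f}X")  # ...yields about 1.10X overall.
```

Even a 20X gain on a step that consumes 10% of the flow barely nudges the total past 1.1X, which is why nobody wants to pay for it.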
In my previous life, as a technologist for a large EDA company, I was part of the engineering team. We looked at the kinds of things marketing said they wanted, found the best ways to solve those issues, and then helped marketing and sales be successful with the result once it was implemented. In general, I have to say I failed.
I was always considered the eternal optimist because I couldn’t understand why ESL (electronic system-level design) wasn’t taking off. The reason was that people had bigger issues to deal with. While they could see aspects of what ESL might provide, it didn’t help them get the current product out the door. In addition, the startup cost of a new way of doing things was often considered affordable only if you had a substantial lead over the competition, so that failure would not mean losing your edge.
Over time, I became more a part of the marketing team, trying to understand the pain levels of customers and using that to determine where we should invest engineering attention. Customers would never talk about it in those terms, but you had to try to find out what was holding them back and limiting progress. Of course, this always had to be separated from bugs, which were the most pressing pain point, and from speed and capacity improvements, which were required to relieve the pain. And we had to anticipate the pain of the next-generation product.
Technical advances are a necessary, but not sufficient, condition for adoption. An advance must also offer value in terms of cost, time to market, or a performance advantage. Beyond that, there has to be pain. While that sounds obvious, it isn’t quite that simple.
A new tool adds to NRE. That cost must be recovered elsewhere, be it in another part of the flow or in silicon area, which impacts production cost. The savings must be sufficient to overcome the added risk. Many startups that have fallen by the wayside blamed big EDA pricing models, but every company I know will spend money if it knows it will save money.
A speed improvement in one area of a flow that is not on the critical path does not alter the time to production unless the team that now finishes early can help out in other areas. This is often not possible. If the improvement is made in verification, it just means that additional resources are available to lower risk. Even formal verification has had problems with adoption because it is difficult to put a value on finding a bug earlier in the development flow that would have been found later using existing techniques.
There is some performance that matters and some that doesn’t. For example, if a system is memory bandwidth-constrained, there is no point in speeding up the processor. Likewise, there is no point in trying to make a standard interface run faster than its defined performance, because you lose interoperability. If you are in a closed environment, that may not matter, and absolute performance at any cost may be the better choice.
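To put a rough number on the memory bandwidth example (all figures here are hypothetical), a roofline-style calculation shows why a faster processor buys nothing once bandwidth is the bound.

```python
# Roofline-style estimate: achievable throughput is the lesser of what the
# compute units can deliver and what memory bandwidth can feed them.
# All numbers below are hypothetical, for illustration only.

def attainable_gflops(peak_gflops: float, bandwidth_gbs: float,
                      arithmetic_intensity: float) -> float:
    """Attainable GFLOP/s for a workload with the given arithmetic intensity
    (floating-point operations per byte moved from memory)."""
    return min(peak_gflops, bandwidth_gbs * arithmetic_intensity)

if __name__ == "__main__":
    # A workload doing 0.5 FLOPs per byte over 100 GB/s of memory bandwidth
    # is capped at 50 GFLOP/s, no matter how fast the processor is.
    print(attainable_gflops(200.0, 100.0, 0.5))  # 50.0
    print(attainable_gflops(400.0, 100.0, 0.5))  # still 50.0
```

Doubling peak compute leaves the attainable throughput unchanged, because the workload is already limited by how fast data can be fed from memory.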
Pain is a component of all of the above. It simply asks, ‘Why bother trying to improve something that nobody cares about, or that does not have a defined benefit that swamps the added risk?’ Without pain, even the most incredible technical advances are likely to be ignored.
Even the biggest and best companies get it wrong time and time again. One EDA company publicly said for many years that chiplets would never become popular, and that all of the major semiconductor developers would continue to prefer monolithic silicon. That last part might be true, but the pain became too great and chiplets provided relief.
Another company predicted that HBM would never succeed because it was too expensive. Fast forward a few years, and HBM manufacturers can’t keep up with demand. For some markets, particularly those involved with AI, cost was not a constraint, but memory bandwidth and latency were.
I see a similar thing going on today with UCIe. The existing user base is trying to predict the pain points it will have in the future and to solve them collectively today. It’s a very noble goal. In the past, efforts like the International Technology Roadmap for Semiconductors (ITRS) faithfully produced reports highlighting what researchers and developers should be working on today to meet the needs of tomorrow.
Back to UCIe. While those users are perfectly placed to feel current pain, they are nonetheless making predictions about the future and projecting them onto the rest of the industry, whose needs and pain points are very different.
That doesn’t mean UCIe is bad. It is a learning exercise. The standard provides guidelines for IP developers to follow, because without the IP the industry cannot move forward and the IP developers cannot hope to make money. This is the only way the industry is likely to overcome the pain points it has today and those it predicts for the future. What we do have to remember is that UCIe is a prediction, and we should treat it as such.