Survival Of The Cheapest?

Who will win in the battle for AI architectures? Want to place your bets?


We all want the best solution to win, but that rarely happens. History is littered with products that were superior to the alternatives and yet lost out to a lesser rival. I am sure several examples are going through your mind without me having to list them. It is normally the first to volume that wins, often accelerated by copious amounts of marketing dollars to help push it against headwinds.

The same has been true in many cases within the semiconductor industry. Was x86 the best? It certainly won and now the cost to displace it is very high, except for new applications where an apples-to-apples comparison can be made. For many cases, the first to volume is the one where the costs are driven down, making it more affordable to the masses. We see this time and time again.

Today that is happening with 2.5D and 3D integration technologies. Right now, these technologies are way too expensive for most people, and only a few high-margin industries can make use of them. But as they do, problems get resolved, yields go up, and costs come down until the technologies become accessible to increasingly large markets.

This behavior has often favored the most general-purpose solutions because they have the widest audience. It is tough to get to volume when your market is a narrow niche. But that inevitably leads to those smaller-volume applications being forced to accept less-than-optimal solutions.

Large industrial and systems companies have tried to overcome this by being more vertically integrated. While no one application in their portfolio may have high enough volume, if they can amortize a solution over a range of products, they may be able to get high enough volume to bring costs down and allow for some degree of differentiation that gives them an edge. Many times, this strategy has failed because those centralized development teams lose sight of the goals and quickly become a non-competitive cost center.

With the rapid rise in artificial intelligence (AI), we now see this happening again. It is not clear what architecture will win, and somewhere around 100 semiconductor companies are betting on becoming the one to make the breakthrough and capture the majority of the market. We all know that it will end ugly for most of them. But there are some things that are different this time.

Consider the rise of a whole new type of company that is actually driving a lot of the need and the development for AI. Companies like Google, Facebook, Microsoft, Apple, Amazon and others see the semiconductor content as a means to an end. And while potentially providing a competitive barrier, the semiconductor content is not expected to make a profit. It is an enabler.

This technology is advancing so rapidly that any product developed today may never see volume. It is merely a learning vehicle to take us to the next level. Applications are not satisfied with the compute power available from today's solutions. They are looking for more. And when they get it, that opens new markets and new opportunities, which present a new set of challenges. It would appear that we are way too early to give anyone the winner's medal. These are only the initial heats of the race to come in the future.

Because of the cooperation that has evolved in this market, there is more willingness to be open and to share advances. That has led to a much higher likelihood that open-source software and hardware will become important. While that may just say that the open-source solution is the winner this time around, it also means that the solution has to be flexible enough to serve all markets, and that means customizability.

Finally, we have to consider that the value is moving to the data and what you can do with it. That may mean that we never get to a standard solution, but only to the best solution for a given data type. We have only just started to consider how valuable that data may be. And at the same time, people have started to worry about the amount of data they may be giving away for free.

We can never predict the future, and every time we try, it looks to be different than anything that has happened in the past. But it does appear that there are so many variables at play this time around, that the answer may be very different than an extrapolation from the past.
