Less Moore Means More Intelligence

The sooner we recognize the free ride is over, the faster we will begin developing truly innovative chips.


It would seem as if the entire industry is flooding the forums with articles about Moore's Law as it reaches its 50th birthday (April 19th), noting that it represents the longest and most important exponential in the history of man. The numbers and their impact are everywhere, and I do not intend to repeat them. There are also plenty of articles talking about when Moore's law will end, and I have written some of those over the past few years. The consensus appears to be in about 10 years, although the estimates diverge depending on whether the limits considered are technological or economic, the latter producing the more pessimistic numbers, with many saying the free lunch is already over.

I believe that Moore’s law has made us all fat, dumb and happy.

We are designing systems with huge waste because it is the safe thing to do. We throw more cores at a problem and hope that someone will figure out, over time, how best to use them. This has created the attitude that transistors are free, even though the industry is now beginning to realize that more transistors are not a blessing in all cases. More transistors doing something useful means more power and more heat, and as those transistors get squeezed ever closer together, the problems get worse.

In a similar vein, architectures have changed very little over the life of Moore's Law. We still use the von Neumann architecture for most of our computing, along with similar memory structures, although we did see the emergence of cache. Most custom hardware is built as an accelerator attached to a processor. Memory is a bottleneck, and while faster channels have been invented, processor speeds have increased even faster. That means memory is becoming a big problem and has slowed progress in the default architecture perhaps more than anything else.

With per-transistor costs going down at each node (until recently, with an inflection at 20nm) and increasing amounts of reuse saving us from having to design everything from a blank sheet of paper, we have managed to fill up those chips – an amazing testament to the EDA industry. While we like to lambast them at times, and I am sure things could have been better, they have managed to enable the industry to use all of those transistors in a fairly predictable and reliable way and have helped us overcome many thorny issues. Someone should have created a similar law for the productivity of EDA software, and then perhaps they would have gotten more recognition and respect from the investment community.

Times are a-changing
While it is clear that 14nm, 10nm, 7nm and probably 5nm will happen, many people admit that 28nm will be around for a long time and that it may be the long-term resting state for many industries, unless the economics of the smaller nodes change significantly. If that is the case, then the industries that stay at 28nm will have to find ways to stop being commoditized. That means they will have to start getting smarter. They will have to re-examine things that have not been questioned for decades, including the best architectures for certain tasks.

We have been seeing a migration from single-core to multicore processors, but these designs are hampered by the need for coherent cache and by difficult programming models. Our reliance on memory as the communications channel between processors is the very reason we need coherent cache in the first place. Perhaps it is time we stopped thinking about memory as the communications channel for software and instead started to look at passing messages directly between processors. When communications costs come into line with processing costs, many things change, and exciting new power-efficient architectures will start to emerge. In addition, new memories, such as ReRAM, will enable memory to be brought on-chip, replacing the power-hungry I/O with dense, power-frugal storage that can operate faster than DRAM and call into question the use of cache as a speed enhancer.
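To make the contrast concrete, here is a minimal sketch of the message-passing idea in Go, where each "core" is modeled as a goroutine and data flows over channels rather than through a shared, cache-coherent memory. The function name and structure are purely illustrative, not drawn from any real architecture.

```go
package main

import "fmt"

// pipeline models message passing between processing elements: a worker
// goroutine stands in for a core, and values travel over channels rather
// than through shared memory. No locks or cache coherency are needed,
// because no state is ever shared between the sender and the worker.
func pipeline(values []int) []int {
	in := make(chan int)
	out := make(chan int)

	// The "core": receives a message, computes (here, doubles the value),
	// and sends the result onward.
	go func() {
		for v := range in {
			out <- v * 2
		}
		close(out)
	}()

	// The producer feeds work in as messages.
	go func() {
		for _, v := range values {
			in <- v
		}
		close(in)
	}()

	// Collect results in arrival order.
	var results []int
	for v := range out {
		results = append(results, v)
	}
	return results
}

func main() {
	fmt.Println(pipeline([]int{1, 2, 3})) // [2 4 6]
}
```

The design point is that coordination cost is paid only at the explicit send and receive operations, rather than implicitly on every memory access, which is the trade-off the paragraph above is arguing for.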

The end of Moore's law, or even a slowdown, will set in motion many changes in the way chips are designed. I believe we can expect many years of exciting new architectures — some of which will succeed and some of which will fail. In change there is additional risk, but when the bets are well placed, it will lead to large gains. In gambling, the house always wins; in semiconductors, the winners will be the users and those who try. The ones who stick to the old architectures will become the companies of the past.


Michel Courtoy says:

Nice thought-provoking blog, Brian. I agree with the concept: we have relied for many years on device shrinking as the driver of the semi industry. It is not about to change, because it is the least risky and least costly path. And there are big forces pushing really hard to make sure this continues: the chip companies (fabless and IDMs), the foundries, the EDA/IP suppliers, the fab equipment vendors… That's a lot of brain power and resources working to perpetuate the existing system. Creating a revolution is very hard because it requires many elements in the ecosystem to change.

Brian Bailey says:

I agree with you that the system will do everything it can to resist, but there are some changes that are beginning to creep in, such as new types of memories that could spawn very different ways of thinking. Also if one or two start-up companies produce designs that are significantly better by looking at the problem differently, I believe it will cause the old guard to re-evaluate. I could be wrong, as I often am, but I hope I am right because it will make the industry a lot more interesting.

Bill Martin says:

Michel and Brian: A non-startup example of Brian's point would be Apple. How did the iPod/iTunes, iPhone and iPad change the landscape in portable music, cell phone functionality, and portable computing, while creating a large apps ecosystem? Apple's financial results forced every competitor in these areas to modify their business and product roadmaps/development processes. Success using older technologies will be replicated by others so they can compete (either on cost or on how quickly new products using established technologies are released to market).
