Improving The PPA Equation

A focus on existing process technologies can yield big improvements in SoC design without big increases in cost.

The next generation of semiconductors may look very much like the existing generation. But like the old Porsche ads that needed arrows to point out the improvements, because from the outside the cars looked basically the same, there should be plenty of impressive work inside.

As the cost per transistor continues to rise at advanced nodes, the focus for most companies is no longer on shrinking features. It’s on improving what’s already there through better software, verification, testing, floor planning, and packaging. This may sound like more of the same, but it’s definitely not. In fact, there may be more improvements available using existing process technologies than by migrating to the next node.

Whether this constitutes the end of Moore’s Law is academic. Some companies—IBM, Intel, Samsung, Xilinx, Altera, and some DRAM companies—will continue to push to 10nm. A few will even push to 7nm. And some of those advanced processors may show up in packages with other chips, connected either with wires or interposers or through-silicon vias.

What’s more important is that progress will continue on all fronts. And there will be more players developing chips at older nodes, because the cost of entry and development will be significantly lower, yields will be higher, and the odds of delivering more reliable chips with better performance and lower power will be much better and more consistent.

Reliability is a big deal. It saves money for the chipmaker, and it builds brand loyalty. The more 9s in the percentage of uptime, the easier it is to charge a premium. We haven’t seen those kinds of numbers in the consumer market since cell phone carriers began a two-year replacement cycle. But with more electronics in more things, doing the extra testing to ensure a device will work for the expected lifespan will keep end customers coming back. Having finFETs in a device doesn’t mean anything. Having those finFETs or 2D transistors work flawlessly does.
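
To put “more 9s” in concrete terms, the arithmetic is simple: each additional 9 of uptime cuts the allowed downtime by a factor of ten. The short Python sketch below illustrates this using standard availability math; the figures are generic and not tied to any specific device in this article.

# Standard availability arithmetic: minutes of downtime per year
# allowed by each level of "nines." Illustrative only.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(nines: int) -> float:
    """Allowed annual downtime for a given number of nines (e.g., 3 -> 99.9% uptime)."""
    availability = 1 - 10 ** (-nines)
    return MINUTES_PER_YEAR * (1 - availability)

for n in range(2, 6):
    pct = (1 - 10 ** (-n)) * 100
    print(f"{pct:.{n - 2}f}% uptime -> {downtime_minutes_per_year(n):,.1f} min downtime/year")

# Output:
# 99% uptime -> 5,256.0 min downtime/year
# 99.9% uptime -> 525.6 min downtime/year
# 99.99% uptime -> 52.6 min downtime/year
# 99.999% uptime -> 5.3 min downtime/year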

Of course, having more cores in a device doesn’t always mean anything either. Most software takes effective advantage of only one or two cores at a time, yet for some reason core count seems to have become the successor to millions of instructions per second as the headline metric. Reliability is still an easier sell, though. And the way to achieve it is with better tools than a spreadsheet: automating much of the design process to catch more bugs early and reduce human coding errors, and effectively spending more time per design by applying automation where it helps and engineering skill where it doesn’t.

Because yields are higher at established processes, IP is already proven, the physical effects are well understood, the fab equipment is cheaper and already depreciated, and the design tools are mature, this is an obvious direction. It’s also a big opportunity to build much better electronics, to integrate them with the most advanced electronics in sophisticated packages, and to move the industry forward at rates that make sense and in combinations that can open up new markets to new players and more innovation.

For the first time in years it appears as if real progress is being made on all fronts, and it has taken rising complexity and skyrocketing costs to point the way to a much simpler solution.


