As the IoE kicks into gear and Moore’s Law slows down, what’s next?
We all know that sub-10nm is coming. But is that really what will define the next generation of semiconductors?
Progress in semiconductor technology is increasingly about more than the hardware itself. It also involves advancements in applications and in technologies peripheral to the devices. That may sound counterintuitive, but going forward, the combination of technology, applications and software that produces bleeding-edge performance won’t necessarily be tied to the raw performance of silicon.
In fact, there is a whole slew of metrics for how technological improvements in semiconductors affect the devices built on them. For example, how much demand will there be for 10nm circuits if the cost of a wafer is $500,000?
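As a rough illustration of why that question matters, consider the die-cost arithmetic below. This is a back-of-the-envelope sketch; the wafer cost comes from the hypothetical above, while the die size, wafer diameter and yield are assumptions for illustration, not figures from any foundry.

```python
# Back-of-the-envelope die-cost arithmetic. Every number here is an
# illustrative assumption, not a real foundry quote.
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Classic approximation: gross dies on a round wafer, minus edge loss."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

wafer_cost = 500_000.0   # the hypothetical $500,000 wafer from above
die_area   = 100.0       # assumed 100mm^2 die
yield_rate = 0.60        # assumed 60% yield at an early node

gross = dies_per_wafer(300, die_area)   # standard 300mm wafer
good  = int(gross * yield_rate)
print(f"{gross} gross dies, {good} good dies, "
      f"${wafer_cost / good:,.0f} per good die")
```

Even under those fairly generous assumptions, each good die comes in around $1,300, which is a non-starter for most high-volume products and goes a long way toward answering the demand question.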
“Product designs face three concurrent problems,” says Greg Yeric, an ARM fellow. “There’s cost, of course. But also performance. And that has to be within a power budget. Almost all products face the power problem now, whether it’s energy bills and cooling costs in a data center, the touch temperature of a mobile phone, or energy harvesting budgets down in the Internet of Things.”
Winds of change
There is almost universal agreement that Moore’s Law is slowing and becoming more difficult to follow. Whether it is ending is a matter of ongoing debate that has spanned decades. But there is no doubt that it is prompting some seismic shifts.
“The industry is going through some tremendous changes,” observed Jim Aralis, CTO at Microsemi. “We are going through a change in where things are being done and how things are being done in a relatively unprecedented way. The traditional way in which the industry designs and uses circuits is changing because of the capabilities of the process and related costs.”
ARM’s Yeric agrees. “With the slowing of Moore’s Law, more radical change becomes feasible,” he says. “At the chip level we see an increasing use of heterogeneity, analogous to the circuit level, in order to advance product scaling.”
Memory also has suddenly become much more important, both in terms of the type of memory that works best and the data path architecture for getting data in and out of memory more quickly. Rambus’ move into DRAM memory controllers is a case in point. The challenges of prioritization and optimization have become so complex that last year the company began making hardware rather than just selling IP for that hardware.
New architectures such as the Hybrid Memory Cube and high-bandwidth memory add literally another dimension to designs to speed throughput. Even embedded flash is beginning to shift toward more secure and higher-density alternatives, such as one-time programmable memory where the setting for individual bits is locked by a fuse.
“With the IoT, particularly with industrial and new consumer applications, density is becoming a critical issue,” said Jen-Tai Hsu, vice president of engineering at Kilopass. “There are a lot of applications that are code-intensive. You have to put a lot of code inside the memory for on-chip computation. The IoT isn’t about dumb devices anymore. There’s a lot of on-chip, real-time computation. But the issue isn’t computation speed. It’s the amount of data that needs to be stored, and for that you need more density.”
Hsu said it’s the same for servers in the cloud and routers at the edge of the network. All of this requires real-time computation, and that in turn requires more density in memory. “There are a lot more dimensions than in the past. You used to just shrink the core application processor or the MCU. That doesn’t work anymore.”
Beyond hardware
The changes are not confined to hardware, either. They are propagating through the entire semiconductor ecosystem. The traditional path—and this holds true for many industries—is to make things smaller, faster, and denser (cheaper). And without the IoE looming in the near future, that might still be a valid methodology. But in this brave new world, new paradigms are surfacing all the time. Having a perpetual cycle of shrinking features no longer applies in all cases.
There will continue to be progress in materials, lithography, and manufacturing, as well as in finFETs and single-nanometer geometries. On top of that, the usual metrics of lower power, lower cost and higher density will remain important. But they’re no longer the only important benchmarks, and in some cases they’re no longer even relevant.
Changes in the processes and the associated costs are shifting the fundamental business equation. “Chips are not where the innovation is happening as much anymore,” says Aralis.
Chowdary Yanamadala, vice president of business development for ChaoLogix, agrees. “While all the innovations in the semiconductor fabric will still be very important in the coming years, one of the less visible yet very important trends is the evolution of innovation in the business model.”
That includes new ways to deploy existing technologies, as well. Yanamadala believes most of the innovation will come at the system, application and software levels, and that is particularly relevant to the IoE where low-end, low-complexity devices with limited functionality and real estate will be married to complex servers and network devices.
For the edge devices, the main tasks might be confined to identification and a few simple instructions. How complex can smart socks or a toothbrush be and still maintain economic viability? For devices at the other end of the spectrum, such as connected vehicles, infrastructure management, telecom, medicine, military, and other high-end, less price-sensitive segments, finFETs, nanotubes, and single-nm gate topologies are going to find homes. But even in such complex and expensive ecosystems there will be cost pressures and a need for low-end chips.
It’s no secret that designing a bleeding-edge chip from scratch is cost-prohibitive today, which is why there has been such an explosion in the third-party IP market. “At the HDL level, not even at the transistor level, where it is code, a 7nm design would cost nearly a half billion dollars,” says Aralis. “That means more and more chip design will happen at the system level.”
The number of different chips made at a sub-10nm level increasingly will be limited by the applications that require a reduction in feature size. Even if the industry starts to expand the development of such apps, the cost of early development will be enormous. That translates into more programmable solutions, new architectures with advanced packaging such as 2.5D and fan-outs, and special-purpose FPGAs with a high number of hard IP processors with programmable fabric.
There is a spectrum of these solutions coming to market, beginning last year with Marvell’s MoChi, or modular chip, architecture and TSMC’s integrated fan-out, or InFO, wafer-level packaging, which actually was around for several years before interest in this approach began booming.
Sehat Sutardja, chairman and CEO of Marvell, said the driving forces were simpler choices, significantly faster time to market, and more consistency. In TSMC’s case, the drivers were similar, but there is more work that has to be done by the chipmaker before it is packaged together by the foundry.
Both of these approaches rely on an acceptance that third-party IP will become a bigger component in many designs. It will determine what can be done, how much it will cost, and eventually which foundry will be part of the design loop. As foundry processes continue to diverge, and as they put their stamp of approval on more third-party IP, the number of IP blocks they will support (as well as the number of IP vendors that can afford to keep up with their process changes) will shrink. That also has led to speculation that some foundries, particularly specialty foundries, will offer some of their own IP in the future.
“It is extremely expensive to generate critical IP. It is even expensive to just move it,” notes Aralis. Process selection, therefore, is driven by how expensive the IP is and how readily available it is.
One example of that is a 100 GHz ADC. There are really only two processes today that are available for this circuit, so the only option is to go to the foundry for the IP. What makes this interesting is that the relationship with the foundry is secondary. If there is already an established relationship, all the better. But this is an entirely new paradigm for chip design and fabrication and a prime example of how economies of scale have changed the chip landscape.
For other approaches, the standard business model will become more along the lines of building IP and integrating it into FPGAs, packages, or complex SoCs. Or vendors will build algorithms that will run on various types of processors.
Technology still matters
While business changes are critical, so is pushing the envelope on technology. This is increasingly important for security reasons. While it may not be necessary to deploy the bleeding edge in many cases, there are always some cases where it is required, such as finance, government, medicine, the military, transportation, space, telecom, and other segments. Put simply, if you are not the lead dog the view never changes and you cannot see what is coming.
While there are several areas in semiconductor development that are brushing the bleeding edge, one of the more exciting is materials. Graphene has captured much of the mindshare on the nanotechnology front, while III-V materials continue to dominate discussions at advanced nodes because of the difficulty of moving electrons through increasingly narrow wires.
Graphene is a single-atom-thick sheet of carbon with highly desirable electrical properties, flexibility and strength. It is the strongest material ever tested and highly conductive, with a resistivity about 35% lower than that of the purest silver. It is an efficient conductor of both heat and electricity, and it exhibits bipolar effects.
It does have one fundamental drawback, however: no bandgap. This is a consequence of its symmetrical structure. Symmetry is often a desirable property, but in graphene the symmetry between its two carbon sublattices leaves the valence and conduction bands touching, so there is no energy gap to exploit and a graphene transistor is hard to switch fully off. However, there are some success stories in working around that. A graphic of one approach is shown in Figure 1.
Figure 1. Example of a FET with graphene. Source: Universiti Teknologi Malaysia.
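For readers who want the textbook version of the bandgap problem: near its Dirac points, graphene’s energy bands are linear and touch each other, so the gap between valence and conduction bands is exactly zero. This is the standard tight-binding result for an ideal sheet, not something specific to the device in Figure 1:

$$
E_{\pm}(\mathbf{k}) \approx \pm\,\hbar v_F\,|\mathbf{k}|, \qquad v_F \approx 10^{6}\ \mathrm{m/s}, \qquad E_g = 0
$$

Workarounds generally break that symmetry in some way, for example by confining carriers in narrow nanoribbons or by applying a perpendicular field to bilayer graphene, which opens a modest, tunable gap.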
Graphene is extremely thin and strong, and its thinness is a major reason it sheds heat so readily. Assuming a thickness of 3.35 angstroms, it is about 100 times stronger than steel of the same thickness. It also exhibits ballistic transport of charge and large quantum oscillations in the material.
One of the biggest challenges in miniaturizing semiconductor devices, and electronic devices in general, is that the smaller or faster a device is, the harder it is to cool. This is where graphene shines. Its very high thermal conductivity allows it to dissipate heat quickly, resulting in cooler-running circuits.
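A minimal sketch of what that thermal headroom means, using Fourier’s law of one-dimensional heat conduction, q = kAΔT/L. The conductivities are ballpark literature values (reported in-plane figures for graphene span roughly 2,000 to 5,000 W/m·K), and the geometry is an arbitrary assumption chosen only for comparison:

```python
# Rough 1-D heat-conduction comparison via Fourier's law: q = k * A * dT / L.
# Conductivities are ballpark literature values; the geometry is assumed.

CONDUCTIVITY_W_PER_M_K = {
    "bulk silicon":        150,
    "copper":              400,
    "graphene (in-plane)": 4000,   # reported values span roughly 2,000-5,000
}

area    = 1e-6   # assumed 1mm^2 cross-section, in m^2
length  = 1e-3   # assumed 1mm conduction path, in m
delta_t = 20.0   # assumed 20K rise over a hot spot

for name, k in CONDUCTIVITY_W_PER_M_K.items():
    q = k * area * delta_t / length   # watts conducted away
    print(f"{name:>20}: {q:5.1f} W")
```

On these assumptions, the graphene path carries more than an order of magnitude more heat than bulk silicon for the same geometry and temperature rise.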
The Department of Energy’s Stanford Linear Accelerator Center (SLAC) looked at what happens to the various properties of materials when combining graphene with common types of semiconducting polymers. They found that a thin film of the polymer transported electric charge even better when grown on a single layer of graphene than it does when placed on a thin layer of silicon.
The standard theory for semiconducting polymers is that the thinner the polymer film, the faster and more efficiently it conducts charge. However, this set of experiments showed that a polymer film about 50 nanometers thick conducted charge about 50 times better when deposited on graphene than a film about 10 nanometers thick did on its own. If graphene can be mass-produced cheaply and consistently enough, that changes the game for graphene-based semiconductors.
It also has far-reaching implications for wearable electronics, where flexible materials will be a huge enabler for a wide variety of devices. Other likely applications include photovoltaics, medical sensors, and touch screens.
For generic semiconductor design, graphene can be rolled into a cylinder, essentially becoming a semiconducting carbon nanotube. At room temperature, the mobility is more than 100,000 cm²/V·s, and potentially 200,000 cm²/V·s. Compare that to the roughly 1,400 cm²/V·s for typical silicon and 5,000 to 6,000 cm²/V·s for some of the latest technologies. That translates into extremely high switching speeds.
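A first-order way to see why mobility translates into switching speed: drift velocity is v = μE, and channel transit time is t = L/v. The sketch below plugs the mobilities quoted above into an assumed channel length and field; it deliberately ignores velocity saturation, which in real silicon caps drift velocity near 10⁷ cm/s, so the actual gap would be even wider than this raw ratio suggests.

```python
# First-order transit-time comparison: v = mu * E, t = L / v.
# Channel length and field are assumed; velocity saturation is ignored.

MOBILITY_CM2_PER_V_S = {
    "typical silicon":     1_400,
    "latest technologies": 5_500,
    "carbon nanotube":     100_000,
}

channel_cm = 20e-7   # assumed 20nm channel length, in cm
field_v_cm = 1e4     # assumed 10kV/cm lateral field

for name, mu in MOBILITY_CM2_PER_V_S.items():
    velocity = mu * field_v_cm         # drift velocity, cm/s
    transit  = channel_cm / velocity   # seconds to cross the channel
    print(f"{name:>20}: ~{transit * 1e15:6.1f} fs")
```

On these assumptions a nanotube channel is crossed in a couple of femtoseconds versus well over a hundred for silicon, which is the back-of-the-envelope meaning of “extremely high switching speeds.”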
The implications of this for ICs are mind-boggling. Not only can these devices turn on and off extremely fast, but with graphene’s enhanced thermal characteristics, generations of graphene-based chips could be super-fast, super-small and have fewer heat issues. It could make those 100 GHz ADCs look like they’re standing still.
But its conductivity is still a bit too high for many applications. And, as mentioned earlier, creating a bandgap is still at the experimental stage. From a production perspective, graphene devices cannot simply be carved from large crystals the way their silicon brethren are, and graphene nanoribbons, which can be as narrow as 2nm, are expensive and complicated to produce. Finally, while switching graphene nanotubes or ribbons is theoretically possible, it is a fragile proposition because these devices are 100,000 times thinner than a human hair.
Conclusion
The changing landscape of semiconductor design is multi-faceted. There is the business model, the technology model, and the demand model. It is said that if you build it, they will come. We are seeing some cracks in that philosophy.
Even experts in the field are not always sure where to place their bets. There are big changes on the design side and the economic side, and there is plenty of experimentation underway to figure out the most cost-effective solutions for a variety of applications. M&A activity, along with a number of spinoffs related to those deals, is evidence of that.
How the landscape will shift has some crystal-ball elements. Will some foundries become IP centers? Will the sub-10nm geometries become economically viable? Will chips become dumb and general-purpose, while the applications, IP and system-level designs become the apex of focus? These are all interesting questions, and so far there are no clear answers.