More choices and market consolidation are raising questions about the integrity and completeness of ecosystems at each new process node and process flavor.
For years chipmakers have been demanding more choices. They’ve finally gotten what they wished for—so many possibilities, in fact, that engineering teams of all types are having trouble wading through them. And to make matters worse, some choices now come with unexpected and often unwanted caveats.
At the most advanced nodes, the ability to shrink features and double pattern with multiple mask colors is no guarantee that everything will work as planned. On top of that, uncertainty over lithography has spawned more process, packaging and materials choices. And at 10nm and 7nm, complexity is now overwhelming development schedules to the point where process technology, materials and design rules may be in flux right up until tapeout—or worse, after tapeout.
Things aren’t necessarily better at established nodes, either. Multiple new process flavors are being introduced at older nodes to reduce power or to add more granularity into power/performance tradeoffs and preserve battery life. IP may not be fully characterized, or even available, on new flavors of these processes, or on similar processes at competing foundries. For IP vendors, choosing which processes to support from which foundries at which nodes is an expensive guessing game, and not all of them are in sync at all nodes.
“We have a large team of program managers whose purpose is to manage an enormous matrix, where one axis is the process and the second axis is the process IP,” said Chris Rowen, CTO of the IP Group at Cadence. “You need to look at every IP on every process node, and the process nodes are changing rapidly.”
Even at established nodes, though, new flavors of processes can affect IP performance and interactions with other IP.
“It’s a lot of logistics to adapt IP and keep track of the different variations,” said Rowen. “Even with the same foundry you don’t know until you’re done evaluating a process how much effort it will be to re-characterize the IP. You need to test it on new process files, and there is no guarantee that the process you depend on for the IP to work is even closely related to the previous version. If you upgrade the process, even if it is nominally the same library, you have to re-characterize it. There’s a lot of hard work in that. The whole industry works to concentrate the changes to clock generators, A-to-D and D-to-A converters and SerDes, and that’s where a lot of the IP characterization is done. But there’s also RAM, which varies depending on what you integrate it with, and standard cells, which need to be re-analyzed in each context. You may know there are differences, but there are no bounds on how big those differences are.”
That view is echoed by other large IP vendors.
“There is no shortage of options,” said John Koeter, vice president of marketing for IP and prototyping at Synopsys. “You can’t stay current with all of them. We stay close to the lead customers and we start with the foundries at version 0.1 of the PDK. We also do IP roadshows, where we talk to 150 customers and interview them about their IP needs and aggregate that information back and it helps give us clarity on what the market needs.”
Nonetheless, he said there is a limit to how many of the new processes any IP vendor can support.
“There’s a problem with the IP ecosystem keeping up,” Koeter noted. “The foundries are offering lots of options these days. A lot of times IP vendors have to determine if they can move to a new node. And it’s not just third-party IP vendors. It’s also in-house IP. This is driving a lot of companies that used to build IP internally to look at buying it externally. If you look at Taiwan, they’ve traditionally done a lot of their own hard IP, but they’re finding that if they need to do one chip in process A and another in process B, they don’t have enough people.”
Pick a number
IP is only one factor in the rising tide of confusion. The numbers used to delineate a process node can vary greatly from one foundry to the next. So do companies build chips using a 14/16nm finFET on a 20nm back end of line (BEOL) process, or a 14nm finFET on a 14nm BEOL process? And does it really matter to anyone outside the engineering community?
The answer isn’t always clear. Processes vary across the front, middle and back end of line, affecting everything from performance and leakage current to the materials and equipment required at each step from design through manufacturing. What matters is the total value proposition of moving forward on any foundry’s process technology, rather than the node number. Moreover, a new transistor structure such as a nanowire FET likely will be introduced after 7nm, regardless of whether one foundry calls it 7nm and another calls it 5nm.
“People call 20nm and 14nm different things,” said Gary Patton, chief technology officer at GlobalFoundries. “And 10nm will be a long-lived node because there is a lot of value proposition there. One reason is that at 5nm finFETs will run out of steam.”
There also are competing materials. GlobalFoundries and Samsung have both rolled out FD-SOI at 28nm, and GlobalFoundries, Soitec, STMicroelectronics and Leti are backing a new FD-SOI process at 22nm, which they say runs at lower voltage with performance equivalent to finFETs. “The bulk of the industry is still at 28nm, so the move to 22nm FD-SOI is a huge opportunity,” said Patton.
Still, there is plenty of confusion among chipmakers trying to decide which foundry to use for an upcoming product.
“The description of the process, whether it’s 14nm or 16nm, is similar to how many equivalent gates there are in FPGAs,” said Mike Gianfagna, vice president of marketing at eSilicon. “What you need to know is what it applies to, what the design rules are, and whether there is a PDK.”
All of this confusion has forced companies to work more closely together at much earlier stages in the design-through-manufacturing flow—IP companies, EDA vendors, chipmakers, and foundries.
“You need to capture the main constraints,” said Juan Rey, senior director of engineering for Calibre at Mentor Graphics. “The issue for the foundry is to make sure complete development is optimized for each process. The industry keeps pushing for the same development infrastructure. However, when you get into production it’s not usually optimal. Manufacturers work alone, ahead of everyone else. But neither the manufacturer, us, nor the customer has the full chip design. It’s not until those designs are received, and the pieces are put together, that you know all the performance characteristics.”
This is particularly important at advanced nodes, where progress is becoming much more difficult.
“If you look at the research community, which is like a second source for the semiconductor industry’s pre-competitive research, every two years they get together to review the research portfolio,” said Rey. “If you look at how to continue Moore’s Law, which is a simple equation, you need smaller, lower-cost transistors with better power and performance. That’s not happening. But the industry as a whole recognizes that large volume market areas still have a need for a Moore’s Law road map, even if they don’t get gains in performance and power. There are large sectors of the industry that can justify going smaller with more density per die and less power, and that pre-competitive research is being done to enable the industry. But we will continue seeing a number of flavors of how to get there.”
There is a strong likelihood that fewer companies will be able to justify each new node, though, because the cost of development is going up and the markets that have traditionally driven that development—PCs, mobility—are maturing.
“There is not as much technology turn as before, so to do 16/14/10/7nm, you have to have really big volumes,” said Charles Janac, chairman and CEO of Arteris. “There are roughly 230 companies designing SoCs right now. Should all of them be doing SoCs? Probably 100 of those should not build SoCs, so you’re going to see a lot of mergers. People who are not the winners in a volume game merge, or they become IP companies using platforms, or they use FPGA SoCs. This transition takes a while to go through, but it’s happening.”
What the industry will look like after that is anyone’s guess. Janac says it could turn into an oligopoly of very large companies, or there will be another flood of innovation around new applications, which could be anything from self-driving cars to wearable devices. But waiting for the next big thing(s) also creates its own level of confusion, because it can take six to eight years to get chips from conception to production and into products. And a lot can change in eight years, particularly as companies start up and others consolidate.
“The real megatrend is not consolidation,” Cadence’s Rowen contends. “The big shift is the degree to which companies are going outside for IP. Underlying all of this is a gradual shift of make versus buy.”
All of this has to be viewed over time, though, and from a much broader vantage point. While large systems vendors such as Apple and Google have taken more chip development in-house—prompting some very high-profile M&A activity—they also have stepped up their purchases of third-party IP. At the ground level, within a limited time frame, what this all means is like viewing a jigsaw puzzle with most of the pieces still missing. Everyone knows something is changing, but no one is quite sure what the final picture will look like.
“Consolidation causes confusion,” said eSilicon’s Gianfagna. “But in confusing times, if you can still think clearly, you can win.”