As the market begins shifting toward more vertical solutions, methodologies, tools and goals are changing on a grand scale.
A shift is underway to develop chips for more narrowly defined market segments, and in much smaller production runs. Rather than focusing on shrinking features and driving down cost per transistor through billion-unit volumes, the emphasis behind this shift is less about scale and more about optimizing for specific markets and delivering those solutions more quickly.
As automotive, consumer electronics, the cloud, and a number of industrial market slices edge toward more targeted designs, the metrics for success are shifting. Large SoCs glued together with software will continue to push the limits of Moore’s Law in high-volume markets such as mobile phones, but it’s becoming clear that is no longer the only successful path forward. Squeezing every last penny out of the design and manufacturing processes is proving less important in some markets than providing a compelling solution within a given time frame.
This is hardly a straight line, though. Customization—or at least partial customization—is proving to be a different kind of technology and business problem. The emphasis more often than not is on how chips behave in the context of a system and within acceptable use-case parameters, rather than just meeting specs for power, performance and cost. And it has set in motion a series of changes in how design tools are used, in the tools themselves, and in how companies prepare for these new market-specific opportunities.
“In the past, the semiconductor industry was all about finding a market where you could make one design for a huge market that gave you economies of scale,” said Zining Wu, chief technology officer at Marvell. “Two things are changing. First, as we get further along in Moore’s Law, cost is becoming a problem. So you have to do things differently. And second, the trend that is becoming apparent with the Internet of Things is that different segments require different products. So you’ve got a prohibitive cost on one side, and more fragmented markets on the other.”
As the ante at the leading edge of Moore’s Law goes up, the number of chipmakers pushing to the next node is diminishing. Acquisitions and consolidation are a testament to just how difficult it has become to stay the course of shrinking features. But even some die-hard followers of Moore’s Law, as well as a number of smaller players, are exploring other options in different markets. Chips developed at the most advanced process nodes still account for the largest production volumes among complex designs, but they no longer represent the largest number of designs.
Jean-Marie Brunet, marketing director for the Emulation Division at Mentor Graphics, calls it the beginning of the Applications Age. “There is no limit to verticalization. Hardware alone is no longer the differentiating factor.”
Brunet said the key concern on the design side as these new vertical market slices are created is eliminating risk. “You want lower risk at tapeout. If you have a new application and you address a solution to new users, that is about decreasing risk.”
Part of that risk is missing market windows. Tom De Schutter, director of product marketing for virtual prototyping at Synopsys, said there is time-to-market pressure building in markets that never grappled with this issue before. In automotive, for example, the average design time used to be seven years, with a five-year turnaround considered to be aggressive. In some cases, notably in infotainment systems, that window has shrunk to as little as a year because anything longer than that is outdated technology.
“One of the big changes is a shift from design to integration,” De Schutter said. “Design is still important because you need something to integrate. But the focus on integration is new. This is not just about developing a massive SoC. There are a lot of different platforms and subsystems, where you provide the application needed for a specific market.”
This certainly doesn’t mean the market for advanced SoC designs is weakening. But it does mean there are many more opportunities developing around the edges that are focused on different goals.
“Things are changing for a subset of the market,” said Frank Schirrmeister, group director for product marketing of the System Development Suite at Cadence. “If you look at the ITRS data, cost is still a basic hurdle to overcome and tools are developed as a way to make designers more productive. That will continue to happen. But there also are a larger number of smaller designs coming to market, and the challenges there are much different.”
Faster, more integrated tools
One of the benefits of this shift is that faster, better-integrated tools are being used across a much wider class of designs. This has always been a core consideration for the EDA market, which is why emulation sales now represent a sizeable portion of revenue for all of the large EDA vendors. But emulators are now being sold in conjunction with FPGA prototypes, and they’re being integrated with tools from across the flow. At the same time, EDA vendors are investing more in updating and expanding what their tools can do and how quickly they can do it. This is obvious for finFET-based designs, where there is more to analyze, but it is being applied to other sectors, as well.
“As you move from 22nm to 16nm, line-widths change, electromigration rules change, and as you go from 2D to finFET, drive strength changes,” said Aveek Sarkar, vice president of product engineering and support at Ansys. “You have local self-heat. There are additional metal layers, so the heat trapped in dielectrics increases. All of this can worsen the life of a chip, so you need a better temperature profile.”
That requires much faster system-level simulation, whether the system is defined as a chip or a system that includes that chip. But either way there increasingly are contextual considerations, such as where and how a chip will be used, and a list of physical and electrical conditions for the entire system. As chips are targeted at new markets, those factors need to be considered on a much larger scale than before, particularly when chips are being designed for automotive, medical and industrial applications.
“What we’re finding is that everything is changing at once,” said Bill Neifert, director of models technology at ARM. “A lot of it is in the larger scale applications of things we’ve been doing in the past. So with front-end processing, there is an acceleration of capabilities. There are more subsystems where you bundle IP with software and that has to be validated. Almost every company now has at least one emulator. Virtual prototyping used to be a ‘nice-to-have’ but now it’s a ‘must-have.’ And it’s not just for software development as companies are using virtual prototypes at multiple points in the design cycle. There has been an expansion of all these things.”
Neifert added that these demands have spread well beyond the mobile market into custom applications and chips. “Even with all of the consolidation, we have not seen a decrease in overall activity.”
Part of the effort to speed up designs also involves drawing on more expertise from more sources. One criticism repeatedly leveled by EDA insiders is that customers do not have the expertise to take full advantage of the tools. That has generated two different approaches. First, services are frequently offered alongside, and sometimes bundled with, tools and IP.
“The key thing for modern verification platforms is scalability and reusability to operate with different flows, languages and vendors,” said Zibi Zalewski, general manager of the Hardware Division at Aldec. “The more complicated the project, the bigger the number of different elements that need to talk to each other. All those elements require not only tight tool integration and optimal data exchange, but also very close cooperation between design companies and EDA tools providers. It is no longer just a tool deal. It is widely understood to include engineering services, tools, IP, automation and overall expertise to help the partner with the project challenges. Tight schedules and multi-discipline projects force team members to focus on their own part and use the wisdom and experience of others. The EDA tools provider must be ready not only to help with tool operation, but also to customize the tool and IP for the ongoing project, becoming an active participant with a strong influence on schedules and deliveries.”
The second approach is to make the tools more reflective of how engineers actually use them rather than how tools vendors think they should be used. “Most emulator use models are RTL, and you go from RTL to gate with a synthesis process,” said Mentor’s Brunet. “But when you get an ECO because the silicon comes back with a bug, you make a fix to the netlist but you don’t go back to RTL. So you need a robust gate-level flow.”
Beyond correct by construction
One way to cut time to market and reduce costs is to get a chip right the first time. Re-spins are expensive in terms of engineering resources, but they are even more costly in highly competitive markets where a difference of a few months could make the difference between who wins and who loses a deal. This has given rise to the often-cited “correct by construction” idea, which in theory sounds great. Reality is usually rather different, particularly in complex designs where correct by construction rarely happens on the first try. One chipmaker insider described it as “a fantasy.”
Nonetheless, there are things that can be done to minimize the impact of engineering change orders (ECOs) and bugs that are found too late in the process to effectively do anything about them in hardware. That is getting a second look, both by chipmakers and tools companies.
“One of the most neglected areas is architecture,” said Sundari Mitra, CEO of NetSpeed Systems. “Right now, EDA starts at RTL. What’s missing is automation and algorithmic content. Companies have been used to taking spreadsheets and verifying if they’re correct and whether they meet performance. That does nothing for the SoC construct, bandwidth and latency. Those need to be part of the architectural design.”
Mitra noted that innovation in the mobile market has been an evolutionary engineering challenge. “With the IoT and automotive, it’s revolutionary. If you look at a car, it will be able to sense if a driver is asleep and it will be able to sense that at different frequencies and transmit that to automated controls. And it will all be merged into one or two chips. We need to change how we think about putting chips together.”
This is important for another reason, as well. Consumers and businesses are demanding in all of their devices the same kinds of capabilities now available in smartphones and tablets—the ability to stay current through regular updates.
“People are selling cars sooner just to get new features,” said Kurt Shuler, vice president of marketing at Arteris. “That requires much more flexibility in hardware and software. This used to be a waterfall development process, where you went from one design process to the next. That’s starting to change everywhere.”
Different packaging options
One related change is in the packaging, and this is happening across high-volume mobile markets as well as vertical markets. Big processor companies such as IBM and Intel, as well as a number of networking chip companies, all publicly have embraced 2.5D packaging as a way of modularizing and re-using design components. Even Apple reportedly is using a fan-out package for its next-generation iPhone, and AMD has been selling a 2.5D graphics chip since last year.
But this will take time to catch on beyond early adopters in price-insensitive markets. “What we’re finding is that customers don’t want to take on too many new things at once,” said Mike Gianfagna, vice president of marketing at eSilicon. “So they may move to 2.5D using an interposer, or they’ll do monolithic cores on a substrate. They might try one new thing, but they’re not going to do them all. So they might use a different packaging strategy or they might decide to use multiple cores on a single chip. But throwing in everything at once is too risky.”
Gianfagna noted that one of the big drivers for this change in the high-volume markets is that the 28nm design flow isn’t working for 16/14nm. “Verifying 2.5D is not all that complex,” he said. “But if you’re doing 16nm chips—and we’re working on those now—it requires substantially more resources. Those are larger and more complex designs. You’ve got double and triple patterning, timing closure issues, different parasitic effects.”
Marvell has taken a different slant on this problem, developing its own serial interconnect and software and providing customers with a menu of modular chips (MoChis) that can be customized quickly for various vertical markets, including those where semi-custom chips will sell in lower volumes.
“The challenge was to make the serial IP robust enough that it could support different MoChis in the same package, or across a PCB in two chips,” said Wu. “The software is related, but the underlying physical IP is transparent to the software. The MoChis are connected at the bus level. So to make the system work, you have a northbridge (CPU communication) and several southbridges (I/O).”
He noted this will work in 2.5D through an interposer, or in a fan-out package.
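The bus-level topology Wu describes can be sketched as a simple model: one northbridge handling CPU traffic, with several southbridges for I/O hanging off it over the serial interconnect. This is purely illustrative; the class and module names below are hypothetical and do not reflect Marvell’s actual interfaces.

```python
# Hypothetical sketch of a MoChi-style modular-chip topology:
# a star of serial-interconnect links with the northbridge as hub.
# Names ("cpu", "pcie", etc.) are illustrative, not Marvell's API.
from dataclasses import dataclass, field

@dataclass
class Module:
    name: str
    role: str                        # "northbridge" or "southbridge"
    links: list = field(default_factory=list)

def connect(a: Module, b: Module) -> None:
    """Model a bidirectional serial-interconnect link between two modules."""
    a.links.append(b.name)
    b.links.append(a.name)

def build_package(southbridge_names):
    """Build one northbridge and attach each southbridge to it (star topology)."""
    nb = Module("cpu", "northbridge")
    sbs = [Module(n, "southbridge") for n in southbridge_names]
    for sb in sbs:
        connect(nb, sb)              # all I/O traffic routes through the hub
    return nb, sbs

nb, sbs = build_package(["pcie", "sata", "usb"])
print(nb.links)                      # ['pcie', 'sata', 'usb']
```

The same logical topology holds whether the modules sit in one 2.5D package on an interposer, in a fan-out package, or as separate chips on a PCB; only the physical link changes.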
Targeted solutions, with semiconductors as the core components, will continue to enable more vertical markets as the economic and time-to-market equations shift. Massive changes are underway across many market segments, and they will drive sales for existing chipmakers, tools companies, packaging houses and foundries, as every facet of the industry begins to adapt to new opportunities.
However, this isn’t entirely a big chipmaker’s game anymore. The price of entry is no longer based on the ability to develop a finFET at 16/14nm. Increasingly, it will include the ability to leverage market expertise and knowledge of specific vertical needs, using pre-developed subsystems or platforms, new ways of putting them together, and perhaps even the most advanced tools delivered as a service. Shrinking features and cramming everything onto a single die is one strategy, but it’s no longer the only one. And that will become increasingly clear as new market solutions are developed faster than ever before.