Remaking The Design Landscape

Analysis: The number of shifts that will occur over the next couple of process nodes is unprecedented in the history of semiconductor design.


By Ed Sperling

Every now and then a new trend comes along in the semiconductor design world, often because an old tool doesn’t work well anymore or because a new one is achieving critical mass. Lithography moved to immersion when 193nm light could no longer resolve fine enough features through air alone. Designers at the advanced end of Moore’s Law began using tools like high-level synthesis and Transaction-Level Modeling 2.0 to help sort out the complexities of multicore, multi-voltage designs with multiple power islands.

What’s changing at 32nm and beyond is the number of different directions the industry is heading. In the past, each new node brought new changes. At 130nm, the changes were considered extremely difficult because manufacturing moved from 200mm to 300mm wafers and added copper interconnects and low-k dielectrics for insulation. Most developers and chipmakers heaved a sigh of relief when that transition was over. But in retrospect, it was relatively tame.

Interviews with dozens of engineers, vendors, scientists, researchers and business managers over the past six months show that what’s ahead cannot be reduced to just one or two shifts. The change under way now is geographically global. It’s moving to ever-higher levels of abstraction, from semiconductor to system to device. And it is driven as much by business as by technology. Taken together, these changes will alter the basic fabric of the design community in ways never seen before.

Behind most of the changes afoot in the market there is a business case. In the past, technology trumped business. Those with steely nerves and enough backing could often carve out a space for themselves in a market, and even if they weren’t entirely successful they could minimize their losses.

Three things changed over the past decade to reverse that equation; business now trumps technology in almost all cases. First, the venture community has grown more cautious about the rate of return on hardware and EDA tools ever since the dot-com bubble burst in 2001. It’s no longer possible to return to the tap without a real product and a real business model.

Second, the cost of failure has gone up. It now costs $4 billion to $5 billion to build a state-of-the-art fab, and consortia of very large companies and governments are now involved in this business. It can cost upwards of $100 million to build a very complex SoC at the latest process node. Stalwart adherents to Moore’s Law such as Freescale, which made the leap to each new process node without hesitation until 90nm, have begun skipping nodes on certain products.

Third, chips are now so complicated that it takes too long to build everything from scratch. That means chipmakers must buy IP from third parties. Even Intel doesn’t make everything itself anymore. And all but a very few companies now use a fabless or fab-lite model for at least the digital portion of their chips, which forces them to adhere to design rules and process technology developed by the foundries.

Put these together and the result is that business issues are forcing a handoff of some of the most basic parts of semiconductor engineering—defining a unique architecture, tinkering with the layout, refining the process, and balancing all of these pieces together at tape-out. Fast yield ramp, time to market, and standardized interconnects and IP are no longer just goals. They are requirements. Some companies have handed off the building of chips entirely to a new class of value-chain producers like eSilicon, Global Unichip and Open-Silicon.

For the first 50 years of its existence, the semiconductor industry defined global as North America, Europe and Japan. Taiwan was a latecomer to the party, and TSMC’s vision of a foundry model was considered revolutionary well into the 1990s. Companies like Texas Instruments and AMD said they had no intention of letting go of their own fabs.

Fast forward through two downturns and 10 process nodes and the situation now looks much different. Software is increasingly a part of the design process, heavily automated foundries can be located anywhere in the world where tax breaks and the cost of power are lowest, and massive education programs are under way in multiple countries that see semiconductor and computer engineering as a fast way to economic health.

While many lament that the semiconductor industry is declining or not showing growth, the opposite is happening. It’s expanding significantly. In 1977, the Semiconductor Industry Association reported total semiconductor sales of $2.88 billion, with about $1.92 billion of that in the Americas and only $182 million in Asia/Pacific (not including Japan). In the first 11 months of 2009, sales were $196 billion worldwide, with $102 billion in Asia/Pacific and $33 billion in the Americas.
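A rough back-of-the-envelope check of those figures, treating the 11-month 2009 total as a full-year approximation, shows just how dramatic the expansion has been:

```python
# Sanity check on the SIA sales figures cited above (assumed values from the text;
# the 2009 number covers only 11 months, so the true annual figures are slightly higher).
sales_1977 = 2.88e9     # worldwide semiconductor sales, 1977, in dollars
sales_2009 = 196e9      # worldwide sales, first 11 months of 2009
years = 2009 - 1977

multiple = sales_2009 / sales_1977                  # overall growth multiple
cagr = (sales_2009 / sales_1977) ** (1 / years) - 1 # compound annual growth rate

print(f"growth multiple: {multiple:.0f}x, CAGR: {cagr:.1%}")
```

Even on these conservative numbers, the industry grew roughly 68-fold over 32 years, a compound annual growth rate of about 14%—hardly a picture of decline.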

By any standard this represents an enormous increase in sales, but the profits are now far more dispersed around the globe. Moreover, IP for chips is being developed in places like Eastern Europe and the former Soviet republics, and that kind of work will accelerate in other parts of the world. The barrier to entry into this market is among the lowest—you don’t need to build full systems—while the return on investment is among the highest. Virage Logic, ARM and Synopsys have been snapping up these kinds of operations around the globe over the past couple of years.

Most of these changes are being driven by the technology itself. There are fewer design starts for ASICs these days, but the problems being solved are far more numerous on each chip than in the past. The tradeoffs of area, power and performance have been relatively balanced over decades of development. When lithography became an issue, there was enough slack in power and performance to tide chip designers over until the next node.

At 90nm that began to change. Classical scaling ended, lithography stalled at 193nm, and defect density increased as irregularities in silicon and process technology became evident. Power forced even companies like Intel to begin adding more cores to a chip rather than continuing to turn up the clock speed, raising new questions about what to do with all of those cores.

At 22/20nm—the next node for companies that live on the edge of Moore’s Law—things get even more interesting. Both Synopsys and Mentor Graphics predict that FinFETs will start showing up on chips. These 3D transistor structures will wreak havoc on parasitic extraction because of the amount of data that will need to be analyzed and synthesized. IBM has talked about potentially reducing the functionality on chips at future nodes just to get chips out the door within the power budget.

All major chip companies are now looking at heterogeneous cores instead of homogeneous cores and matching software and core size for a specific function. IBM and Mentor are experimenting with computational scaling to compensate for the limits of 193nm lithography. And power techniques that used to be considered exotic and extraneous are suddenly becoming necessary.

Even substrates are changing. Intel, which examined and then rejected partially depleted silicon-on-insulator (SOI), is looking seriously at fully depleted SOI for future nodes. And work is under way to sidestep much of this entirely with 3D stacking of chips, where problems such as heat dissipation and parasitics are still not fully understood.

Perhaps even more daunting in this whole process is a complete shift in control within the design flow. The number of computations necessary at advanced nodes, coupled with business pressures and time-to-market issues, is forcing engineers to rely on models. For many, this is like black-box technology: you put requirements in one side and the software fills in much of what lies in between.

For engineers who learned to solve problems the hard way, without software models, this is perhaps the toughest change of all. RTL engineers at big chipmakers say there is enough work at the moment to stick with their core competencies. The problem is that the amount of data they are dealing with is going up, and over the next few years it will skyrocket.

Japan has been particularly accepting of tools like TLM 2.0, high-level synthesis from companies like Mentor, Forte Design Systems and Synopsys, and network-on-chip technology from companies like Arteris and Sonics. The acceptance level in Europe is lower, and it has been lower still in North America. But that is likely to change at future process nodes as business pressures take root, something that is already becoming evident with the rapid proliferation of DFM tools and automated test suites.

Tool vendors characterize these changes as a shift from design engineer to systems engineer. But there’s far more to it than that. In the future, a systems architect will have to understand how the software will behave in the system being designed and how each piece of verification can be matched to the progress of the design. The next phase of systems engineering will be concurrency across multiple pieces of the design, with real-time feedback across the flow driving modification after modification until tape-out.

This is already evident in the number of tools players around the fringes that are trying to solve unusual problems—companies like Atrenta, Jasper, Oasys, CoWare, and a slew of others that have made inroads and will continue to do so.

Taken as a whole, a confluence of factors ranging from technology to tools to business is coming to a head. Each node from here on gets tougher, not because one problem has to be solved but because more and more problems have to be solved simultaneously.

Moore’s Law will continue, but not in the form in which it was originally conceived. A FinFET is not a classic transistor, and 3D stacking moves things into a different plane. Moreover, the tools to create these new devices will continue to change, the way they are manufactured will change, and the skills necessary to create these structures will change.

Perhaps even more important, all of these changes will begin showing up over the next couple of process nodes. We are all living and working in interesting times, but whether that’s a blessing or a curse may depend on each engineer’s role, their training, their ability to accept change and possibly even where they’re located.
