What used to be a straightforward progression from board to chipset to chip is no longer so obvious.
For as long as most semiconductor engineers can remember, chips with discrete functions started out on a printed circuit board, progressed into chip sets when it made sense and eventually were integrated onto the same die.
The primary motivations behind this trend were performance and cost: shorter distances, fewer mask layers, less silicon. But that equation has been changing over the past few process generations. The shortest distance between two points isn’t necessarily on the same chip anymore, and the most efficient way to get there certainly isn’t necessarily over a super-thin wire. On top of that, when all costs are taken into consideration, the cheapest way to get a chip to market may not be by using less silicon. The number of mask layers increases significantly at the most advanced nodes, with double patterning at 16/14nm and triple patterning expected to make its debut for at least some layers at 10nm.
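To see how that math can flip, consider a rough back-of-envelope sketch. The figures below (wafer prices, yields, die sizes and a packaging adder) are purely hypothetical placeholders, not numbers from any foundry, but they show how a design split across an advanced-node die and a mature-node die can, under some assumptions, come out cheaper than a single fully integrated die.

```python
# Hypothetical back-of-envelope die-cost sketch (illustrative numbers only,
# not actual foundry pricing): more mask layers raise wafer cost and can hurt
# yield, so the savings from integrating everything onto less silicon shrink.

import math

def die_cost(wafer_cost, die_area_mm2, wafer_diameter_mm=300, yield_rate=0.85):
    """Rough cost per good die: wafer cost spread over the yielded dies."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    dies_per_wafer = wafer_area / die_area_mm2   # ignores edge loss, scribe lanes
    return wafer_cost / (dies_per_wafer * yield_rate)

# Option A: one integrated 100 mm^2 die at an advanced double-patterned node
# (hypothetical $9,000 wafer, lower yield from the extra mask layers).
integrated = die_cost(wafer_cost=9000, die_area_mm2=100, yield_rate=0.75)

# Option B: a 70 mm^2 digital die at the advanced node plus a 40 mm^2 analog/IO
# die at a mature node (hypothetical $3,000 wafer), plus a packaging adder.
split = (die_cost(9000, 70, yield_rate=0.80)
         + die_cost(3000, 40, yield_rate=0.95)
         + 2.00)  # assumed per-unit 2.5D/packaging cost

print(f"integrated: ${integrated:.2f}  split: ${split:.2f}")
```

Shift any of those assumptions (yield, packaging cost, die sizes) and the answer changes, which is exactly why the partitioning decision is no longer obvious.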
All of this raises the level of uncertainty about how chips and functions will be partitioned in future designs, both on and off die, and that uncertainty is being exacerbated by the continually delayed EUV lithography and looming questions about just how ready and expensive 2.5D and 3D really are.
Nevertheless, Mark Bohr, a senior fellow and director of process architecture and integration at Intel, said both stacking approaches will become increasingly critical for future scaling. While Intel has created a low-cost alternative to an interposer, this is the company’s strongest statement of direction so far involving stacked die.
“2.5D and 3D integration are not a replacement for [scaling],” said Bohr. “This is not about cost reduction. The cost will increase, which is one reason why it is not mainstream yet.”
He noted that 3D is more amenable to low power due to thermal considerations and the power delivery network, while 2.5D will be more geared toward high performance.
But what exactly does that mean for the partitioning of functionality? How will it affect tools and methodologies? And what does this mean for future chip designs? The answers to those questions, and many others, are far from clear at this point.
Board vs. chip
The semiconductor industry is working on at least some of the more obvious issues, though. Memory, for instance, has become one of the hubs inside an SoC. Yet so many functions need to tap into memory that it has created issues across the design spectrum, from signal integrity to the layout of IP blocks to overall performance and power.
Moving at least some of that memory off-chip might have seemed like heresy in design methodology several process nodes ago. Not anymore. The idea is gaining lots of attention these days, as design teams figure out what type of memory to use for what functions, where to put it, how fast it can be accessed and how much energy that will require.
“If you partition the system, you have to think about whether you want DDR3 on the board or whether you want to suck it into the package,” said Dave Wiens, business development manager of the Systems Design Division at Mentor Graphics. “This may sound obvious enough, but if you make the package bigger you can have heat problems. So you do highly accelerated lifecycle testing, which is how you know what will happen. You model the thermal and the electrical performance. And you do all of this before you spend money building a prototype. You may come up with four or five different versions of power on chip, on board or on a Vcc net.”
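A minimal sketch of that kind of early trade study, using entirely invented power, thermal and cost placeholders rather than real figures, might look like the following: enumerate the candidate memory placements and screen them against a thermal and cost budget before anything is prototyped.

```python
# A minimal sketch of the early partitioning trade study Wiens describes:
# score a handful of hypothetical memory-placement options against thermal
# and cost budgets before committing to a prototype. All figures are invented
# placeholders for illustration.

CANDIDATES = [
    # (name, package_power_W, est_junction_temp_C, relative_cost)
    ("DDR3 on board",          2.0, 78, 1.00),
    ("DDR3 in package (PoP)",  2.4, 89, 1.10),
    ("Wide I/O-2 die stack",   1.6, 93, 1.35),
    ("HBM on interposer",      1.8, 85, 1.60),
]

TEMP_LIMIT_C = 90   # assumed junction temperature budget
COST_LIMIT   = 1.5  # assumed relative-cost ceiling

for name, power, temp, cost in CANDIDATES:
    ok = temp <= TEMP_LIMIT_C and cost <= COST_LIMIT
    print(f"{name:25s} {power:.1f} W  {temp} C  {cost:.2f}x  "
          f"{'candidate' if ok else 'rejected'}")
```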
With multiple voltage islands and a number of other techniques, as well as faster connectivity such as Wide I/O-2, it’s possible to time functions so they tap various memories in an orderly fashion no matter where they are located. As more functions and voltage islands are added into those designs, though, it becomes much more difficult to design, verify and achieve timing closure. “We used to have a single power and ground. Now we have more layers of power distribution levels,” said Wiens.
The Hybrid Memory Cube is one attempt at a solution to this problem, where multiple DRAM chips are stacked on a logic chip and connected internally with through-silicon vias (TSVs). It can be included in an ASIC, inside a package, or it can be connected externally using an interposer or some other high-speed interconnect to improve throughput and reduce RC delay. High-bandwidth memory also has been under development for 2.5D packages as an alternative approach.
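The RC argument is worth making concrete. The sketch below compares the distributed RC delay of a long, thin upper-level on-die wire with a wider interposer trace of the same length, using a crude parallel-plate capacitance model and an Elmore-style 0.38 factor. All dimensions and constants are illustrative assumptions, not measured values, but they show why a route that leaves the die can still be the faster path.

```python
# Illustrative-only RC delay comparison for the "shortest distance isn't a
# super-thin wire" point. Dimensions and material constants are hypothetical
# round numbers, not measurements from any product.

RHO_CU = 1.7e-8          # copper resistivity, ohm*m
EPS_OX = 3.9 * 8.85e-12  # SiO2-like dielectric permittivity, F/m

def distributed_rc_delay(length_m, width_m, thickness_m, dielectric_m):
    """Elmore-style delay (~0.38*R*C) of a distributed wire over a ground
    plane, using a crude parallel-plate capacitance model (no fringing)."""
    r_total = RHO_CU * length_m / (width_m * thickness_m)
    c_total = EPS_OX * width_m * length_m / dielectric_m
    return 0.38 * r_total * c_total

# A 5mm route in a thin on-die wire (hypothetical 0.1um x 0.2um cross section).
on_die = distributed_rc_delay(5e-3, 0.1e-6, 0.2e-6, 0.1e-6)

# The same 5mm route on a wider, thicker interposer trace (2um x 2um, 1um dielectric).
interposer = distributed_rc_delay(5e-3, 2e-6, 2e-6, 1e-6)

print(f"on-die: {on_die*1e9:.2f} ns, interposer: {interposer*1e12:.2f} ps")
```

On-die repeaters can close much of that gap, but they do so by burning power, which is the other half of the partitioning equation.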
Whether the memory sits on a board, in a package, or even on the same chip depends on the needs for a particular design, including form factor, cost constraints and performance needs. But what’s important to note is that these new advanced memory architectures are being designed in a way that can be separate from the SoC rather than integrated onto a piece of silicon to allow for more granular partitioning of functions and components.
In many respects, 2.5D is a PCB-type approach within a package, but the interconnects between components are faster and the distances involved are shorter—sometimes even shorter than the distances across a single die. “It looks a lot like board-level design, but you need system-level design approaches to make it work,” said Mike Gianfagna, vice president of marketing at eSilicon.
That sentiment is echoed by Vasan Karighattam, senior director of architecture for SoC and SSW engineering at Open-Silicon: “The applications require you to think at the system level before you go down to the ASIC level.”
Analog vs. digital
What functionality remains on a chip needs to be partitioned in multiple ways, as well.
Of particular concern on the design side is the difficulty in scaling analog. While analog design teams have been able to scale circuits well beyond where most experts predicted, it’s becoming much tougher in the age of multi-patterning and shrinking feature sizes below 28nm.
It hasn’t been easy on the digital side, either, particularly with the migration from 28nm to 16/14nm finFETs using double patterning. There is so much data to contend with, from a massive number of electrical interactions, the physical layout, and the equipment needed to measure, manufacture and test these complex SoCs, that achieving sufficient yield is a nightmare. Even Intel, which has been the industry pacesetter in moving from one node to the next, had delays in getting 14nm out the door.
“There have been a lot of challenges on the bring-up of finFETs,” said John Kibarian, president and CEO of PDF Solutions. “The big challenge is being able to combine many types of information and apply physics to that. It’s fault-detection data plus product yield plus test data.”
At least in digital design there are restrictive design rules to limit shapes, and tools can manage and account for layout irregularities and process variability once they are understood well enough. It’s a completely different story with analog circuits, which don’t benefit from shrinking.
“Dramatic increases in noise, process variation and interconnect parasitics mean that you’re fighting the process, making the design process more complicated and difficult without really getting real benefits,” said one expert source. “For the most part, big digital is the reason to go down to the smallest feature sizes, so the analog we see being done at those nodes are typically IP blocks (PLLs, I/O interfaces, etc.) meant to be integrated into large digital parts. If the choice of process node is dictated by analog functionality, we see most companies choose the analog specialty processes from foundries like X-Fab, Tower/Jazz, Dongbu and others at 130, 180 or even 350nm.”
This becomes much worse at the most advanced nodes, where oxides and wires are thinner and noise is even higher.
“We definitely see 14nm finFET designs having issues with analog and mixed signal circuits,” said Charles Janac, president and CEO of Arteris. “Analog functions are too hard to shrink. We’re still seeing a number of designs that are 14nm, and people are planning 10nm, but those are for other functions such as processors, GPUs and DSPs. For sensors, audio and modems, it’s becoming much harder to scale. You may get some analog functionality in there, but it will be more cost-effective to do a 2.5D and 3D package that separates out the analog and RF, memory and the digital circuits into separate chips.”
It remains to be seen just how the market moves forward at the most advanced nodes. Back when 130nm was the leading edge of process technology, large analog vendors predicted the end of mixed-signal integration, with analog and digital moving onto separate chips. While analog automation tools still lag those for digital circuits, and many analog designers resist using EDA for anything in their world, the analog design process has become more automated and integration has continued. But just how far it can be extended beyond 28nm for a reasonable return on investment in both time and manpower is unknown.
“We very well could see partition choices based upon process choices,” said Drew Wingard, chief technology officer at Sonics. “When we began to build SoCs we were forced to master skills we hadn’t mastered before, such as wireless. Right now, sensors are on a separate die. But you can think about a 2.5D and 3D package to put them together.”
Wingard noted this will be important for the sake of flexibility in future designs, as well, with various types of memory, digital circuits, mixed signal circuits, and sensors being combined into a single device. Rather than trying to integrate analog functionality for a design that may change, it might be far cheaper in the long term to build it all separately and then connect it together.
“We all expect the IoT to be huge, but we don’t yet know what the killer features will be,” he said. “We know the end devices will be battery- and form-factor sensitive, but you still need to decide which collection of peripherals you’re going to support. That’s not just about adding in cards. So it becomes imperative to look at more system-in-package approaches. You may decide you want communication to be in more trailing nodes.”
Power, signal integrity, routing
How functionality gets partitioned also has an impact on a variety of other parts of a design, not to mention the teams working on it.
“It’s pretty simple to look at a design and say a spiral inductor for RF is taking up a lot of real estate and should be moved to the package,” said Brad Griffin, product marketing director for the Sigrity product line at Cadence. “We can simulate that even though the database crosses over from the chip to the package. But there’s another side of this where there are different types of data in the package. You can marry them together at a high level, but so far no one can bring them all together for full-system verification where you’re dealing with signal integrity and power integrity.”
This is reflected in the expertise of design teams, as well. The only approach that has proven successful for designing and verifying complex SoCs has been divide and conquer, which is reflected in IP blocks, subsystems and the EDA tools available today. Applying those tools in a stacked-die or complex-package world, where partitions are more complex and in some cases user- and power-dependent, requires a much more integrated approach across areas that never had to be integrated in the past. And it requires more cross-domain knowledge among the various groups in the design flow.
“The challenge is there are so many different ways to connect everything together and there is much more exchange of data,” said Ron Lowman, IoT strategic marketing manager at Synopsys. “There is always the goal of interoperability, but there also are different ways to do that. For appliance customers, it’s largely about WiFi. For wearables, it’s largely Bluetooth Low Energy.”
There also is enough churn in the market that chipmakers will try to re-use whatever they can until they are sure where to place their investments.
“Every time there is more innovation, you will see more roll-back,” said Open-Silicon’s Karighattam. “Smaller companies are definitely innovating here and adding features, and you’re seeing more growth in areas such as vision processes and what’s achievable, but all of this will take time.”
Until things are finally sorted out, partitioning will continue to be in almost constant flux as companies figure out what to move where, what to develop from scratch, what to re-use, and how to target new and existing markets most effectively. Uncertainty and the unknown require increasingly complex responses, and tools and methodologies will have to be tweaked to keep up.