SoC Design In 5 Years

More derivatives, more software and more complexity will force big changes in how chips are designed—and who’s successful.


By Ed Sperling
The semiconductor industry is used to looking at changes every couple of years, based upon the progression of Moore’s Law. But look out further, over the next five years when the most advanced process node is somewhere between 14nm and 16nm, and the job of designing and manufacturing an SoC will look very different.

At the center of this change are three very significant trends:

  1. Cost increases. It will cost way too much to develop custom SoCs using current methodologies, tools and strategies at 20nm and 14nm, both from an NRE and from a time-to-market perspective.
  2. Software plus hardware. Software is becoming a huge part of the chip design, but while hardware design is getting more complex it’s still improving at a far faster rate than software development.
  3. Time to vertical markets. Differentiation will be based on the ability to quickly cobble together chips that meet the needs of different markets—either for vertical markets or specific customers—at a reasonable price point.

Derivatives market
One of the biggest shifts will be in the integration of in-house and external IP blocks, subsystems, and ultimately full die into chips—and in being able to create far more derivatives out of those various pieces more quickly.

Freescale, for one, already has embarked on a plan to figure out where it can add the most value in its chip development, which is primarily multiprocessing cores and advanced interconnects. In the future it will buy much of the standardized I/O and memory technology that it built in the past to focus its efforts on core hardware components, software development and integration. Derivatives are a key part of that strategy, and it is emerging as one of the recurring themes across all SoC development at future nodes.

“The cost of all development will go up, so you have to get more efficient by developing simple derivatives,” said Lisa Su, Freescale’s senior vice president and general manager of networking and multimedia. “Silicon design has to become more modular. A lot of the differentiation will be in the software stack.”

She said the challenge is to find translators—people who understand how to bridge the gap between hardware and software. “To build really good hardware you have to understand how the software programming model works. Software drives the hardware.”

Freescale is hardly alone in recognizing this shift into derivatives. But derivatives for a large fabless company such as Freescale have a different meaning than derivatives for a midsized company, which will rely much more heavily on outside service providers such as Open-Silicon, eSilicon and Global Unichip to build these kinds of chips.

“What we’re talking about is not breaking new ground or creating major IP,” said Naveed Sherwani, president and CEO of Open-Silicon. “To really address the cost issue there will need to be derivatives outside of companies, and a lot more companies doing derivatives. There will be a consolidation in design services, which will be fully integrated in derivative design. Derivatives keep the complexity contained.”

A complex SoC might cost $50 million to $70 million to design at advanced nodes, but 10 or more derivative chips might only cost $5 million to $7 million each. That makes them much more affordable and allows companies to put their dollars where they can best be used for differentiation.
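The arithmetic behind that argument can be sketched as follows. The dollar figures are the illustrative ranges quoted above; the midpoint values and the comparison against an all-custom program are assumptions made for the sake of the example:

```python
# Sketch: a platform-plus-derivatives budget versus building the same
# number of chips as full custom SoCs. Figures are the article's
# illustrative ranges (in $M), not real program budgets.

def program_cost(platform_cost, derivative_cost, num_derivatives):
    """Total cost of one platform SoC plus its derivative chips, in $M."""
    return platform_cost + derivative_cost * num_derivatives

# Midpoints of the quoted ranges: $50M-$70M platform, $5M-$7M per derivative.
platform = 60.0
derivative = 6.0
n = 10

with_derivatives = program_cost(platform, derivative, n)  # 1 platform + 10 derivatives
all_custom = platform * (1 + n)                           # 11 full custom designs instead

print(f"Platform + {n} derivatives: ${with_derivatives:.0f}M")
print(f"{n + 1} full custom SoCs:   ${all_custom:.0f}M")
```

Under those assumed midpoints, the derivative strategy comes in at roughly a fifth of the all-custom cost, which is the efficiency argument the service providers are making.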

Abstracting up
From a tools standpoint, there needs to be a giant step to another level of abstraction. There simply is too much detail and data to process in a complex SoC that today contains up to 100 million gates, and which in a couple of process nodes could contain billions of gates.

“The existing way of doing things will break down,” said Ravi Varadarajan, fellow at Atrenta. “Design closure is still being done at the place and route level with power estimation. That needs to be raised by a level of abstraction. With a higher level of abstraction you can do more with less effort. This has been a long time coming. We need to elevate this process from the gate level and RTL so we can put together a system. I was speaking with one customer that put together a complex SoC. It took them 1-1/2 to 2 years to close the chip. They were going back and forth over whose problem it was.”

He added that this will become particularly important in 2.5D stacking, where integration will have to occur at the subsystem level. But whether a high-speed bus or a low-speed bus is used to connect those subsystems, all of that should be transparent to the designer. Similarly, exploration at the SoC level should allow changes in the memories attached to multicore processors to be reflected in other parts of the chip architecture.

There is almost universal agreement about the need to raise the level of abstraction. The question is when, by whom, and whether that abstraction level will include enough detail to be useful—particularly when making architectural tradeoffs involving different memory configurations, meaningful power measurements, and an increasing amount of third-party IP. Block re-use is expected to increase to more than 70% of an SoC design in 2015.

“One of the main technology challenges is block design and re-use,” said Frank Schirrmeister, director of product marketing for system-level solutions at Synopsys. “The main issue there is integration. From the technology side we need to figure out connectivity. We can take care of the registers at all locations, and then you’ve got companies like Arteris, Sonics and ARM that are addressing how you put it all together. But you’ve got lots of blocks and assembly of blocks.”

Those blocks increasingly are being assembled into full subsystems, too, which are beginning to show up on the market complete with software drivers.

Software first, software last, software everywhere
Software is a complicating factor, though, rather than a simple solution. While there has been lots of discussion around the challenge of getting hardware engineers to talk with software engineers, there has been far less discussion about just what each piece of software code should do in designs.

One of the reasons is that there is no simple answer. It all depends on the market, the device, and the motivations of the companies developing the chips. Are they looking to maximize performance, minimize power, or simply build lots of derivative chips that can function using the same basic hardware?

“There are different ecosystems forming,” said Jack Browne, senior vice president of sales and marketing at Sonics. “Before you might partner with Wind River and MontaVista. Now you partner with the application developers.”

What this means for chip designers is that it’s no longer a closed world. In the past, most chips were designed to specs that focused on performance, process technology, power budgets or some other well-defined set of rules that were largely hardware-specific. In the future those boundaries will be far less well defined, and they will need to be modified quickly based upon individual markets. Software is one way of doing that, provided the hardware can take advantage of the software.

There has been a fair amount of discussion already about devices being software-first or hardware-first, with some engineers and companies taking a middle position and saying there needs to be a better bridge. In the future, the decision of how software is written may include everything from popular applications and use models to vertical and regional markets, as well as all the old rules.

“In the mobile world, where you have Google TV and Android running on high-performance platforms like MIPS and ARM, that’s where application-specificity comes in,” said Synopsys’ Schirrmeister. “In the automotive world there’s AUTOSAR (automotive open system architecture), which separates the software from the hardware so you can control the whole stack from the hardware to the software. The challenge is which version you’re on. Then you’ve got the semiconductor manufacturers and IP providers doing things like Linaro on the Linux side to make sure the right platforms are supported.”

Business effects
While the technology and process changes are significant, they are at least better understood than how that technology will intersect with business. Chips will still be built. More components and services will be outsourced. The price will be more affordable to some, less affordable to others. And software will have to be written more quickly and verified more easily than it is today.

Where things get fuzzy, though, involves who is best positioned to take advantage of these changes and who will reap the lion’s share of the rewards for getting the formula right.

Sonics’ Browne says the market has split into small chipmakers that want to push the leading edge of performance and/or low power, those that are looking for a unique volume play in the consumer electronics/smartphone market, and those looking to break into new markets such as portable medical devices once the price point drops low enough.

“The question is how quickly you can do derivatives, or whether you can compete with the old superchip approach,” noted Browne.

Freescale’s Su has a similar take on the market. She said the goal is to start with a platform that can bridge many end markets, then customize it for specific uses. “We’ve been working on ARM-based products that can be used in auto infotainment as well as industrial and medical markets. You start with a general processor as a test chip and then branch off from there.”

But who wins from those designs, and who ultimately loses, is unknown at this point. And while design and development will be built on years of evolutionary changes, the business ramifications of those changes are far less obvious.
