SoCs Go Mainstream

The laws of physics are changing how chips are designed, even away from the bleeding edge; opportunities and risks abound.

By Ed Sperling
The monolithic ASIC, which has been the bread-and-butter of chipmakers for decades, is giving way to systems on a chip among mainstream chipmakers and at mainstream process nodes.

This shift has been overhyped, overpromised and slow to materialize. While SoCs have been common for years in mobile electronics and for high-performance platforms such as gaming consoles, they have always been more expensive to design and manufacture. But at 40nm and beyond, and increasingly even at 65nm and 90nm, physics, a growing amount of software and the inclusion of more third-party IP are forcing changes in best practices for designing chips. And as the industry heads into 2.5D stacking over the next couple of years, subsystems that can be part of systems in package will add even greater emphasis, as well as some new wrinkles, to the shift.

“It’s happening now and it will continue to happen,” said Tom Lantsch, executive vice president of corporate development at ARM. “We’re seeing application processors that are heterogeneous multicore on the same chip with graphics engines and video engines and they’re now running Symbian instructions. A lot of this shift is based on power. There’s a realization that you can do things other ways more efficiently.”

So what exactly is the difference between an SoC and an ASIC? The common definition is that an SoC includes one or more processors plus software and peripherals, making it a complete system rather than an ASIC, which is suited to a very specific task.

“The ASIC customer used to be the system house,” said Hans Bouwmeester, director of IP at Open-Silicon. “But now the system houses and fabless semiconductor companies are focusing on horizontal tasks. It’s not divided by front end and back end anymore. It’s horizontal and vertical, which is re-use or availability of IP and competence. If you look at ARM’s chips, they’re applicable across multiple domains and customers are willing to outsource that development to them.”

This shift hasn’t been lost on Open-Silicon or eSilicon, both of which are shifting from an ASIC to an SoC approach. And both say the SoC world will explode once the industry begins adopting 2.5D stacking over the next couple of years, a move that also may include more emphasis on FPGA platforms as part of the 2.5D stack.

Partition issues
At least part of what an SoC brings to the design table is flexibility. Designers can try different things, and each new process node offers more room to experiment. But silicon is never free, even when it is available, and shrinking feature sizes create their own set of problems at every new node.

The typical method of dealing with these problems is a “divide and conquer” approach. If there are 500 blocks, those blocks can be aggregated according to function, shared resources, or some other scheme. But in an SoC, finding the right line on which to base that partitioning is more difficult. Even worse, it can change, depending upon which market a chip will serve.
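As a rough illustration of that divide-and-conquer step, the Python sketch below groups a handful of invented blocks by a chosen attribute. The block names, the attributes and the partition_blocks helper are hypothetical stand-ins, not taken from any real flow; the point is only that cutting by function and cutting by shared resource produce different partitions.

```python
from collections import defaultdict

# Hypothetical block descriptions: a name plus the attributes a team
# might partition on (function, shared resource, power domain, ...).
blocks = [
    {"name": "cpu0",    "function": "compute",  "shared": "ddr_ctrl"},
    {"name": "gpu0",    "function": "graphics", "shared": "ddr_ctrl"},
    {"name": "vid_dec", "function": "video",    "shared": "sram_pool"},
    {"name": "dsp0",    "function": "video",    "shared": "sram_pool"},
    {"name": "usb_phy", "function": "io",       "shared": "none"},
]

def partition_blocks(blocks, key):
    """Group blocks by the chosen attribute ('function', 'shared', ...)."""
    groups = defaultdict(list)
    for blk in blocks:
        groups[blk[key]].append(blk["name"])
    return dict(groups)

# Partitioning by function and by shared resource gives different cuts,
# which is why the "right line" can move with the target market.
print(partition_blocks(blocks, "function"))
print(partition_blocks(blocks, "shared"))
```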

“If you do a flat design you always get the best quality,” said Sudhaker Jilla, product marketing director at Mentor Graphics. “But as the chip grows the runtime becomes unbearable. It can go from hours to more than a week. The alternative is to use a hierarchical approach, but then you have a problem of performance. You want the turnaround time of a hierarchical flow, but the quality of a flat one.”

The reality is that both are needed for SoCs, but that also means a significant learning curve for design teams. They need to learn new tools and figure out how to partition their designs, whether by blocks, geography, or IP.

“The key is that companies need to figure out how to divide and conquer,” said Jilla. “Will it be dual-core or quad-core? Or will it be multiple different cores?”

More tools, more IP
For EDA and IP vendors, this is only good news. Selling to the biggest chipmakers has always been lucrative, but continuing to sell to those same customers while also adding incremental business is a big win. FPGA tools have been sufficient, for example, to do basic layout and verification, but put that same FPGA into an SoC or a stacked die configuration, add software and third-party IP, and then try to integrate it all together and the complexity easily outpaces what the typical FPGA tool can do.

“The biggest trend is that people are spending 35% to 40% of their effort writing software,” said John Koeter, vice president of marketing for Synopsys’ solutions group. “When you get down to 28nm or 20nm, companies are spending more than 50% of the time to market developing software. If you look at an SoC today, it’s usually two to four host CPUs, two to four GPUs, and it’s increasingly heterogeneous.”

He said that opens up huge opportunities for linking software to hardware, and virtualizing the hardware and software. It also opens up opportunities for IP, tools to help integrate that IP, exploratory tools that can show the tradeoffs at the architectural stage, and a suite of verification tools and verification IP.

“Just from a verification standpoint you’ve got to tackle this at several levels,” said Pete Heller, senior product line manager at Cadence. “You’ve got to look at it from the subsystem and block level for functional reasons. And you’ve got to look at the full SoC and pump real data through the system so you can get as much real-life validation as you can. Then there’s a third level, which is to put it into the hands of 100,000 people and let them be the guinea pigs after you’ve already worked out all the bugs you can.”
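Purely as a toy illustration of the first two of those levels, and not of any particular verification environment, the Python sketch below uses an invented Crc8Block model: a few directed checks stand in for block-level functional tests, and a loop that pumps randomized frames end to end stands in for system-level validation with real data.

```python
import random

class Crc8Block:
    """Toy stand-in for a hardware block: plain CRC-8 (poly 0x07) over a byte stream."""
    POLY = 0x07

    def compute(self, data: bytes) -> int:
        crc = 0
        for byte in data:
            crc ^= byte
            for _ in range(8):
                if crc & 0x80:
                    crc = ((crc << 1) ^ self.POLY) & 0xFF
                else:
                    crc = (crc << 1) & 0xFF
        return crc

def block_level_test():
    """Level 1: directed, block-level functional checks on known values."""
    blk = Crc8Block()
    assert blk.compute(b"") == 0x00            # empty-stream corner case
    assert blk.compute(b"\x00") == 0x00        # all-zero input stays zero
    assert blk.compute(b"123456789") == 0xF4   # standard CRC-8 check value

def system_level_test(packets: int = 1000):
    """Level 2: pump randomized traffic end to end; clean frames must pass,
    single-bit corruption must be flagged."""
    blk = Crc8Block()
    for _ in range(packets):
        payload = bytes(random.randrange(256) for _ in range(random.randrange(1, 64)))
        frame = payload + bytes([blk.compute(payload)])       # sender appends CRC
        assert blk.compute(frame[:-1]) == frame[-1]           # receiver accepts clean frame
        corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]      # flip one bit in flight
        assert blk.compute(corrupted[:-1]) != corrupted[-1]   # receiver catches the error

block_level_test()
system_level_test()
print("block-level and system-level checks passed")
```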

What is a subsystem?
That leads to the next phase of this development scheme: fully integrated and tested subsystems, which are expected to begin hitting the market over the next year in preparation for more SoCs and 2.5D stacked die.

“If you look back 10 years when Gartner was tracking design starts, in 2000 there were about 20,000 chip designs a year,” said Drew Wingard, CTO at Sonics. “Now we’re seeing more SoCs because you have processors sprinkled around the chip that may or may not even show up in the bill of materials and that you may or may not have access to.”

Increasingly, those pieces will be combined into fully integrated systems that include IP, possibly processors, and perhaps even shared resources such as memory with standardized interfaces. That approach will become particularly useful when chips can be stacked, either in 2.5D or 3D, and it will completely render the number of design starts meaningless. There will be more design starts, but the final outcome may be subsystems rather than chips—or chips that are part of a stack rather than the fully integrated stack itself.
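A minimal sketch of what such a standardized interface might look like, with every name (Subsystem, MemoryMappedSubsystem, the read/write methods) invented for illustration: each subsystem hides its own IP and processors behind a common contract, and two instances can share a memory pool through it.

```python
from abc import ABC, abstractmethod

class Subsystem(ABC):
    """Hypothetical contract a packaged subsystem would implement so an
    SoC or 2.5D-stack integrator can compose parts behind one interface."""

    @abstractmethod
    def read(self, addr: int) -> int: ...

    @abstractmethod
    def write(self, addr: int, value: int) -> None: ...

class MemoryMappedSubsystem(Subsystem):
    """Invented example: internal IP and processors hidden behind a
    memory-mapped window, backed by a resource shared with other subsystems."""

    def __init__(self, shared_mem: dict):
        self.mem = shared_mem  # shared memory pool

    def read(self, addr: int) -> int:
        return self.mem.get(addr, 0)

    def write(self, addr: int, value: int) -> None:
        self.mem[addr] = value & 0xFFFFFFFF  # clamp to a 32-bit word

# Integration sketch: two subsystem instances sharing one memory pool
# through the same standardized interface.
pool = {}
producer = MemoryMappedSubsystem(pool)
consumer = MemoryMappedSubsystem(pool)
producer.write(0x1000, 42)
assert consumer.read(0x1000) == 42
```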

“A general-purpose processor may not be the most efficient way to accomplish a task,” said Wingard. “This has led to a huge discussion around subsystems. Not everyone believes each function needs a processor. But how independent is a subsystem going to be? You can quickly get into a situation where you have enough performance most of the time, but there may be specific and critical sequences where you don’t have enough.”

There has been a lot of talk about subsystems across the industry lately, and companies are positioning themselves to take advantage of this shift. But the challenges of making this all work are huge.

“This is similar to the challenge embedded companies have faced for a long time,” said Simon Butler, CEO of Methodics. “It’s one thing if you’re dealing with a homogeneous environment where the tools talk together. But when you have to bring all these different pieces together and make sure all the parts are aligned, it’s going to be very difficult.”

Past, present and future
Still, the road to SoCs has been set and the shift is gaining momentum. That became very obvious at the Consumer Electronics Show over the past couple of years.

“What’s changed is the user experience is now a combination of hardware and software,” said Mike Gianfagna, vice president of marketing at Atrenta. “We’re seeing the consumerization of electronics. The idea isn’t new. Joe Costello was talking about this a decade ago. But it’s finally happening. The semiconductor content is enabling the user experience.”

That will only increase as future designs allow more choices of IP, software, processors and ultimately subsystems on a chip—and more intelligent tradeoffs to make it all work faster and cheaper while using less energy.


