The Trouble With On-Chip Interfaces

Multicore systems on chip will require every device to also have a network on chip; standards battles are just beginning.


By Ed Sperling

The trouble with standards is that many of them arise out of need rather than through careful planning, and often unilaterally.

The typical scenario in chip design is that a company has an issue to solve, so it comes up with a solution. When it gets what it believes is critical mass behind the standard, the company that developed the solution opens it up to the rest of the industry, hoping that it will either attract new customers or get enough of a jump on the market to create incremental business.

This has been repeated with languages—hardware description and software programming, to name a couple—as well as intellectual property and just about every other tool used in chip design, development and verification. And when there is more than one approach, those competing and often incompatible technologies are typically integrated so that everything can work together and the industry can move on to the next challenge.

That appears to be happening now in the on-chip interface world, where ARM’s AMBA, IBM’s CoreConnect and OCP-IP are all battling for attention. Both AMBA and CoreConnect are entrenched in their respective markets, but with multicore chips becoming common, the separate approaches present challenges for engineers.

“All of this technology is good,” said Sudeep Pasricha, author of “On-Chip Communication Architectures: System on Chip Interconnect” and an assistant professor in Colorado State University’s department of electrical and computer engineering. “The bad is there are a lot of issues making it all work together. If you integrate an ARM core with a CoreConnect bus standard, there’s a mismatch of protocols. You can fix it. You can develop components that work with the different standards. But it’s expensive and it takes time.”
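The bridge components Pasricha describes can be pictured as a thin translation layer between two bus protocols. The sketch below is purely illustrative: the interface names and the one-call-versus-two-phase contrast are invented for the example, not taken from the actual AMBA or CoreConnect signal definitions.

```cpp
#include <cstdint>

// Hypothetical simplified initiator interface in the style of an AMBA-like
// bus: a read is a single call.
struct AmbaLikeMaster {
    virtual uint32_t read(uint32_t addr) = 0;
    virtual ~AmbaLikeMaster() = default;
};

// Hypothetical simplified target interface in the style of a
// CoreConnect-like bus: request and response are separate phases.
struct PlbLikeSlave {
    virtual void request(uint32_t addr) = 0;
    virtual uint32_t response() = 0;
    virtual ~PlbLikeSlave() = default;
};

// The bridge presents the AMBA-like interface to the core while driving the
// CoreConnect-like protocol on the other side. Designing and verifying this
// kind of glue is the expensive, time-consuming work Pasricha refers to.
class BusBridge : public AmbaLikeMaster {
public:
    explicit BusBridge(PlbLikeSlave& slave) : slave_(slave) {}
    uint32_t read(uint32_t addr) override {
        slave_.request(addr);      // phase 1: issue the address
        return slave_.response();  // phase 2: collect the data
    }
private:
    PlbLikeSlave& slave_;
};

// Toy slave that returns addr + 1, just to exercise the bridge.
struct ToyMemory : PlbLikeSlave {
    uint32_t pending = 0;
    void request(uint32_t addr) override { pending = addr; }
    uint32_t response() override { return pending + 1; }
};
```

In real silicon the mismatch covers burst lengths, handshakes and ordering rules, not just call shape, which is why each bridge must be verified against both protocol specifications.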

Multicore Multiplexing

The problem gets exponentially worse in multicore chips, where every device is basically a network on chip running under a system on chip. Cores need to communicate across that network, but frequently they are heterogeneous collections of IP. That means multiple vendors building technology on a single substrate using different protocols and interfaces. The opportunity for confusion increases with every core.

In fact, Pasricha said IBM is in the process of developing its own NoC for the Cell processor, one that uses packet switching and is aimed at chips with 50 to 100 cores. The interface is being custom-developed by IBM, he said.
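Packet switching at that core count usually means each message carries its own destination and every router makes a local hop decision. The details of IBM's design are not public in the article, so the following is a generic sketch of one common scheme, XY dimension-order routing on a 2D mesh, with all names invented for illustration.

```cpp
#include <cstdint>

// A minimal NoC packet: destination tile coordinates plus a payload word.
struct Packet {
    uint8_t dest_x, dest_y;
    uint32_t payload;
};

enum class Port { Local, East, West, North, South };

// XY dimension-order routing: travel along X until the column matches, then
// along Y. On a mesh this simple rule is deadlock-free.
Port route(uint8_t here_x, uint8_t here_y, const Packet& p) {
    if (p.dest_x > here_x) return Port::East;
    if (p.dest_x < here_x) return Port::West;
    if (p.dest_y > here_y) return Port::North;
    if (p.dest_y < here_y) return Port::South;
    return Port::Local;  // packet has arrived at its destination tile
}
```

The appeal for 50-to-100-core chips is that routing state stays constant per router regardless of core count, unlike a shared bus, whose arbitration cost grows with every attached device.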

OCP-IP, meanwhile, is looking to represent the middle ground in all of this, raising the level of abstraction by adding connections in much the same way that middleware does for disparate application software. “Our approach was to develop a socket to deal with all kinds of IP, whether it’s a graphics processor or a media processor,” said Ian Mackintosh, chairman of OCP-IP. “AMBA is very well accepted around the processor subsystem, but OCP (Open Core Protocol) will handle the broader system better. We also have worked closely with OSCI (the Open SystemC Initiative) so we are TLM 2.0 compatible. Our TL3 is compatible with TLM 2.0.”
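The socket idea Mackintosh describes can be sketched as a single contract that every IP block implements, so the interconnect never needs block-specific glue. This is a loose analogy, not OCP's actual signal-level definition; the interface, the fixed 4 KB address window and the two toy cores are all assumptions made for the example.

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical socket in the spirit of OCP's approach: every IP block,
// whether graphics or media, presents the same transaction contract.
struct CoreSocket {
    virtual uint32_t transact(uint32_t addr, uint32_t data) = 0;
    virtual ~CoreSocket() = default;
};

// Two toy heterogeneous cores behind the same socket.
struct GraphicsCore : CoreSocket {
    uint32_t transact(uint32_t, uint32_t data) override { return data * 2; }
};
struct MediaCore : CoreSocket {
    uint32_t transact(uint32_t, uint32_t data) override { return data + 3; }
};

// The interconnect routes by address range but speaks only one protocol.
class Interconnect {
public:
    void attach(uint32_t base, CoreSocket* core) {
        cores_.emplace_back(base, core);
    }
    uint32_t transact(uint32_t addr, uint32_t data) {
        for (auto& [base, core] : cores_)
            if (addr >= base && addr < base + 0x1000)  // assumed 4 KB window
                return core->transact(addr, data);
        return 0;  // unmapped address
    }
private:
    std::vector<std::pair<uint32_t, CoreSocket*>> cores_;
};
```

The payoff middleware-style abstractions promise is exactly this: adding a new kind of core means implementing one socket, not writing a new bridge for every protocol already on the chip.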

OCP-IP currently is benchmarking NoCs to ensure there is no performance degradation when various interfaces are used. “This is becoming critical because of the diverse sets of IP that are being used,” said Mackintosh. “We’re not dealing with just one processor anymore.”

And just to make matters even more confusing, the industry isn’t dealing with a single NoC approach, either. In addition to IBM’s new NoC and Texas Instruments’ OMAP platform, there are four other commercial NoC players: Sonics, Silistix (United States), Arteris (France), and Inventure (Japan).

The bottom line: Even as we resolve some of the confusion, more is being added.
