Having one interconnect protocol inside an SoC would be nice, but reality is much more complicated.
By Ed Sperling
Having a single bus protocol is something most SoC engineers can only dream about. Reality is often a jumble of protocols determined by the IP they use, which can slow down a design’s progress.
The problem stems largely from re-use and legacy IP. While it might be convenient to use only ARM's AXI standard protocol, most chips are a combination of IP blocks tied to specific protocols, which require complex interconnects, add significant time to the verification process, and often have an impact on performance.
“It’s never AMBA, Sonics or Arteris for everything,” said Mike Gianfagna, vice president of marketing for Atrenta. “There are a lot of configurations on a chip. You’ve got crossbar switching and arbitration schemes. The big question, particularly when you get into 3D stacking, is which one you should use. So you come up with half a dozen configurations and you experiment for power, performance and area.”
He said the on-chip interconnect problem is one more complexity issue that has to be ironed out. But it also has some unusual pitfalls. “An IP block is like an amoeba. It can morph in unpredictable ways. You need to be able to analyze that up front.”
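The kind of experimentation Gianfagna describes often starts as nothing more than scoring a few candidate fabric configurations against power, performance and area budgets. A minimal sketch of that idea in Python follows; the configuration names and figures are invented for illustration and do not come from Atrenta or any real tool flow.

    # Hypothetical early-stage comparison of interconnect configurations.
    # All names and numbers are illustrative, not from any real design or tool.
    candidates = {
        "shared-bus":    {"power_mw": 40,  "latency_ns": 28, "area_mm2": 0.8},
        "2x2-crossbar":  {"power_mw": 65,  "latency_ns": 12, "area_mm2": 1.4},
        "full-crossbar": {"power_mw": 110, "latency_ns": 7,  "area_mm2": 2.6},
    }

    # Weight the three axes according to what the product cares about most.
    weights = {"power_mw": 0.5, "latency_ns": 0.3, "area_mm2": 0.2}

    def score(metrics):
        # Lower is better on every axis, so a simple weighted sum works here.
        return sum(weights[k] * v for k, v in metrics.items())

    best = min(candidates, key=lambda name: score(candidates[name]))
    for name, metrics in candidates.items():
        print(f"{name:14s} score={score(metrics):6.1f}  {metrics}")
    print("cheapest under these weights:", best)

In practice the scoring would come from power estimation and performance models rather than hand-entered numbers, but the shape of the exercise is the same: a handful of fabrics, a few weighted criteria, and a quick way to prune the options.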
How we ended up here
There have been a number of attempts over the past 15 years to avoid this kind of problem. In 1996, when the Virtual Socket Interface Alliance (VSIA) was formed, SoCs were still in their infancy even though more and more chips included some sort of processor. The hot topic at that time was whether to decouple the processor from the rest of the chip and isolate components from the interconnect. That gave rise to a handful of ARM standard buses.
“The job of the interconnect fabric is to just make it work,” said Drew Wingard, CTO of Sonics. “But what’s happening in designs is the total level of integration is going through the roof. We’re now seeing chips with more than 100 IP cores, MPEG encoders and decoders and Huffman algorithms, and you need the interconnect in a subsystem to be a good match for what you’re trying to do. The interconnect needs to be optimized for that.”
But within a single design there may be dozens of interconnects from multiple vendors, including some that were internally developed by the chipmaker.
“There will still be custom semiconductor companies doing their own interconnects,” Wingard said. “But for the bulk of the design, the number of interface standards generally is going down and most IP cores are much more latency tolerant than they used to be.”
Past, present and future
To a large extent, SoC developers are suffering from the same kinds of backward-compatibility issues that software and processor vendors have been wrestling with for decades. What makes it an issue now is the level of integration and the emphasis on re-use of IP because of cost and time-to-market constraints.
“If you look at the big companies, there is a long legacy of using things so they have a lot more heterogeneous stuff,” said Laurent Moll, CTO at Arteris. “Some of it they got through acquisition. If you were to create a brand new company—and there aren’t many of those these days—with a clean sheet of paper they would most likely pick the IP that is homogeneous. So you might settle on AXI as the dominant protocol, and you might even be able to achieve that today because most commercial IP is available with AXI.”
He said the first reason companies choose a homogeneous interconnect fabric is integration and verification. “It’s easier to have one person be the expert on a team than to have to work with a bunch of other experts. It also takes less time to verify, requires fewer tools and takes less time to integrate.”
Also key is performance, but that’s far less of a clear-cut decision because not all IP behaves the same way in different designs. “There are sets of protocols that don’t like to talk with each other,” Moll said. “Even the same protocols sometimes don’t work as well together as you would expect.”
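What “not working well together” usually comes down to is a bridge translating transaction attributes that don’t map one-to-one: burst lengths, ordering rules, response types. The rough Python sketch below illustrates just the burst-splitting case with two invented protocols; real AXI, AHB or OCP bridges have to reconcile far more than this.

    # Toy illustration of a protocol bridge splitting bursts.
    # "ProtoA" and "ProtoB" are invented; real protocols differ in many more
    # ways (ordering, outstanding transactions, response types).
    from dataclasses import dataclass

    @dataclass
    class ProtoARead:
        addr: int
        burst_len: int        # ProtoA allows long bursts

    @dataclass
    class ProtoBRead:
        addr: int
        burst_len: int        # ProtoB caps bursts at 4 beats

    def bridge_a_to_b(req, max_b_burst=4):
        # Split one ProtoA burst into several ProtoB bursts. Every split adds
        # arbitration and response-reassembly work, which is where the latency
        # and verification cost comes from.
        out, remaining, addr = [], req.burst_len, req.addr
        while remaining > 0:
            beats = min(remaining, max_b_burst)
            out.append(ProtoBRead(addr=addr, burst_len=beats))
            addr += beats * 8     # assume 8-byte beats for this sketch
            remaining -= beats
        return out

    print(bridge_a_to_b(ProtoARead(addr=0x1000, burst_len=10)))

A single 10-beat request comes out the other side as three smaller ones, each needing its own arbitration slot and its own response to be stitched back together, which is one reason mixed-protocol fabrics cost both performance and verification time.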
Even more complexity
Just getting these various IP blocks to talk with each other is hard enough. Doing it efficiently is as much art as science. But at the center of any discussion of power there is almost always the interconnect fabric.
“Logically, the longest wires on a chip are in the interconnect,” said Sonics’ Wingard. “You have to get to all four edges of the chip. That’s why interconnect architectures are frequently restructured to decrease the time it takes to get a signal from one side to the other.”
Wide I/O and stacked die are being viewed as a way of dramatically reducing distances on a chip by running connections through an interposer. To a large extent, that’s an interconnect problem. With non-uniform memory characteristics, one chip in the stack may be one or two clock ticks closer than another, which in turn improves throughput and scalability. It also allows designers to load balance data structures and traffic, Wingard said.
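Wingard’s load-balancing point can be pictured as a routing decision: each initiator sees several memory channels at different distances, and traffic is steered toward the best combination of proximity and backlog. The Python sketch below uses invented tick counts to show the idea; it is not a model of any particular wide I/O stack.

    # Illustrative only: steer requests to the memory channel with the lowest
    # estimated service time (fixed hop latency plus queue already waiting).
    # The latencies are invented, not measurements of any real die stack.
    channels = {
        "ch0_near": {"hop_latency": 2, "queued_beats": 0},   # same die
        "ch1_mid":  {"hop_latency": 3, "queued_beats": 0},   # one hop away
        "ch2_far":  {"hop_latency": 5, "queued_beats": 0},   # across the interposer
    }

    def pick_channel():
        # Estimated completion time = distance in ticks + current backlog.
        return min(channels, key=lambda c: channels[c]["hop_latency"]
                                           + channels[c]["queued_beats"])

    # Send a stream of equal-sized requests and watch the traffic spread out.
    for i in range(12):
        ch = pick_channel()
        channels[ch]["queued_beats"] += 4   # each request adds 4 beats of backlog
        print(f"request {i:2d} -> {ch}")

Early requests all pick the near channel, but as its backlog grows the traffic naturally spills over to the farther ones, which is the scalability benefit Wingard describes.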
The downside of this approach, again, is choice—too many choices, in fact.
“The Achilles heel of 3D is too many options,” said Atrenta’s Gianfagna. “You have to reduce the number of choices quickly. So even when you come up with your bus architectures, power domain management is still a big deal.”