Reducing Bottlenecks

Interconnect becomes the critical path in complex designs, particularly those requiring maximum performance or low power.


By Ann Steffora Mutschler
For the first time ever, China recently earned fastest supercomputer bragging rights with its Tianhe-1A supercomputer, which can perform 2.57 quadrillion computing operations per second. The machine has been successfully used to survey mines, forecast weather and design high-end machinery.

While the achievement has caused concern in some quarters, it is important to note that the Tianhe-1A uses the latest off-the-shelf Intel and Nvidia processors. What sets this supercomputer apart is the effort that went into streamlining the interconnects, after its designers realized that was the bottleneck.

“Engineers don’t always do that with SoCs. They don’t think about the critical path. The interconnect is now the critical path,” said Kurt Shuler, director of marketing at Arteris.

With so much time today spent on individual bits of IP, developers don’t always think about how it’s all going to connect into a system. The sheer number of IP blocks being put on SoCs today demands that the interconnect be addressed differently than with traditional techniques. The pain of trying to connect hundreds of blocks of IP and then make derivatives of the chip has become so intense that engineering teams are adopting new approaches to buses and crossbars in order to reduce the number of wires in an SoC.

“Mathematically, once you get to a certain point when you have so many blocks of IP, it becomes basically impossible to tie it together with wires because even though the transistors are shrinking to Moore’s Law, the wires don’t. The wires are the same size today that they were on a 286,” he pointed out.

Unfortunately, there have been only a limited number of ways to address this. Metal layers can be added, but that just makes the profile of the SoC look like “a wedding cake from hell,” with as many as nine metal layers, Shuler said. With each additional metal layer requiring a mask that costs millions of dollars, the problem has reached a critical pain point.

Another way to address interconnect challenges is with early floor planning. Given the smaller-diameter wires on the lower-level metal layers, initial floor planning has to be done early to map out the location of the CPU, memory, video, and other components on a block diagram. As more and more wires are added to the chip, they have to be spread further and further apart. And it isn’t just the wires; logic comes into play, too. When a bus or crossbar is used, additional arbitration logic must be added along with the extra wiring.

As a result, many semiconductor designers are turning to the network-on-chip to solve these issues. The concept of the network-on-chip originated with the realization that crossbars could no longer fit on the chip between the different IP blocks, and did nothing to address the growing number of wires.
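The wiring math behind that shift can be illustrated with a back-of-envelope sketch. This is purely an illustration with hypothetical numbers, not any vendor’s actual topology: a full crossbar needs a dedicated path from every initiator to every target, so its link count grows with the product of the two, while a packetized network-on-chip shares serialized links between routers and grows roughly linearly with the number of blocks.

```python
# Illustrative link-count comparison: crossbar vs. a mesh-style NoC.
# These formulas are simplifications for the scaling argument only.

def crossbar_links(initiators: int, targets: int) -> int:
    """A full crossbar dedicates a link from every initiator to every
    target, so the link count grows as the product of the two."""
    return initiators * targets

def noc_links(blocks: int, ports_per_router: int = 4) -> int:
    """A rough mesh-style NoC estimate: one router per block, each router
    wired to a handful of neighbors. Dividing by two avoids counting the
    same router-to-router link twice."""
    return blocks * ports_per_router // 2

# Crossbar wiring explodes as block counts climb; the NoC grows linearly.
for n in (8, 32, 128):
    print(f"{n:>4} blocks: crossbar={crossbar_links(n, n):>6}  "
          f"noc={noc_links(n):>4}")
```

With 128 initiators and 128 targets, the crossbar needs 16,384 dedicated paths versus a few hundred shared NoC links, which is one way to see why the quote above calls pure wire-based integration “basically impossible” at scale.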

Texas Instruments has been using network-on-chip technology from independent suppliers since the early part of the decade in its wireless business unit with a great deal of success, according to TI SoC architect James Aldis.

“We have an approach to using it that integrates system-level design, meaning transaction-level modeling and optimization of area and latency and throughput of the network and chip itself. That permits us to have networks-on-chip ready at the very earliest stages of system-on-chip integration. We are one of the most experienced users of really big networks-on-chip in the world of system-on-chip design and we are very, very positive about it,” he explained.

“The system-on-chip interconnect has in the past been a bottleneck for system-on-chip time-to-market, but that has not been the case for us for a long time now, and we are very pleased with that. I’m not trying to say there aren’t some significant challenges in the future, and certainly with the shrinking of process technologies things are going to get quite exciting in the next few years,” Aldis added.

With interconnects today, the pain is acute enough that people know that if they drag their feet any longer, kluge things together, or add another crossbar, the design is going to break, compromise the IP on the chip, or simply make the chip worse. Addressing the interconnect is no longer a “nice to have,” Shuler said.
