SPONSOR BLOG

Top 5 Reasons The SoC Interconnect Matters

If you can’t optimize your interconnect, you’ve neglected this area too long.


The on-chip interconnect is the one area of SoC design that still does not receive the priority it deserves. It’s like Rodney Dangerfield: It gets no respect. That is changing, however, because of rising chip complexity, smaller process dimensions, and growing acknowledgement that, in a world where design teams commercially license most of a chip’s critical semiconductor IP (like CPUs, GPUs, and memory controllers), the control point for product differentiation is the on-chip interconnect.

A dirty secret: Most chips within any particular market use very similar CPU cores, graphics accelerators and memory controllers. It’s the design team’s custom architecture that differentiates these chips from each other, and the “knobs and dials” for implementing that SoC architecture reside in the interconnect fabric implementation.

To steal from Scott McNealy, “The interconnect is the SoC.”

For teams that haven’t evaluated the SoC interconnect in a while, and even for some that have, here are the top five reasons to conduct a careful analysis of what you’re using today. If you cannot optimize your interconnect in these five areas, then you have probably neglected this vital part of the design for too long. Advanced interconnect technology has become a sophisticated and essential part of modern chip design.

Following are five reasons why the interconnect matters:

1. Critical to CPU and GPU performance. The interconnect is the connection between processors and coherence controllers, last-level system caches, and DRAM memories. Increases in CPU or GPU performance are only useful with a corresponding increase in interconnect bandwidth and reduction in latency (a back-of-envelope bandwidth estimate follows this list). Achieving low latency in a high-performance, CPU-based SoC involves many detailed interconnect design considerations.

2. Last IP to be configured. Even though the interconnect is front-end RTL IP, it is the last one to be configured because its configuration depends on that of every other IP in the chip. A thoughtfully designed interconnect avoids the costly timing closure problems and design rule violations that cause delays in the routing and post-route optimization stages of a chip design. The impact of a schedule slip grows as a project nears completion, which makes this final step the most important one for avoiding delay.

3. Longest physical connections between cells. The interconnect provides vital connectivity and arbitration among all of the IPs on a chip, even ones that are physically far apart. As a result, its logic signals fan out over very long distances, requiring many buffers, inverters, and metal layers. On many links the wire delay, which grows with length and capacitance, is much larger than the gate switching delay, so the logic must be carefully pipelined and the pipeline-stage flops constrained to specific regions of the floorplan (a rough sizing sketch follows this list). Failing to use the right interconnect for these long paths will result in schedule delays due to failed timing closure.

4. Impact on wire routing congestion. Because it has long wires and aggregates wide, high-bandwidth communication paths into centralized locations, the interconnect tends to create areas of high place-and-route congestion. Packetization, serialization, and careful design of the network topology minimize that congestion while still meeting the chip’s bandwidth and latency requirements (a minimal packetization sketch follows this list).

5. Power-disconnect logic conserves energy. Powering down processing units and interfaces that are not needed in a given use case is critical to conserving battery energy and to managing overall power consumption and heat. The easiest and safest place to do this is in the interconnect, which presents a single, universal disconnect protocol to a system-level power manager and ensures that pending transactions complete correctly before power-down (a conceptual handshake sketch follows this list). That is why NoC technology has been successfully implemented in ultra-low-power IoT chips by design teams from Texas Instruments and Samsung.
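
To make reason No. 1 concrete, here is a back-of-envelope sketch of how CPU performance translates into interconnect bandwidth demand. Every number in this Python snippet (core count, clock, IPC, miss rate, cache line size) is an illustrative assumption, not a figure from any real design; the point is only that the traffic the fabric must carry scales with compute performance.

    # Rough estimate of the DRAM traffic a CPU cluster pushes through the
    # interconnect. All parameters below are illustrative assumptions.

    def dram_traffic_gbps(cores, freq_ghz, ipc, mem_refs_per_instr,
                          llc_miss_rate, line_bytes=64):
        """Approximate DRAM bandwidth demand, in GB/s."""
        instrs_per_s = cores * freq_ghz * 1e9 * ipc
        misses_per_s = instrs_per_s * mem_refs_per_instr * llc_miss_rate
        return misses_per_s * line_bytes / 1e9

    # Doubling core count doubles the bandwidth the fabric must sustain.
    for cores in (2, 4, 8):
        bw = dram_traffic_gbps(cores, freq_ghz=2.0, ipc=2.0,
                               mem_refs_per_instr=0.3, llc_miss_rate=0.02)
        print(f"{cores} cores -> ~{bw:.1f} GB/s of DRAM traffic")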
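
Reason No. 3 can be illustrated with a similarly rough sizing exercise. The sketch below assumes generic per-millimeter wire delay, gate delay, and clock-period values (they are not tied to any particular process node) and estimates how many pipeline flops a long route needs before each wire segment fits within a clock cycle.

    # Why long interconnect routes need pipeline stages. The delay and
    # clock numbers are generic assumptions, not data for a real node.
    import math

    WIRE_DELAY_NS_PER_MM = 0.25   # assumed buffered-wire delay per mm
    GATE_DELAY_NS = 0.03          # assumed delay of one logic level
    CLOCK_PERIOD_NS = 0.5         # 2 GHz target clock

    def pipeline_stages(route_mm, logic_levels=4, margin=0.8):
        """Minimum flop stages so each wire segment fits in one cycle."""
        wire_delay = route_mm * WIRE_DELAY_NS_PER_MM
        budget = CLOCK_PERIOD_NS * margin - logic_levels * GATE_DELAY_NS
        return max(0, math.ceil(wire_delay / budget) - 1)

    for mm in (1, 3, 6, 10):
        print(f"{mm:>2} mm route -> {pipeline_stages(mm)} pipeline stage(s)")

Note that the assumed wire delay per millimeter already dwarfs the per-gate delay, which is exactly the imbalance described above.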
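
The packetization and serialization mentioned in reason No. 4 can be sketched in a few lines. The flit width and header format below are hypothetical; the idea is simply that a wide request is carried over a narrow shared link as a sequence of flits, trading wires for cycles through congested regions.

    # Minimal packetization sketch: one wide write request is serialized
    # into narrow flits. Flit width and fields are hypothetical.
    from dataclasses import dataclass

    FLIT_BITS = 64  # assumed physical link width

    @dataclass
    class Flit:
        head: bool    # first flit carries the routing/header information
        tail: bool    # last flit closes the packet
        payload: int  # FLIT_BITS worth of address or data

    def packetize(address, data, data_bits=256):
        """Serialize an address plus wide write data into a list of flits."""
        flits = [Flit(head=True, tail=False, payload=address)]
        n = data_bits // FLIT_BITS
        for i in range(n):
            chunk = (data >> (i * FLIT_BITS)) & ((1 << FLIT_BITS) - 1)
            flits.append(Flit(head=False, tail=(i == n - 1), payload=chunk))
        return flits

    # A 256-bit write crosses a 64-bit link in 5 flits instead of needing
    # hundreds of parallel wires through a congested region.
    print(len(packetize(address=0x80000000, data=0x1234)))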
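
Finally, the power-disconnect behavior in reason No. 5 boils down to a small handshake: fence off new traffic, drain what is in flight, then acknowledge so the domain can be gated. The state machine below is a conceptual sketch with made-up state names and methods; it is not Arteris’s actual protocol.

    # Conceptual power-disconnect handshake. States and method names are
    # illustrative only.
    from enum import Enum, auto

    class LinkState(Enum):
        CONNECTED = auto()
        DRAINING = auto()
        DISCONNECTED = auto()

    class PowerDisconnect:
        def __init__(self):
            self.state = LinkState.CONNECTED
            self.pending = 0  # outstanding transactions on this link

        def issue(self):
            """A master tries to issue a transaction; refused while draining."""
            if self.state is not LinkState.CONNECTED:
                return False
            self.pending += 1
            return True

        def complete(self):
            """A response returns; the last one finishes the drain."""
            self.pending -= 1
            if self.state is LinkState.DRAINING and self.pending == 0:
                self.state = LinkState.DISCONNECTED  # ack to the power manager

        def request_disconnect(self):
            """The system power manager asks to gate this domain."""
            self.state = (LinkState.DRAINING if self.pending
                          else LinkState.DISCONNECTED)

    link = PowerDisconnect()
    link.issue()                   # one transaction in flight
    link.request_disconnect()      # power manager wants to gate the domain
    assert link.issue() is False   # new traffic is fenced off
    link.complete()                # last response drains the link
    assert link.state is LinkState.DISCONNECTED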

To the uninitiated, the interconnect is just a bunch of wires. In modern chips, however, it comprises millions of gates of standard cells used for arbitration, buffering, clock and power domain crossing, muxing, and scheduling. The interconnect IP products on the market today are highly sophisticated, built up over hundreds of engineer-years and informed by hundreds of the world’s most performance-, power- and cost-sensitive SoC projects. Today’s interconnect IP is highly optimized for total system power reduction and ease of integration. The wise project leader will take advantage of third-party interconnect IP, such as that offered by Arteris.

(Figure: Interconnect IP block diagram)


