
Avoiding Traffic Jams In SoC Design

The real value in planning interconnects may not be obvious until after the chip is taped out.


While sitting in a traffic jam on the way to work, I realized that the sheer volume of vehicles on the road exceeds the capacity civil engineers planned for when highways first hit the drawing boards 50 or 60 years ago. It dawned on me that there is a parallel in today’s system-on-chip (SoC) design: engineers are struggling to close timing on the interconnect during the back-end place-and-route phase. The schedule delays that result from this challenge keep growing longer, and the problem is getting worse as chips migrate to smaller process geometries.

The primary reason is that interconnect design remains an afterthought. As SoCs grow more complex, the timing closure problem only gets harder. To alleviate it, the industry needs to give the interconnect greater consideration earlier in the design process, when SoC floor planning occurs.

The method used to connect IP blocks has not changed much from a process designed to handle 20 or fewer blocks at much larger geometries. Typically, designers select the IP blocks they want during the front end of the process. In contemporary designs with more than 100 IP blocks, the interconnect still gets ignored until much later, during the back-end design phase. By that time it is nearly impossible to fix timing closure problems without substantial redesign, because the floorplan chosen in the early stages of the process will not support an efficient, optimal interconnect that can achieve timing closure.

Today’s chips are much denser and more sophisticated, and the pathways that connect IP blocks are longer. They cross multiple clock and power domains as they carry communication signals to their destinations. In addition, the narrower metal lines of these paths increase resistance and capacitance (RC) delays.
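To make the RC point concrete, here is a rough back-of-the-envelope sketch of how the delay of an unbuffered wire grows with its length. The per-millimeter resistance and capacitance values are illustrative assumptions, not figures for any particular process node:

# Rough Elmore-style estimate of distributed RC wire delay.
# The per-mm values are assumed for illustration only.
def wire_delay_ps(length_mm, r_per_mm=500.0, c_per_mm=0.2):
    """Approximate delay of an unbuffered wire in picoseconds.

    r_per_mm : wire resistance in ohms per mm (assumed)
    c_per_mm : wire capacitance in picofarads per mm (assumed)
    Distributed RC delay is roughly 0.5 * R_total * C_total.
    """
    r_total = r_per_mm * length_mm   # ohms
    c_total = c_per_mm * length_mm   # picofarads
    return 0.5 * r_total * c_total   # ohms x pF = picoseconds

for length in (1, 2, 4, 8):
    print(f"{length} mm -> ~{wire_delay_ps(length):.0f} ps")

Because both resistance and capacitance scale with length, the delay grows roughly with the square of the wire length, which is why long cross-chip paths can consume an entire clock period unless they are buffered or pipelined.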


Fig. 1: Designing the Network-on-Chip (NoC) architecture is one of the most critical steps in avoiding timing closure issues. This involves designing the topology as well as tuning the pipeline stages for all the paths in the NoC. A topology that ignores the floorplan will give the physical synthesis tools a hard time in both placement and timing closure. The figure above contrasts two different NoC topologies: the diagram on the left shows a NoC designed without taking the floorplan into consideration, while the diagram on the right shows a NoC with a floorplan-friendly topology.
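As a rough illustration of the pipeline-tuning step the figure describes, the short sketch below estimates how many register stages a NoC link might need from the Manhattan distance between two blocks on the floorplan. The block positions and the distance a signal can reliably travel per clock cycle are assumed values for illustration, not recommendations for any real design:

import math

# Distance reachable in one clock cycle is highly process- and
# frequency-dependent; 1.5 mm is an assumed illustrative value.
REACH_PER_CYCLE_MM = 1.5

def pipeline_stages(src_xy, dst_xy, reach_mm=REACH_PER_CYCLE_MM):
    """Estimate extra register (pipeline) stages needed on a NoC link.

    src_xy, dst_xy : (x, y) block positions in mm on the floorplan.
    Uses Manhattan distance, since wires follow X/Y routing tracks.
    """
    dist = abs(src_xy[0] - dst_xy[0]) + abs(src_xy[1] - dst_xy[1])
    # Stages needed beyond the one cycle every link already takes.
    return max(0, math.ceil(dist / reach_mm) - 1)

# Hypothetical floorplan positions (mm) for two IP blocks.
cpu = (1.0, 1.0)
ddr_ctrl = (7.5, 4.0)
print(pipeline_stages(cpu, ddr_ctrl))   # -> 6 extra stages on this path

A floorplan-friendly topology, like the one on the right of Fig. 1, keeps heavily used paths short so that few or no extra stages are needed, which is the flexibility that is lost when the topology is chosen without looking at the floorplan.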

Overall, placing a higher priority on interconnect design during the front-end planning stages presents one of the best opportunities to cut the lengthy delays caused by failing to close timing. If interconnect paths and SoC layouts get higher priority in the initial stages of chip floor planning, the result is a more intelligent chip design with optimized signal paths, better performance, greater power efficiency, and reduced silicon area.

If a civil engineer involved in highway planning 50 years ago could sit in today’s traffic, perhaps he would have had the foresight to design highways with sufficient capacity for today’s vehicle volume. But there are no crystal balls in transportation planning, and there are no time machines for civil engineers. Thankfully, lessons from the past can be applied to present challenges in chip design. A growing number of design efforts now use intelligence gathered from past interconnect timing closure issues to improve initial SoC layouts and planning stages.

SoC designers are now factoring chip floor plan and interconnect considerations more prominently into the front end to avoid the timing closure challenges that now plague the back end. This approach improves connections between IP blocks, giving them greater functionality, better placement, and lower latency.

Front-end design teams using this intelligent method put back-end engineers in a better position to insert the pipeline stages typically used to solve timing closure issues. By examining different flows in the back end, these teams can feed physical placement guidance to place-and-route tools to achieve a higher quality of results (QoR) in the interconnect.
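One simple form that placement guidance can take is a suggested location for each inserted pipeline register, for example spaced evenly along the route between the two endpoints. The sketch below is a minimal illustration of that idea; the link, coordinates, and output format are hypothetical and do not correspond to any tool’s actual constraint syntax:

def register_guidance(src_xy, dst_xy, n_stages):
    """Suggest (x, y) placement hints for n_stages pipeline registers,
    spaced evenly along the line between the two endpoints (in mm)."""
    hints = []
    for i in range(1, n_stages + 1):
        t = i / (n_stages + 1)
        x = src_xy[0] + t * (dst_xy[0] - src_xy[0])
        y = src_xy[1] + t * (dst_xy[1] - src_xy[1])
        hints.append((round(x, 2), round(y, 2)))
    return hints

# Hypothetical link between two blocks placed at (1.0, 1.0) and (7.5, 4.0) mm.
for i, (x, y) in enumerate(register_guidance((1.0, 1.0), (7.5, 4.0), 2), start=1):
    print(f"pipe_reg_{i}: place near ({x}, {y}) mm")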

Furthermore, this new interconnect planning methodology has additional benefits:

• Validation of topology choices;
• Easier floor plan changes;
• Greater anticipation of implementation issues;
• More intelligent floor planning.

So, when you’re commuting home from work tonight, think about the traffic conditions and how the roads could be designed to improve the flow. Apply the same approach to SoC interconnects and the room for improvement becomes just as visible. Now imagine how much time to market your team could cut by changing its approach to interconnect design.


