Why on-chip networks are so critical to IP integration.
For high-volume system-on-chip (SoC) applications—artificial intelligence (AI), automotive, mobility, solid-state drives and more—effective interconnect technology can generate hundreds of millions of dollars in revenue through smaller chip area, better functionality and faster delivery of SoC platforms. State-of-the-art interconnect technology also allows chip designers to create SoC derivatives more efficiently, accelerating their ability to respond to changing market needs.
SoCs designed into advanced driver assistance systems (ADASs) are a great example. They are highly complex chips in which processing and memory subsystems mandate interconnect IPs capable of facilitating higher data bandwidth, lower latency and efficient power utilization. A successful SoC design can help generate millions of dollars for the customer; it can also create enterprise value by saving thousands of lives and improving productivity and transportation asset utilization.
Yet, at most technology events and conferences about automated driving, artificial intelligence and 5G mobility, the predominant focus is on processors, memories and vision subsystems. There is rarely a mention of the chip ingredient that allows all of these parts to communicate: the interconnect IP.
Why don’t interconnect IPs receive the respect they deserve? It could be that, in engineering school, the majority of today’s senior managers and technologists were taught about processors and memory to the exclusion of SoC assembly technology. Moreover, years of effective marketing have built broad awareness of processor technology without a corresponding effort to explain interconnect IPs and SoC assembly in general.
SoC topologies, whether heterogeneous, mesh or ring, are implemented through the interconnect. A modern SoC may contain 10, 15 or even 20 interconnect IP modules that together help define its architecture.
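To make the idea of a topology concrete, the sketch below models two common connectivity patterns, a ring and a 2D mesh, as simple adjacency lists. This is an illustration only, with hypothetical node indices, not a description of any particular interconnect product; the point is that the interconnect IP is what physically realizes whichever pattern the architect chooses.

```python
# Illustrative sketch only: a "topology" is just the connectivity pattern the
# interconnect implements between IP blocks. Node numbering here is hypothetical.

def ring_topology(n):
    """Each node connects to its two neighbors around a ring."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def mesh_topology(rows, cols):
    """Each node connects to its north/south/east/west neighbors in a 2D grid."""
    adj = {}
    for r in range(rows):
        for c in range(cols):
            node = r * cols + c
            neighbors = []
            if r > 0:        neighbors.append((r - 1) * cols + c)  # north
            if r < rows - 1: neighbors.append((r + 1) * cols + c)  # south
            if c > 0:        neighbors.append(r * cols + (c - 1))  # west
            if c < cols - 1: neighbors.append(r * cols + (c + 1))  # east
            adj[node] = neighbors
    return adj

print(ring_topology(4))     # {0: [3, 1], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(mesh_topology(2, 2))  # in a 2x2 mesh, every node has exactly two neighbors
```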
Figure 1. If the processor is the brain of the SoC, the interconnect is the nervous system.
Imagine if medical schools taught only the brain, heart and muscular system but spent little time on the circulatory or nervous systems. Graduating doctors would not understand the body as a whole. The same analogy applies to SoC technology, as illustrated in Figure 1: SoCs have grown in complexity from simple processor-to-memory connections to highly complex interconnect IP networks that support many initiators and targets across a wide range of architectures.
In fact, SoCs have evolved from single processors connected to a few IP blocks to multimode networks that connect tens or hundreds of IP blocks. Those who missed the evolution of interconnects have fallen behind in delivering SoCs for newer applications such as 5G mobility, AI, ADASs and automated driving.
The unsung hero of the SoC revolution
Interconnect IP is the unsung technology hero of SoC implementation. It provides the majority of the on-chip IP block connections and communication services, carries nearly all of the data moving through the SoC, and contains all of the long-wire connections. Interconnect IP handles the SoC's various protocol conversions as well as data and control transport. It also hosts higher-level services such as cache coherency, quality of service (QoS), power management, security and functional safety, with additional features such as interconnect physical optimization and observability layered on top (Figure 2). That is why interconnect IP affects SoC performance, power utilization, silicon area, functional safety, security and delivery schedule.
Figure 2. SoC complexity has grown, and so has the importance of interconnect IPs.
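As a rough illustration of how several of these services live in the interconnect rather than in the endpoint IP blocks, the hypothetical sketch below describes initiators, targets, an address map, per-initiator QoS priorities and the protocol each initiator speaks. Every name and field is invented for this example and does not correspond to any specific vendor's configuration flow.

```python
from dataclasses import dataclass, field

# Hypothetical configuration model for illustration only; the field names are
# invented and do not describe any real interconnect IP tool.

@dataclass
class Initiator:
    name: str
    protocol: str        # e.g. "AXI4" or "AHB"; the interconnect converts between protocols
    qos_priority: int    # higher value wins arbitration at contention points

@dataclass
class Target:
    name: str
    base_addr: int
    size: int            # bytes; base_addr and size together form the SoC address map

@dataclass
class InterconnectConfig:
    initiators: list = field(default_factory=list)
    targets: list = field(default_factory=list)

    def decode(self, addr):
        """Route an address to the target that owns it; address decoding is an interconnect service."""
        for t in self.targets:
            if t.base_addr <= addr < t.base_addr + t.size:
                return t.name
        raise ValueError(f"address {addr:#x} is unmapped")

cfg = InterconnectConfig(
    initiators=[Initiator("cpu_cluster", "AXI4", qos_priority=2),
                Initiator("display",     "AXI4", qos_priority=3)],  # latency-critical
    targets=[Target("ddr_ctrl", 0x8000_0000, 0x4000_0000),
             Target("sram",     0x0010_0000, 0x0008_0000)],
)
print(cfg.decode(0x8000_1000))   # -> "ddr_ctrl"
```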
New 16nm and 7nm monster chips cannot exist without advanced interconnect IPs. First and foremost, these chips contain islands of cache coherency outside the main processing subsystem. They also contain large deep-learning sections that require regular mesh networks and high-speed interfaces to maximize bandwidth to the memory subsystem, which may comprise four or more memory channels. Advanced power management techniques are needed to support multiple power domains, and multiple levels of on-chip cache are used to optimize memory bandwidth and minimize the latency of off-chip memory accesses.
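One reason regular mesh networks suit those deep-learning sections is that routing stays simple and predictable. The sketch below shows dimension-ordered (XY) routing, a common choice for 2D mesh networks-on-chip; the coordinates and printed path are purely illustrative.

```python
# Dimension-ordered (XY) routing, a common scheme for 2D mesh NoCs: a packet
# first travels along the X axis to the destination column, then along the Y
# axis to the destination row. It is deterministic and deadlock-free on a mesh.

def xy_route(src, dst):
    """Return the list of (x, y) router coordinates a packet visits, src to dst inclusive."""
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    while x != dx:                      # correct X first
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                      # then correct Y
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 3)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (2, 3)]
```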
Because of the physical effects of sub-16nm processes, design teams can now create logically valid SoC architectures that are extremely difficult to route and to close timing on. Advanced SoCs therefore require interconnect IPs that are verified not only functionally and formally but also from a timing perspective. Interconnect IP technology has become more complex and more valuable over the years. So, the next time you go to a conference and see processors, memories and I/Os on the presentation slides, ask where the interconnect IPs are. Your SoC implementation team will thank you for it, because today's reality is that a complex SoC is a network of licensed or internally developed IP blocks tied together by interconnect IP. Remember that without on-chip communication through interconnect IPs, there can be no SoC devices.