The central nervous system of SoCs is expanding to help manage things like QoS and performance analysis.
The primary objective of any network-on-chip (NoC) interconnect is to move data around a chip as efficiently as possible, with minimal impact on design closure, while meeting or exceeding key design metrics (PPA, etc.). These networks have become the central nervous system of SoCs and are starting to play a larger role in system-level services like quality of service (QoS), debug, performance analysis, safety and security, because these on-chip interconnects transport and “see” most if not all of the on-chip dataflow. Think of the NoC as the SoC’s “all-seeing eye” and you’ll have a better understanding of what is technically possible. Here are just a few examples:
The highly heterogeneous SoCs common today must often operate under very tight constraints. Consider the very different demand profiles on a mobile phone for a video display versus an imaging system, a CPU or a background download. Or consider demand in a radio access network handling many streams of that same wide diversity of traffic, delivered or transmitted at 5G speeds. In these examples, system architects must manage quality of service (QoS) very carefully through sophisticated algorithms that ensure the required bandwidth and latency between senders and receivers is met, and which may need to be reconfigurable on the fly, for example through software changes to control and status registers. For systems with a single external memory, typically LPDDR or DDR, QoS is critical to distribute the memory bandwidth across all requesters in a way that satisfies all the application requirements.
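As a rough illustration of what software-reconfigurable QoS can look like, the sketch below programs hypothetical per-initiator control and status registers. The register block, its base address and the qos_set_initiator() helper are invented for this example and do not describe any particular NoC product.

```c
#include <stdint.h>

/* Hypothetical memory-mapped QoS register block for one NoC initiator port.
 * Offsets, fields and the base address are illustrative only. */
typedef struct {
    volatile uint32_t priority;      /* 0 = lowest, 15 = highest        */
    volatile uint32_t bw_target_mbs; /* requested bandwidth, MB/s       */
    volatile uint32_t lat_limit_ns;  /* worst-case latency budget, ns   */
    volatile uint32_t enable;        /* 1 = apply settings              */
} noc_qos_regs_t;

#define NOC_QOS_BASE(id)  ((noc_qos_regs_t *)(0x4F000000u + (id) * 0x100u))

/* Reprogram one initiator's QoS contract at runtime, e.g. when the
 * use case switches from video playback to camera capture. */
static void qos_set_initiator(unsigned id, uint32_t prio,
                              uint32_t bw_mbs, uint32_t lat_ns)
{
    noc_qos_regs_t *q = NOC_QOS_BASE(id);
    q->enable        = 0;            /* quiesce before changing the contract */
    q->priority      = prio & 0xFu;
    q->bw_target_mbs = bw_mbs;
    q->lat_limit_ns  = lat_ns;
    q->enable        = 1;            /* new settings take effect             */
}
```

In this picture, a display controller might be given a high priority and a tight latency budget, while a background download engine gets a low priority and no latency guarantee.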
There’s an obvious tradeoff in these choices between more hardware control and more software control. You might choose to configure in-hardware capabilities that route higher-priority traffic through dedicated high-priority links ahead of lower-priority traffic. Or to regulate bandwidth or transaction rates to limits you define, from or to certain functions. Or you might choose to use mechanisms like credit-based flow control. These choices can be very application-sensitive and depend on a lot of flexibility in configuring the NoC.
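Credit-based flow control itself is simple to model: the sender holds a credit counter sized to the receiver’s buffering, spends a credit for every unit of data it injects, and only transmits while credits remain; the receiver returns a credit as each buffer entry drains. The sketch below is a minimal software model of that handshake, assuming one sender and one receiver per link, not hardware from any specific NoC.

```c
#include <stdbool.h>
#include <stdint.h>

/* Minimal model of one credit-based link: the sender may only inject
 * a flit when it holds at least one credit. */
typedef struct {
    uint32_t credits;      /* credits currently held by the sender */
    uint32_t max_credits;  /* equals the receiver's buffer depth   */
} credit_link_t;

static bool sender_can_transmit(const credit_link_t *l)
{
    return l->credits > 0;
}

static void sender_transmit(credit_link_t *l)
{
    /* One flit leaves the sender and occupies a receiver buffer slot. */
    l->credits--;
}

static void receiver_return_credit(credit_link_t *l)
{
    /* Called when the receiver drains a buffer slot downstream. */
    if (l->credits < l->max_credits)
        l->credits++;
}
```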
Since the NoC sees all the data traveling around the SoC, it’s the logical place to center diagnostics and performance measurement. Because the NoC generator builds the whole network structure, you don’t want to hack in your own probes; the right way is to have the generator insert probes for you, selected from a range of possible types. Probes can examine data in transit as well as gather metrics like bandwidth and latency. They can filter on packet or transaction attributes to generate performance histograms, event statistics and traces. If you’re familiar with the trace, sampling and performance-counter capabilities in CPUs, then you already understand the observability and data-capture capabilities built into NoC interconnects.
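To make the histogram idea concrete, here is a minimal sketch of building a latency histogram from probe samples. The noc_probe_read_latency() function is an assumption invented for this example, standing in for whatever interface a given probe exposes for filtered transaction latencies.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical probe interface: returns the measured latency, in cycles,
 * of the next captured transaction matching the probe's filter
 * (e.g. a given source/target ID pair). Assumed for this sketch. */
extern uint32_t noc_probe_read_latency(unsigned probe_id);

#define NBINS 8

/* Build a coarse latency histogram from 'samples' transactions, the same
 * kind of data a hardware probe can accumulate on its own. */
static void latency_histogram(unsigned probe_id, unsigned samples)
{
    uint32_t bins[NBINS] = {0};

    for (unsigned i = 0; i < samples; i++) {
        uint32_t lat = noc_probe_read_latency(probe_id);
        unsigned bin = lat / 32;            /* 32-cycle-wide bins */
        if (bin >= NBINS)
            bin = NBINS - 1;                /* overflow bin        */
        bins[bin]++;
    }

    for (unsigned b = 0; b < NBINS; b++)
        printf("%3u-%3u cycles: %u\n", b * 32, b * 32 + 31, bins[b]);
}
```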
These capabilities are obviously beneficial during design, but they can also be implemented in hardware for runtime monitoring. In that case, probe output can be routed to standard debug infrastructure like Arm CoreSight or dumped to internal memories.
Safety is another important area where the NoC must not only provide for and demonstrate ISO 26262 compliance for its own logic, but also serve as a central resource for mediating safety in the system as a whole. For practical reasons, few SoCs are built with every component measuring up to a common ASIL safety level. A NoC should be configurable to add and check parity or ECC where needed, to configure and check for communication timeouts, and even to run some of the NoC units in lock-step with a duplicate. Also, where needed, the NoC should be able to detect and even isolate misbehaving hardware functions in preparation for a reset/reboot.
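The timeout-and-isolate behavior can be pictured as follows. This is a software model of the kind of check a NoC timeout unit performs in hardware, with all structure names and thresholds invented for illustration: if a target has not responded within its allotted window, the transaction is flagged and the target is fenced off pending a reset.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model of a per-target timeout monitor. */
typedef struct {
    uint32_t issue_time;     /* cycle count when the request was issued */
    uint32_t timeout_cycles; /* allowed response window                 */
    bool     outstanding;    /* a request is waiting for a response     */
    bool     isolated;       /* target has been fenced off              */
} target_monitor_t;

/* Called each polling interval with the current cycle count.
 * Returns true if the target just violated its timeout. */
static bool check_timeout(target_monitor_t *t, uint32_t now)
{
    if (t->outstanding && !t->isolated &&
        (now - t->issue_time) > t->timeout_cycles) {
        /* Fence the misbehaving target: further requests receive an error
         * response instead of hanging the interconnect, and the fault is
         * reported so software can schedule a reset/reboot. */
        t->isolated = true;
        return true;
    }
    return false;
}
```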
This last point is especially important for chips that must conform to ASIL D, again given a mix of other IP in the system that does not necessarily rise to that level. The concept of a safety island in such chips is becoming very popular: the island is certified to ASIL D and monitors the behavior of the full design through the NoC, which requires NoC support for detection, isolation and other functions as needed.
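A safety island’s use of the NoC can be as simple as periodically reading fault-status registers exposed by the interconnect and deciding whether to isolate a subsystem and request a reset. The loop below sketches that idea; the register addresses, bit meanings and the report/reset hooks are assumptions made for this example.

```c
#include <stdint.h>

/* Hypothetical fault-status register exposed by the NoC to the safety
 * island: one bit per subsystem, set when that subsystem has triggered
 * an ECC, parity or timeout error. Address and layout are assumptions. */
#define NOC_FAULT_STATUS  (*(volatile uint32_t *)0x4F100000u)
#define NOC_ISOLATE_CTRL  (*(volatile uint32_t *)0x4F100004u)

/* Assumed hooks provided elsewhere in the safety-island firmware. */
extern void report_fault(unsigned subsystem);
extern void request_subsystem_reset(unsigned subsystem);

/* One pass of the safety island's monitoring loop. */
static void safety_island_poll(void)
{
    uint32_t faults = NOC_FAULT_STATUS;

    for (unsigned s = 0; s < 32; s++) {
        if (faults & (1u << s)) {
            NOC_ISOLATE_CTRL |= (1u << s);  /* fence the faulty subsystem */
            report_fault(s);                /* log/notify per safety plan */
            request_subsystem_reset(s);     /* prepare a reset/reboot     */
        }
    }
}
```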
Security is another hot system area where the NoC can add further defense. As a network, you might expect a NoC to be able to provide firewalls, and the best do. State-of-the-art NoCs can filter traffic against a designer-defined set of tests and respond to the system according to a designer-defined set of rules. For example, these firewalls might be installed between a secure zone and a non-secure zone. The firewalls would be programmed to block or poison data that violates the rules for crossing between non-secure and secure zones, and then report to the system level that there has been an issue. Of course, to reduce the effectiveness of “fuzzing,” the architect may choose not to have the firewall disclose the problem and the actions it took, reducing a hacker’s ability to discern that the system knows it is being attacked. Additionally, you’ll want a strong root of trust or similar mechanism at the center of your security strategy. NoC interconnect firewalls can add an extra layer of defense to industry-standard security architectures like Arm’s TrustZone and Platform Security Architecture (PSA) framework, creating true system-level defense-in-depth.
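As a simplified picture of what such a firewall evaluates, the sketch below checks each transaction against a designer-defined rule table keyed on address range and security attribute, and either passes, blocks or poisons it. The rule structure, example addresses and actions are illustrative, not a description of any particular product’s firewall.

```c
#include <stdbool.h>
#include <stdint.h>

typedef enum { FW_PASS, FW_BLOCK, FW_POISON } fw_action_t;

/* One designer-defined firewall rule: transactions from a non-secure
 * master that fall inside [base, base+size) trigger 'action'. */
typedef struct {
    uint64_t    base;
    uint64_t    size;
    bool        applies_to_nonsecure;
    fw_action_t action;
} fw_rule_t;

/* Example rule set: protect secure assets from non-secure access. */
static const fw_rule_t rules[] = {
    { 0x10000000u, 0x00010000u, true, FW_BLOCK  },  /* secure SRAM   */
    { 0x10010000u, 0x00001000u, true, FW_POISON },  /* key registers */
};

/* Evaluate a transaction; FW_PASS means no rule matched. */
static fw_action_t firewall_check(uint64_t addr, bool nonsecure)
{
    for (unsigned i = 0; i < sizeof rules / sizeof rules[0]; i++) {
        const fw_rule_t *r = &rules[i];
        if (nonsecure == r->applies_to_nonsecure &&
            addr >= r->base && addr < r->base + r->size)
            return r->action;   /* block or poison, then report upstream */
    }
    return FW_PASS;
}
```

Whether a rule hit is reported back to the offending master or only logged internally is exactly the anti-fuzzing tradeoff described above.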
A state-of-the-art NoC interconnect offers unparalleled visibility into on-chip dataflow which can be exploited to improve SoC QoS (and therefore, performance), safety and security. Arteris IP provides solutions across this spectrum through our NoC IP and related products. You can learn more here, here and here.