
The High-Speed Virtual Highway

Having sufficient speed to keep SoC traffic flowing is critical. Making that work requires a high degree of concurrency.


By Frank Ferro
By now it’s safe to say that complex, high-speed design is no longer a riddle… at least in theory.

We all know the end game. In its most fundamental form, isn’t it really a negotiation between the designer and the end user, a compromise that comes down to action and reaction? We know users demand more and more applications running simultaneously on their smart devices. We know that the underlying SoC in every device must accommodate multiple data streams for each unique application. This is driving key architectural decisions for multi-core, multi-processor SoCs. Given that, how do we limit the amount of ‘compromise’ by the designer and give the customer more of what they want?

The ability to build an efficient, high-performance “network” is a critical part of the overall SoC architecture decision: it ensures that traffic from multiple processor cores, including the CPU, GPU, DSP and video processor, is given adequate bandwidth. To design an optimal system, the network must first be able to handle the speed of the processors and the DRAM. In addition, it must manage the data flow efficiently in order to support concurrent processes.

With processor speeds climbing, having a network-on-chip (NoC) that can support GHz speeds has now moved beyond optional to essential as processors—even for mobile applications—are starting to approach or even surpass 2GHz. This high-speed network allows the processor to maximize performance and provides the SoC designer with the extra headroom needed as new functions are added and as processor speeds increase.
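To put a rough number on that headroom, here is a back-of-envelope sketch. The link width and DRAM configuration below are illustrative assumptions, not figures from any particular SoC:

```python
# Back-of-envelope check with assumed numbers (none are from the article):
# can one NoC link running at processor speed cover a DRAM channel?

link_width_bits = 128          # assumed NoC data-path width
noc_clock_hz = 2.0e9           # ~2 GHz, in line with the processor speeds cited

link_bw = (link_width_bits / 8) * noc_clock_hz   # bytes per second

# Hypothetical 32-bit LPDDR3-1600 channel: 1600 MT/s x 4 bytes per transfer
dram_bw = 1600e6 * 4

print(f"NoC link: {link_bw / 1e9:.0f} GB/s, DRAM channel: {dram_bw / 1e9:.1f} GB/s")
```

Even under these assumptions, a single GHz-class link comfortably covers one DRAM channel, leaving margin for additional initiators and future speed bumps.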

There is no disputing that NoC speed is critical to meeting SoC connectivity requirements. But the NoC must now go further, supporting the maximum number of concurrent processes in order to provide the best user experience at the application level. A network that combines the highest frequency with advanced concurrency support lets SoC designers run multiple high-speed applications simultaneously, which is the price of admission for today’s smart devices and the increased capabilities consumers demand.

So what are some of the ways that the network can support concurrent processes? One method is to design a network that has more spatially concurrent data paths. This is analogous to adding more lanes to a highway to eliminate any potential bottlenecks and increase traffic flow. The advantage of having more lanes is that it will provide predictable service at peak traffic times. The disadvantage, however, is that the system may be over-provisioned, wasting chip area and power during times when the bandwidth is not needed.
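As a rough illustration of that trade-off, here is a minimal sketch of spatially concurrent paths, assuming a simple least-loaded steering policy. The names (Lane, route_request) are hypothetical, not from any real NoC toolkit:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Lane:
    """One physically separate data path (a 'highway lane')."""
    queue: deque = field(default_factory=deque)

def route_request(lanes, packet):
    """Steer each request to the least-loaded lane.

    More lanes mean fewer conflicts at peak traffic, but every lane
    costs wires and buffers even when its bandwidth sits idle.
    """
    min(lanes, key=lambda lane: len(lane.queue)).queue.append(packet)

# Four lanes provisioned for worst-case traffic from four initiators.
lanes = [Lane() for _ in range(4)]
for pkt in ["cpu-rd", "gpu-wr", "dsp-rd", "video-wr", "cpu-wr"]:
    route_request(lanes, pkt)
```

The four lanes guarantee headroom at rush hour, but all four keep burning area and leakage power at 3 a.m. as well.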

A more efficient solution is to design a system that takes advantage of the fact that 100% peak bandwidth utilization is not always needed. Staying with the highway analogy, think of those roads that change the direction of the lanes depending on the time of day. The concept of using “virtual” channels is similar. A virtual channel takes advantage of the fact that all resources in the system are not always utilized. With proper flow control, forward progress can be guaranteed for a unique data stream even over a shared resource. The result of using virtual channels is optimal system performance, while saving chip area because buffers and wires can be shared. Given the pressure on cost and power, the ability to save gates and wires while meeting the system performance is critical for success.
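Here is a minimal sketch of the idea, assuming round-robin arbitration and simple credit-based flow control; the class and method names are illustrative rather than taken from any vendor’s API:

```python
from collections import deque

class VirtualChannel:
    """One logical data stream with its own small buffer and credits."""
    def __init__(self, name, credits):
        self.name = name
        self.queue = deque()
        self.credits = credits   # downstream buffer slots still free

    def can_send(self):
        return bool(self.queue) and self.credits > 0

class SharedLink:
    """One set of physical wires multiplexing several virtual channels."""
    def __init__(self, vcs):
        self.vcs = vcs
        self._next = 0           # round-robin pointer

    def cycle(self):
        # Any channel with data *and* credits makes forward progress,
        # so one stalled stream cannot starve the others.
        n = len(self.vcs)
        for i in range(n):
            vc = self.vcs[(self._next + i) % n]
            if vc.can_send():
                self._next = (self._next + i + 1) % n
                vc.credits -= 1          # one downstream slot consumed
                return vc.name, vc.queue.popleft()
        return None                      # link idles this cycle

cpu = VirtualChannel("cpu", credits=2)
video = VirtualChannel("video", credits=1)
link = SharedLink([cpu, video])
cpu.queue.extend(["rd0", "rd1"])
video.queue.append("frame0")
print(link.cycle(), link.cycle(), link.cycle())
# ('cpu', 'rd0') ('video', 'frame0') ('cpu', 'rd1')
```

Two streams share one set of wires, yet each makes forward progress governed only by its own credits. That sharing of buffers and wires is exactly where the area and power savings come from.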

Another advantage of virtual channels with advanced flow control is their non-blocking property: traffic is never “blocked” from entering the network. Using just one more traffic analogy, a typical network will “meter” data flowing into the network like traffic lights at highway on-ramps at rush hour; you have to wait, and there will be a delay. With non-blocking flow control, data is always permitted to enter the network and make forward progress, maximizing overall system concurrency.
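The contrast can be sketched in a few lines. This is a deliberately simplified software model (real admission logic is implemented in hardware), and both function names are hypothetical:

```python
def metered_ingress(shared_queue, packet, capacity=4):
    """Traffic-light model: once the shared queue fills, ALL newcomers
    stall at the on-ramp, no matter which stream they belong to."""
    if len(shared_queue) >= capacity:
        return False                 # blocked before even entering
    shared_queue.append(packet)
    return True

def nonblocking_ingress(vc_queues, stream, packet):
    """Per-stream virtual channels: a packet waits only on its own
    stream's resources, so one congested stream cannot hold back the rest."""
    vc_queues.setdefault(stream, []).append(packet)
    return True                      # always admitted, always progressing
```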

Clearly, designers are concerned with SoC performance as it relates to the end application. To give the end user more application concurrency, it is imperative to choose your on-chip network architecture carefully. High-speed operation is an important component of the overall network-on-chip design, allowing data to move quickly among the various subsystem components. You must also be sure the network can deal efficiently with multiple data streams without “blocking” traffic at the network entrance, as other NoCs do today.

So if designers want to get in the fast lane or pass their competitors on the left, they will need a high-performance NoC with an architecture that supports a high degree of concurrency as a key component in the overall system design.

–Frank Ferro is director of marketing at Sonics.

