
Building Multipurpose Systems With Dynamic Function Exchange Part Two: Bundling And Managing Resources

Enabling the hardware-accelerated functionality of three systems to be packed into the footprint of a single system.


In our previous article, we mentioned that one of the most common oversights designers make is failing to fully use available system resources, and we introduced the concept of Dynamic Function Exchange (DFX), a design approach that dynamically reallocates unused system resources to other tasks.

From a technical standpoint, implementing DFX is relatively straightforward using a concept called bundling. With bundling, blocks of functions are implemented together depending upon the operating mode of the system.

In an autonomous vehicle, for example, when the car is parked, it can be in “Parked” mode, ready to perform keyless entry and security processes such as biometric identification. “LowSpeed” mode is characterized by low velocity and provides functionality focused tightly around the car, including parking assist and a 360° view for the driver. And “Highway” mode tracks and predicts the movement of surrounding and oncoming vehicles moving at high speed. It also introduces driver monitoring to make sure the driver is attentive and not falling asleep. In short, bundling enables the hardware-accelerated functionality of three systems to be packed into the hardware footprint of a single system (see figure 1).


Fig. 1: Using Dynamic Function Exchange (DFX) to bundle capabilities based on use case, as shown here, makes it possible to implement the functionality of multiple SoCs and ASICs into a single device.
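The mode-to-bundle relationship described above can be sketched in a few lines. This is a minimal, hypothetical illustration: the mode names follow the article, but the function names, the `BundleController` class, and its methods are assumptions, not a real DFX API. On actual hardware, switching bundles would mean loading a partial bitstream into a reconfigurable region.

```python
# Hypothetical mapping of vehicle operating modes to DFX function bundles.
# Function names are illustrative; in a real design, each bundle would be
# a partial bitstream targeting the same reconfigurable region.
MODE_BUNDLES = {
    "Parked":   ["keyless_entry", "biometric_id"],
    "LowSpeed": ["parking_assist", "surround_view_360"],
    "Highway":  ["vehicle_tracking", "trajectory_prediction", "driver_monitoring"],
}

class BundleController:
    """Tracks which function bundle currently occupies the region."""

    def __init__(self):
        self.active_mode = None
        self.loaded_functions = []

    def switch_mode(self, mode):
        if mode not in MODE_BUNDLES:
            raise ValueError(f"unknown mode: {mode}")
        if mode != self.active_mode:
            # In a real system, this step would trigger partial
            # reconfiguration of the FPGA region.
            self.loaded_functions = list(MODE_BUNDLES[mode])
            self.active_mode = mode
        return self.loaded_functions
```

The key point the sketch captures is that only one bundle is resident at a time: the three modes share one hardware footprint rather than each claiming dedicated silicon.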

In practice, DFX is far more flexible than this. Consider that the Advanced Driver Assistance System (ADAS) can adapt to changing environmental factors such as night or day. When the vehicle enters a tunnel, DFX enables the system to quickly switch over to night-based functionality and algorithms to improve safety. The system could also detect rain or snow and adapt driver assistance functions appropriately to further increase safety and reliability based on real-time driving conditions.
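The tunnel and weather examples amount to condition-triggered reconfiguration: sensor readings select which algorithm variant should occupy the hardware. The sketch below is illustrative only; the lux threshold, function name, and variant labels are assumptions, not values from any real ADAS.

```python
def select_variant(lux, precipitation):
    """Pick a perception-algorithm variant from ambient light and weather.

    A sharp drop in ambient light (e.g., entering a tunnel) selects the
    night variant; rain or snow selects a weather-adapted variant. The
    50-lux threshold is an arbitrary illustrative value.
    """
    variant = "night" if lux < 50 else "day"
    if precipitation in ("rain", "snow"):
        variant += f"_{precipitation}"
    return variant
```

In a DFX-based system, a change in the selected variant would prompt loading the corresponding partial bitstream, so the night or snow algorithm runs with full hardware acceleration rather than as a software fallback.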

With DFX, the same hardware can be reused across functions while still providing hardware-based acceleration to meet real-time processing requirements. With a GPU and/or custom SoC approach, hardware is limited to the functions for which it was designed, meaning any hardware dedicated to a rarely used function (e.g., snow driving) will sit idle most of the time.

DFX has been widely used and proven in applications like wireless communications and military applications such as software-defined (SD) radio. It is a common misconception that the flexibility of SD radio comes entirely from software; for different applications, different hardware is needed to accelerate processing.

Because of its flexible approach to design, DFX can implement complex systems with substantially fewer hardware resources. This results in a smaller footprint for electronic systems, lower energy consumption, operational cost savings, and lower equipment cost.

Higher quality and reliability

The ability of DFX to let systems reallocate underutilized processing resources allows designers to reconsider many system tradeoffs. Consider a content provider streaming live video from users. During peak usage, a particular server may be an ingest point for incoming video. When acceleration is implemented in hardware as a fixed IC, that hardware is limited to the task for which it was designed. When there is less incoming video during off-peak times, the IC sits idle.

When ingest functionality is implemented in an adaptive computing platform using DFX, ingest tasks are processed at hardware-level speeds. During off-peak periods, when there is less data for the FPGA to ingest, rather than sit idle, the system can reconfigure part of itself to perform another task. The provider can take what would be idle resources in a fixed implementation (e.g., GPU/SoC) and allocate them to another task. This could take the form of using more compute-intensive encoding algorithms to conserve bandwidth, or better pre-/post-production processing to improve image quality and deliver a higher-quality user experience. In other words, what would otherwise be idle resources instead increase the value delivered to customers.
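The off-peak reallocation decision can be sketched as a simple policy: if ingest load is high, the reconfigurable region does ingest; otherwise it is repurposed for higher-value work. The threshold and task names below are assumptions chosen for illustration, not parameters of a real streaming stack.

```python
def choose_task(ingest_load, peak_capacity, threshold=0.5):
    """Decide what a reconfigurable region should do given current load.

    ingest_load / peak_capacity above the (illustrative) threshold keeps
    the region on video ingest; below it, the freed hardware is spent on
    more compute-intensive, higher-quality encoding instead of idling.
    """
    if ingest_load / peak_capacity >= threshold:
        return "video_ingest"
    return "high_quality_encode"
```

A real scheduler would also account for reconfiguration latency, hysteresis so the region does not thrash between tasks, and in-flight work, but the core idea is the same: load, not fixed wiring, determines what the hardware does.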

Alternatively, available resources could be allocated to self-diagnostic tasks such as monitoring. Monitoring is an important capability for maintaining network and application health. Any server with available capacity could monitor itself, perform deep packet inspection, and so on to increase operating reliability.

Adaptability and peak capacity

One of the most important benefits of DFX is adaptability. Fixed implementations have to overprovision capabilities to meet peak capacities. When the peak exceeds a certain threshold, the system can no longer handle the incoming load and additional hardware investment is required.

With an adaptive computing platform combined with DFX, designers have the flexibility to provision resources to optimize application performance in real time as data and user requirements change. For example, as the system approaches full utilization, it can scale back resources on less important tasks to free capacity for the additional load.

Again, consider a live streaming content provider. When the network is running at average capacity, available resources can be allocated to provide superior quality across all streams. When the network is running at peak load, important streams (e.g., those with high viewership) can maintain their high quality while the system trades off a slight quality drop in less important streams (e.g., those with low viewership) to support a higher density of streams.
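One way to picture the peak-load tradeoff is as a greedy allocation: streams are granted their full resource cost in viewership order until the budget runs out, so any shortfall lands on the lowest-viewership streams. This is a hypothetical sketch; the data shape and cost units are assumptions.

```python
def allocate_quality(streams, budget):
    """Allocate a resource budget across streams by viewership priority.

    streams: list of (name, viewers, full_quality_cost) tuples.
    Returns {name: granted_cost}. High-viewership streams are funded
    first, so under a tight budget the quality drop falls on the
    low-viewership streams, as described in the text.
    """
    ordered = sorted(streams, key=lambda s: s[1], reverse=True)
    alloc, remaining = {}, budget
    for name, _viewers, cost in ordered:
        granted = min(cost, remaining)
        alloc[name] = granted
        remaining -= granted
    return alloc
```

A production system would degrade gracefully (e.g., scaling every low-priority stream down a notch rather than cutting one off), but the greedy version makes the priority ordering explicit.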

Another benefit is the ability to adapt to bottlenecks as they shift. For example, as streaming demand rises and falls, resources can be dynamically reconfigured to provide more or less ingest capacity. As an adaptive computing platform, the FPGA can maintain the optimal ratio of hardware-accelerated resources for the applications and workloads currently running.

Artificial intelligence

Artificial intelligence is an increasingly important technology across nearly all applications. In vehicles, for example, the introduction of AI to a rain or snow algorithm allows the system to learn to adapt to the specific weather conditions where a driver lives. With more advanced algorithms, the vehicle can even learn to adapt to each individual driver over time.

Fixed hardware acceleration using specialized ICs is limited in its flexibility to adapt. Often, AI inference models can be updated, so long as they rely upon the same base model technology. AI workloads are also subject to dynamic loading. Consider an AI-based face recognition application that sees high usage during the day when an office is active. During the evening, when demand for face recognition is much lower, fixed resources sit idle.

DFX, combined with adaptive computing, allows a system to take maximum advantage of AI. Advances in AI algorithms can be implemented in a timely manner as they evolve. In addition, when a new AI algorithm or model is developed, it can be quickly implemented in systems. New algorithms can even be used in systems that are already deployed in the field, thus future-proofing designs.

DFX also adds a new dimension of optimization through parallelization. Large data sets can take significant time to process. With DFX, a dynamic number of instances of a function can be implemented in parallel to accelerate processing of large data sets.
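Sizing that dynamic instance count can be reduced to simple arithmetic: enough parallel instances to finish the batch by a deadline, capped by what the fabric can hold. The throughput figure, deadline, and cap below are illustrative assumptions, not measured values.

```python
import math

def instances_needed(items, per_instance_throughput, deadline_s, max_instances):
    """Number of parallel function instances to meet a batch deadline.

    items: size of the data set to process.
    per_instance_throughput: items/second one instance can handle (assumed).
    deadline_s: time budget in seconds.
    max_instances: how many instances fit in the reconfigurable fabric.
    """
    required = math.ceil(items / (per_instance_throughput * deadline_s))
    return min(required, max_instances)
```

For example, a batch of 1,000,000 items at an assumed 5,000 items/s per instance and a 20-second budget calls for 10 instances; with DFX, those 10 instances exist only while the batch runs, after which the fabric can be reconfigured for other work.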

In our final article, you’ll learn about some of the tools that enable users to deploy DFX in their systems.


