Self-Driving Architecture With eFPGAs

How to deal with rapidly changing algorithms without slowing down.


The favored self-driving architecture of the future will be increasingly decentralized. However, both centralized and decentralized design approaches will require hardware acceleration in the form of far more lookaside co-processing than today's systems employ.

Whether centralized or decentralized, the anticipated computing architectures for automated and autonomous driving systems clearly will be heterogeneous. They will require a mix of processing resources for tasks ranging in complexity from local-area-network control, translation, and bridging to parallel object recognition based on deep-learning algorithms running on neural networks. As a result, the more than 100 CPUs found in today's luxury piloted vehicles could easily swell to several hundred CPUs and other processing elements in more advanced, autonomous vehicles.

Sensor hubs require lookaside image processing for warp and stitch operations. Ethernet networks require IP blocks for packet filtering and monitoring, and for special bridges that handle legacy CAN and FlexRay networks. And the power-hungry CPUs and GPUs used in first-generation autonomous automotive computing architectures will give way to highly specialized compute nodes using programmable acceleration, because these alternatives deliver more processing power with far less power consumption.
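The CAN-to-Ethernet bridging mentioned above is a good example of the fixed, repetitive work that suits lookaside acceleration. Here is a minimal sketch in C using a simplified, hypothetical frame layout; real automotive bridges follow standardized encapsulation formats and also carry timestamps, channel IDs, and error handling:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Simplified, hypothetical CAN frame representation. */
typedef struct {
    uint32_t id;      /* 11- or 29-bit CAN identifier */
    uint8_t  dlc;     /* data length code: 0..8 payload bytes */
    uint8_t  data[8]; /* payload */
} can_frame_t;

/* Serialize one CAN frame into an Ethernet payload buffer
 * (big-endian ID, then DLC, then payload). Returns bytes written. */
static size_t bridge_can_to_eth(const can_frame_t *f, uint8_t *buf)
{
    size_t n = 0;
    buf[n++] = (uint8_t)(f->id >> 24);
    buf[n++] = (uint8_t)(f->id >> 16);
    buf[n++] = (uint8_t)(f->id >> 8);
    buf[n++] = (uint8_t)(f->id);
    buf[n++] = f->dlc;
    memcpy(&buf[n], f->data, f->dlc);
    return n + f->dlc;
}

int main(void)
{
    can_frame_t frame = { .id = 0x123, .dlc = 2, .data = { 0xDE, 0xAD } };
    uint8_t payload[16];
    size_t len = bridge_can_to_eth(&frame, payload);
    printf("bridged %zu bytes\n", len); /* prints: bridged 7 bytes */
    return 0;
}
```

Done per frame at bus rates across many channels, this is exactly the kind of simple but relentless work worth moving off the CPU and into programmable hardware.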

At the same time, the greatly increased computing capability needed for automated and autonomous driving systems will require a similar boost in memory performance. A self-driving car's AI system requires a continuous, uninterrupted stream of data and instructions to make real-time decisions based on complex data sets.

According to Robert Bielby, Micron Technology's senior director responsible for automotive system architecture in the company's embedded business unit, there is a growing memory bottleneck in current designs for automated and autonomous driving systems, and he is already seeing industry momentum toward the adoption of GDDR6 DRAM to address it. Bielby predicts that by the time automated and autonomous driving systems need more than 200 Gbps of memory bandwidth, GDDR6 memories will provide the lowest-cost DRAM (per bit) at power levels equivalent to LPDDR5 DRAM.
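For rough scale (a back-of-envelope sketch using commonly published GDDR6 figures, not Micron-specific numbers): at 16 Gbps per pin over the standard x32 device interface, a single GDDR6 device supplies 512 Gbps of raw bandwidth:

```c
#include <stdio.h>

int main(void)
{
    /* Assumed figures: 16 Gbps per pin is a commonly published GDDR6
     * data rate; x32 is the standard GDDR6 device interface width. */
    const double gbps_per_pin    = 16.0;
    const int    pins_per_device = 32;

    double device_gbps = gbps_per_pin * pins_per_device;  /* 512 Gbps */
    printf("per-device bandwidth: %.0f Gbps (%.0f GB/s)\n",
           device_gbps, device_gbps / 8.0);               /* 64 GB/s  */
    return 0;
}
```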

The growing need for far more computing power and memory bandwidth strongly suggests that future designs for automated and autonomous driving systems will increasingly use ASIC and SoC technologies to hit the specific power, performance, and cost targets of these extremely demanding automotive systems. Conventional ASICs and SoCs, however, lack the hardware flexibility needed when the most critical algorithms are changing rapidly, as they are for self-driving automotive systems. The most direct path to incorporating flexible, programmable processing elements into ASICs and SoCs is to add embedded FPGA (eFPGA) IP cores.
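As a sketch of what that flexibility buys, the fragment below shows a host CPU reloading an embedded fabric with a new accelerator bitstream at runtime; every register and function name here is hypothetical, illustrating the concept rather than any Achronix interface:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical memory-mapped configuration registers for an embedded
 * FPGA fabric -- illustrative only, not an Achronix Speedcore API. */
typedef struct {
    volatile uint32_t ctrl;   /* bit 0: enter configuration mode */
    volatile uint32_t status; /* bit 0: fabric configured and ready */
    volatile uint32_t data;   /* bitstream word-load port */
} efpga_regs_t;

/* Reload the fabric with a new accelerator bitstream at runtime,
 * leaving the surrounding ASIC logic untouched. */
static void efpga_reconfigure(efpga_regs_t *regs,
                              const uint32_t *bitstream, size_t words)
{
    regs->ctrl = 1u;                   /* enter configuration mode */
    for (size_t i = 0; i < words; i++)
        regs->data = bitstream[i];     /* shift in the new design  */
    while ((regs->status & 1u) == 0)   /* wait until fabric ready  */
        ;
}
```

In a deployed vehicle, such a bitstream would presumably arrive through a secure, authenticated over-the-air update path, which is how rapidly evolving perception algorithms could reach hardware already in the field.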

The configurable processing capabilities achieved by integrating Achronix's Speedcore eFPGA IP into ASICs and SoCs optimize silicon real estate and power efficiency, making them a superior design choice for implementing co-processing in future automotive platforms compared to fixed-function SoCs and traditional FPGAs. To learn more about this evolution in processing, see the white paper eFPGA Acceleration in SoCs — Understanding the Speedcore IP Design Process (WP008).


