
Convolutional Neural Network With INT4 Optimization

INT8 delivers better performance than floating point with comparable accuracy for AI inference. But when INT8 cannot meet the desired performance within limited resources, INT4 optimization is the answer. This INT4 optimization achieves up to a 77% performance boost on real hardware compared with the current INT8 solution.


Xilinx provides an INT8 AI inference accelerator, the Deep Learning Processor Unit (XDPU), on Xilinx hardware platforms. However, some resource-limited, high-performance, low-latency scenarios (such as resource- and power-sensitive edge deployments and low-latency ADAS applications) require lower-bit quantization of neural networks to achieve lower power consumption and higher performance than INT8 can provide. At the same time, extremely low-bit quantization (such as binary or ternary) suffers from accuracy degradation.

Thus, a full-process, hardware-friendly quantization solution with 4-bit activations and 4-bit weights (4A4W) achieves a better accuracy/resource trade-off. This white paper describes the implementation of a low-precision CNN accelerator, the 4-bit XDPU, on the Zynq UltraScale+ MPSoC and Zynq-7000 SoC families (16nm and 28nm), which takes full advantage of their DSP capabilities by efficiently mapping convolutional computations. This solution achieves 2X solution-level performance over the INT8 XDPU. On a 2D detection task in an ADAS system, the implementation achieves an inference speed of 230fps on a Zynq UltraScale+ MPSoC ZCU102 board, a 1.52X performance gain over the 8-bit XDPU. In addition, this solution achieves results comparable to full-precision models on different ADAS tasks.
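
To make the 4A4W idea concrete, the short Python sketch below shows one common form of uniform symmetric 4-bit quantization applied to weights and activations. The max-abs scale selection and rounding used here are illustrative assumptions for exposition only; the white paper's actual quantization scheme and the 4-bit XDPU's DSP mapping may differ.

    import numpy as np

    def quantize_int4(x, scale):
        # Round to the nearest integer and clip to the signed 4-bit range [-8, 7].
        q = np.round(x / scale)
        return np.clip(q, -8, 7).astype(np.int8)  # stored in int8; values fit in 4 bits

    def dequantize_int4(q, scale):
        # Map 4-bit integer codes back to the float domain.
        return q.astype(np.float32) * scale

    # Hypothetical example: per-tensor symmetric quantization of conv weights (4W)
    # and input activations (4A).
    weights = np.random.randn(64, 3, 3, 3).astype(np.float32)
    w_scale = np.abs(weights).max() / 7.0   # max-abs scale; an illustrative choice
    w_q = quantize_int4(weights, w_scale)

    activations = np.random.rand(1, 32, 32, 3).astype(np.float32)
    a_scale = np.abs(activations).max() / 7.0
    a_q = quantize_int4(activations, a_scale)

    # INT4 x INT4 products are accumulated in wider integer registers and then
    # rescaled by w_scale * a_scale to return to the float domain.
    print(w_q.min(), w_q.max(), a_q.min(), a_q.max())

In hardware, the performance gain described above comes from mapping these low-precision multiply-accumulate operations efficiently onto the device's DSP slices, as the white paper details.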

Click here to read more.


