
Improving the Performance of Deep Neural Networks

New Data Processing Module Makes Deep Neural Networks Smarter (Attentive Normalization)


Source: North Carolina State University. Authors: Xilai Li, Wei Sun, and Tianfu Wu

Abstract: “In state-of-the-art deep neural networks, both feature normalization and feature attention have become ubiquitous. They are usually studied as separate modules, however. In this paper, we propose a light-weight integration between the two schema and present Attentive Normalization (AN). Instead of learning a single affine transformation, AN learns a mixture of affine transformations and utilizes their weighted sum as the final affine transformation applied to re-calibrate features in an instance-specific way. The weights are learned by leveraging channel-wise feature attention.

In experiments, we test the proposed AN using four representative neural architectures in the ImageNet-1000 classification benchmark and the MS-COCO 2017 object detection and instance segmentation benchmark. AN obtains consistent performance improvement for different neural architectures in both benchmarks with absolute increase of top-1 accuracy in ImageNet-1000 between 0.5% and 2.7%, and absolute increase up to 1.8% and 2.2% for bounding box and mask AP in MS-COCO respectively. We observe that the proposed AN provides a strong alternative to the widely used Squeeze-and-Excitation (SE) module. The source codes are publicly available at the ImageNet Classification Repo (https://github.com/iVMCL/AOGNet-v2) and the MS-COCO Detection and Segmentation Repo (https://github.com/iVMCL/AttentiveNorm_Detection)”

A link to the full technical paper can be found here.
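For readers who want a concrete picture of the mechanism described in the abstract, the sketch below illustrates the core idea: the single learned affine transformation of a standard normalization layer is replaced by a weighted sum of several candidate affine transformations, with instance-specific weights produced by a lightweight channel-wise attention branch (global average pooling followed by a small fully connected layer). This is an illustration based on the abstract, not the authors' implementation; see the linked repositories for the official code. The class name, parameter names, and the choice of sigmoid for the attention weights are assumptions made for the example.

import torch
import torch.nn as nn


class AttentiveNorm2d(nn.Module):
    """Minimal sketch of Attentive Normalization for 2D feature maps.

    Batch normalization handles the normalization step (affine=False),
    and its single affine transform is replaced by a weighted sum of K
    candidate affine transforms. The weights are predicted per instance
    by a channel-attention branch. Names and defaults are illustrative.
    """

    def __init__(self, num_channels: int, num_mixtures: int = 4):
        super().__init__()
        # Normalization without its own affine parameters.
        self.norm = nn.BatchNorm2d(num_channels, affine=False)
        # K candidate affine transforms: K x C scales and shifts.
        self.gamma = nn.Parameter(torch.ones(num_mixtures, num_channels))
        self.beta = nn.Parameter(torch.zeros(num_mixtures, num_channels))
        # Channel attention producing K mixture weights per instance.
        self.attention = nn.Linear(num_channels, num_mixtures)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        x_norm = self.norm(x)
        # Squeeze spatial dimensions, then predict mixture weights.
        squeezed = x.mean(dim=(2, 3))                       # (N, C)
        weights = torch.sigmoid(self.attention(squeezed))   # (N, K)
        # Instance-specific affine parameters as weighted sums of the K transforms.
        gamma = weights @ self.gamma                         # (N, C)
        beta = weights @ self.beta                           # (N, C)
        return gamma.view(n, c, 1, 1) * x_norm + beta.view(n, c, 1, 1)


# Usage: drop-in replacement for a BatchNorm2d layer in a convolutional network.
layer = AttentiveNorm2d(num_channels=64, num_mixtures=4)
out = layer(torch.randn(8, 64, 32, 32))
print(out.shape)  # torch.Size([8, 64, 32, 32])

In this reading, the module re-calibrates features per instance much like a Squeeze-and-Excitation block, but the recalibration is folded into the normalization layer's affine step rather than applied as a separate module.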


