New Data Processing Module Makes Deep Neural Networks Smarter (Attentive Normalization)
Source: North Carolina State University. Authors: Xilai Li, Wei Sun, and Tianfu Wu
Abstract: “In state-of-the-art deep neural networks, both feature normalization and feature attention have become ubiquitous. They are usually studied as separate modules, however. In this paper, we propose a light-weight integration between the two schema and present Attentive Normalization (AN). Instead of learning a single affine transformation, AN learns a mixture of affine transformations and utilizes their weighted sum as the final affine transformation applied to re-calibrate features in an instance-specific way. The weights are learned by leveraging channel-wise feature attention.
In experiments, we test the proposed AN using four representative neural architectures in the ImageNet-1000 classification benchmark and the MS-COCO 2017 object detection and instance segmentation benchmark. AN obtains consistent performance improvement for different neural architectures in both benchmarks with absolute increase of top-1 accuracy in ImageNet-1000 between 0.5% and 2.7%, and absolute increase up to 1.8% and 2.2% for bounding box and mask AP in MS-COCO respectively. We observe that the proposed AN provides a strong alternative to the widely used Squeeze-and-Excitation (SE) module. The source codes are publicly available at the ImageNet Classification Repo (https://github.com/iVMCL/AOGNet-v2) and the MS-COCO Detection and Segmentation Repo (https://github.com/iVMCL/AttentiveNorm_Detection)”
A link to the technical paper can be found here.
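The abstract's core idea lends itself to a short sketch. Below is a minimal PyTorch sketch, assuming a 2D convolutional setting: features are standardized (here with a BatchNorm layer whose single fixed affine transform is disabled), channel-wise attention over globally pooled features produces instance-specific weights for K candidate affine transformations, and the weighted sums of the learned (gamma, beta) rows re-calibrate the normalized features. The class name, the choice of K, and the sigmoid weighting are illustrative assumptions, not the authors' implementation; see the linked repositories for the official code.

```python
# Hypothetical sketch of Attentive Normalization (AN) as described in the
# abstract. Names (AttentiveNorm2d, num_mixtures) are illustrative only.
import torch
import torch.nn as nn

class AttentiveNorm2d(nn.Module):
    def __init__(self, num_channels: int, num_mixtures: int = 5):
        super().__init__()
        # Standardize features without a single fixed affine transform.
        self.norm = nn.BatchNorm2d(num_channels, affine=False)
        # K candidate affine transformations: one (gamma_k, beta_k) row each.
        self.gamma = nn.Parameter(torch.ones(num_mixtures, num_channels))
        self.beta = nn.Parameter(torch.zeros(num_mixtures, num_channels))
        # Lightweight channel-wise attention producing the K mixture weights.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(num_channels, num_mixtures)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        x_hat = self.norm(x)                                 # (N, C, H, W)
        # Instance-specific weights over the K affine transforms,
        # learned from globally pooled channel statistics.
        w = torch.sigmoid(self.fc(self.pool(x).view(n, c)))  # (N, K)
        # Weighted sums of the K learned affine parameters.
        gamma = (w @ self.gamma).view(n, c, 1, 1)            # (N, C, 1, 1)
        beta = (w @ self.beta).view(n, c, 1, 1)
        return gamma * x_hat + beta
```

In use, such a layer would be a drop-in replacement for a plain BatchNorm2d in, say, a residual block. The extra cost per layer is one global pooling and one C-to-K linear map, which is what makes the integration light-weight compared with stacking a separate Squeeze-and-Excitation module on top of normalization.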