
Checkmate: Breaking the Memory Wall with Optimal Tensor Rematerialization


Source: Published on arXiv 10/7/2019

 

  • Paras Jain
  • Ajay Jain
  • Aniruddha Nrusimha
  • Amir Gholami
  • Pieter Abbeel
  • Kurt Keutzer
  • Ion Stoica
  • Joseph E. Gonzalez

A recent paper published on arXiv by a team of UC Berkeley researchers notes that neural networks are increasingly impeded by the limited capacity of on-device GPU memory. Using off-the-shelf numerical solvers, the team formulates optimal rematerialization strategies for arbitrary deep neural networks in TensorFlow with non-uniform computation and memory costs. The team also demonstrates that optimal rematerialization enables larger batch sizes and substantially reduced memory usage, with minimal computational overhead, across a range of image classification and semantic segmentation architectures.
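The basic idea behind rematerialization (often called gradient checkpointing) is to keep only a subset of intermediate activations in memory during the forward pass and recompute the rest from the nearest stored checkpoint when the backward pass needs them, trading extra compute for lower peak memory. The pure-Python sketch below is a hypothetical illustration of that trade-off on a simple chain of layers; it is not the authors' Checkmate system, and the function names and the fixed checkpoint_every policy are assumptions made for illustration.

```python
# Minimal sketch of checkpoint-and-recompute on a chain of layers.
# Hypothetical helpers, not the Checkmate API.

def forward_with_checkpoints(layers, x, checkpoint_every=2):
    """Run the forward pass, storing only every k-th activation."""
    saved = {0: x}            # layer index -> stored activation
    for i, layer in enumerate(layers):
        x = layer(x)
        if (i + 1) % checkpoint_every == 0:
            saved[i + 1] = x  # checkpointed activation kept in memory
    return x, saved

def recompute(layers, saved, target):
    """Rematerialize the activation feeding layer `target` from the
    nearest stored checkpoint at or before it."""
    start = max(i for i in saved if i <= target)
    x = saved[start]
    for i in range(start, target):
        x = layers[i](x)      # redo forward work that was discarded
    return x

# Toy usage with scalar "layers" standing in for network operations.
layers = [lambda v, a=a: a * v + 1.0 for a in (2.0, 3.0, 0.5, 4.0)]
out, saved = forward_with_checkpoints(layers, 1.0)
print(out, recompute(layers, saved, target=3))
```

Where this sketch fixes an arbitrary "keep every k-th activation" rule, the paper instead treats the choice of which activations to keep or recompute as an optimization problem over each operation's compute and memory cost and solves it with off-the-shelf numerical solvers.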

 

 


