Copyright Notice:

The documents distributed by this server have been provided by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Publications of SPCL

T. Hoefler, D. Alistarh, T. Ben-Nun, N. Dryden, A. Peste:

 Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks

(Journal of Machine Learning Research, Vol. 22, No. 241, pages 1-124, Sep. 2021)

Abstract

The growing energy and performance costs of deep learning have driven the community to reduce the size of neural networks by selectively pruning components. Like their biological counterparts, sparse networks generalize just as well as, if not better than, the original dense networks. Sparsity can reduce the memory footprint of regular networks to fit mobile devices, as well as shorten training time for ever-growing networks. In this paper, we survey prior work on sparsity in deep learning and provide an extensive tutorial of sparsification for both inference and training. We describe approaches to remove and add elements of neural networks, different training strategies to achieve model sparsity, and mechanisms to exploit sparsity in practice. Our work distills ideas from more than 300 research papers and provides guidance to practitioners who wish to utilize sparsity today, as well as to researchers whose goal is to push the frontier forward. We include the necessary background on mathematical methods in sparsification, describe phenomena such as early structure adaptation and the intricate relations between sparsity and the training process, and show techniques for achieving acceleration on real hardware. We also define a metric of pruned parameter efficiency that could serve as a baseline for comparison of different sparse networks. We close by speculating on how sparsity can improve future workloads and outline major open problems in the field.
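To make the idea of selective pruning concrete, the sketch below shows global magnitude pruning, one of the simplest sparsification techniques covered in surveys like this one: weights with the smallest absolute values are set to zero until a target sparsity is reached. This is an illustrative, pure-Python sketch under our own assumptions (function name, list-of-floats representation), not code from the paper.

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of `weights`.

    A minimal sketch of magnitude pruning: rank weights by absolute
    value and remove (zero) the bottom fraction. Real implementations
    operate on tensors, often layer-wise, and typically retrain or
    fine-tune afterwards to recover accuracy.
    """
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # Threshold = magnitude of the n_prune-th smallest weight.
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    # Keep large-magnitude weights; zero the rest.
    return [0.0 if abs(w) <= threshold else w for w in weights]

# Example: prune 40% of five weights -> the two smallest magnitudes go.
pruned = magnitude_prune([0.5, -0.1, 0.02, -0.8, 0.3], 0.4)
# pruned == [0.5, 0.0, 0.0, -0.8, 0.3]
```

In practice the surveyed methods differ chiefly in *what* is scored (weights, neurons, filters), *when* pruning happens (after, during, or before training), and whether removed elements can later regrow.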

BibTeX

@article{sparsity-in-dl,
  author={Torsten Hoefler and Dan Alistarh and Tal Ben-Nun and Nikoli Dryden and Alexandra Peste},
  title={{Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks}},
  journal={Journal of Machine Learning Research},
  year={2021},
  month={09},
  pages={1-124},
  volume={22},
  number={241},
}