Copyright Notice:

The documents distributed by this server have been provided by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Publications of SPCL

S. Li, T. Ben-Nun, G. Nadiradze, S. Di Girolamo, N. Dryden, D. Alistarh, T. Hoefler:

 Breaking (Global) Barriers in Parallel Stochastic Optimization with Wait-Avoiding Group Averaging

(IEEE Transactions on Parallel and Distributed Systems, Vol. 32, Nr. 7, pages 1725-1739, IEEE, 2021)

Abstract

Deep learning at scale is dominated by communication time. Distributing samples across nodes usually yields the best performance, but poses scaling challenges due to global information dissemination and load imbalance across uneven sample lengths. State-of-the-art decentralized optimizers mitigate the problem, but require more iterations to achieve the same accuracy as their globally-communicating counterparts. We present Wait-Avoiding Group Model Averaging (WAGMA) SGD, a wait-avoiding stochastic optimizer that reduces global communication via subgroup weight exchange. The key insight is a combination of algorithmic changes to the averaging scheme and the use of a group allreduce operation. We prove the convergence of WAGMA-SGD, and empirically show that it retains convergence rates equivalent to Allreduce-SGD. For evaluation, we train ResNet-50 on ImageNet; Transformer for machine translation; and deep reinforcement learning for navigation at scale. Compared with state-of-the-art decentralized SGD variants, WAGMA-SGD significantly improves training throughput (e.g., 2.1x on 1,024 GPUs for reinforcement learning), and achieves the fastest time-to-solution (e.g., the highest score using the shortest training time for Transformer).
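The central mechanism, replacing the global model average with an average over a small process group via a group allreduce, can be sketched briefly. The snippet below is an illustrative outline using PyTorch's torch.distributed process groups; the fixed, disjoint groups, the helper names (build_groups, group_average), and the synchronous calls are assumptions for illustration only, and omit the wait-avoiding, asynchronous scheduling and the varying group composition of the actual WAGMA-SGD implementation.

import torch
import torch.distributed as dist

def build_groups(world_size, group_size):
    # Partition ranks into disjoint, fixed groups (illustrative layout only;
    # WAGMA-SGD varies group membership over the course of training).
    groups = []
    for start in range(0, world_size, group_size):
        ranks = list(range(start, min(start + group_size, world_size)))
        groups.append(dist.new_group(ranks=ranks))  # collective: call on all ranks
    return groups

def group_average(model, group, group_size):
    # Average the model replicas within one subgroup via an allreduce
    # restricted to that group, instead of a global allreduce over all ranks.
    for p in model.parameters():
        dist.all_reduce(p.data, op=dist.ReduceOp.SUM, group=group)
        p.data.div_(group_size)

# Sketch of use inside a training loop, after the local SGD step:
#   loss.backward(); optimizer.step()
#   my_group = groups[dist.get_rank() // group_size]
#   group_average(model, my_group, group_size)

Because each allreduce spans only group_size ranks rather than the full machine, no step requires a global barrier; the paper's wait-avoiding variant further overlaps this group exchange with computation.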

BibTeX

@article{li2021breaking,
  author={Shigang Li and Tal Ben-Nun and Giorgi Nadiradze and Salvatore Di Girolamo and Nikoli Dryden and Dan Alistarh and Torsten Hoefler},
  title={{Breaking (Global) Barriers in Parallel Stochastic Optimization with Wait-Avoiding Group Averaging}},
  journal={IEEE Transactions on Parallel and Distributed Systems},
  year={2021},
  pages={1725--1739},
  volume={32},
  number={7},
  publisher={IEEE},
  doi={10.1109/TPDS.2020.3040606},
}