Copyright Notice:

The documents distributed by this server have been provided by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Publications of SPCL

T. Ben-Nun, M. Besta, S. Huber, A. N. Ziogas, D. Peter, T. Hoefler:

 A Modular Benchmarking Infrastructure for High-Performance and Reproducible Deep Learning

(IEEE, May 2019. Accepted at the 33rd IEEE International Parallel & Distributed Processing Symposium (IPDPS'19))

Abstract

We introduce Deep500: the first customizable benchmarking infrastructure that enables fair comparison of the plethora of deep learning frameworks, algorithms, libraries, and techniques. The key idea behind Deep500 is its modular design, where deep learning is factorized into four distinct levels: operators, network processing, training, and distributed training. Our evaluation illustrates that Deep500 is customizable (enables combining and benchmarking different deep learning codes) and fair (uses carefully selected metrics). Moreover, Deep500 is fast (incurs negligible overheads), verifiable (offers infrastructure to analyze correctness), and reproducible. Finally, as the first distributed and reproducible benchmarking system for deep learning, Deep500 provides software infrastructure to utilize the most powerful supercomputers for extreme-scale workloads.
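To make the four-level factorization more concrete, the following is a minimal, self-contained Python sketch of what benchmarking at the lowest level (operators) can look like: it times an operator's forward and backward passes and reports the median latency, a metric less sensitive to outliers than the mean. The class and function names here (ReLU, benchmark_operator) are hypothetical illustrations for this page only, not Deep500's actual API; the paper and the Deep500 repository define the real interfaces.

    import time
    import numpy as np

    # Level 0 (operators): a self-contained operator with forward/backward,
    # the kind of unit an operator-level benchmark harness would wrap.
    # (Hypothetical example, not Deep500's operator interface.)
    class ReLU:
        def forward(self, x):
            self.mask = x > 0
            return x * self.mask

        def backward(self, grad_out):
            return grad_out * self.mask

    def benchmark_operator(op, x, repetitions=30, warmup=5):
        """Time forward+backward passes and return the median latency in seconds."""
        for _ in range(warmup):          # warm-up runs are excluded from timing
            y = op.forward(x)
            op.backward(np.ones_like(y))
        timings = []
        for _ in range(repetitions):
            start = time.perf_counter()
            y = op.forward(x)
            op.backward(np.ones_like(y))
            timings.append(time.perf_counter() - start)
        return float(np.median(timings))

    if __name__ == "__main__":
        x = np.random.randn(1024, 1024).astype(np.float32)
        print(f"median fwd+bwd latency: {benchmark_operator(ReLU(), x) * 1e3:.3f} ms")

The higher levels of the factorization (network processing, training, distributed training) compose such measurements over full graphs, optimizers, and multi-node communication, respectively.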

BibTeX

@inproceedings{deep500,
  author={Tal Ben-Nun and Maciej Besta and S. Huber and Alexandros Nikolaos Ziogas and D. Peter and Torsten Hoefler},
  title={{A Modular Benchmarking Infrastructure for High-Performance and Reproducible Deep Learning}},
  year={2019},
  month={May},
  publisher={IEEE},
  note={Accepted at the 33rd IEEE International Parallel \& Distributed Processing Symposium (IPDPS'19)},
}