Copyright Notice:

The documents distributed by this server have been provided by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Publications of SPCL

M. Besta, T. Hoefler:

Parallel and Distributed Graph Neural Networks: An In-Depth Concurrency Analysis

(IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 46, No. 5, pages 2584-2606, IEEE Press, May 2024)

Abstract

Graph neural networks (GNNs) are among the most powerful tools in deep learning. They routinely solve complex problems on unstructured networks, such as node classification, graph classification, or link prediction, with high accuracy. However, both inference and training of GNNs are complex, and they uniquely combine the features of irregular graph processing with dense and regular computations. This complexity makes it very challenging to execute GNNs efficiently on modern massively parallel architectures. To alleviate this, we first design a taxonomy of parallelism in GNNs, considering data and model parallelism, and different forms of pipelining. Then, we use this taxonomy to investigate the amount of parallelism in numerous GNN models, GNN-driven machine learning tasks, software frameworks, and hardware accelerators. We use the work-depth model, and we also assess communication volume and synchronization. We specifically focus on the sparsity/density of the associated tensors in order to understand how to effectively apply techniques such as vectorization. We also formally analyze GNN pipelining, and we generalize the established Message-Passing class of GNN models to cover arbitrary pipeline depths, facilitating future optimizations. Finally, we investigate different forms of asynchronicity, paving the way toward future asynchronous parallel GNN pipelines. The outcomes of our analysis are synthesized in a set of insights that help to maximize GNN performance, and a comprehensive list of challenges and opportunities for further research into efficient GNN computations. Our work will help to advance the design of future GNNs.
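
To make concrete the mix of computations the abstract describes, below is a minimal, illustrative sketch (not taken from the paper) of a single message-passing GNN layer in NumPy: an irregular, sparse aggregation over the graph's edges followed by a dense, regular weight multiplication. The function name, the mean aggregation, and the tiny path-graph example are assumptions chosen for illustration only.

import numpy as np

def message_passing_layer(edge_index, X, W):
    """One GNN layer in the message-passing style: a sparse, irregular
    aggregation over the graph's edges followed by a dense, regular
    feature transformation (a matrix multiply).

    edge_index : (2, E) integer array of (source, target) node pairs
    X          : (N, F_in) dense node-feature matrix
    W          : (F_in, F_out) dense weight matrix
    """
    N = X.shape[0]
    src, dst = edge_index
    # Irregular part: sum the features of incoming neighbors per node
    # (sparse, scatter-style memory access pattern).
    aggregated = np.zeros_like(X)
    np.add.at(aggregated, dst, X[src])
    # Mean aggregation: normalize by in-degree, guarding isolated nodes.
    deg = np.bincount(dst, minlength=N).clip(min=1).reshape(-1, 1)
    aggregated = aggregated / deg
    # Regular part: dense matmul plus nonlinearity (vectorization-friendly).
    return np.maximum(aggregated @ W, 0.0)

# Tiny usage example: a 4-node path graph, 3 input and 2 output features.
edges = np.array([[0, 1, 2, 1, 2, 3],
                  [1, 2, 3, 0, 1, 2]])
X = np.random.rand(4, 3).astype(np.float32)
W = np.random.rand(3, 2).astype(np.float32)
print(message_passing_layer(edges, X, W).shape)  # (4, 2)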


BibTeX

@article{besta2024parallel,
  author={Maciej Besta and Torsten Hoefler},
  title={{Parallel and Distributed Graph Neural Networks: An In-Depth Concurrency Analysis}},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2024},
  month=may,
  pages={2584--2606},
  volume={46},
  number={5},
  publisher={IEEE Press},
  doi={10.1109/TPAMI.2023.3303431},
}