Copyright Notice:

The documents distributed by this server have been provided by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Publications of SPCL

J. Bazinska, A. Ivanov, T. Ben-Nun, N. Dryden, M. Besta, S. Shen, T. Hoefler:

 Cached Operator Reordering: A Unified View for Fast GNN Training

(arXiv:2308.12093, Aug. 2023)

Abstract

Graph Neural Networks (GNNs) are a powerful tool for handling structured graph data and addressing tasks such as node classification, graph classification, and clustering. However, the sparse nature of GNN computation poses new challenges for performance optimization compared to traditional deep neural networks. We address these challenges by providing a unified view of GNN computation, I/O, and memory. By analyzing the computational graphs of the Graph Convolutional Network (GCN) and Graph Attention (GAT) layers - two widely used GNN layers - we propose alternative computation strategies. We present adaptive operator reordering with caching, which achieves a speedup of up to 2.43x for GCN compared to the current state-of-the-art. Furthermore, an exploration of different caching schemes for GAT yields a speedup of up to 1.94x. The proposed optimizations save memory, are easily implemented across various hardware platforms, and have the potential to alleviate performance bottlenecks in training large-scale GNN models.

Documents

download article:
access preprint on arXiv: https://arxiv.org/abs/2308.12093

BibTeX

@article{bazinska2023cached,
  author={Julia Bazinska and Andrei Ivanov and Tal Ben-Nun and Nikoli Dryden and Maciej Besta and Siyuan Shen and Torsten Hoefler},
  title={{Cached Operator Reordering: A Unified View for Fast GNN Training}},
  journal={arXiv:2308.12093},
  year={2023},
  month={08},
  doi={10.48550/arXiv.2308.12093},
}