Copyright Notice:

The documents distributed by this server have been provided by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Publications of SPCL

M. Besta, A. Claudino Catarino, L. Gianinazzi, N. Blach, P. Nyczyk, H. Niewiadomski, T. Hoefler:

HOT: Higher-Order Dynamic Graph Representation Learning with Efficient Transformers

(In Proceedings of the Learning on Graphs Conference (LOG'23), presented in Virtual, Nov. 2023)


Abstract

Many graph representation learning (GRL) problems are dynamic, with millions of edges added or removed per second. A fundamental workload in this setting is dynamic link prediction: using a history of graph updates to predict whether a given pair of vertices will become connected. Recent schemes for link prediction in such dynamic settings employ Transformers, modeling individual graph updates as single tokens. In this work, we propose HOT: a model that enhances this line of work by harnessing higher-order (HO) graph structures; specifically, k-hop neighbors and more general subgraphs containing a given pair of vertices. Harnessing such HO structures by encoding them into the attention matrix of the underlying Transformer results in higher link prediction accuracy, but at the expense of increased memory pressure. To alleviate this, we resort to a recent class of schemes that impose hierarchy on the attention matrix, significantly reducing the memory footprint. The final design offers a sweet spot between high accuracy and low memory utilization. HOT outperforms other dynamic GRL schemes, for example achieving 9%, 7%, and 15% higher accuracy than DyGFormer, TGN, and GraphMixer, respectively, on the MOOC dataset. Our design can be seamlessly extended towards other dynamic GRL workloads.
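To make the setting concrete, below is a minimal, hypothetical sketch in PyTorch (not the authors' code) of the general setup the abstract alludes to: recent graph updates and k-hop neighbor vertices are fed as tokens to a standard Transformer encoder, and the pooled context is used to score a candidate vertex pair. All class and parameter names are illustrative assumptions, and the hierarchical attention that HOT uses to reduce memory is not modeled here.

# Hypothetical sketch, not the authors' implementation: dynamic link prediction
# where recent graph updates and k-hop neighbors of the query pair are encoded
# as tokens of a plain Transformer encoder, and an MLP scores the vertex pair.
import torch
import torch.nn as nn

class LinkPredictorSketch(nn.Module):
    def __init__(self, num_nodes: int, dim: int = 64, num_layers: int = 2, num_heads: int = 4):
        super().__init__()
        self.node_emb = nn.Embedding(num_nodes, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.scorer = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, token_ids: torch.Tensor, src: torch.Tensor, dst: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq) vertex ids of recent updates and k-hop neighbor tokens
        # src, dst:  (batch,)    the query vertex pair
        tokens = self.node_emb(token_ids)               # (batch, seq, dim)
        ctx = self.encoder(tokens).mean(dim=1)          # pooled context of the update history
        pair = torch.cat([self.node_emb(src) + ctx, self.node_emb(dst) + ctx], dim=-1)
        return self.scorer(pair).squeeze(-1)            # link logit for the pair

# Toy usage on random data: 8 queries, each with 16 history/neighbor tokens.
model = LinkPredictorSketch(num_nodes=100)
token_ids = torch.randint(0, 100, (8, 16))
src, dst = torch.randint(0, 100, (8,)), torch.randint(0, 100, (8,))
prob = torch.sigmoid(model(token_ids, src, dst))        # predicted connection probability

In the paper itself, the plain encoder above would be replaced by an attention scheme that encodes the higher-order structures into the attention matrix and imposes hierarchy on it to keep the memory footprint low.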


BibTeX

@inproceedings{besta2023hot,
  author={Maciej Besta and Afonso Claudino Catarino and Lukas Gianinazzi and Nils Blach and Piotr Nyczyk and Hubert Niewiadomski and Torsten Hoefler},
  title={{HOT: Higher-Order Dynamic Graph Representation Learning with Efficient Transformers}},
  year={2023},
  month={11},
  booktitle={Proceedings of the Learning on Graphs Conference (LOG'23)},
  location={Virtual},
  doi={10.48550/arXiv.2311.18526},
}