Publications of SPCL

A. Nigay, L. Mosimann, T. Schneider, T. Hoefler:

 Communication and Timing Issues with MPI Virtualization

(In 27th European MPI Users' Group Meeting, presented in Austin, TX, USA, pages 11–20, Association for Computing Machinery, ISBN: 9781450388801, Sep. 2020)

Publisher Reference: https://doi.org/10.1145/3416315.3416317

Abstract

Computation–communication overlap and good load balance are central to the high performance of parallel programs. Unfortunately, achieving them with MPI considerably increases the complexity of user code. Our work contributes to an alternative solution to this problem: using a virtualized MPI implementation. Virtualized MPI implementations diverge from traditional ones in that they map MPI processes to user-level threads instead of operating-system processes and launch more of them than there are CPU cores in the system. They can provide automatic computation–communication overlap and load balance with little to no change to pre-existing MPI user code. Our work has uncovered new insights into MPI virtualization. First, two new kinds of timers are needed, an MPI-process timer and a CPU-core timer; the same distinction applies to performance counters and the MPI profiling interface. Second, there is an interplay between the degree of CPU oversubscription and the rendezvous communication protocol: the intuitive expectation that two MPI processes per CPU core are enough to achieve full computation–communication overlap turns out to be wrong for the rendezvous protocol; in that case, three MPI processes per CPU core are required. Our findings should apply to all virtualized MPI implementations as well as to tasking runtime systems in general.
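
As an illustration of the code-complexity cost the abstract refers to, below is a minimal sketch, not taken from the paper, of manual computation–communication overlap in plain MPI: the programmer posts nonblocking operations, hand-schedules independent work behind them, and may only touch the received data after completion. The ring-exchange pattern, buffer sizes, and the compute_on() helper are illustrative assumptions.

/* Sketch (illustrative, not from the paper): manual computation-
 * communication overlap in plain MPI.  A virtualized MPI achieves the
 * same overlap automatically by switching to another user-level rank
 * whenever this one would block. */
#include <mpi.h>
#include <stddef.h>

#define N 1048576  /* illustrative message size */

static double sendbuf[N], recvbuf[N], local[N];

/* Stand-in for the application's independent computation. */
static void compute_on(double *data, size_t n) {
    for (size_t i = 0; i < n; i++)
        data[i] = data[i] * 0.5 + 1.0;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int right = (rank + 1) % size;         /* ring exchange */
    int left  = (rank - 1 + size) % size;
    MPI_Request reqs[2];

    /* 1. Post nonblocking operations first ... */
    MPI_Irecv(recvbuf, N, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, N, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

    /* 2. ... then hand-schedule independent work to hide the transfer. */
    compute_on(local, N);

    /* 3. Only after completion is recvbuf safe to read. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    compute_on(recvbuf, N);

    MPI_Finalize();
    return 0;
}

Splitting the work into such phases, and sizing them so the transfer is actually hidden, is exactly the burden the paper's virtualized approach removes from user code.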
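
The timer insight can likewise be illustrated with a short sketch, again an assumption-laden illustration rather than code from the paper. MPI_Wtime() returns wall-clock time, so when several user-level ranks share a core, a rank timing its own compute region also counts intervals during which other ranks ran. CLOCK_THREAD_CPUTIME_ID charges time to the hosting OS thread, which corresponds roughly to a CPU-core timer; a true per-rank (MPI-process) timer would have to be supplied by the virtualized runtime itself.

/* Sketch (illustrative): why a single timer is ambiguous under
 * MPI virtualization. */
#define _POSIX_C_SOURCE 199309L
#include <mpi.h>
#include <stdio.h>
#include <time.h>

/* CPU time of the hosting OS thread: under virtualization this
 * approximates a CPU-core timer, because every user-level rank
 * scheduled on this thread is charged to it. */
static double thread_cpu_seconds(void) {
    struct timespec ts;
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts);
    return ts.tv_sec + 1e-9 * ts.tv_nsec;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    double w0 = MPI_Wtime();            /* wall clock */
    double c0 = thread_cpu_seconds();   /* hosting-thread CPU time */

    /* ... timed compute region of one virtualized rank ... */

    /* Wall time includes intervals when other ranks ran on this core;
     * neither value is the per-rank time that the paper argues a
     * virtualized runtime must additionally provide. */
    printf("wall = %f s, thread CPU = %f s\n",
           MPI_Wtime() - w0, thread_cpu_seconds() - c0);

    MPI_Finalize();
    return 0;
}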


BibTeX

@inproceedings{nigay-virt-mpi,
  author={Alexandr Nigay and Lukas Mosimann and Timo Schneider and Torsten Hoefler},
  title={{Communication and Timing Issues with MPI Virtualization}},
  year={2020},
  month={09},
  pages={11--20},
  booktitle={27th European MPI Users' Group Meeting},
  location={Austin, TX, USA},
  publisher={Association for Computing Machinery},
  isbn={9781450388801},
  doi={10.1145/3416315.3416317},
}