Copyright Notice:
The documents distributed by this server have been provided by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.
Publications of SPCL
T. Hoefler, C. Siebert, W. Rehm:
A practically constant-time MPI Broadcast Algorithm for large-scale InfiniBand Clusters with Multicast (In Proceedings of the 21st IEEE International Parallel & Distributed Processing Symposium (CAC'07 Workshop), presented in Long Beach, CA, USA, page 232, IEEE Computer Society, ISBN: 1-4244-0909-8, Mar. 2007)

Abstract:
An efficient implementation of the MPI_BCAST operation is crucial for many parallel scientific applications. The hardware multicast operation seems to be applicable to switch-based InfiniBand cluster systems. Several approaches have been implemented so far; however, no production-ready code has been available yet, which makes optimal algorithms a subject of active research. Some problems still need to be solved in order to bridge the semantic gap between the unreliable multicast and MPI_BCAST. The biggest of these is ensuring reliable data transmission in a scalable way. Acknowledgement-based methods that scale logarithmically with the number of participating MPI processes exist, but they do not meet the demanding requirements of high-performance computing. We propose a new algorithm that performs the MPI_BCAST operation in practically constant time, independent of the communicator size. This method is well suited for large communicators and especially small messages due to its good scaling and its ability to prevent parallel process skew. We implemented our algorithm as a collective component for the Open MPI framework using native InfiniBand multicast, and we show its scalability on a cluster with 116 compute nodes, where it reduces MPI_BCAST latency by up to 41% in comparison to the "TUNED" Open MPI collective.
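For readers less familiar with the operation the paper optimizes, the sketch below is a minimal MPI program that calls MPI_BCAST on a small message, the regime where the abstract says the multicast-based algorithm helps most. The buffer size, payload, and file name are illustrative assumptions, not taken from the paper; the paper's contribution is a faster implementation behind this same standard API, not a new interface.

```c
/* bcast_demo.c -- minimal MPI_BCAST usage sketch (illustrative only). */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Small message: with acknowledgement-based reliability, per-process
     * ACK traffic would dominate the latency here, which is why the
     * paper's constant-time multicast approach pays off most. */
    char buf[64] = {0};
    if (rank == 0)
        strcpy(buf, "payload from root");

    /* The root (rank 0) delivers buf to every process in the communicator.
     * Inside Open MPI this call dispatches to a collective component; the
     * paper implements such a component on native InfiniBand multicast. */
    MPI_Bcast(buf, (int)sizeof(buf), MPI_CHAR, 0, MPI_COMM_WORLD);

    printf("rank %d received: %s\n", rank, buf);

    MPI_Finalize();
    return 0;
}
```

Built and run with an MPI toolchain such as Open MPI, e.g. `mpicc bcast_demo.c -o bcast_demo` followed by `mpirun -n 4 ./bcast_demo`, each rank prints the broadcast payload.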