CoMPI Project Page at UIUC (Co-PI Torsten Hoefler)

Description


This is the page for the ASCR DOE X-Stack software research project "Compiled MPI" (short: CoMPI) at the University of Illinois at Urbana-Champaign (UIUC), led by Torsten Hoefler. It is a joint project with Lawrence Livermore National Laboratory (LLNL, Co-PIs Dan Quinlan and Greg Bronevetsky) and Indiana University (IU, Co-PI Andrew Lumsdaine).


UIUC and IU are responsible for runtime optimization and integration, while LLNL handles the compiler infrastructure, based on ROSE, and the program transformations.


Grad Student RAs Needed

We are actively looking for graduate student RAs at UIUC for the following two sub-projects: (1) datatype optimizations and (2) communication optimizations. Short descriptions follow below; please contact Torsten Hoefler if you have questions or are interested in working on either project.

Communication Optimization

This project deals with the static and dynamic optimization of communication schedules. A communication schedule is a set of communication operations together with dependencies that define the order of their execution. Such operations and dependencies form a global communication graph. The goal of this project is to optimize the communication graph under a given performance model (e.g., LogGP) and to compare the quality of the resulting solutions.


For example, a broadcast communication from node 0 to nodes 1..3 can be expressed as the set {(0,1), (0,2), (0,3)} (where a tuple (x,y) represents communication from x to y) or as {(0,1), (1,3), (0,2)} in a tree-like shape. Using a broadcast tree is more efficient in this trivial example. The project aims to develop model-based optimization techniques for communication operations represented in the tuple form above. The main work is to develop, and prove the optimality of, algorithms working on this tuple form using well-known communication models such as LogGP.

Reaching optimality is generally very hard. We plan to follow three avenues: (1) analytical algorithms and proofs, (2) well-known optimization methods (linear or integer optimization), and (3) heuristics and learning-based methods. The results should be implemented in an MPI-like library.


The student working on this project should know what MPI is, be familiar with the C and C++ programming languages, and be very familiar with linear optimization, (mixed) integer programming, and basic network models. The student should understand well the papers by Alexandrov et al., "LogGP: Incorporating Long Messages into the LogP Model - One Step Closer Towards a Realistic Model for Parallel Computation", and Bruck et al., "Efficient Algorithms for All-to-All Communications in Multi-Port Message-Passing Systems".


For previous work in this area, see reference [3].


MPI Shared Memory Optimization

The goal is to optimize MPI implementations for shared-memory supercomputers. The student working on this project should know MPI and be very familiar with the C and C++ programming languages and with computer architecture. If you are a student at UIUC and are interested in this project, please contact Torsten Hoefler for more information. For previous work in this area, see references [1,2].

References

[1] W. Gropp, T. Hoefler, R. Thakur and J. L. Traeff: Performance Expectations and Guidelines for MPI Derived Datatypes. In Recent Advances in the Message Passing Interface (EuroMPI'11), Santorini, Greece, LNCS Vol. 6960, pages 150-159, Springer, ISBN: 978-3-642-24448-3, Sep. 2011.
[2] T. Hoefler and S. Gottlieb: Parallel Zero-Copy Algorithms for Fast Fourier Transform and Conjugate Gradient using MPI Datatypes. In Recent Advances in the Message Passing Interface (EuroMPI'10), Stuttgart, Germany, LNCS Vol. 6305, pages 132-141, Springer, ISSN: 0302-9743, ISBN: 978-3-642-15645-8, Sep. 2010.
[3] T. Hoefler, C. Siebert and A. Lumsdaine: Group Operation Assembly Language - A Flexible Way to Express Collective Communication. In ICPP-2009 - The 38th International Conference on Parallel Processing, Vienna, Austria, IEEE, ISBN: 978-0-7695-3802-0, Sep. 2009. (acceptance rate 32%, 71/220)