The documents distributed by this server have been provided by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.
Publications of SPCL
Towards Remote Memory Access Programming for Data Analytics
(Presentation - presented in Berkeley, CA, USA, Jun. 2015)
Abstract: Remote memory access (RMA) or partitioned global address space programming offers abstractions to coordinate directly accessible distributed memory domains. We give a brief introduction to MPI-3's RMA and discuss our reference implementation for Cray machines. We then show results with up to half a million processes. We continue by addressing producer-consumer synchronizations in task-based runtime environments and our new proposal for notified access. Remote memory access enables efficient implementations of various algorithms. We then investigate shared and distributed memory graph computations using our Atomic Active Message (AAM) abstraction accelerated by hardware transactional memory. We illustrate techniques such as coarsening and coalescing that enable hardware transactions to achieve considerable speedups in graph processing. We conduct a detailed performance analysis of AAM on Intel Haswell and IBM Blue Gene/Q, and we illustrate various performance tradeoffs between HTM parameters that impact the efficiency of graph processing. AAM can be used to implement abstractions offered by existing programming models and to improve the performance of graph analytics codes such as Graph500 or Galois. Overall, we advocate RMA as a potential programming model for scalable systems ranging from single-die multicores to large-scale supercomputers.
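To give a flavor of the RMA style the abstract refers to, the following is a minimal sketch of MPI-3 one-sided communication: one rank writes directly into a remote window without a matching receive. This is illustrative only, not the reference implementation discussed in the talk; it assumes at least two ranks and is compiled with an MPI wrapper such as mpicc and launched with mpirun.

```c
/* Minimal MPI-3 RMA sketch: rank 0 puts a value directly into
 * rank 1's memory window. Hypothetical example, not the talk's code. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *buf;
    MPI_Win win;
    /* Expose one integer per process as a directly accessible window. */
    MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &buf, &win);
    *buf = -1;

    MPI_Win_fence(0, win);              /* open an access epoch */
    if (rank == 0) {
        int value = 42;
        /* One-sided write into rank 1's window; no recv is posted. */
        MPI_Put(&value, 1, MPI_INT, 1 /* target rank */,
                0 /* target displacement */, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);              /* close the epoch; put is complete */

    if (rank == 1)
        printf("rank 1 received %d\n", *buf);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

The fence-delimited epoch is the simplest MPI-3 synchronization mode; the notified-access proposal mentioned in the abstract targets exactly the producer-consumer case where such collective fences are too coarse.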