The documents distributed by this server have been provided by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.
Publications of SPCL
M. Ritter, A. Calotoiu, T. Reimann, T. Hoefler, F. Wolf:
Performance Modeling at a Discount
(presented in New Orleans, LA, USA, IEEE, May 2020; accepted at the 34th IEEE International Parallel & Distributed Processing Symposium (IPDPS'20))
Abstract: Performance models are vital to the identification of scalability bottlenecks in parallel applications. They describe key performance metrics such as the execution time as a function of one or more parameters, such as the number of processes or the input size. Whereas analytical models must be laboriously derived from the source code by reasoning, their empirical siblings can be quickly learned from a set of performance experiments. Obviously, both the quality and the cost of empirical models depend on the design of the underlying experiments. Extra-P, a state-of-the-art modeling tool, requires experiments that represent all combinations of all parameter values. Hence, the number of samples it needs grows exponentially with the number of model parameters. In some situations, this makes empirical performance models impractical to create. In this paper, we propose a novel parameter value selection heuristic, which we adopt from a reinforcement learning agent, that needs only a polynomial number of samples and allows a more flexible experiment design. Using synthetic analysis and data from three different case studies, we show that our solution reduces the average modeling costs by up to 98% while retaining 99% of the model accuracy.
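To make the cost argument in the abstract concrete, the sketch below contrasts a full-factorial experiment design (all combinations of all parameter values, as required by classic Extra-P) with a simple one-parameter-at-a-time design whose sample count grows only linearly per parameter. The sparse design here is an illustrative stand-in, not the paper's actual heuristic, which is learned from a reinforcement learning agent; the parameter names and values are hypothetical.

```python
from itertools import product

def full_factorial(param_values):
    """All combinations of all parameter values (the classic full design)."""
    return list(product(*param_values))

def sparse_design(param_values):
    """Illustrative sparse design: vary one parameter at a time while
    holding the others at their smallest value. This is NOT the paper's
    heuristic, only a simple example of a polynomial-size design."""
    base = tuple(values[0] for values in param_values)
    points = {base}
    for i, values in enumerate(param_values):
        for v in values:
            point = list(base)
            point[i] = v
            points.add(tuple(point))
    return sorted(points)

# Three hypothetical model parameters (e.g., processes, input size,
# iterations), five candidate values each.
params = [[2, 4, 8, 16, 32], [10, 20, 30, 40, 50], [1, 2, 3, 4, 5]]
print(len(full_factorial(params)))  # 125 = 5^3, exponential in #parameters
print(len(sparse_design(params)))   # 13 = 1 + 3*(5-1), linear per parameter
```

With five values per parameter, the full design already needs 125 experiments for three parameters and 3,125 for five, while a line-based design stays in the tens; this gap is what motivates the sample-efficient selection heuristic the paper proposes.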
Access preprint on arXiv: