Auto-tuning Streamed Applications on Intel Xeon Phi

Zhang, Peng and Fang, Jianbin and Tang, Tao and Yang, Canqun and Wang, Zheng (2018) Auto-tuning Streamed Applications on Intel Xeon Phi. In: 2018 IEEE International Parallel and Distributed Processing Symposium (IPDPS). IEEE, pp. 515-525. ISBN 9781538643686

ipdps18.pdf - Submitted Version. Available under License Creative Commons Attribution. Download (5MB)
bare_conf.pdf - Accepted Version. Available under License Creative Commons Attribution. Download (5MB)

Abstract

Many-core accelerators, such as the Intel Xeon Phi coprocessor and GPGPUs, allow software to exploit spatial and temporal sharing of computing resources to improve overall system performance. Unlocking this performance potential requires software to effectively partition the hardware resources to maximize the overlap between host-device communication and accelerator computation, and to match the granularity of task parallelism to the resource partition. However, determining the right resource partition and task parallelism on a per-program, per-dataset basis is challenging, because the number of possible solutions is huge; choosing the right solution can yield large benefits, but mistakes can seriously hurt performance. In this paper, we present an automatic approach to determine the hardware resource partition and the task granularity for any given streamed application, targeting the Intel Xeon Phi architecture. Instead of hand-crafting a heuristic, a process that would have to be repeated for each hardware generation, we employ machine learning techniques to learn one automatically. We achieve this by first learning a predictive model offline using training programs; we then use the learned model to predict the resource partition and task granularity for any unseen program at runtime. We apply our approach to 23 representative parallel applications and evaluate it on a heterogeneous CPU-Xeon Phi many-core platform. Our approach achieves, on average, a 1.6x (up to 5.6x) speedup, which translates to 94.5% of the performance delivered by a theoretically perfect predictor.
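The two-phase workflow the abstract describes — learn a predictive model offline from training programs, then query it at runtime for an unseen program — can be sketched as follows. This is an illustrative sketch only: the feature names, the training data, and the nearest-neighbour learner are assumptions, not the model the paper actually uses.

```python
# Hedged sketch of the offline-training / runtime-prediction workflow.
# Features and configurations here are hypothetical stand-ins.

def train(training_programs):
    # Offline phase: record each training program's feature vector together
    # with the best-performing (resource partition, task granularity) pair
    # found by profiling that program.
    return [(feats, best_cfg) for feats, best_cfg in training_programs]

def predict(model, feats):
    # Runtime phase: return the configuration of the closest known program
    # (simple 1-nearest-neighbour lookup as a placeholder learner).
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, cfg = min(model, key=lambda entry: dist(entry[0], feats))
    return cfg

# Hypothetical training data:
# (program features, (threads assigned to the Phi, task granularity))
training = [
    ((0.2, 10.0), (60, 4)),
    ((0.8, 2.5), (240, 16)),
]
model = train(training)
print(predict(model, (0.7, 3.0)))  # closest to the second training point
```

The key design point the abstract emphasises is that only the offline training is expensive; the runtime query is a cheap model lookup, so the approach ports to a new hardware generation by retraining rather than by re-deriving a hand-crafted heuristic.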

Item Type:
Contribution in Book/Report/Proceedings
Additional Information:
©2018 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes, for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
ID Code:
89799
Deposited By:
Deposited On:
22 Jan 2018 16:56
Refereed?:
Yes
Published?:
Published
Last Modified:
19 Sep 2020 07:05