Borowiec, Damian and Yeung, Ging-Fung and Friday, Adrian and Harper, R.H.R. and Garraghan, Peter (2023) DOPpler: Parallel Measurement Infrastructure for Auto-tuning Deep Learning Tensor Programs. IEEE Transactions on Parallel and Distributed Systems, 34 (7). pp. 2208-2220. ISSN 1045-9219
Abstract
The heterogeneity of Deep Learning models, libraries, and hardware poses an important challenge for improving model inference performance. Auto-tuners address this challenge via automatic tensor program optimization towards a target-device. However, auto-tuners incur substantial time costs to complete because, to minimize latency measurement inaccuracy, their designs measure candidate tensor programs serially on an isolated target-device. In this article we propose DOPpler, a parallel auto-tuning measurement infrastructure. DOPpler achieves considerable auto-tuning speedup over conventional approaches whilst maintaining high-quality tensor program optimization. DOPpler accelerates the auto-tuning process with a parallel execution engine that executes candidate tensor programs concurrently across the CPU-host and GPU target-device, and overcomes measurement inaccuracy by introducing a high-precision on-device technique for measuring tensor program kernel latency. DOPpler is designed to automatically calculate the optimal degree of parallelism to provision fast and accurate auto-tuning for different tensor programs, auto-tuners and target-devices. Experiment results show that DOPpler reduces total auto-tuning time by 50.5% on average whilst achieving optimization gains equivalent to conventional auto-tuning infrastructure.
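To make the two core ideas concrete, the sketch below shows one conventional way to execute several candidate kernels in parallel on a GPU while timing each on-device: candidates are launched on separate CUDA streams, and CUDA events recorded on each stream bracket the kernel so the measured latency excludes host-side launch overhead. This is a minimal illustration only, not DOPpler's actual execution engine or measurement technique; the kernel `candidate_kernel` and the fixed candidate count `kCandidates` are hypothetical stand-ins for compiled tensor program candidates and the tuned degree of parallelism.

```c
// Illustrative sketch (not DOPpler's implementation): concurrent candidate
// execution on CUDA streams, with per-candidate on-device event timing.
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical stand-in for a compiled tensor program candidate.
__global__ void candidate_kernel(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 2.0f + 1.0f;
}

int main() {
    const int kCandidates = 4;   // assumed degree of parallelism
    const int n = 1 << 20;
    float *buf[kCandidates];
    cudaStream_t stream[kCandidates];
    cudaEvent_t start[kCandidates], stop[kCandidates];

    for (int c = 0; c < kCandidates; ++c) {
        cudaMalloc((void **)&buf[c], n * sizeof(float));
        cudaStreamCreate(&stream[c]);
        cudaEventCreate(&start[c]);
        cudaEventCreate(&stop[c]);
    }

    // Launch every candidate on its own stream; events recorded on the same
    // stream bracket the kernel on-device, so the elapsed time reflects GPU
    // execution rather than CPU-side dispatch latency.
    for (int c = 0; c < kCandidates; ++c) {
        cudaEventRecord(start[c], stream[c]);
        candidate_kernel<<<(n + 255) / 256, 256, 0, stream[c]>>>(buf[c], n);
        cudaEventRecord(stop[c], stream[c]);
    }

    for (int c = 0; c < kCandidates; ++c) {
        cudaEventSynchronize(stop[c]);
        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start[c], stop[c]);
        printf("candidate %d: %.3f ms\n", c, ms);
        cudaEventDestroy(start[c]);
        cudaEventDestroy(stop[c]);
        cudaStreamDestroy(stream[c]);
        cudaFree(buf[c]);
    }
    return 0;
}
```

Note that naive concurrent execution like this lets candidates contend for GPU resources and inflate each other's measured latencies, which is precisely the measurement-inaccuracy problem the paper's high-precision technique and automatic parallelism calculation are designed to overcome.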