Highly-parallelized simulation of a pixelated LArTPC on a GPU

Blake, A. and Brailsford, D. and Cross, R. and Mouster, G. and Nowak, J. A. and Ratoff, P. (2023) Highly-parallelized simulation of a pixelated LArTPC on a GPU. Journal of Instrumentation, 18: P04034. ISSN 1748-0221

Text: 2212.09807.pdf - Accepted Version
Available under License Creative Commons Attribution.

Download (15MB)

Abstract

The rapid development of general-purpose computing on graphics processing units (GPGPU) is enabling the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on $10^3$ pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
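
The abstract does not include code, but the workflow it describes, writing the algorithms in plain Python/NumPy and letting Numba compile them into CUDA kernels so that each pixel and time sample is handled by an independent GPU thread, can be illustrated with a minimal sketch. The kernel name, the toy Gaussian current model, and the array shapes below are illustrative assumptions only, not the paper's actual implementation.

    # Minimal sketch (assumed, not the authors' code) of a Numba CUDA kernel that
    # fills a (pixel, time-sample) grid of induced currents, one thread per entry.
    import math
    import numpy as np
    from numba import cuda

    @cuda.jit
    def induced_current_kernel(seg_charge, seg_drift_time, currents, dt, sigma):
        # Each thread computes one (pixel, time-sample) element of the current map.
        ipix, it = cuda.grid(2)
        if ipix < currents.shape[0] and it < currents.shape[1]:
            t = it * dt
            total = 0.0
            for iseg in range(seg_charge.shape[0]):
                # Hypothetical Gaussian-shaped current centred on the segment's
                # arrival time at this pixel (a stand-in for the real response model).
                t_diff = t - seg_drift_time[iseg, ipix]
                total += seg_charge[iseg] * math.exp(-t_diff * t_diff / (2.0 * sigma * sigma))
            currents[ipix, it] = total

    # Example launch on ~10^3 pixels, matching the scale quoted in the abstract.
    n_pixels, n_samples, n_segments = 1024, 256, 128
    rng = np.random.default_rng(0)
    d_charge = cuda.to_device(rng.random(n_segments).astype(np.float32))
    d_times = cuda.to_device(rng.random((n_segments, n_pixels)).astype(np.float32))
    d_currents = cuda.device_array((n_pixels, n_samples), dtype=np.float32)

    threads = (16, 16)
    blocks = (math.ceil(n_pixels / threads[0]), math.ceil(n_samples / threads[1]))
    induced_current_kernel[blocks, threads](d_charge, d_times, d_currents,
                                            np.float32(0.01), np.float32(0.05))
    currents = d_currents.copy_to_host()

Because every (pixel, time-sample) pair is independent, the work maps naturally onto a 2D CUDA grid, which is the kind of parallelism that yields the four-orders-of-magnitude speed-up reported in the abstract.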

Item Type:
Journal Article
Journal or Publication Title:
Journal of Instrumentation
Uncontrolled Keywords:
/dk/atira/pure/subjectarea/asjc/3100/3105
Subjects:
physics.comp-ph; physics.ins-det; Instrumentation; Mathematical Physics
ID Code:
192212
Deposited On:
28 Apr 2023 10:45
Refereed?:
Yes
Published?:
Published
Last Modified:
09 Apr 2024 00:18