Transfer learning for galaxy feature detection: Finding giant star-forming clumps in low-redshift galaxies using Faster Region-based Convolutional Neural Network

Popp, Jürgen J and Dickinson, Hugh and Serjeant, Stephen and Walmsley, Mike and Adams, Dominic and Fortson, Lucy and Mantha, Kameswara and Mehta, Vihang and Dawson, James M and Kruk, Sandor and Simmons, Brooke (2024) Transfer learning for galaxy feature detection: Finding giant star-forming clumps in low-redshift galaxies using Faster Region-based Convolutional Neural Network. RAS Techniques and Instruments, 3 (1). pp. 174-197. ISSN 2752-8200

Full text not available from this repository.

Abstract

Giant star-forming clumps (GSFCs) are areas of intensive star formation that are commonly observed in high-redshift (z ≳ 1) galaxies, but their formation and role in galaxy evolution remain unclear. Observations of low-redshift clumpy galaxy analogues are rare, but the availability of wide-field galaxy survey data makes the detection of large clumpy galaxy samples much more feasible. Deep Learning (DL), and in particular Convolutional Neural Networks (CNNs), have been successfully applied to image classification tasks in astrophysical data analysis. However, one application of DL that remains relatively unexplored is that of automatically identifying and localizing specific objects or features in astrophysical imaging data. In this paper, we demonstrate the use of DL-based object detection models to localize GSFCs in astrophysical imaging data. We apply the Faster Region-based Convolutional Neural Network object detection framework (FRCNN) to identify GSFCs in low-redshift (z ≲ 0.3) galaxies. Unlike other studies, we train different FRCNN models on observational data that was collected by the Sloan Digital Sky Survey and labelled by volunteers from the citizen science project ‘Galaxy Zoo: Clump Scout’. The FRCNN model relies on a CNN component as a ‘backbone’ feature extractor. We show that CNNs that have been pre-trained for image classification using astrophysical images outperform those that have been pre-trained on terrestrial images. In particular, we compare a domain-specific CNN – ‘Zoobot’ – with a generic classification backbone and find that Zoobot achieves higher detection performance. Our final model is capable of producing GSFC detections with a completeness and purity of ≥0.8 while only being trained on ∼5000 galaxy images.
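The abstract describes assembling a Faster R-CNN detector around a swappable pre-trained CNN backbone. The sketch below, using torchvision's FasterRCNN, is only an illustration of that pattern, not the authors' code: the backbone choice, anchor sizes, and two-class (background + clump) setup are assumptions, and loading astronomy-specific Zoobot weights is not shown.

    import torchvision
    from torchvision.models.detection import FasterRCNN
    from torchvision.models.detection.anchor_utils import AnchorGenerator
    from torchvision.ops import MultiScaleRoIAlign

    # Backbone pre-trained on terrestrial images (ImageNet). A backbone carrying
    # astronomy-specific weights (e.g. from Zoobot) could be substituted here;
    # that substitution is the transfer-learning comparison the paper makes.
    backbone = torchvision.models.mobilenet_v2(weights="DEFAULT").features
    backbone.out_channels = 1280  # FasterRCNN needs the feature-map depth

    # Anchor sizes/aspect ratios are illustrative, not the paper's settings.
    anchor_generator = AnchorGenerator(
        sizes=((8, 16, 32, 64, 128),),
        aspect_ratios=((0.5, 1.0, 2.0),),
    )

    roi_pooler = MultiScaleRoIAlign(
        featmap_names=["0"], output_size=7, sampling_ratio=2
    )

    # Two classes: background and 'clump'.
    model = FasterRCNN(
        backbone,
        num_classes=2,
        rpn_anchor_generator=anchor_generator,
        box_roi_pool=roi_pooler,
    )
    model.eval()

In this setup the detection head is trained (or fine-tuned) on the labelled galaxy images, while the backbone supplies features learned during pre-training; only the source of those pre-trained weights changes between the compared models.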

Item Type:
Journal Article
Journal or Publication Title:
RAS Techniques and Instruments
Subjects:
object detection; deep learning; machine learning; transfer learning; galaxies: structure; data methods
ID Code:
218403
Deposited By:
Deposited On:
18 Apr 2024 10:25
Refereed?:
Yes
Published?:
Published
Last Modified:
19 Apr 2024 02:47