UNSPECIFIED (2023) Euclid preparation: XXIII. Derivation of galaxy physical properties with deep machine learning using mock fluxes and H-band images. Monthly Notices of the Royal Astronomical Society, 520 (3). pp. 3529-3548. ISSN 0035-8711
Bisigello_2023_draft.pdf - Accepted Version
Available under a Creative Commons Attribution License.
Abstract
Next-generation telescopes, like Euclid, Rubin/LSST, and Roman, will open new windows on the Universe, allowing us to infer physical properties for tens of millions of galaxies. Machine-learning methods are increasingly becoming the most efficient tools to handle this enormous amount of data, because they are often faster and more accurate than traditional methods. We investigate how well redshifts, stellar masses, and star-formation rates (SFRs) can be measured with deep-learning algorithms for galaxies in mock data mimicking the Euclid and Rubin/LSST surveys. We find that deep-learning neural networks and convolutional neural networks (CNNs), which are dependent on the parameter space of the training sample, perform well in measuring the properties of these galaxies and are more accurate than methods based on spectral energy distribution fitting. CNNs allow the processing of multiband magnitudes together with $H_{\rm E}$-band images. We find that the estimates of stellar mass improve with the use of an image, but those of redshift and SFR do not. Our best results are deriving (i) the redshift within a normalized error of $<0.15$ for 99.9 per cent of the galaxies with a signal-to-noise ratio $>3$ in the $H_{\rm E}$ band; (ii) the stellar mass within a factor of two ($\sim$0.3 dex) for 99.5 per cent of the considered galaxies; and (iii) the SFR within a factor of two ($\sim$0.3 dex) for $\sim$70 per cent of the sample. We discuss the implications of our work for application to surveys, as well as how measurements of these galaxy parameters can be improved with deep learning.
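To make the image-plus-photometry setup concrete, the sketch below shows a minimal two-branch network in PyTorch: a small CNN encodes an $H_{\rm E}$-band cutout, a dense branch encodes the vector of multiband magnitudes, and a shared head regresses redshift, log stellar mass, and log SFR. This is an illustrative assumption only, not the architecture used in the paper; the class name, layer sizes, band count, and cutout size (HybridPhotoNet, n_bands, image_size) are hypothetical.

```python
# Minimal sketch (not the authors' architecture): a two-branch network that
# combines an H_E-band image with multiband magnitudes to regress
# redshift, log stellar mass, and log SFR. All sizes are illustrative.
import torch
import torch.nn as nn

class HybridPhotoNet(nn.Module):
    def __init__(self, n_bands: int = 8, image_size: int = 64):
        super().__init__()
        # CNN branch: encodes the single-band H_E cutout.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
        )
        cnn_out = 32 * (image_size // 4) ** 2
        # Dense branch: encodes the vector of multiband magnitudes.
        self.mlp = nn.Sequential(
            nn.Linear(n_bands, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        # Head: fuses both branches and predicts (z, log M*, log SFR).
        self.head = nn.Sequential(
            nn.Linear(cnn_out + 64, 128), nn.ReLU(),
            nn.Linear(128, 3),
        )

    def forward(self, image: torch.Tensor, mags: torch.Tensor) -> torch.Tensor:
        return self.head(torch.cat([self.cnn(image), self.mlp(mags)], dim=1))

# Example forward pass with random placeholder inputs.
model = HybridPhotoNet()
image = torch.randn(4, 1, 64, 64)   # H_E-band cutouts
mags = torch.randn(4, 8)            # multiband magnitudes
predictions = model(image, mags)    # shape (4, 3): z, log M*, log SFR

# Example evaluation in the spirit of the abstract (sketch): the fraction of
# galaxies whose normalized redshift error |z_pred - z_true| / (1 + z_true)
# falls below 0.15, computed here on random placeholder values.
z_true = torch.rand(4) * 3.0
z_pred = predictions[:, 0]
frac_ok = ((z_pred - z_true).abs() / (1.0 + z_true) < 0.15).float().mean()
```

In this kind of setup, the stellar-mass and SFR predictions would be checked against the factor-of-two ($\sim$0.3 dex) thresholds quoted in the abstract on a held-out test sample; the snippet above only illustrates the redshift criterion.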