Galaxy merger classification using CNNs trained on Sérsic models, residuals, and raw images

Galaxy mergers play a key role in shaping galaxy evolution, and with the advent of large-scale astronomical surveys, automated techniques such as convolutional neural networks (CNNs) are becoming essential for identifying these events efficiently. CNNs are believed to detect mergers by recognizing deviations from typical galaxy morphologies, particularly the faint features that signal recent interactions. In this work, we explore the relative importance of spatial morphological structure versus faint residual features in CNN-based merger classification. Using mock observations from the IllustrisTNG simulations tailored to resemble Hyper Suprime-Cam (HSC) data, we conduct a controlled experiment with three image sets. We generate model-subtracted residual images by fitting Sérsic profiles to each galaxy, and then train three CNNs with identical architectures: one on the original images, one on the model images, and one on the residual images. Our findings indicate that merger classification can be achieved using either morphological or residual information alone: the CNNs trained on original, model, and residual images achieve classification accuracies of 74%, 70%, and 68%, respectively. These results suggest that spatial and residual features each carry distinct, complementary information for merger identification. This study highlights the potential of decomposed image components to improve our understanding of how deep learning models identify complex astrophysical phenomena, and it paves the way for more interpretable and targeted approaches in future galaxy classification efforts.
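To make the model-subtraction step concrete, the sketch below evaluates a 2D Sérsic profile with NumPy and subtracts it from a mock image to expose a faint residual feature. This is a minimal illustration, not the paper's pipeline: all parameter values, the added "blob" standing in for a tidal feature, and the b_n ≈ 2n − 1/3 approximation are our assumptions for demonstration purposes.

```python
import numpy as np

def sersic_2d(shape, amplitude, r_eff, n, x0, y0, ellip=0.0, theta=0.0):
    """Evaluate a 2D Sérsic surface-brightness profile on a pixel grid.

    I(R) = amplitude * exp(-b_n * ((R / r_eff)**(1/n) - 1)),
    with b_n approximated as 2n - 1/3 (adequate for n >~ 0.5).
    """
    y, x = np.indices(shape)
    # Rotate and stretch coordinates to get elliptical isophotes.
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    x_maj = (x - x0) * cos_t + (y - y0) * sin_t
    x_min = -(x - x0) * sin_t + (y - y0) * cos_t
    r = np.sqrt(x_maj**2 + (x_min / (1.0 - ellip))**2)
    b_n = 2.0 * n - 1.0 / 3.0  # approximation to the exact b_n
    return amplitude * np.exp(-b_n * ((r / r_eff)**(1.0 / n) - 1.0))

# Mock "observed" galaxy: a smooth Sérsic host plus a faint off-centre
# component standing in for a merger-induced feature (illustrative only).
shape = (64, 64)
host = sersic_2d(shape, amplitude=100.0, r_eff=8.0, n=2.5, x0=32, y0=32)
blob = sersic_2d(shape, amplitude=5.0, r_eff=3.0, n=1.0, x0=45, y0=40)
image = host + blob

# Model-subtracted residual: the smooth component is removed, so the
# faint feature dominates what remains.
residual = image - host
```

In the actual experiment the best-fit model would come from fitting the Sérsic parameters to each observed galaxy rather than being known in advance, but the residual image is formed in the same way: observed image minus fitted model.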