Debeyan, Fahad Al and Hall, Tracy and Madeyski, Lech (2025) Emerging Results in Using Explainable AI to Improve Software Vulnerability Prediction. In: FSE Companion '25: Proceedings of the 33rd ACM International Conference on the Foundations of Software Engineering. ACM, New York, pp. 561-565. ISBN 9798400712760
Full text not available from this repository.

Abstract
Explainable Artificial Intelligence (XAI) has recently been applied to vulnerability prediction models to understand the decisions those models make and to improve their transparency. We are the first to leverage XAI explanations to improve vulnerability prediction performance. The performance of vulnerability prediction models relies on the quality of the vulnerability dataset and the machine learning model. We use XAI information to identify biases in vulnerability prediction datasets and limitations in deep learning-based prediction models. Our XAI analysis uses a state-of-the-art deep learning-based vulnerability prediction model (LineVul) and an explainability algorithm (Layered Integrated Gradients) to generate XAI information. The generated XAI information improved our understanding of how our models worked and allowed us to identify important improvement opportunities. Consequently, we present some surprising findings: while LineVul accurately predicted vulnerable functions, the XAI data revealed that in 43% of cases those predictions were based on dataset biases rather than on actual vulnerable lines. By systematically removing these dataset biases, we achieved a notable performance improvement, increasing LineVul's F-measure from 92% to 96%. Additionally, the insight gained from XAI allowed us to identify a fundamental limitation in LineVul's reliance on CodeBERT, a pre-trained language model limited to 512 tokens. By integrating LongCoder, a pre-trained model capable of processing longer sequences, we increased the F-measure from 92% to 94% and MCC from 91% to 94%, highlighting the potential for improved handling of complex, long-sequence vulnerabilities. We conclude that XAI has important additional applications that go beyond providing users with information describing the basis of predictions.