Phillips, Jesse and El-Haj, Mo and Hall, Tracy (2024) Metric-Oriented Pretraining of Neural Source Code Summarisation Transformers to Enable more Secure Software Development. In: The First International Conference on Natural Language Processing and Artificial Intelligence for Cyber Security, 2024-07-29 - 2024-07-30, Lancaster University.
Abstract
Source code summaries give developers and maintainers vital information about source code methods. These summaries aid the security of software systems: they improve developer and maintainer understanding of code, with the aim of reducing the number of bugs and vulnerabilities. However, writing these summaries takes up developers' time, and the summaries are often missing, incomplete, or outdated. Neural source code summarisation addresses these issues by summarising source code automatically; current solutions use Transformer neural networks to do so. We present CodeSumBART, a BART-base model for neural source code summarisation, pretrained on a dataset of Java source code methods and English method summaries, together with a new approach to training Transformers for neural source code summarisation. We found that, in our approach, using higher-order n-gram precision BLEU metrics for epoch validation, such as BLEU-4, produces better-performing models than other common NLG metrics.
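The idea of validating each training epoch on BLEU-4 can be sketched as follows. This is a minimal, pure-Python illustration of corpus-level BLEU with equal weights over 1- to 4-gram precisions and a brevity penalty; the function names and example summaries are our own, not drawn from CodeSumBART or its dataset.

```python
import math
from collections import Counter

def _ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(references, candidates, max_n=4):
    """Corpus-level BLEU with equal n-gram weights (BLEU-4 by default).

    references/candidates: parallel lists of whitespace-tokenised summaries.
    A hedged sketch for epoch validation, not the paper's exact scorer.
    """
    clipped = [0] * max_n   # clipped n-gram matches, per order
    totals = [0] * max_n    # candidate n-gram counts, per order
    cand_len = ref_len = 0
    for ref, cand in zip(references, candidates):
        r, c = ref.split(), cand.split()
        ref_len += len(r)
        cand_len += len(c)
        for n in range(1, max_n + 1):
            cand_ngrams = _ngrams(c, n)
            ref_ngrams = _ngrams(r, n)
            totals[n - 1] += sum(cand_ngrams.values())
            # clip each candidate n-gram count at its reference count
            clipped[n - 1] += sum(min(count, ref_ngrams[g])
                                  for g, count in cand_ngrams.items())
    if min(totals) == 0 or min(clipped) == 0:
        return 0.0
    log_precision = sum(math.log(m / t)
                        for m, t in zip(clipped, totals)) / max_n
    brevity = min(1.0, math.exp(1 - ref_len / cand_len))
    return brevity * math.exp(log_precision)

def best_epoch(epoch_outputs, references):
    """Index of the epoch whose generated summaries score highest on BLEU-4."""
    scores = [corpus_bleu(references, outs) for outs in epoch_outputs]
    return max(range(len(scores)), key=scores.__getitem__)
```

Validating on BLEU-4 rather than, say, unigram BLEU rewards summaries that reproduce longer reference phrases, which is the intuition behind preferring larger n-gram precision metrics for epoch selection.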