Improved Evaluation of Automatic Source Code Summarisation

Phillips, Jesse and Bowes, David and El-Haj, Mahmoud and Hall, Tracy (2022) Improved Evaluation of Automatic Source Code Summarisation. In: 2nd Workshop on Natural Language Generation, Evaluation, and Metrics: Proceedings of the Workshop. Association for Computational Linguistics (ACL Anthology), United Arab Emirates, pp. 326-335. ISBN 9781959429128

Abstract

Source code summaries are a vital tool for the understanding and maintenance of source code, as they explain code in simple terms. However, source code with missing, incorrect, or outdated summaries is common in production code. Automatic source code summarisation seeks to solve these issues by generating up-to-date summaries of source code methods. Recent work generates summaries with neural networks, commonly Sequence-to-Sequence or Transformer models pretrained on method-summary pairs. The most common way of evaluating the quality of these summaries is to compare the machine-generated summaries against human-written ones, using n-gram-based translation metrics such as BLEU, METEOR, or ROUGE-L. However, these metrics alone can be unreliable, and newer Natural Language Generation metrics based on large pretrained language models provide an alternative. In this paper, we propose improving the evaluation of a model by improving the preprocessing of the data used to train it, and by evaluating the model with a metric based on a language model pretrained on natural language (English) alongside traditional metrics. Our evaluation suggests that our model is improved by cleaning and preprocessing the data used in model training. Using a pretrained language model metric alongside traditional metrics shows that both produce results which can be used to evaluate neural source code summarisation.
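
The evaluation setup the abstract describes can be illustrated with a short sketch: one machine-generated summary is scored against a human-written reference with n-gram metrics (BLEU, ROUGE-L) alongside a pretrained-language-model metric. BERTScore is used below as a representative LM-based metric, and the two toy summaries are invented for illustration; neither is a detail taken from the paper.

    # Minimal sketch, assuming the nltk, rouge-score, and bert-score packages.
    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
    from rouge_score import rouge_scorer
    from bert_score import score as bert_score

    reference = "returns the maximum value in the given list"   # human-written
    candidate = "return the largest element of a list"          # model-generated

    # BLEU-4 over whitespace tokens; smoothing avoids zero scores on short texts.
    bleu = sentence_bleu([reference.split()], candidate.split(),
                         smoothing_function=SmoothingFunction().method4)

    # ROUGE-L measures the longest common subsequence between the two summaries.
    rouge = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    rouge_l = rouge.score(reference, candidate)["rougeL"].fmeasure

    # BERTScore compares contextual embeddings from a language model pretrained
    # on English, so it can credit paraphrases that share few exact n-grams.
    _, _, f1 = bert_score([candidate], [reference], lang="en")

    print(f"BLEU-4:    {bleu:.3f}")
    print(f"ROUGE-L:   {rouge_l:.3f}")
    print(f"BERTScore: {f1.item():.3f}")

On pairs like this, the n-gram metrics penalise the wording differences heavily while the LM-based metric credits the shared meaning, which is the contrast between the two families of metrics that the paper examines.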

Item Type:
Contribution in Book/Report/Proceedings
ID Code:
186158
Deposited On:
09 Feb 2023 09:25
Refereed?:
Yes
Published?:
Published
Last Modified:
16 Jul 2024 05:17