AI-based Question Answering Assistance for Analyzing Natural-language Requirements

Ezzini, Saad and Abualhaija, Sallam and Arora, Chetan and Sabetzadeh, Mehrdad (2023) AI-based Question Answering Assistance for Analyzing Natural-language Requirements. In: 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE). Institute of Electrical and Electronics Engineers (IEEE), pp. 1277-1289. ISBN 9781665457026

Full text not available from this repository.

Abstract

By virtue of being prevalently written in natural language (NL), requirements are prone to various defects, e.g., inconsistency and incompleteness. As such, requirements are frequently subject to quality assurance processes. These processes, when carried out entirely manually, are tedious and may further overlook important quality issues due to time and budget pressures. In this paper, we propose QAssist, a question-answering (QA) approach that provides automated assistance to stakeholders, including requirements engineers, during the analysis of NL requirements. Posing a question and getting an instant answer is beneficial in various quality-assurance scenarios, e.g., incompleteness detection. Answering requirements-related questions automatically is challenging since the scope of the search for answers can go beyond the given requirements specification. To that end, QAssist provides support for mining external domain-knowledge resources. Our work is one of the first initiatives to bring together QA and external domain knowledge for addressing requirements engineering challenges. We evaluate QAssist on a dataset covering three application domains and containing a total of 387 question-answer pairs. We experiment with state-of-the-art QA methods, based primarily on recent large-scale language models. In our empirical study, QAssist localizes the answer to a question to three passages within the requirements specification and within the external domain-knowledge resource with an average recall of 90.1% and 96.5%, respectively. QAssist extracts the actual answer to the posed question with an average accuracy of 84.2%.
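The first stage the abstract describes, localizing a question to a handful of candidate passages before an answer is extracted, can be illustrated with a minimal bag-of-words retriever. This is a hypothetical sketch for intuition only: it ranks passages by IDF-weighted term overlap, whereas QAssist itself relies on large-scale language models; all identifiers below (`tokenize`, `retrieve`, the sample requirements) are our own illustrative assumptions, not part of the paper's artifacts.

```python
import math
from collections import Counter

def tokenize(text):
    """Lowercase, strip basic punctuation, split on whitespace."""
    return [w.strip(".,?!:;()").lower() for w in text.split()]

def retrieve(question, passages, k=3):
    """Rank passages by IDF-weighted term overlap with the question
    and return the indices of the top-k passages, best first."""
    docs = [Counter(tokenize(p)) for p in passages]
    n = len(passages)
    # Document frequency of each term across the passage collection.
    df = Counter()
    for doc in docs:
        df.update(doc.keys())
    # Smoothed inverse document frequency: rarer terms weigh more.
    idf = {t: math.log((n + 1) / (f + 1)) + 1 for t, f in df.items()}
    q_terms = Counter(tokenize(question))
    scored = []
    for i, doc in enumerate(docs):
        score = sum(q_terms[t] * doc[t] * idf.get(t, 0) for t in q_terms)
        scored.append((score, i))
    scored.sort(key=lambda s: (-s[0], s[1]))
    return [i for _, i in scored[:k]]

# Illustrative (invented) requirements specification fragments:
passages = [
    "The system shall encrypt all stored patient records.",
    "The user interface shall support English and French.",
    "Patient records shall be retained for seven years.",
]
print(retrieve("How long are patient records retained?", passages, k=2))  # → [2, 0]
```

In a full pipeline, an extractive reader model would then pick the answer span (here, "seven years") out of the top-ranked passage, corresponding to the answer-extraction accuracy the abstract reports.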

Item Type:
Contribution in Book/Report/Proceedings
ID Code:
210065
Deposited By:
Deposited On:
07 Dec 2023 11:35
Refereed?:
Yes
Published?:
Published
Last Modified:
31 Jan 2024 00:46