How Effectively Is Defective Code Actually Tested? An Analysis of JUnit Tests in Seven Open Source Systems

Petric, Jean and Hall, Tracy and Bowes, David (2018) How Effectively Is Defective Code Actually Tested? An Analysis of JUnit Tests in Seven Open Source Systems. In: Proceedings of the 14th International Conference on Predictive Models and Data Analytics in Software Engineering. PROMISE'18. ACM, New York, NY, USA, pp. 42-51. ISBN 9781450365932

PDF: testing_effectiveness_paper.pdf - Accepted Version
Available under License Creative Commons Attribution-NonCommercial.

Download (716kB)

Abstract

Background: Newspaper headlines still regularly report latent software defects, defects that have often evaded testing for many years. It remains difficult to identify how well a system has been tested, and difficult to assess how successful particular tests are at finding defects. Coverage and mutation testing are frequently used to assess test effectiveness. We look more deeply at the performance of commonly used JUnit testing by assessing how much JUnit testing was done and how effective that testing was at detecting defects in seven open source systems.

Aim: We aim to identify whether defective code is as effectively tested by JUnit tests as non-defective code. We also aim to identify the characteristics of JUnit tests that are related to identifying defects.

Methodology: We first extract the defects from seven open source projects using the SZZ algorithm. We match those defects with JUnit tests to identify the proportion of defects that were covered by JUnit tests, and do the same for non-defective code. We then use Principal Component Analysis and machine learning to investigate the characteristics of JUnit tests that were successful in identifying defects.

Results: Our findings suggest that most of the open source systems we investigated are under-tested. On average, over 66% of defective methods were not linked to any JUnit test. We show that the number of methods touched by a JUnit test is strongly related to that test uncovering a defect.

Conclusion: More JUnit tests need to be produced for the seven open source systems we investigated. JUnit tests also need to be relatively sophisticated; in particular, they should touch more than one method during a test.
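To make the matching step concrete, the following is a minimal sketch (in Python, and not the authors' actual tooling) of how defective methods could be linked to JUnit tests. The function link_defects_to_tests, the toy method names, and the coverage map are hypothetical illustrations; in practice the defective methods would come from an SZZ-style analysis of fixing commits, and the test-to-method map from coverage traces.

from collections import defaultdict

def link_defects_to_tests(defective_methods, test_to_methods):
    """Return (untested_fraction, methods_touched_per_test).

    defective_methods: set of fully qualified method names flagged as defective
                       (assumed to come from an SZZ-style analysis, not shown).
    test_to_methods:   dict mapping a JUnit test name to the set of production
                       methods it executes (assumed to come from coverage data).
    """
    # Invert the coverage map: for each method, which tests touch it?
    method_to_tests = defaultdict(set)
    for test, methods in test_to_methods.items():
        for m in methods:
            method_to_tests[m].add(test)

    # Defective methods not linked to any JUnit test.
    untested = [m for m in defective_methods if not method_to_tests[m]]
    untested_fraction = (
        len(untested) / len(defective_methods) if defective_methods else 0.0
    )

    # "Methods touched per test": the feature the paper relates to a test
    # uncovering a defect.
    touched_per_test = {t: len(ms) for t, ms in test_to_methods.items()}
    return untested_fraction, touched_per_test

# Hypothetical toy data: two defective methods, one covered by a test.
defects = {"Foo.parse", "Bar.save"}
coverage = {"FooTest.testParse": {"Foo.parse", "Util.trim"}}
frac, per_test = link_defects_to_tests(defects, coverage)
print(f"untested defective methods: {frac:.0%}")  # -> 50%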

Item Type:
Contribution in Book/Report/Proceedings
Additional Information:
© 2018 Copyright held by the owner/author(s). Publication rights licensed to the Association for Computing Machinery. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in PROMISE'18: Proceedings of the 14th International Conference on Predictive Models and Data Analytics in Software Engineering, http://dx.doi.org/10.1145/3273934.3273939
Subjects:
junit tests, software testing, test effectiveness
ID Code:
128750
Deposited By:
Deposited On:
02 Nov 2018 10:34
Refereed?:
Yes
Published?:
Published
Last Modified:
14 Oct 2024 00:39