Lightweight Assessment of Test-Case Effectiveness using Source-Code-Quality Indicators
Test cases are crucial to help developers prevent the introduction of software faults. Unfortunately, not all tests are properly designed or can effectively capture faults in production code. Several measures have been defined to assess test-case effectiveness; the most relevant is the mutation score, which quantifies the quality of a test by generating so-called mutants, i.e., variations of the production code that make it faulty and that the test is supposed to identify. However, previous studies revealed that mutation analysis is extremely costly and hard to use in practice, and the approaches proposed by researchers so far have not provided practical gains in mutation-testing efficiency. Indeed, to apply mutation testing in practice a developer needs to (i) generate the mutants, (ii) compile the source code, and (iii) run the test cases; for systems whose test suites count hundreds of test cases, this is unfeasible. As a consequence, the problem of automatically assessing test-case effectiveness in a timely and efficient manner is still far from solved. In this paper, we present a novel methodology orthogonal to existing approaches. In particular, we investigate the feasibility of estimating test-case effectiveness, as indicated by mutation score, by exploiting production- and test-code-quality indicators. We first select a set of 67 factors, including test and production-code metrics as well as test and code smells, and study their relation with test-case effectiveness. We discover that 41 of the 67 investigated factors differ statistically between effective and non-effective tests. We then devise a mutation-score estimation model exploiting such factors and investigate its performance as well as its most relevant features. The key result of the study is that our estimation model, based only on statically computable features, achieves 86% in both F-Measure and AUC-ROC.
This means that we can estimate test-case effectiveness from source-code-quality indicators with high accuracy and without executing the tests. In fact, adding line coverage as an additional feature increases the performance of the model by only about 9%. As a consequence, we can provide a practical approach that overcomes the typical limitations of current mutation-testing techniques.
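The estimation idea described in the abstract can be illustrated with a toy sketch: a classifier trained on statically computable indicators that labels a test as likely effective or non-effective, with no mutant generation or test execution. This is not the authors' model; the feature names (assertion count, test LOC, production-code complexity, smell presence) and the simple threshold-voting scheme are illustrative assumptions standing in for the paper's 67 factors and its actual learner.

```python
# Hedged sketch, NOT the paper's implementation: estimate test-case
# effectiveness from static indicators only, echoing the idea of
# predicting mutation-score class without running any tests.
from statistics import mean

# Hypothetical labelled examples: static indicators -> 1 (effective) / 0 (not).
training = [
    ({"assertions": 8, "loc": 25, "complexity": 4, "smells": 0}, 1),
    ({"assertions": 6, "loc": 30, "complexity": 6, "smells": 0}, 1),
    ({"assertions": 1, "loc": 60, "complexity": 14, "smells": 1}, 0),
    ({"assertions": 2, "loc": 55, "complexity": 12, "smells": 1}, 0),
]

def fit_thresholds(data):
    """Learn a per-feature mean threshold from the labelled examples."""
    features = data[0][0].keys()
    return {f: mean(example[f] for example, _ in data) for f in features}

def predict(thresholds, test):
    """Majority vote: more assertions, fewer smells, and lower production
    complexity are taken as (assumed) signals of an effective test."""
    votes = 0
    votes += 1 if test["assertions"] > thresholds["assertions"] else -1
    votes += 1 if test["smells"] < thresholds["smells"] else -1
    votes += 1 if test["complexity"] < thresholds["complexity"] else -1
    return 1 if votes > 0 else 0

model = fit_thresholds(training)
print(predict(model, {"assertions": 7, "loc": 20, "complexity": 3, "smells": 0}))  # 1
print(predict(model, {"assertions": 1, "loc": 70, "complexity": 15, "smells": 1}))  # 0
```

In the paper the learner is trained against ground-truth mutation scores once, offline; afterwards, only the cheap static features are needed to assess new tests, which is what makes the approach lightweight.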
Tue 12 Nov (displayed time zone: Tijuana, Baja California)
16:00 - 17:40 | Testing and Visualization (Demonstrations / Research Papers / Journal First Presentations) at Cortez 1. Chair(s): Amin Alipour, University of Houston

16:00 (20m, Talk) | History-Guided Configuration Diversification for Compiler Test-Program Generation (ACM SIGSOFT Distinguished Paper Award). Research Papers. Junjie Chen (Tianjin University), Guancheng Wang (Peking University), Dan Hao (Peking University), Yingfei Xiong (Peking University), Hongyu Zhang (The University of Newcastle), Lu Zhang (Peking University)

16:20 (20m, Talk) | Data-Driven Compiler Testing and Debugging. Research Papers. Junjie Chen (Tianjin University)

16:40 (20m, Talk) | Targeted Example Generation for Compilation Errors. Research Papers. Umair Z. Ahmed (National University of Singapore), Renuka Sindhgatta (Queensland University of Technology, Australia), Nisheeth Srivastava (Indian Institute of Technology, Kanpur), Amey Karkare (IIT Kanpur)

17:00 (20m, Talk) | Lightweight Assessment of Test-Case Effectiveness using Source-Code-Quality Indicators. Journal First Presentations. Giovanni Grano (University of Zurich), Fabio Palomba (Department of Informatics, University of Zurich), Harald Gall (University of Zurich)

17:20 (10m, Demonstration) | Visual Analytics for Concurrent Java Executions. Demonstrations. Cyrille Artho (KTH Royal Institute of Technology, Sweden), Monali Pande (KTH Royal Institute of Technology), Qiyi Tang (University of Oxford)

17:30 (10m, Demonstration) | NeuralVis: Visualizing and Interpreting Deep Learning Models. Demonstrations. Xufan Zhang (State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China), Ziyue Yin (State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China), Yang Feng (University of California, Irvine), Qingkai Shi (Hong Kong University of Science and Technology), Jia Liu (State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China), Zhenyu Chen (Nanjing University)