An Experimental Study Evaluating Cost, Adequacy, and Effectiveness of Pynguin's Test Sets

  • Lucca Guerino, UFSCar
  • Auri Vincenzi, UFSCar

Abstract


Context: Software testing is a highly relevant step in quality assurance, but developers frequently overlook it. We pursued test automation to minimize the impact of missing test cases in a software project. Problem: However, for Python programs there are few tools able to fully automate the generation of unit test sets, and the one available demands studies that provide evidence of the quality of the generated test sets. Solution: This work aims to evaluate the quality of different unit test generation algorithms for Python, implemented in a tool named Pynguin. Method: For each selected program, the Pynguin test generation tool is executed with each of its algorithms, including random, to generate complete unit test sets. We then evaluate each generated test set's efficacy, efficiency, and cost. We use four different fault models, implemented by four mutation testing tools, to measure efficacy; line and branch coverage to measure efficiency; and the number of test cases and test set execution time to measure cost. Summary of Results: We identified that the RANDOM test set performed worst in all evaluated aspects, while DYNAMOSA and MOSA were the two algorithms that generated the best test sets regarding efficacy, efficiency, and cost. By combining all of Pynguin's smart algorithms (DYNAMOSA, MIO, MOSA, WHOLE-SUITE), the resulting test set surpasses the individual test sets by around 1% in efficiency (coverage) and by 4.5% on average in efficacy (mutation score), at a reasonable cost and without test set minimization.
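
To make the experimental pipeline concrete, the sketch below illustrates how such a run could be scripted. It is a minimal, hypothetical example, not the study's actual setup: it assumes Pynguin's documented CLI flags (--project-path, --module-name, --output-path, --algorithm, --seed), the PYNGUIN_DANGER_AWARE environment variable, and coverage.py with pytest for branch-coverage measurement; the project path, module name, and seed are placeholders.

```python
#!/usr/bin/env python3
"""Minimal sketch of the generation/measurement loop described in the abstract.

Assumptions: Pynguin is installed and its CLI accepts the flags used below;
the subject module is importable from the current working directory so that
coverage.py can trace it. Adjust paths and versions to the real study setup.
"""
import os
import subprocess

PROJECT = "path/to/subject-program"   # hypothetical subject program
MODULE = "subject_module"             # hypothetical module under test
ALGORITHMS = ["RANDOM", "DYNAMOSA", "MIO", "MOSA", "WHOLE_SUITE"]

# Pynguin executes the code under test and therefore requires this opt-in flag.
env = dict(os.environ, PYNGUIN_DANGER_AWARE="1")

for algo in ALGORITHMS:
    out_dir = f"generated/{algo.lower()}"
    # Generate a unit test set for the module with the given algorithm.
    subprocess.run(
        ["pynguin",
         "--project-path", PROJECT,
         "--module-name", MODULE,
         "--output-path", out_dir,
         "--algorithm", algo,
         "--seed", "42"],
        env=env, check=True,
    )
    # Measure line and branch coverage of the generated tests (efficiency).
    subprocess.run(
        ["coverage", "run", "--branch", f"--source={MODULE}",
         "-m", "pytest", out_dir],
        check=True,
    )
    subprocess.run(["coverage", "report"], check=True)
```

In the study itself, each generated test set is additionally run against the mutants produced by four mutation testing tools to obtain the mutation score (efficacy), and the number of test cases and execution time are recorded as cost measures.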

Keywords: automated test generation, coverage testing, experimental software engineering, mutation testing, software testing, testing tools
Published
25/09/2023
How to Cite

GUERINO, Lucca; VINCENZI, Auri. An Experimental Study Evaluating Cost, Adequacy, and Effectiveness of Pynguin's Test Sets. In: SIMPÓSIO BRASILEIRO DE TESTES DE SOFTWARE SISTEMÁTICO E AUTOMATIZADO (SAST), 8., 2023, Campo Grande/MS. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2023. p. 5–14.