Using predictive models to evaluate the quality of a test suite at class and method level.

  • Keslley Lima Silva Universidade Federal do Rio Grande do Sul
  • Érika Cota Universidade Federal do Rio Grande do Sul

Abstract


Testing is an indispensable part of the software development process and continues throughout the development life cycle. In this context, examining the behavior of software systems to reveal potential problems is a crucial task. To this end, test suites are usually used to assess software quality. However, controlling the quality of a test suite is hard for the tester, especially in an evolving system, and such control is needed to assure and improve the quality of the test suite and, consequently, of the application. Currently, test coverage criteria are used as a mechanism to assist the tester in analyzing the test suite (e.g., finding weaknesses and adding new test cases or test inputs). However, stronger coverage criteria (which can expose less glaring weaknesses) are challenging to assess. In this work, we propose a different approach to support the developer in evaluating test suite quality based on more powerful test coverage criteria. We will follow the Knowledge Discovery in Databases process, using machine learning algorithms to estimate prime path coverage at the method and class level. For this purpose, we will create two large datasets consisting of source code metrics and test case metrics from 12 open-source Java projects, and these datasets will be used in the training process to build the predictive models. Using the built models, we expect to predict prime path coverage at the method and class level with reliable prediction performance.
Keywords: Software testing, Coverage prediction, Code coverage criteria
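
The following is a minimal sketch, not the authors' implementation, of how a predictive model of this kind could be trained: a regression model fed with per-method source code and test case metrics and evaluated with cross-validation. The CSV file name, the feature columns, and the choice of a random forest regressor with 10-fold cross-validation are illustrative assumptions.

# Minimal sketch: predicting prime path coverage from code and test metrics.
# Dataset, column names, and model choice are hypothetical assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical dataset: one row per method, with source code metrics
# (e.g., cyclomatic complexity, lines of code) and test case metrics
# (e.g., number of covering tests, test LOC), plus the measured prime
# path coverage as the prediction target.
data = pd.read_csv("method_level_metrics.csv")
features = data[["cyclomatic_complexity", "loc", "num_tests", "test_loc"]]
target = data["prime_path_coverage"]

# Train a regressor and estimate its predictive performance with
# 10-fold cross-validation (mean absolute error; lower is better).
model = RandomForestRegressor(n_estimators=100, random_state=42)
scores = cross_val_score(model, features, target,
                         cv=10, scoring="neg_mean_absolute_error")
print(f"Mean absolute error: {-scores.mean():.3f}")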

Published
19/10/2020
How to Cite

SILVA, Keslley Lima; COTA, Érika. Using predictive models to evaluate the quality of a test suite at class and method level. In: WORKSHOP DE TESES E DISSERTAÇÕES (WTDSOFT) - CONGRESSO BRASILEIRO DE SOFTWARE: TEORIA E PRÁTICA (CBSOFT), 11., 2020, Evento Online. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2020. p. 84-90. DOI: https://doi.org/10.5753/cbsoft_estendido.2020.14613.