Evaluating Large Language Models in Detecting Test Smells

Abstract

Test smells are issues in test code that typically arise from inadequate practices, a lack of knowledge about effective testing, or pressure to meet project deadlines. Their presence can negatively impact the maintainability and reliability of software. Although existing tools detect test smells using advanced static analysis or machine learning techniques, they often require considerable effort from developers to set up and use. This study evaluates the capability of Large Language Models (LLMs) to automatically detect test smells. We evaluated ChatGPT-4, Mistral Large, and Gemini Advanced on 30 types of test smells, using code samples in seven programming languages collected from the literature. ChatGPT-4 identified 21 of the test smell types, Gemini Advanced identified 17, and Mistral Large detected 15. The LLMs demonstrated potential as valuable tools for identifying test smells.
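As a concrete illustration (an invented example, not taken from the study's dataset), the Python test below exhibits two classic test smells: Assertion Roulette, where a test makes several assertions with no failure messages, so a failing run does not say which expectation broke, and Magic Number Test, where assertions compare against unexplained literal values.

    import unittest

    class Cart:
        """Minimal shopping cart, defined only to make the example runnable."""

        def __init__(self):
            self.items = {}

        def add(self, name, quantity, unit_price):
            self.items[name] = (quantity, unit_price)

        def total(self):
            return sum(q * p for q, p in self.items.values())

    class TestCart(unittest.TestCase):
        def test_cart(self):
            cart = Cart()
            cart.add("book", 2, 20)
            # Assertion Roulette: several assertions, none with a message,
            # so a failure report does not indicate which one broke.
            # Magic Number Test: the literals 1, 40, and 2 are unexplained.
            self.assertEqual(len(cart.items), 1)
            self.assertEqual(cart.total(), 40)
            self.assertEqual(cart.items["book"][0], 2)

    if __name__ == "__main__":
        unittest.main()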

Keywords: Test Smells, Large Language Models (LLMs), Test Code
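The models above were evaluated as chat products, but a smell check of this kind can also be scripted against a model API. Below is a minimal sketch, assuming the OpenAI Python client and a prompt of our own wording; the study's actual prompts and setup may differ.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def detect_test_smells(test_code: str) -> str:
        """Ask a chat model to list any test smells in the given test code."""
        prompt = (
            "You are a software testing expert. Analyze the following test "
            "code and list any test smells it contains (e.g., Assertion "
            "Roulette, Magic Number Test), with a one-line justification "
            "for each.\n\n" + test_code
        )
        response = client.chat.completions.create(
            model="gpt-4",  # assumption: any chat-capable model id works here
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        with open("test_cart.py") as f:
            print(detect_test_smells(f.read()))

The same pattern carries over to other providers' chat APIs, with only the client and model name changing.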

Published
30/09/2024
LUCAS, Keila; GHEYI, Rohit; SOARES, Elvys; RIBEIRO, Márcio; MACHADO, Ivan. Evaluating Large Language Models in Detecting Test Smells. In: SIMPÓSIO BRASILEIRO DE ENGENHARIA DE SOFTWARE (SBES), 38., 2024, Curitiba/PR. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 669-675. DOI: https://doi.org/10.5753/sbes.2024.3642.