Research Article · DOI: 10.1145/3571473.3571478 · SBQS Conference Proceedings

Automatic Refactoring Method to Remove Eager Test Smell

Published: 27 January 2023

ABSTRACT

Unit tests are artifacts produced during development to identify software errors early. They can be affected by Test Smells (TS): poorly designed tests that result from bad programming practices in unit test code. TS negatively impact developer efficiency and the understandability and maintainability of unit tests. The problems caused by TS can be reduced through refactoring. Although there are studies that identify the occurrence of TS in unit tests, as well as proposals for semi-automatic refactoring, no studies address the automatic removal of TS from a test repository. In this context, this study presents a method for the automatic removal of the Eager Test smell, which occurs when a single test verifies more than one method of the production code. The proposed method was evaluated through experiments comparing the original version of a unit test repository with its automatically refactored version. The results are promising, showing an Eager Test removal rate of 99.4% of the unit tests in the repository without causing test errors or test failures. However, test execution time and the number of lines of code increased as a result of the number of tests that were extracted. The proposed method helps improve the quality of unit tests and reduces the effort developers need to remove Eager Test.
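To make the smell concrete, the following is a minimal, hypothetical sketch (not taken from the paper) of an Eager Test and of the extract-test refactoring the abstract describes: a single test that exercises two production methods is split into one focused test per method. The `add`/`mul` methods and the tiny assertion helper are illustrative assumptions; real subjects would use a framework such as JUnit.

```java
public class Main {
    // Hypothetical production code under test (illustrative only).
    static int add(int a, int b) { return a + b; }
    static int mul(int a, int b) { return a * b; }

    // Smelly version: one test verifies two production methods (Eager Test).
    static void eagerTest() {
        assertEquals(5, add(2, 3));
        assertEquals(6, mul(2, 3));
    }

    // Refactored version: each extracted test verifies a single method,
    // which is the kind of split an automatic removal method would produce.
    static void testAdd() { assertEquals(5, add(2, 3)); }
    static void testMul() { assertEquals(6, mul(2, 3)); }

    // Minimal stand-in for a framework assertion.
    static void assertEquals(int expected, int actual) {
        if (expected != actual)
            throw new AssertionError(expected + " != " + actual);
    }

    public static void main(String[] args) {
        eagerTest();  // original behavior preserved...
        testAdd();    // ...and the extracted tests pass as well
        testMul();
        System.out.println("all tests pass");
    }
}
```

Note that the refactored version duplicates the checks of the eager test across more test methods, which illustrates the abstract's observation that extraction increases the number of tests, lines of code, and execution time.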

