DOI: 10.1145/3422392.3422510

RAIDE: a tool for Assertion Roulette and Duplicate Assert identification and refactoring

Published: 21 December 2020

ABSTRACT

Test smells are fragments of code that can affect the comprehensibility and maintainability of the test code. Preventing, detecting, and correcting test smells are tasks that may require a lot of effort and might not scale to large projects when carried out manually. Many tools are currently available to support test smell detection; however, they usually provide neither a user-friendly interface nor automated support for refactoring the test code to remove the smells. In this work, we propose RAIDE, an open-source, IDE-integrated tool. RAIDE assists testers with an environment for automated detection of the lines of code affected by test smells, as well as semi-automated refactoring, for Java projects that use the JUnit framework.
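
The two smells named in the title can be illustrated with a short JUnit test. The example below is a minimal, hypothetical sketch (the Calculator class and the test method are illustrative and do not come from the paper): the first group of assertions carries no explanation messages (Assertion Roulette), and the last assertion repeats an earlier one inside the same test method (Duplicate Assert).

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical example; Calculator is an illustrative class, not part of RAIDE.
public class CalculatorTest {

    static class Calculator {
        int add(int a, int b) { return a + b; }
    }

    @Test
    public void testAdd() {
        Calculator calc = new Calculator();

        // Assertion Roulette: several assertions without explanation
        // messages, so a failing run does not tell which check broke.
        assertEquals(4, calc.add(2, 2));
        assertEquals(0, calc.add(2, -2));
        assertEquals(-4, calc.add(-2, -2));

        // Duplicate Assert: the same assertion repeated within the
        // same test method.
        assertEquals(4, calc.add(2, 2));
    }
}

A common way to remove these smells, and the kind of correction a semi-automated refactoring can support, is to attach a descriptive message to each assertion and to move repeated assertions into their own test methods; the exact refactoring steps offered by RAIDE are described in the full paper.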

Published in

SBES '20: Proceedings of the XXXIV Brazilian Symposium on Software Engineering
October 2020, 901 pages
ISBN: 9781450387538
DOI: 10.1145/3422392
Copyright © 2020 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Qualifiers: research-article, refereed limited

Acceptance rate: 147 of 427 submissions, 34%
