Readability and Understandability Scores for Snippet Assessment: an Exploratory Study

  • Carlos Eduardo C. Dantas (UFU)
  • Marcelo A. Maia (UFU)

Abstract

Code search engines usually use a readability feature to rank code snippets. Several metrics exist to compute this feature, but developers may perceive readability differently. A correlation between the readability and understandability features has already been proposed: developers must not only read and comprehend a code snippet's syntax but also understand its semantics. This work investigates scores for the understandability and readability features from the perspective of developers' possibly subjective perception of code snippet comprehension. We find that code snippets with higher readability scores are comprehended better than those with lower scores. The understandability score reflects comprehension better only in specific situations, e.g., nested loops or if-else chains. Developers also mentioned writability aspects as the main characteristic they use to evaluate code snippet comprehension. These results provide insights for future work on optimizing code comprehension scores.
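
To make the nested-loop and if-else-chain observation concrete, the sketch below annotates two hypothetical Java snippets with Cognitive Complexity-style increments, a common basis for understandability scores: every break in linear control flow costs +1, and each level of nesting adds a further +1. This is an illustration only; the class and method names and the exact scoring rules are assumptions, not reproduced from the study. Under such a metric, the nested search scores noticeably harder to understand than the flat chain, a difference a purely line-oriented readability model might not capture.

    // Hypothetical snippets (not taken from the paper), annotated with
    // assumed Cognitive Complexity-style increments.
    public class ComplexityExample {

        // Nested loops: Cognitive Complexity = 6. Cyclomatic complexity
        // would be only 4, since it ignores nesting depth.
        static int indexOfPair(int[] a, int target) {
            for (int i = 0; i < a.length; i++) {            // +1
                for (int j = i + 1; j < a.length; j++) {    // +1, plus +1 for nesting
                    if (a[i] + a[j] == target) {            // +1, plus +2 for nesting
                        return i;
                    }
                }
            }
            return -1;
        }

        // Flat if-else chain: Cognitive Complexity = 3. There is no nesting
        // penalty, so the chain scores as a simple sequence of cases.
        static String classify(int n) {
            if (n < 0) {            // +1
                return "negative";
            } else if (n == 0) {    // +1
                return "zero";
            } else {                // +1
                return "positive";
            }
        }

        public static void main(String[] args) {
            System.out.println(indexOfPair(new int[] {1, 2, 4}, 6)); // prints 1
            System.out.println(classify(-5));                        // prints "negative"
        }
    }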

Keywords: readability, understandability, code snippets, Likert, code comprehension

Published
27/09/2021
How to Cite

DANTAS, Carlos Eduardo C.; MAIA, Marcelo A. Readability and Understandability Scores for Snippet Assessment: an Exploratory Study. In: WORKSHOP DE VISUALIZAÇÃO, EVOLUÇÃO E MANUTENÇÃO DE SOFTWARE (VEM), 9., 2021, Joinville. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2021. p. 46-50. DOI: https://doi.org/10.5753/vem.2021.17217.