Assessing the Readability of ChatGPT Code Snippet Recommendations: A Comparative Study

  • Carlos Dantas (UFU)
  • Adriano Rocha (UFU)
  • Marcelo Maia (UFU)

Abstract


Developers often rely on code search engines to find high-quality, reusable code snippets online, such as those available on Stack Overflow. Recently, ChatGPT, a language model trained for dialog tasks, has been gaining attention as a promising approach for code snippet generation. However, the quality of its recommendations still requires in-depth analysis. In this work, we evaluate the readability of code snippets generated by ChatGPT, comparing them with those recommended by CROKAGE, a state-of-the-art code search engine for Stack Overflow. We compare the snippets recommended by both approaches using readability issues raised by the automated static analysis tool (ASAT) SonarQube. Our results show that ChatGPT can generate cleaner code snippets, with more consistent naming and coding conventions, than those written by humans and recommended by CROKAGE. However, in some cases, ChatGPT generates code that lacks recent Java API features such as try-with-resources and lambdas. Overall, our findings suggest that ChatGPT can provide valuable assistance to developers searching for didactic, high-quality code snippets online. Nevertheless, it remains important for developers to review the generated code, either manually or with the support of an ASAT, to prevent potential readability issues and to verify the correctness of the generated snippets.
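For illustration only (this sketch is not taken from the paper, and the file name is a hypothetical placeholder), the snippet below contrasts the older explicit-close idiom with the Java 7+ try-with-resources construct, one of the modern Java API features the abstract notes is sometimes missing from generated code:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class ReadFirstLine {

        // Older idiom: the resource must be closed explicitly in a finally block.
        static String readFirstLineOldStyle(String path) throws IOException {
            BufferedReader reader = new BufferedReader(new FileReader(path));
            try {
                return reader.readLine();
            } finally {
                reader.close();
            }
        }

        // Java 7+ try-with-resources: the reader is closed automatically.
        static String readFirstLine(String path) throws IOException {
            try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
                return reader.readLine();
            }
        }

        public static void main(String[] args) throws IOException {
            // "example.txt" is a placeholder path used only for illustration.
            System.out.println(readFirstLine("example.txt"));
        }
    }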

Keywords: readability, code snippets, Stack Overflow, SonarQube, ChatGPT
Published
25/09/2023
How to Cite

DANTAS, Carlos; ROCHA, Adriano; MAIA, Marcelo. Assessing the Readability of ChatGPT Code Snippet Recommendations: A Comparative Study. In: SIMPÓSIO BRASILEIRO DE ENGENHARIA DE SOFTWARE (SBES), 37., 2023, Campo Grande/MS. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2023. p. 283–292.