Evaluating Source Code Quality with Large Language Models: a comparative study
Abstract
Code quality is an attribute composed of various metrics, such as complexity, readability, testability, interoperability, reusability, and the use of good or bad practices, among others. Static code analysis tools aim to measure a set of attributes to assess code quality. However, some quality attributes, readability being an example, can only be measured by humans during code review activities. Given their ability to process natural language text, we hypothesize that a Large Language Model (LLM) could evaluate code quality, including attributes that are currently not automatable. This paper describes and analyzes the results obtained when using LLMs as a static analysis tool to evaluate the overall quality of code. We compared the LLM's results with those obtained from SonarQube and its Maintainability metric for two Open Source Software (OSS) Java projects, one with Maintainability Rating A and the other with Rating B. A total of 1,641 classes were analyzed, comparing the results of two model versions: GPT 3.5 Turbo and GPT 4o. We show that GPT 3.5 Turbo is able to evaluate code quality, with its ratings correlating with SonarQube's metrics, although there are specific aspects in which what the LLM measures differs from SonarQube. GPT 4o did not present the same results, diverging from both the previous model and SonarQube by assigning high ratings to code that had been assessed as lower quality. This study demonstrates the potential of LLMs for evaluating code quality; however, further research is needed to address limitations such as the LLM's cost and the variability of its outputs, and to explore quality characteristics not measured by traditional static analysis tools.
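As a rough illustration of the kind of setup described in the abstract (not the authors' actual implementation), the Python sketch below sends a Java class to an OpenAI chat model and asks for a quality rating. The prompt wording, the rate_code_quality helper, the 1-to-5 scale, and the file name are illustrative assumptions only.

# Minimal sketch, assuming the openai>=1.0 Python client and an
# OPENAI_API_KEY environment variable; prompt text and rating scale
# are hypothetical, not those used in the study.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "You are a code quality reviewer. Rate the overall quality "
    "(maintainability, readability, use of good practices) of the following "
    "Java class on a scale from 1 (worst) to 5 (best), then briefly justify "
    "the rating.\n\n{code}"
)

def rate_code_quality(java_source: str, model: str = "gpt-3.5-turbo") -> str:
    """Ask the model for a quality rating of a single Java class."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # reduce output variability between runs
        messages=[{"role": "user", "content": PROMPT.format(code=java_source)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical input file; in the study, 1,641 classes from two OSS
    # Java projects were analyzed and compared against SonarQube ratings.
    with open("SomeClass.java", encoding="utf-8") as f:
        print(rate_code_quality(f.read()))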
Keywords:
Code Quality, Code Readability, Static Analysis, Software Engineering, LLM, ChatGPT
Published
November 5, 2024
How to Cite
SIMÕES, Igor Regis da Silva; VENSON, Elaine. Evaluating Source Code Quality with Large Language Models: a comparative study. In: SIMPÓSIO BRASILEIRO DE QUALIDADE DE SOFTWARE (SBQS), 23., 2024, Bahia/BA. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 103–113.