Code Autocomplete Using Transformers

Abstract


In software development, code autocomplete can be an essential tool for accelerating coding. However, many of the tools built into IDEs are limited to suggesting only methods or arguments, often presenting the user with long lists of irrelevant items. Since transformer-based models reached state-of-the-art performance on natural language processing (NLP) tasks, their application to code intelligence tasks, such as code completion, has become a frequent object of study in recent years. In this paper, we present a transformer-based model trained on 1.2 million Java files gathered from top-starred GitHub repositories. Our evaluation approach measures the model's ability to predict the completion of a line, and we propose a new metric for the applicability of suggestions that we consider better adapted to the practical reality of the code completion task. Using a recently developed Java web project as the test set, our experiments showed that the model produced at least one applicable suggestion in 55.9% of the test cases, compared with 26.5% for the best baseline model.
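The paper does not ship code on this page; as a rough illustration of the line-completion setting the abstract describes, the sketch below uses a generic causal language model from the HuggingFace `transformers` library (GPT-2 as a stand-in, not the authors' model, which was trained on the 1.2M-file Java corpus) to propose candidate completions for a partially typed Java line. The model name, prefix, and decoding parameters are illustrative assumptions.

```python
# Hedged sketch: line completion with a generic causal LM.
# "gpt2" is a placeholder stand-in; the paper trains its own transformer
# on 1.2 million Java files, which is not reproduced here.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM checkpoint would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# A partially typed Java line; the task is to predict its completion.
prefix = "public static void main(String[] args) {\n    System.out.pr"

inputs = tokenizer(prefix, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=16,
    num_return_sequences=3,   # several candidates, as an IDE would show
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)

prompt_len = inputs["input_ids"].shape[1]
for seq in outputs:
    completion = tokenizer.decode(seq[prompt_len:])
    # Keep only the remainder of the current line, mirroring the
    # line-level evaluation described in the abstract.
    print(repr(completion.split("\n")[0]))
```

Under the abstract's evaluation, a candidate would count as applicable if the developer could accept it as the completion of the current line; the proposed metric scores suggestion lists by whether at least one such candidate appears.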
Keywords: Code completion, Deep learning
Published
29/11/2021
How to Cite

MEYRER, Gabriel T.; ARAÚJO, Denis A.; RIGO, Sandro J. Code Autocomplete Using Transformers. In: BRAZILIAN CONFERENCE ON INTELLIGENT SYSTEMS (BRACIS), 10., 2021, Online. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2021. ISSN 2643-6264.