Juru: Legal Brazilian Large Language Model from Reputable Sources

  • Roseval Malaquias Junior, USP / Maritaca AI
  • Ramon Pires, Maritaca AI
  • Roseli A. F. Romero, USP
  • Rodrigo Nogueira, Maritaca AI

Abstract


The high computational cost of pretraining large language models limits research on them. Two strategies have emerged to address this issue: domain specialization and pretraining with high-quality data. To explore these strategies, we specialized the Mistral-7B model with 1.9 billion unique tokens from reputable Brazilian legal sources and conducted few-shot evaluations on legal and general knowledge test suites. Our model, Juru, demonstrates the benefits of domain specialization by achieving improved performance on legal benchmarks, even with a reduced amount of pretraining data. However, this domain specialization through continued pretraining comes at the cost of increased forgetting in unrelated domains, as evidenced by performance degradation on general knowledge test suites in both Portuguese and English. This study contributes to the growing body of scientific evidence showing that pretraining data selection may enhance the performance of large language models, enabling the exploration of these models at a lower cost. Juru is publicly available at https://huggingface.co/roseval/Juru-7B.

Published
29/09/2025
MALAQUIAS JUNIOR, Roseval; PIRES, Ramon; ROMERO, Roseli A. F.; NOGUEIRA, Rodrigo. Juru: Legal Brazilian Large Language Model from Reputable Sources. In: BRAZILIAN CONFERENCE ON INTELLIGENT SYSTEMS (BRACIS), 35., 2025, Fortaleza/CE. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 121-134. ISSN 2643-6264.