Exploring Text Decoding Methods for Portuguese Legal Text Generation

  • Conference paper
  • Intelligent Systems (BRACIS 2023)

Abstract

In recent years, the volume of legal proceedings in Brazil has grown considerably. In this context, recent advances in Natural Language Processing have great potential for automating tasks and analyses in the legal domain. In this article, we investigate text decoding methods for automating the writing of keyphrases, sequences of key terms present in documents used in courts throughout Brazil. For this purpose, we use a text-to-text framework based on generative Transformers to generate keyphrases and evaluate three decoding techniques: greedy, top-K, and top-p. Since the keyphrases are designed to improve retrieval tasks, we assessed the keyphrases produced by each decoding method on legal document retrieval, using traditional retrieval methods (TF-IDF and BM25) to measure their quality. The results (in terms of IR metrics) were statistically significant and indicate that greedy decoding generates high-quality keyphrases for the dockets used in this work, close to those written by human specialists.
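
As a rough illustration of the setup described in the abstract (not the authors' code), the sketch below compares the three decoding methods using the Hugging Face transformers library. The checkpoint name, prompt format, and generation lengths are assumptions; the fine-tuned model used in the paper is not shown here.

    # Minimal sketch: greedy vs. top-K vs. top-p decoding for keyphrase generation.
    # The checkpoint below is a stand-in (assumed), not the paper's fine-tuned model.
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    model_name = "unicamp-dl/ptt5-base-portuguese-vocab"  # assumed PTT5-style checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

    docket_text = "..."  # body of a Portuguese legal docket (placeholder)
    inputs = tokenizer(docket_text, return_tensors="pt", truncation=True, max_length=512)

    decoding_configs = {
        "greedy": dict(do_sample=False),                      # deterministic, highest-probability token
        "top_k":  dict(do_sample=True, top_k=50),             # sample from the 50 most likely tokens
        "top_p":  dict(do_sample=True, top_p=0.95, top_k=0),  # nucleus sampling over 95% of the mass
    }

    for name, cfg in decoding_configs.items():
        output_ids = model.generate(**inputs, max_new_tokens=64, **cfg)
        print(name, tokenizer.decode(output_ids[0], skip_special_tokens=True))

The retrieval check can then be approximated by indexing the generated keyphrases as surrogate documents and scoring human-written keyphrases as queries with BM25 (here via the rank-bm25 package listed in the notes); the toy data and the query/document pairing below are assumptions, and TF-IDF could be swapped in via scikit-learn.

    # Sketch of the BM25 retrieval check on toy data (assumed setup, not the paper's pipeline).
    from rank_bm25 import BM25Okapi

    generated = {
        "docket_1": "dano moral; indenização; responsabilidade civil",
        "docket_2": "habeas corpus; prisão preventiva; excesso de prazo",
    }
    docket_ids = list(generated)
    bm25 = BM25Okapi([generated[d].lower().split() for d in docket_ids])

    query = "indenização por dano moral".lower().split()  # e.g., a human-written keyphrase
    scores = bm25.get_scores(query)
    print(docket_ids[scores.argmax()], scores)  # the matching docket should rank first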


Notes

  1. https://dadosabertos.web.stj.jus.br/.
  2. huggingface.co/.
  3. https://lucene.apache.org/.
  4. https://spacy.io/.
  5. https://scikit-learn.org.
  6. https://pypi.org/project/rank-bm25/.


Acknowledgement

This study was supported by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brazil (CAPES) - Finance Code 001. We thank CEMEAI for granting access to the Euler cluster for the experiments. This work was also partially funded by the Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), grant 2022/01640-2. We would also like to thank INCT (CAPES Concessão 88887.136349/2017-00, CNPq 465755/2014-3, and FAPESP 2014/50851-0) for the support.

Author information


Corresponding author

Correspondence to Kenzo Sakiyama.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Sakiyama, K., Montanari, R., Malaquias Junior, R., Nogueira, R., Romero, R.A.F. (2023). Exploring Text Decoding Methods for Portuguese Legal Text Generation. In: Naldi, M.C., Bianchi, R.A.C. (eds) Intelligent Systems. BRACIS 2023. Lecture Notes in Computer Science, vol. 14195. Springer, Cham. https://doi.org/10.1007/978-3-031-45368-7_5


  • DOI: https://doi.org/10.1007/978-3-031-45368-7_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-45367-0

  • Online ISBN: 978-3-031-45368-7

  • eBook Packages: Computer Science, Computer Science (R0)
