Bridging AI and Ethics: An LLM-Based Framework for Transparent and Inclusive Credit Decisions

  • Marcelo Massashi Simonae, UTFPR
  • Marlon Marcon, UTFPR
  • Dalcimar Casanova, UTFPR

Abstract


1) Research Context: The integration of advanced Machine Learning (ML) models into Intelligent Information Systems (IS) has created highly accurate but opaque "black-box" systems, especially in sensitive domains such as credit scoring.

2) Scientific and/or Practical Problem: This opacity undermines user trust, can perpetuate algorithmic bias, and challenges regulatory compliance (e.g., LGPD, GDPR), creating a critical gap between AI's technical power and the socio-technical need for accountability in IS.

3) Proposed Solution and/or Analysis: We propose and validate a two-layer framework that uses Large Language Models (LLMs) to translate the technical outputs of Explainable AI (XAI) methods, such as SHAP and LIME, into actionable natural-language narratives for non-expert users.

4) Related IS Theory: Grounded in Decision Support Systems (DSS) theory, this work extends the classical DSS goal: it enhances decision quality not only through predictive accuracy but also by improving the transparency, trustworthiness, and interpretability of the system's reasoning for stakeholders.

5) Research Method: We conducted an applied, experimental study on a public retail credit dataset. The methodology involved data preprocessing, XGBoost predictive modeling, quantitative evaluation of explanation fidelity with the MEMC metric, and the development of a functional web prototype.

6) Summary of Results: The framework identified key credit-denial factors with high fidelity, as validated by the MEMC metric. The LLM-synthesis layer transformed complex XAI data into clear, understandable, and practical explanations, enhancing the system's clarity and actionability.

7) Contributions and Impact to the IS Area: This study contributes a validated framework for building more ethical, transparent, and socially inclusive intelligent systems. Its impact lies in bridging the gap between advanced AI and human-centric requirements, enabling responsible AI adoption and strengthening human-AI collaboration in decision-making.
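The second layer of the framework, synthesizing XAI attributions into a natural-language explanation, can be sketched as follows. This is a minimal illustrative example only: the feature names, SHAP scores, function name, and prompt wording are hypothetical, and the paper's actual pipeline and prompt design are not reproduced here.

```python
def build_explanation_prompt(applicant, shap_values, decision):
    """Assemble a natural-language prompt asking an LLM to explain
    a credit decision from per-feature SHAP attributions."""
    # Rank features by absolute contribution, most influential first.
    ranked = sorted(shap_values.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [
        f"A credit application was {decision}.",
        "Feature contributions (positive values push toward approval):",
    ]
    for name, score in ranked:
        lines.append(f"- {name} = {applicant[name]} (SHAP {score:+.3f})")
    lines.append(
        "Explain the decision to the applicant in plain language and "
        "suggest concrete, actionable steps to improve a future application."
    )
    return "\n".join(lines)

# Hypothetical denied application with illustrative SHAP attributions.
prompt = build_explanation_prompt(
    applicant={"income": 2500, "debt_ratio": 0.62, "late_payments": 3},
    shap_values={"income": 0.12, "debt_ratio": -0.31, "late_payments": -0.45},
    decision="denied",
)
print(prompt)
```

Ranking features by absolute SHAP value surfaces the most influential denial factors first, which is what allows a downstream LLM to produce an actionable narrative rather than a flat recitation of model internals.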

Published
25/05/2026
SIMONAE, Marcelo Massashi; MARCON, Marlon; CASANOVA, Dalcimar. Bridging AI and Ethics: An LLM-Based Framework for Transparent and Inclusive Credit Decisions. In: SIMPÓSIO BRASILEIRO DE SISTEMAS DE INFORMAÇÃO (SBSI), 22., 2026, Vitória/ES. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2026. p. 364-383. DOI: https://doi.org/10.5753/sbsi.2026.248352.
