Semiquantitative in-context causal reasoning for LLMs
Abstract
We propose a novel algorithm that combines simulations of causal dynamical models with fuzzy logic to advance the ability of large language models (LLMs) to extract networks of causal relations from text documents. To that end, we explore the ability of LLMs to emulate genuine critical reasoning and to critique their own conclusions based on simulations of the gradually improving causal models, an approach we call In-Context Reasoning. As the self-assessment strategy, an instance of LLM-as-a-judge is employed. The results are encouraging, and we expect to further contribute to investigations of the potential of LLMs in activities that require more accurate logical and structured reasoning.

References
Brasil Escola (2024). Deslizamentos de encostas. [link]. Brasil Escola (UOL Educação). Accessed: 2025-08-05.
Dong, Q., Li, L., Dai, D., Zheng, C., Ma, J., Li, R., Xia, H., Xu, J., Wu, Z., Liu, T., Chang, B., Sun, X., Li, L., and Sui, Z. (2024). A survey on in-context learning.
Gansner, E. R. and North, S. C. (2000). An open graph visualization system and its applications to software engineering. Software: Practice and Experience, 30(11):1203–1233.
Gou, Z., Shao, Z., Gong, Y., Shen, Y., Yang, Y., Duan, N., and Chen, W. (2024). Critic: Large language models can self-correct with tool-interactive critiquing.
Li, D., Jiang, B., Huang, L., Beigi, A., Zhao, C., Tan, Z., Bhattacharjee, A., Jiang, Y., Chen, C., Wu, T., Shu, K., Cheng, L., and Liu, H. (2025). From generation to judgment: Opportunities and challenges of LLM-as-a-judge.
Liu, B. (2023). Grounding for artificial intelligence.
Xu, Z., Jain, S., and Kankanhalli, M. (2024). Hallucination is Inevitable: An Innate Limitation of Large Language Models. arXiv preprint arXiv:2401.11817.
Published
12/11/2025
How to Cite
THIELO, Marcelo R.; ROCHA, Bernardo N. Semiquantitative in-context causal reasoning for LLMs. In: ESCOLA REGIONAL DE APRENDIZADO DE MÁQUINA E INTELIGÊNCIA ARTIFICIAL DA REGIÃO SUL (ERAMIA-RS), 1., 2025, Porto Alegre/RS. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 260-263. DOI: https://doi.org/10.5753/eramiars.2025.16634.