Reasoning with LLMs aided by Knowledge Bases and by Context
Abstract
Large Language Models (LLMs) can exhibit surprising reasoning abilities through Chain-of-Thought and similar techniques. However, their tendency to hallucinate and the opaque nature of their internal operations are significant drawbacks. This work presents a proposal for reasoning that employs a logic solver aided by LLMs. We combine symbolic reasoning with LLMs, as presented in previous work, and improve reasoning abilities by searching for logical predicates using the context of the logical flow of a knowledge base.
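The abstract does not give implementation details, so the sketch below is only a hypothetical illustration of the general idea it describes: a backward-chaining logic solver over a small knowledge base that, when a goal cannot be proven symbolically, falls back to an LLM query built from the current proof context. The example facts, rules, and the llm_suggest_fact stub are illustrative assumptions, not the authors' implementation.

# Minimal sketch: backward chaining over a toy knowledge base, with an
# LLM fallback for goals the symbolic rules cannot prove. The LLM call
# is stubbed out; a real one would receive the proof context shown.

from typing import List, Tuple

# Facts are ground atoms; rules map a head atom to a list of body atoms.
FACTS = {("bird", "tweety"), ("small", "tweety")}
RULES: List[Tuple[Tuple[str, str], List[Tuple[str, str]]]] = [
    (("can_fly", "tweety"), [("bird", "tweety"), ("has_wings", "tweety")]),
]


def llm_suggest_fact(goal: Tuple[str, str], context: List[str]) -> bool:
    """Placeholder for an LLM call that judges a goal given the proof context.

    A real implementation would build a prompt from `context` (the chain of
    goals explored so far) and ask the model whether `goal` holds.
    """
    print(f"LLM queried for {goal} with context {context}")
    return goal == ("has_wings", "tweety")  # stubbed answer


def prove(goal: Tuple[str, str], context: List[str]) -> bool:
    """Backward chaining with an LLM fallback for unprovable goals."""
    if goal in FACTS:
        return True
    for head, body in RULES:
        if head == goal and all(
            prove(sub, context + [f"{goal[0]}({goal[1]})"]) for sub in body
        ):
            return True
    # Symbolic search failed: ask the LLM, passing the reasoning context.
    return llm_suggest_fact(goal, context)


if __name__ == "__main__":
    print(prove(("can_fly", "tweety"), []))  # True via rule plus LLM fallback

In an actual pipeline the stub would be replaced by a call to a language model, with the accumulated context serving as the prompt that guides the search for the missing predicate in the knowledge base.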
References
Brassard, A., Heinzerling, B., Kavumba, P., & Inui, K. (2022). COPA-SSE: Semi-structured Explanations for Commonsense Reasoning. Proceedings of the 13th Conference on Language Resources and Evaluation.
Creswell, A., Shanahan, M., & Higgins, I. (2023). Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning.
DeepSeek-AI. (2025). DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning.
Jin, F., Liu, Y., & Tan, Y. (2024). Zero-Shot Chain-of-Thought Reasoning Guided by Evolutionary Algorithms in Large Language Models.
Kazemi, M., Kim, N., Bhatia, D., & Xu, X. (2023). LAMBADA: Backward Chaining for Automated Reasoning in Natural Language.
Lee, J., & Hwang, W. (2024). SymBa: Symbolic Backward Chaining for Structured Natural Language Reasoning.
OpenAI. (2025, April 04). GPT 4o-mini Release. Retrieved from the OpenAI website: [link]
Saparov, A., & He, H. (2023). Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought.
Tafjord, O., Mishra, B., & Clark, P. (2021). ProofWriter: Generating Implications, Proofs, and Abductive Statements over Natural Language.
Toroghi, A., Guo, W., Pesaranghader, A., & Sanner, S. (2024). Verifiable, Debuggable, and Repairable Commonsense Logical Reasoning via LLM-based Theory Resolution. Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, (pp. 6634–6652).
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A., . . . Polosukhin, I. (2017). Attention Is All You Need.
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., . . . Zhou, D. (2023). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models.
Williams, A., Nangia, N., & Bowman, S. (2018). A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference.
Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., . . . Chi, E. (2023). Least-to-Most Prompting Enables Complex Reasoning in Large Language Models.
Published
29/09/2025
How to Cite
GARCEZ, Leonardo Riccioppo; COZMAN, Fabio Gagliardi. Reasoning with LLMs aided by Knowledge Bases and by Context. In: ENCONTRO NACIONAL DE INTELIGÊNCIA ARTIFICIAL E COMPUTACIONAL (ENIAC), 22., 2025, Fortaleza/CE. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 249-260. ISSN 2763-9061. DOI: https://doi.org/10.5753/eniac.2025.12362.
