Towards Logical Representations of Recurrent Neural Networks

Sandro Preto (UFABC)

Abstract


Representations of neural networks in the formal language of logical systems may be used to enhance their interpretability and as a step in the formal verification of their properties. This work builds on the literature on representing feedforward neural networks that compute rational McNaughton functions in the language of Łukasiewicz logic. We propose a technique for representing recurrent computations, which enables the representation of recurrent neural networks. This is an initial investigation, carried out in the specific context of simple recurrent networks (SRNs) whose layers compute rational McNaughton functions.
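
To make the setting concrete, the sketch below (not taken from the paper) assumes an Elman-style SRN whose affine update is truncated to the unit interval; with rational weight matrices W, U and bias b, and inputs and states in [0, 1]^n, each such layer computes a continuous piecewise linear function with rational coefficients, i.e. a rational McNaughton function of the kind the representation results for feedforward networks cover. The names clamp01, srn_step and unroll are illustrative placeholders, not notation from the paper.

    import numpy as np

    def clamp01(z):
        # Truncate every coordinate to the unit interval [0, 1].
        return np.minimum(1.0, np.maximum(0.0, z))

    def srn_step(W, U, b, x_t, h_prev):
        # One Elman-style SRN update with a clipped-linear activation.
        # For rational W, U, b, the map (x_t, h_prev) -> h_t is a
        # rational McNaughton function (illustrative assumption).
        return clamp01(W @ x_t + U @ h_prev + b)

    def unroll(W, U, b, xs, h0):
        # Apply the recurrence along an input sequence, returning all
        # hidden states; this repeated application of srn_step is the
        # recurrent computation that a logical representation must capture.
        h, states = h0, []
        for x_t in xs:
            h = srn_step(W, U, b, x_t, h)
            states.append(h)
        return states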

Published: 2025-07-20

PRETO, Sandro. Towards Logical Representations of Recurrent Neural Networks. In: BRAZILIAN WORKSHOP OF LOGIC (WBL), 6., 2025, Maceió/AL. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 1-8. ISSN 2763-8731. DOI: https://doi.org/10.5753/wbl.2025.6543.