Somatic Q-Learning

  • Artur P. Carneiro (FEI)
  • Danilo H. Perico (FEI)
  • Reinaldo A. C. Bianchi (FEI)

Abstract


Reinforcement Learning (RL) is an area of machine learning that uses algorithms inspired by biological concepts, in which an agent learns from the actions it takes, the resulting states, and the rewards obtained from the environment. In this area, one of the most widely used algorithms for model-free settings is Q-Learning. The algorithm has limitations regarding the number of possible actions and the size of the state space, and its training time increases exponentially with these two variables, which makes some applications unfeasible. This research presents an adaptation of the algorithm that uses a mechanism inspired by the functioning of somatic markers, as proposed by António Damásio, to enable the use of Q-Learning in environments where it would otherwise be infeasible.
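
For context, the sketch below is a minimal tabular Q-Learning loop on a hypothetical 5x5 grid world, written only to make the scaling point above concrete: the Q-table holds one value per state-action pair, so its memory footprint and the training effort grow with the number of states and actions. The environment, hyperparameters, and names are illustrative assumptions; this is the standard baseline, not the Somatic Q-Learning variant proposed in the paper.

    import numpy as np

    # Minimal tabular Q-Learning on a toy 5x5 grid world (illustrative only;
    # this is the standard baseline, NOT the Somatic Q-Learning variant).
    # The Q-table stores one value per (state, action) pair, which is why
    # memory and training time grow with |S| * |A|.

    N = 5                       # grid side; states are cells 0..N*N-1
    GOAL = N * N - 1            # bottom-right cell gives the only reward
    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

    def step(state, action):
        """Deterministic transition: move inside the grid, reward 1 at the goal."""
        r, c = divmod(state, N)
        dr, dc = ACTIONS[action]
        r, c = min(max(r + dr, 0), N - 1), min(max(c + dc, 0), N - 1)
        next_state = r * N + c
        reward = 1.0 if next_state == GOAL else 0.0
        return next_state, reward, next_state == GOAL

    rng = np.random.default_rng(0)
    Q = np.zeros((N * N, len(ACTIONS)))          # |S| x |A| table
    alpha, gamma, epsilon = 0.1, 0.95, 0.1       # assumed hyperparameters

    for episode in range(500):
        state, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                action = int(rng.integers(len(ACTIONS)))
            else:
                action = int(np.argmax(Q[state]))
            next_state, reward, done = step(state, action)
            # Q-Learning update:
            # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            Q[state, action] += alpha * (
                reward + gamma * np.max(Q[next_state]) - Q[state, action]
            )
            state = next_state

    print("Greedy action per state:\n", np.argmax(Q, axis=1).reshape(N, N))

After training, the script prints the greedy action selected for each cell; doubling the grid side quadruples the number of table entries, which illustrates the growth the abstract refers to.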

References

Bianchi, R., Ros, R., and Mantaras, R. (2009). Improving reinforcement learning by using case-based heuristics. pages 75–89.

Cabrera-Paniagua, D., Flores, D., Rubilar-Torrealba, R., and Cubillos, C. (2023). Bio-inspired artificial somatic index for reflecting the travel experience of passenger agents under a flexible transportation scenario. Scientific Reports, 13(1).

Cominelli, L., Mazzei, D., and De Rossi, D. E. (2018). SEAI: Social emotional artificial intelligence based on Damasio's theory of mind. Frontiers in Robotics and AI, 5.

Damásio, A. (1994). O Erro de Descartes. Companhia das Letras.

Eschmann, J. (2021). Reward Function Design in Reinforcement Learning, pages 25–33. Springer International Publishing.

Hoefinghoff, J. and Pauli, J. (2013). Reversal learning based on somatic markers. In 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, pages 498–504.

Maçãs, M., Ventura, R., Custódio, L., and Pinto-Ferreira, C. (2001). DARE: an emotion-based agent architecture. In Russell, I. and Kolen, J. F., editors, Proceedings of the Fourteenth International Florida Artificial Intelligence Research Society Conference, May 21-23, 2001, Key West, Florida, USA, pages 150–154. AAAI Press.

Pimentel, C. F. and Cravo, M. R. (2009). “Don't think too much!”: Artificial somatic markers for action selection. In 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops.

Russell, S. and Norvig, P. (2021). Artificial Intelligence: A Modern Approach, Fourth Edition. Pearson Education Limited.

Sutton, R. and Barto, A. (2018). Reinforcement Learning: An Introduction, Second Edition. The MIT Press.

Published

2025-09-29

CARNEIRO, Artur P.; PERICO, Danilo H.; BIANCHI, Reinaldo A. C. Somatic Q-Learning. In: NATIONAL MEETING ON ARTIFICIAL AND COMPUTATIONAL INTELLIGENCE (ENIAC), 22., 2025, Fortaleza/CE. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 879-890. ISSN 2763-9061. DOI: https://doi.org/10.5753/eniac.2025.14262.