IGMN: An Incremental Gaussian Mixture Network that Learns Instantaneously from Data Flows

  • Milton Roberto Heinen UFRGS
  • Paulo Martins Engel UFRGS
  • Rafael C. Pinto UFRGS

Abstract

This work proposes IGMN (Incremental Gaussian Mixture Network), a new connectionist approach for incremental concept formation and robotic tasks. It is inspired by recent theories about the brain, especially the Memory-Prediction Framework and Constructivist Artificial Intelligence, which endow it with features not present in most ANN models such as MLP and GRNN. Moreover, IGMN is based on strong statistical principles (Gaussian mixture models) and asymptotically converges to the optimal regression surface as more training data arrive. Through several experiments with the proposed model, it is demonstrated that IGMN is also robust to overfitting, does not require fine-tuning of its configuration parameters, and has very good computational performance, thus allowing its use in real-time control applications. IGMN is therefore a very useful machine learning tool for incremental function approximation.
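The core idea summarized above — fitting a Gaussian mixture incrementally, one sample at a time, and regressing through the mixture's conditional mean — can be illustrated with a minimal sketch. This is a simplified illustration, not the paper's exact IGMN algorithm: it assumes one scalar input, one scalar output, diagonal covariances (so each component's conditional mean reduces to its output mean), and hypothetical parameter names `tau` (novelty threshold) and `sigma_ini` (initial standard deviation).

```python
import math

class IncrementalGMR:
    """Minimal incremental Gaussian mixture regression sketch.

    Illustrative simplification of the IGMN idea, NOT the exact
    algorithm from the paper: 1-D input, 1-D output, diagonal
    covariances, assumed parameters `tau` and `sigma_ini`.
    """

    def __init__(self, tau=0.01, sigma_ini=1.0):
        self.tau = tau              # novelty threshold (assumed parameter)
        self.sigma_ini = sigma_ini  # initial std for new components
        self.comps = []             # each: dict with n, mx, my, vx, vy

    def _pdf(self, x, mx, vx):
        # Univariate Gaussian density
        return math.exp(-0.5 * (x - mx) ** 2 / vx) / math.sqrt(2 * math.pi * vx)

    def learn(self, x, y):
        # Novelty criterion: create a component when no existing one
        # explains x well relative to its own peak density
        if not self.comps or all(
            self._pdf(x, c["mx"], c["vx"])
            < self.tau * self._pdf(c["mx"], c["mx"], c["vx"])
            for c in self.comps
        ):
            self.comps.append(dict(n=1.0, mx=x, my=y,
                                   vx=self.sigma_ini ** 2,
                                   vy=self.sigma_ini ** 2))
            return
        # Otherwise update every component by its posterior responsibility
        lik = [c["n"] * self._pdf(x, c["mx"], c["vx"]) for c in self.comps]
        total = sum(lik)
        for c, l in zip(self.comps, lik):
            w = l / total
            c["n"] += w
            eta = w / c["n"]        # learning rate shrinks as evidence grows
            dx, dy = x - c["mx"], y - c["my"]
            c["mx"] += eta * dx
            c["my"] += eta * dy
            c["vx"] += eta * (dx * dx - c["vx"])
            c["vy"] += eta * (dy * dy - c["vy"])

    def predict(self, x):
        # Conditional mean E[y|x]: responsibility-weighted output means
        lik = [c["n"] * self._pdf(x, c["mx"], c["vx"]) for c in self.comps]
        total = sum(lik)
        return sum(l / total * c["my"] for l, c in zip(lik, self.comps))
```

Each sample is processed once and discarded, which is what makes the approach usable on data streams: there is no stored training set and no iterative EM passes.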

References

Chaput, H. H. (2004). The Constructivist Learning Architecture: A Model of Cognitive Development for Robust Autonomous Robots. PhD thesis, Univ. Texas, Austin, TX.

Drescher, G. L. (1991). Made-up Minds: A Constructivist Approach to Artificial Intelligence. The MIT Press, Cambridge, MA.

Engel, P. M. and Heinen, M. R. (2010a). Concept formation using incremental Gaussian mixture models. In Proc. 15th Iberoamerican Congr. Pattern Recognition (CIARP), LNCS, São Paulo, SP, Brazil. Springer-Verlag.

Engel, P. M. and Heinen, M. R. (2010b). Incremental learning of multivariate Gaussian mixture models. In Proc. 20th Brazilian Symposium on AI (SBIA), volume 6404 of LNCS, pages 82–91, São Bernardo do Campo, SP, Brazil. Springer-Verlag.

Hawkins, J. (2005). On Intelligence. Owl Books, New York, NY.

Haykin, S. (2008). Neural Networks and Learning Machines. Prentice-Hall, Upper Saddle River, NJ, 3rd edition.

Heinen, M. R. (2011). A Connectionist Approach for Incremental Function Approximation and On-line Tasks. PhD thesis, Informatics Institute, Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS, Brazil.

Heinen, M. R. and Engel, P. M. (2010a). An incremental probabilistic neural network for regression and reinforcement learning tasks. In Proc. 20th Int. Conf. Artificial Neural Networks (ICANN 2010), LNCS, Thessaloniki, Greece. Springer-Verlag.

Heinen, M. R. and Engel, P. M. (2010b). IPNN: An incremental probabilistic neural network for function approximation and regression tasks. In Proc. 11th Brazilian Neural Networks Symposium (SBRN), pages 39–44, São Bernardo do Campo, SP, Brazil.

Mitchell, T. (1997). Machine Learning. McGraw-Hill, New York, NY.

Narendra, K. S. and Parthasarathy, K. (1990). Identification and control of dynamical systems using neural networks. IEEE Trans. Neural Networks, 1:4–27.

Piaget, J. (1954). The construction of Reality in the Child. Basic Books, New York, NY.

Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning Internal Representations by Error Propagation. The MIT Press, Cambridge, MA.

Specht, D. F. (1991). A general regression neural network. IEEE Trans. Neural Networks, 2(6):568–576.

Strassen, V. (1969). Gaussian elimination is not optimal. Numerische Mathematik, 13(3):354–356.
Published
19/07/2011
HEINEN, Milton Roberto; ENGEL, Paulo Martins; PINTO, Rafael C. IGMN: An Incremental Gaussian Mixture Network that Learns Instantaneously from Data Flows. In: ENCONTRO NACIONAL DE INTELIGÊNCIA ARTIFICIAL E COMPUTACIONAL (ENIAC), 8., 2011, Natal/RN. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2011. p. 488-499. ISSN 2763-9061.