Neuroevolutive Strategies for Topology and Weights Adaptation of Artificial Neural Networks

  • L. F. Muniz Universidade Federal do ABC
  • C. N. Lintzmayer Universidade Federal do ABC
  • C. Jutten University Grenoble-Alpes
  • D. G. Fantinato Universidade Estadual de Campinas

Abstract


Among the methods for training Multilayer Perceptron networks, backpropagation is one of the most widely used for supervised learning problems. However, it has limitations, such as convergence to local minima and the need to choose the network topology a priori. An alternative approach is to use Genetic Algorithms to optimize both the weights and the topology of the network, which is known as neuroevolution. In this work, we compare a modified neuroevolution approach, implemented with two different metaheuristics, against backpropagation for training and defining the topology on five classification problems. The efficiency of the networks is assessed through Mutual Information and the Information Plane. We concluded that neuroevolution found simpler topologies, while backpropagation was more efficient at updating the weights.
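To make the neuroevolution idea above concrete, the following is a minimal, illustrative Python sketch, not the authors' implementation: a simple genetic algorithm that evolves both the hidden-layer size (topology) and the weights of a small MLP on a toy dataset. The dataset, function names, and hyperparameters are assumptions made for this example; the paper's modified approach, the two metaheuristics, and the Information-Plane assessment are not reproduced here.

    # Illustrative sketch only: GA evolving topology and weights of a tiny MLP.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy dataset: two Gaussian blobs (a hypothetical stand-in for the paper's datasets).
    X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)

    def make_individual(max_hidden=10):
        """An individual encodes one hidden-layer size plus its weight matrices."""
        h = rng.integers(1, max_hidden + 1)
        return {"h": h,
                "W1": rng.normal(0, 1, (2, h)),
                "W2": rng.normal(0, 1, (h, 1))}

    def forward(ind, X):
        hidden = np.tanh(X @ ind["W1"])                 # hidden activations
        return 1 / (1 + np.exp(-hidden @ ind["W2"]))    # sigmoid output

    def fitness(ind):
        pred = (forward(ind, X).ravel() > 0.5).astype(int)
        return (pred == y).mean()                       # classification accuracy

    def mutate(ind, sigma=0.1, p_topo=0.2):
        """Perturb weights; with probability p_topo, add or remove one hidden unit."""
        W1 = ind["W1"] + rng.normal(0, sigma, ind["W1"].shape)
        W2 = ind["W2"] + rng.normal(0, sigma, ind["W2"].shape)
        if rng.random() < p_topo:
            if rng.random() < 0.5 or W1.shape[1] == 1:  # grow: append a hidden unit
                W1 = np.hstack([W1, rng.normal(0, 1, (2, 1))])
                W2 = np.vstack([W2, rng.normal(0, 1, (1, 1))])
            else:                                       # shrink: drop the last unit
                W1, W2 = W1[:, :-1], W2[:-1, :]
        return {"h": W1.shape[1], "W1": W1, "W2": W2}

    # Evolutionary loop: truncation selection + mutation (no crossover, for brevity).
    population = [make_individual() for _ in range(20)]
    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        parents = population[:5]
        population = parents + [mutate(parents[rng.integers(len(parents))])
                                for _ in range(15)]

    best = max(population, key=fitness)
    print(f"best accuracy={fitness(best):.2f} with {best['h']} hidden units")

In this sketch the fitness is plain classification accuracy and selection is truncation-based; the metaheuristics actually compared in the paper may differ substantially in encoding, variation operators, and evaluation.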

Keywords: Artificial Neural Networks, Genetic Algorithms, Neuroevolution

Published
28/11/2022
MUNIZ, L. F.; LINTZMAYER, C. N.; JUTTEN, C.; FANTINATO, D. G. Neuroevolutive Strategies for Topology and Weights Adaptation of Artificial Neural Networks. In: SYMPOSIUM ON KNOWLEDGE DISCOVERY, MINING AND LEARNING (KDMILE), 10., 2022, Campinas/SP. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2022. p. 58-65. ISSN 2763-8944. DOI: https://doi.org/10.5753/kdmile.2022.227807.