Meta-Learning of Training Algorithms for Multi-Layer Perceptron Networks
Abstract
Meta-Learning aims to relate the performance of learning algorithms to the characteristics of the problems to which they are applied. In this work, we investigate the use of Meta-Learning to predict the performance of the Backpropagation (BP) and Levenberg-Marquardt (LM) algorithms used to train MLP networks. Meta-examples were generated by applying the BP and LM algorithms to 50 regression problems. Each meta-example stored 10 features describing a specific problem, together with a class attribute indicating the particular performance pattern obtained by the algorithms on that problem. Three classifiers were evaluated for predicting the algorithms' performance class, with promising results.

References
Bensusan, H. and Kalousis, A. (2001). Estimating the predictive accuracy of a classifier. In Proceedings of the 12th European Conference on Machine Learning, pages 25–36.
Brazdil, P., Soares, C., and da Costa, J. (2003). Ranking learning algorithms: Using IBL and meta-learning on accuracy and time results. Machine Learning, 50(3):251–277.
Demuth, H. and Beale, M. (1993). Neural Network Toolbox: For use with MATLAB: User’s Guide. The Mathworks.
dos Santos, P., Ludermir, T. B., and Prudêncio, R. B. C. (2004). Selection of time series forecasting models based on performance information. In 4th International Conference on Hybrid Intelligent Systems, pages 366–371.
Giraud-Carrier, C., Vilalta, R., and Brazdil, P. (2004). Introduction to the special issue on meta-learning. Machine Learning, 54(3):187–193.
Kalousis, A., Gama, J., and Hilario, M. (2004). On data and algorithms understanding inductive performance. Machine Learning, 54(3):275–312.
Kalousis, A. and Theoharis, T. (1999). Noemon: Design, implementation and performance results of an intelligent assistant for classifier selection. Intelligent Data Analysis, 3(5):319–337.
Leite, R. and Brazdil, P. (2005). Predicting relative performance of classifiers from samples. In Proceedings of the 22nd International Conference on Machine Learning.
Levenberg, K. (1944). A method for the solution of certain non-linear problems in least squares. Quarterly of Applied Mathematics, 2(2):164–168.
Michie, D., Spiegelhalter, D., and Taylor, C. (1994). Machine Learning, Neural and Statistical Classification. Ellis Horwood.
Mitchell, T. (1997). Machine Learning. McGraw-Hill, New York.
Prechelt, L. (1994). A set of neural network benchmark problems and benchmarking rules. Technical Report 21/94, Fakultät für Informatik, Universität Karlsruhe, Karlsruhe, Germany.
Prudêncio, R. B. C. and Ludermir, T. B. (2004). Meta-learning approaches to selecting time series models. Neurocomputing, 61:121–137.
Prudêncio, R. B. C., Ludermir, T. B., and de Carvalho, F. A. T. (2004). A modal symbolic classifier to select time series models. Pattern Recognition Letters, 25(8):911–921.
Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning representations by backpropagating errors. Nature, 323:533–536.
Vilalta, R. and Drissi, Y. (2002). A perspective view and survey of meta-learning. Journal of Artificial Intelligence Review, 18(2):77–95.
Witten, I. H. and Frank, E., editors (2003). WEKA: machine learning algorithms in Java. University of Waikato, New Zealand.
Published
30/06/2007
How to Cite
GUERRA, Silvio; PRUDÊNCIO, Ricardo; LUDERMIR, Teresa. Meta-Aprendizado de Algoritmos de Treinamento para Redes Multi-Layer Perceptron. In: ENCONTRO NACIONAL DE INTELIGÊNCIA ARTIFICIAL E COMPUTACIONAL (ENIAC), 6., 2007, Rio de Janeiro/RJ. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2007. p. 1022-1031. ISSN 2763-9061.
