Multi-Level Stacking

  • Fabiana Coutinho Boldrin USP
  • Adriano Henrique Cantão USP
  • Renato Tinós USP
  • José Augusto Baranauskas USP

Abstract


Stacking is an algorithm that combines the outputs of different classifiers generated from the same training set. To explore some aspects of the stacking algorithm, such as the number of learning levels (layers), the number of classifiers per level, and the algorithms used, multi-level stacking is proposed. In this work, experiments were performed using three different types of inducers on several datasets with two learning levels.
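As an illustrative sketch (not the authors' exact experimental setup), a two-level stacking ensemble can be assembled with scikit-learn: three level-0 inducers are trained on the same training set, and their out-of-fold predictions become the features of a level-1 meta-learner. The dataset, inducers, and hyperparameters below are assumptions chosen for a self-contained example.

```python
# Minimal two-level stacking sketch: level-0 inducers feed a level-1
# meta-learner via out-of-fold predictions (cv=5). All choices here
# are illustrative, not the paper's configuration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Level 0: three different inducers trained on the same training set.
level0 = [
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("nb", GaussianNB()),
    ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
]

# Level 1: the meta-learner combines the level-0 predictions produced
# on held-out folds, avoiding training-set leakage into meta-features.
stack = StackingClassifier(
    estimators=level0,
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_tr, y_tr)
print(round(stack.score(X_te, y_te), 3))
```

Deeper stacks can be obtained the same way, e.g. by passing another `StackingClassifier` as the `final_estimator`, which adds a further learning level on top of the first meta-learner.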

Published
28/11/2022
How to Cite

BOLDRIN, Fabiana Coutinho; CANTÃO, Adriano Henrique; TINÓS, Renato; BARANAUSKAS, José Augusto. Multi-Level Stacking. In: ENCONTRO NACIONAL DE INTELIGÊNCIA ARTIFICIAL E COMPUTACIONAL (ENIAC), 19. , 2022, Campinas/SP. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2022 . p. 1-12. ISSN 2763-9061. DOI: https://doi.org/10.5753/eniac.2022.227346.
