Local explainability of fuzzy and classic models focusing on disaster management in Brazilian municipalities
Abstract
Disasters are socio-environmental processes of losses and damages related to severe or extreme events. The lower a community's socio-spatial capacity for self-protection against such events, the greater the chances of a disaster occurring. In Brazil, when the local damages and losses exceed the municipal administration's own resources to assist the affected population, the municipality must issue a declaration of emergency as the legal instrument for obtaining appropriate external support. In this article, we apply machine learning and explainable AI techniques to generate fuzzy and classic classification models and to interpret their predictions, using a data set that relates emergency declarations to indicators corresponding to Sustainable Development Goals 1, 3, 6, and 10, from 2016 to 2022. A qualitative analysis of the results provided by the explainable AI techniques identified the indicators with the greatest influence on predictions and offered additional support to field researchers and decision makers in the context of disaster response.
References
Albahri, A. S., Khaleel, Y. L., Habeeb, M. A., Ismael, R. D., Hameed, Q. A., Deveci, M., Homod, R. Z., Albahri, O. S., Alamoodi, A. H., and Alzubaidi, L. (2024). A systematic review of trustworthy artificial intelligence applications in natural disasters. Computers and Electrical Engineering, 118(Part B):109409.
Alcala-Fdez, J., Alcala, R., and Herrera, F. (2011). A fuzzy association rule-based classification model for high-dimensional problems with genetic rule selection and lateral tuning. IEEE Transactions on Fuzzy Systems, 19(5):857–872.
Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., and Herrera, F. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58:82–115.
Breiman, L. (2001). Random forests. Machine Learning, 45:5–32.
Cao, J., Zhou, T., Zhi, S., Lam, S., Ren, G., Zhang, Y., Wang, Y., Dong, Y., and Cai, J. (2024). Fuzzy inference system with interpretable fuzzy rules: Advancing explainable artificial intelligence for disease diagnosis—a comprehensive review. Information Sciences, 662:120212.
Chi, Z., Yan, H., and Pham, T. (1996). Fuzzy Algorithms: With Applications to Image Processing and Pattern Recognition. World Scientific.
D'Alterio, P., Garibaldi, J. M., and John, R. I. (2020). Constrained interval type-2 fuzzy classification systems for explainable AI (XAI). In 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), pages 1–8.
Frank, E. and Hall, M. A. (2016). The WEKA Workbench. Online Appendix for "Data Mining: Practical Machine Learning Tools and Techniques". Morgan Kaufmann, fourth edition.
Mendel, J. M. and Bonissone, P. P. (2021). Critical thinking about explainable AI (XAI) for rule-based fuzzy systems. IEEE Transactions on Fuzzy Systems, 29(12):3579–3593.
Perry, R. W. and Quarantelli, E. L. (2005). What is a disaster? New answers to old questions. Xlibris Press.
Quarantelli, E. L. (1998). What is a disaster? Perspectives on the question. Routledge.
Quinlan, J. (1993). C4.5: Programs for Machine Learning. Morgan Kaufmann.
Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, pages 1135–1144, New York, NY, USA. Association for Computing Machinery.
Silva, L. G. T., Matos, A. L., Carvalho, G. G., Valencio, N. F. L. S., and Camargo, H. A. (2024). Explainability of machine learning models with XGBoost and SHAP values in the context of coping with disasters. In Proceedings of the Brazilian Conference on Intelligent Systems, BRACIS 2024, pages 152–166, Berlin. Springer.
Stepin, I., Suffian, M., Catala, A., and Alonso-Moral, J. M. (2024). How to build self-explaining fuzzy systems: From interpretability to explainability [AI-explained]. IEEE Computational Intelligence Magazine, 19(1):81–82.
Triguero, I., González, S., Moyano, J. M., García, S., Alcalá-Fdez, J., Luengo, J., Fernández, A., del Jesús, M. J., Sánchez, L., and Herrera, F. (2017). KEEL 3.0: An open source software for multi-stage analysis in data mining. International Journal of Computational Intelligence Systems, 10:1238–1249.
United Nations (2023). The Sustainable Development Goals Report 2023. United Nations Publications, New York, special edition.
Upasane, S. J., Hagras, H., Anisi, M. H., Savill, S., Taylor, I., and Manousakis, K. (2024). A type-2 fuzzy based explainable AI system for predictive maintenance within the water pumping industry. IEEE Transactions on Artificial Intelligence, 5(2):490–504.
Valencio, N., Valencio, A., and da Silva Baptista, M. (2022). What lies behind the acute crises: The social and infrasystems links with disasters in Brazil. In Iossifova, D., Gasparatos, A., Zavos, S., Gamal, Y., Long, Y., and Yin, Y., editors, Urban Infrastructuring: Reconfigurations, Transformations and Sustainability in the Global South, pages 35–52. Springer Nature.
Published
29/09/2025
How to Cite
RIBEIRO, Renata; VALENCIO, Norma; CAMARGO, Heloisa. Local explainability of fuzzy and classic models focusing on disaster management in Brazilian municipalities. In: ENCONTRO NACIONAL DE INTELIGÊNCIA ARTIFICIAL E COMPUTACIONAL (ENIAC), 22., 2025, Fortaleza/CE. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 1033-1044. ISSN 2763-9061. DOI: https://doi.org/10.5753/eniac.2025.14320.
