Can occlusion facilitate human understanding? An evaluation of explainability in named entity recognition
Abstract
Explainability techniques are methods that help users understand the results of a machine learning model. In this context, this work investigates whether the Occlusion explainability technique can produce answers similar to those expected by humans when classifying words for Named Entity Recognition. To this end, a bidirectional LSTM and the CoNLL 2003 dataset were used, and 849 sentences were manually annotated, creating a reference dataset. The results show that Occlusion is able to indicate at least one relevant word that is consistent with human understanding.
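Purely for illustration, a minimal sketch of the occlusion idea described in the abstract, assuming a hypothetical `model` callable that maps a token list to per-token label probabilities (the paper's BiLSTM and its exact scoring are not shown here):

```python
# Occlusion-based relevance for NER: hide one word at a time and measure
# how much the predicted label's probability drops at a target position.
from typing import Callable, Dict, List

def occlusion_relevance(
    model: Callable[[List[str]], List[Dict[str, float]]],  # hypothetical interface
    tokens: List[str],
    target_idx: int,
    target_label: str,
    mask: str = "<unk>",  # assumed placeholder token
) -> List[float]:
    """Relevance of each word to the label predicted at target_idx."""
    base = model(tokens)[target_idx][target_label]
    scores = []
    for i in range(len(tokens)):
        occluded = tokens[:i] + [mask] + tokens[i + 1:]
        # Relevance = drop in the target label's probability when word i is hidden.
        scores.append(base - model(occluded)[target_idx][target_label])
    return scores
```

In the setting the abstract describes, the highest-scoring words would then be compared against the words human annotators marked as decisive for each entity.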
Published
21/07/2024
How to Cite
GOMES, Alexandre Augusto Aguiar; BRAGA, Leonidas J. F.; AZEVEDO, Marcos P. C.; ASSUNÇÃO, Gabriel; CARVALHO, Arthur; BRANDÃO, Michele A.; DALIP, Daniel H.; PÁDUA, Flávio Cardeal. A oclusão pode facilitar a compreensão humana? Avaliação de explicabilidade no reconhecimento de entidades nomeadas. In: WORKSHOP EM DESEMPENHO DE SISTEMAS COMPUTACIONAIS E DE COMUNICAÇÃO (WPERFORMANCE), 23., 2024, Brasília/DF. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 13-24. ISSN 2595-6167. DOI: https://doi.org/10.5753/wperformance.2024.2348.