eXplainable Artificial Intelligence in sentiment analysis of posts about Covid-19 vaccination on Twitter

  • Juliana Da Costa Feitosa UNESP
  • Luiz Felipe De Camargo UNESP
  • Eloisa Bonatti UNESP
  • Giovanna Simioni UNESP
  • José Remo Ferreira Brega UNESP

Abstract

Considering the impact of Artificial Intelligence (AI) across the most diverse sectors of society and the use of eXplainable Artificial Intelligence (XAI) to improve the interpretability of these intelligent models, this paper analyzes existing XAI methods to verify their effectiveness. To this end, experiments were conducted with the LIME, SHAP, and Eli5 tools in a scenario of sentiment classification of Twitter posts about the Covid-19 vaccination process in Brazil. The tools were observed to provide relevant information about the aspects that influence the classification of tweets as favorable or not favorable to vaccination, which supports the conclusion that these methods provide the transparency needed to confirm the AI decisions regarding sentiments about the vaccination process in Brazil.
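The paper itself does not include code, but a minimal sketch of the kind of experiment the abstract describes (explaining a tweet sentiment classifier with LIME, SHAP, and Eli5) might look like the following. The toy corpus, the TF-IDF plus logistic regression pipeline, and the class names are illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch, NOT the authors' code: a toy tweet sentiment
# classifier explained with LIME, SHAP, and Eli5. The corpus, the
# TF-IDF + logistic regression pipeline, and the class names are
# illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

import eli5
import shap
from lime.lime_text import LimeTextExplainer

# Toy training data: 1 = favorable to vaccination, 0 = not favorable.
texts = [
    "vaccines save lives, get your shot",
    "so grateful for the covid vaccine rollout",
    "I refuse to take this experimental vaccine",
    "the vaccine is dangerous, stay away from it",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

tweet = "happy that the vaccination campaign finally reached my city"

# LIME: perturbs the tweet and fits a local surrogate model to show
# which words push the prediction toward each class.
lime_explainer = LimeTextExplainer(class_names=["not favorable", "favorable"])
lime_exp = lime_explainer.explain_instance(
    tweet, model.predict_proba, num_features=5
)
print(lime_exp.as_list())

# SHAP: Shapley-value attributions per token, using SHAP's default
# text masker to hide words when estimating contributions.
shap_explainer = shap.Explainer(model.predict_proba, shap.maskers.Text())
print(shap_explainer([tweet]))

# Eli5: global weights of the linear model over the TF-IDF features.
print(eli5.format_as_text(eli5.explain_weights(
    model.named_steps["logisticregression"],
    vec=model.named_steps["tfidfvectorizer"],
)))
```

LIME and SHAP produce local, per-tweet explanations, while the Eli5 call above inspects the model's global feature weights; this mirrors the kind of evidence the paper examines to judge why a tweet is classified as favorable or not favorable to vaccination.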

Keywords: explainable artificial intelligence, explainability, sentiment analysis, COVID-19

References

Sercan Ö. Arik and Tomas Pfister. 2021. TabNet: Attentive Interpretable Tabular Learning. Proceedings of the AAAI Conference on Artificial Intelligence 35, 8 (May 2021), 6679–6687. https://doi.org/10.1609/aaai.v35i8.16826

Diogo M. Camacho, Katherine M. Collins, Rani K. Powers, James C. Costello, and James J. Collins. 2018. Next-generation machine learning for biological networks. Cell 173, 7 (2018), 1581–1592.

X. Cui, J. M. Lee, and J. Po-An Hsieh. 2019. An integrative 3C evaluation framework for explainable artificial intelligence. In AMCIS 2019 Proceedings: 25th Americas Conference on Information Systems. 1–10. [link]

Ministério da Saúde. 2021. [link]

Benjamin P. Evans, Bing Xue, and Mengjie Zhang. 2019. What’s inside the Black-Box? A Genetic Programming Method for Interpreting Complex Machine Learning Models. In Proceedings of the Genetic and Evolutionary Computation Conference (Prague, Czech Republic) (GECCO ’19). Association for Computing Machinery, New York, NY, USA, 1012–1020. https://doi.org/10.1145/3321707.3321726

J.-M. Fellous, G. Sapiro, A. Rossi, H. Mayberg, and M. Ferrante. 2019. Explainable Artificial Intelligence for Neuroscience: Behavioral Neurostimulation. Frontiers in Neuroscience 13 (2019). https://doi.org/10.3389/fnins.2019.01346

Alec Go, Richa Bhayani, and Lei Huang. 2009. Twitter sentiment classification using distant supervision. Processing 150 (2009).

David Gunning. 2017. Explainable Artificial Intelligence (XAI). Defense Advanced Research Projects Agency (DARPA), nd Web 2 (2017), 2.

David Hardage and Peyman Najafirad. 2020. Hate and Toxic Speech Detection in the Context of Covid-19 Pandemic using XAI: Ongoing Applied Research. In Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020. Association for Computational Linguistics, Online. https://doi.org/10.18653/v1/2020.nlpcovid19-2.36

Paul Harmon, Rex Maus, and William Morrissey. 1988. Expert Systems: Tools and Applications. John Wiley & Sons, Inc.

Dora Kaufman. 2019. A inteligência artificial irá suplantar a inteligência humana? Estação das Letras e Cores.

M. Korobov. 2017. Explaining behavior of Machine Learning models with eli5 library. In Proceedings of the EuroPython Congress.

Bing Liu. 2012. Sentiment Analysis and Opinion Mining. Springer International Publishing. https://doi.org/10.1007/978-3-031-02145-9

Scott M. Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In NIPS'17: Proceedings of the 31st International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, 4765–4774.

John McCarthy, Marvin L. Minsky, Nathaniel Rochester, and Claude E. Shannon. 2006. A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Magazine 27, 4 (2006), 12–12.

Donald Michie, David J. Spiegelhalter, and C. C. Taylor. 1994. Machine learning. Neural and Statistical Classification 13 (1994), 1–298.

Brent Mittelstadt, Chris Russell, and Sandra Wachter. 2019. Explaining Explanations in AI. In Proceedings of the Conference on Fairness, Accountability, and Transparency (Atlanta, GA, USA) (FAT* ’19). Association for Computing Machinery, New York, NY, USA, 279–288. https://doi.org/10.1145/3287560.3287574

Marco Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations. Association for Computational Linguistics, San Diego, California, 97–101. https://doi.org/10.18653/v1/N16-3020

Stuart J. Russell and Peter Norvig. 2004. Inteligência artificial. Elsevier, Amsterdam, The Netherlands.

Nadia Felix Felipe da Silva. 2016. Análise de sentimentos em textos curtos provenientes de redes sociais. Ph.D. Dissertation. Universidade de São Paulo.

Alex J. Smola and Bernhard Schölkopf. 2004. A tutorial on support vector regression. Statistics and Computing 14 (2004), 199–222.

Ah-Hwee Tan. 1999. Text mining: The state of the art and the challenges. In Proceedings of the PAKDD 1999 Workshop on Knowledge Discovery from Advanced Databases, Vol. 8. 65–70.

Alan M. Turing. 2009. Computing machinery and intelligence. Springer.

Wil van der Aalst. 2016. Data Science in Action. Springer.

Rosina O. Weber, Adam J. Johs, Jianfei Li, and Kent Huang. 2018. Investigating Textual Case-Based XAI. In Case-Based Reasoning Research and Development, Michael T. Cox, Peter Funk, and Shahina Begum (Eds.). Lecture Notes in Computer Science, Vol. 11156. Springer International Publishing, 431–447. https://doi.org/10.1007/978-3-030-01081-2_29

Christine T. Wolf and Kathryn E. Ringland. 2020. Designing Accessible, Explainable AI (XAI) Experiences. SIGACCESS Access. Comput. 125, Article 6 (March 2020), 1 page. https://doi.org/10.1145/3386296.3386302

Feiyu Xu, Hans Uszkoreit, Yangzhou Du, Wei Fan, Dongyan Zhao, and Jun Zhu. 2019. Explainable AI: A Brief Survey on History, Research Areas, Approaches and Challenges. In Natural Language Processing and Chinese Computing, Jie Tang, Min-Yen Kan, Dongyan Zhao, Sujian Li, and Hongying Zan (Eds.). Lecture Notes in Computer Science, Vol. 11839. Springer International Publishing, 563–574. https://doi.org/10.1007/978-3-030-32236-6_51

Fumeng Yang, Zhuanyi Huang, Jean Scholtz, and Dustin L. Arendt. 2020. How Do Visual Explanations Foster End Users’ Appropriate Trust in Machine Learning?. In Proceedings of the 25th International Conference on Intelligent User Interfaces (Cagliari, Italy) (IUI ’20). Association for Computing Machinery, New York, NY, USA, 189–201. https://doi.org/10.1145/3377325.3377480

Arjumand Younus, M. Atif Qureshi, Mingyeong Jeon, Arefeh Kazemi, and Simon Caton. 2022. XAI Analysis of Online Activism to Capture Integration in Irish Society Through Twitter. In Social Informatics: 13th International Conference, SocInfo 2022, Glasgow, UK, October 19–21, 2022, Proceedings. Springer-Verlag, Berlin, Heidelberg, 233–244.
Published
23/10/2023
How to Cite

FEITOSA, Juliana Da Costa; DE CAMARGO, Luiz Felipe; BONATTI, Eloisa; SIMIONI, Giovanna; BREGA, José Remo Ferreira. eXplainable Artificial Intelligence in sentiment analysis of posts about Covid-19 vaccination on Twitter. In: SIMPÓSIO BRASILEIRO DE SISTEMAS MULTIMÍDIA E WEB (WEBMEDIA), 29., 2023, Ribeirão Preto/SP. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2023. p. 65–72.