Evaluating the Limits of the Current Evaluation Metrics for Topic Modeling

  • Antônio Pereira (UFSJ)
  • Felipe Viegas (UFMG)
  • Marcos André Gonçalves (UFMG)
  • Leonardo Rocha (UFSJ)

Abstract


Topic Modeling (TM) is a popular approach to extracting and organizing information from large amounts of textual data by discovering and representing semantic topics from documents. In this paper, we investigate an important challenge in the TM context, namely topic evaluation, which is responsible for driving advances in the field and for assessing the overall quality of the topic generation process. Traditional TM metrics capture the quality of topics by evaluating only the words that compose the topics, either syntactically (e.g., NPMI, TF-IDF Coherence) or semantically (e.g., WEP). Here, we investigate whether we are approaching the limits of what the current evaluation metrics can assess regarding topic quality in TM. We performed a comprehensive experiment considering three data collections widely used in automatic classification, for which each document’s topic (class) is known (ACM, 20News, and WebKB). We contrast the quality of topics generated by four of the main TM techniques (LDA, NMF, CluWords, and BERTopic) with the known topic structure of each collection. Our results show that, despite their importance, the current metrics fail to capture some important idiosyncratic aspects of TM, indicating the need for new metrics that consider, for example, the structure and organization of the documents that comprise the topics.
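For concreteness, the sketch below illustrates how a syntactic coherence metric such as NPMI is typically computed for a single topic: word probabilities are estimated from document-level co-occurrence of the topic's top words, and the pairwise NPMI scores are averaged. This is a minimal illustration under those assumptions, not the implementation used in the paper; the function name `topic_npmi` and the toy corpus are hypothetical.

```python
from itertools import combinations
from math import log

def topic_npmi(topic_words, documents):
    """Average pairwise NPMI for one topic's top words.

    Probabilities come from document-level co-occurrence:
    P(w)      = fraction of documents containing w
    P(wi, wj) = fraction of documents containing both words
    NPMI(wi, wj) = log(P(wi, wj) / (P(wi) P(wj))) / -log(P(wi, wj))
    """
    docs = [set(doc) for doc in documents]  # each document as a set of tokens
    n = len(docs)

    def p(*words):
        return sum(all(w in d for w in words) for d in docs) / n

    scores = []
    for wi, wj in combinations(topic_words, 2):
        p_ij = p(wi, wj)
        if p_ij == 0.0:
            scores.append(-1.0)          # words never co-occur: minimum NPMI
            continue
        pmi = log(p_ij / (p(wi) * p(wj)))
        denom = -log(p_ij)
        scores.append(pmi / denom if denom > 0 else 0.0)
    return sum(scores) / len(scores)

# Hypothetical usage with a tiny tokenized reference corpus:
corpus = [
    ["topic", "model", "word", "distribution"],
    ["topic", "model", "latent", "variable"],
    ["soccer", "match", "goal", "league"],
]
print(topic_npmi(["topic", "model"], corpus))   # coherent pair -> high score
print(topic_npmi(["topic", "goal"], corpus))    # unrelated pair -> low score
```

Note that such word-only scores say nothing about how documents are distributed across topics; the document-level comparison the paper performs against the known classes of ACM, 20News, and WebKB (e.g., measuring agreement between document-topic assignments and class labels) probes exactly the aspect these metrics do not capture.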

Keywords: Topic Modeling, Experimental Evaluation, Evaluation Metrics

References

2023. ACM Digital Library. [link]

Saqib Aziz, Michael Dowling, Helmi Hammami, and Anke Piepenbrink. 2022. Machine learning in finance: A topic modeling approach. European Financial Management 28, 3 (2022), 744–770. https://doi.org/10.1111/eufm.12326

Gerlof Bouma. 2009. Normalized (pointwise) mutual information in collocation extraction. Proceedings of GSCL (2009).

Rob Churchill and Lisa Singh. 2022. The Evolution of Topic Modeling. ACM Comput. Surv. 54, 10s, Article 215 (nov 2022), 35 pages. https://doi.org/10.1145/3507900

Washington Cunha, Sérgio Canuto, Felipe Viegas, Thiago Salles, Christian Gomes, Vitor Mangaravite, Elaine Resende, Thierson Rosa, Marcos André Gonçalves, and Leonardo Rocha. 2020. Extended pre-processing pipeline for text classification: On the role of meta-feature representations, sparsification and selective sampling. Information Processing & Management 57, 4 (2020), 102263. https://doi.org/10.1016/j.ipm.2020.102263

Washington Cunha, Vítor Mangaravite, Christian Gomes, Sérgio Canuto, Elaine Resende, Cecilia Nascimento, Felipe Viegas, Celso França, Wellington Santos Martins, Jussara M. Almeida, Thierson Rosa, Leonardo Rocha, and Marcos André Gonçalves. 2021. On the cost-effectiveness of neural and non-neural approaches and representations for text classification: A comprehensive comparative study. Information Processing & Management 58, 3 (2021), 102481. https://doi.org/10.1016/j.ipm.2020.102481

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805 [cs.CL]

Adji B. Dieng, Francisco J. R. Ruiz, and David M. Blei. 2019. Topic Modeling in Embedding Spaces. CoRR abs/1907.04907 (2019). arXiv:1907.04907 [link]

Maarten Grootendorst. 2022. BERTopic: Neural topic modeling with a class-based TF-IDF procedure. arXiv preprint arXiv:2203.05794 (2022).

Thomas Hofmann. 1999. Probabilistic latent semantic indexing. In Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval. 50–57.

Antônio Pereira De Souza Júnior, Pablo Cecilio, Felipe Viegas, Washington Cunha, Elisa Tuler De Albergaria, and Leonardo Chaves Dutra Da Rocha. 2022. Evaluating Topic Modeling Pre-Processing Pipelines for Portuguese Texts (WebMedia ’22). Association for Computing Machinery, New York, NY, USA, 191–201. https://doi.org/10.1145/3539637.3557052

Daniel D. Lee and H. Sebastian Seung. 1999. Learning the parts of objects by non-negative matrix factorization. Nature 401, 6755 (1999), 788–791.

Leland McInnes, John Healy, and James Melville. 2020. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction. arXiv:1802.03426 [stat.ML]

Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Advances in Pre-Training Distributed Word Representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). European Language Resources Association (ELRA), Miyazaki, Japan. [link]

Sergey Nikolenko, Sergei Koltsov, and Olessia Koltsova. 2015. Topic modelling for qualitative studies. Journal of Information Science 43 (12 2015). https://doi.org/10.1177/0165551515617393

Sergey I Nikolenko. 2016. Topic quality metrics based on distributed word representations. In Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval. 1029–1032.

Thomas Porturas and R. Andrew Taylor. 2021. Forty years of emergency medicine research: Uncovering research themes and trends through topic modeling. The American Journal of Emergency Medicine 45 (2021), 213–220. https://doi.org/10.1016/j.ajem.2020.08.036

Shahzad Qaiser and Ramsha Ali. 2018. Text Mining: Use of TF-IDF to Examine the Relevance of Words to Documents. International Journal of Computer Applications 181 (07 2018). https://doi.org/10.5120/ijca2018917395

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. arXiv:1908.10084 [cs.CL]

Michael Röder, Andreas Both, and Alexander Hinneburg. 2015. Exploring the space of topic coherence measures. In Proceedings of the eighth ACM international conference on Web search and data mining. 399–408.

Priya Shrivastava and Dilip Kumar Sharma. 2021. Fake Content Identification Using Pre-Trained Glove-Embedding. In 2021 5th International Conference on Information Systems and Computer Networks (ISCON), Vol. 1. 1–6. https://doi.org/10.1109/ISCON52037.2021.9702379

Alper Kursat Uysal and Serkan Gunal. 2014. The impact of preprocessing on text classification. Information Processing & Management 50, 1 (2014), 104–112. https://doi.org/10.1016/j.ipm.2013.08.006

Felipe Viegas, Sérgio Canuto, Christian Gomes, Washington Luiz, Thierson Rosa, Sabir Ribas, Leonardo Rocha, and Marcos André Gonçalves. 2019. CluWords: exploiting semantic word clustering representation for enhanced topic modeling. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining. 753–761.

Felipe Viegas, Washington Cunha, Christian Gomes, Antônio Pereira, Leonardo Rocha, and Marcos Goncalves. 2020. CluHTM - Semantic Hierarchical Topic Modeling based on CluWords. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Online, 8138–8150. https://doi.org/10.18653/v1/2020.acl-main.724

Felipe Viegas, Antônio Pereira, Pablo Cecílio, Elisa Tuler, Wagner Meira Jr, Marcos Gonçalves, and Leonardo Rocha. 2022. Semantic Academic Profiler (SAP): a framework for researcher assessment based on semantic topic modeling. Scientometrics 127, 8 (2022), 5005–5026.

S. Vijayarani, J. Ilamathi, and Nithya. 2015. Preprocessing techniques for text mining - an overview. International Journal of Computer Science & Communication Networks 5, 1 (2015), 7–16.
Published
23/10/2023
PEREIRA, Antônio; VIEGAS, Felipe; GONÇALVES, Marcos André; ROCHA, Leonardo. Evaluating the Limits of the Current Evaluation Metrics for Topic Modeling. In: BRAZILIAN SYMPOSIUM ON MULTIMEDIA AND THE WEB (WEBMEDIA), 29., 2023, Ribeirão Preto/SP. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2023. p. 119–127.
