ABSTRACT
Topic Modeling (TM) is a popular approach to extracting and organizing information from large amounts of textual data by discovering and representing semantic topics from documents. In this paper, we investigate an important challenge in the TM context, namely topic evaluation, which is responsible for driving advances in the field and for assessing the overall quality of the topic generation process. Traditional TM metrics capture the quality of topics by strictly evaluating the words that compose the topics, either syntactically (e.g., NPMI, TF-IDF Coherence) or semantically (e.g., WEP). Here, we investigate whether we are approaching the limits of what the current evaluation metrics can assess regarding topic quality for TM. We performed a comprehensive experiment considering three data collections widely used in automatic classification, for which each document's topic (class) is known (i.e., ACM, 20News and WebKB). We contrast the quality of the topics generated by four of the main TM techniques (i.e., LDA, NMF, CluWords and BERTopic) with the known topic structure of each collection. Our results show that, despite their importance, the current metrics could not capture some important idiosyncratic aspects of TM, indicating the need for new metrics that consider, for example, the structure and organization of the documents that comprise the topics.
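To make the "syntactic" side of this evaluation concrete: the NPMI coherence of a topic averages the normalized pointwise mutual information over pairs of its top words, with probabilities estimated from word co-occurrence in a reference corpus. The sketch below is a minimal illustration, assuming boolean document-level co-occurrence counts; the function name and estimator are illustrative, and practical toolkits also use sliding windows and smoothing:

```python
import math
from itertools import combinations

def npmi_coherence(topic_words, documents):
    """Average NPMI over all pairs of a topic's top words.

    Probabilities are estimated as boolean document frequencies
    (a simplifying assumption for illustration).
    """
    doc_sets = [set(d) for d in documents]
    n_docs = len(doc_sets)

    def p(*words):
        # Fraction of documents containing every word in `words`.
        return sum(all(w in s for w in words) for s in doc_sets) / n_docs

    scores = []
    for wi, wj in combinations(topic_words, 2):
        p_ij = p(wi, wj)
        if p_ij == 0.0:
            scores.append(-1.0)   # words never co-occur: minimum NPMI
        elif p_ij == 1.0:
            scores.append(1.0)    # words always co-occur: maximum NPMI
        else:
            pmi = math.log(p_ij / (p(wi) * p(wj)))
            scores.append(pmi / -math.log(p_ij))
    return sum(scores) / len(scores)
```

A topic whose words co-occur more often than chance scores above 0 (up to 1); words that never co-occur pull the score toward -1, which is why coherent topics rank higher under this measure.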