Embedded Topic Models Enhanced by Wikification
Abstract
Topic modeling analyzes a collection of documents to learn meaningful patterns of words. However, previous topic models consider only the spelling of words and do not account for homography. In this study, we incorporate Wikipedia knowledge into a neural topic model to make it aware of named entities. We evaluate our method on two datasets: 1) New York Times news articles and 2) the AIDA-CoNLL dataset. Our experiments show that our method improves the generalizability of neural topic models. Moreover, we analyze frequent terms in each topic and the temporal dependencies between topics to demonstrate that our entity-aware topic models can capture the time-series development of topics well.
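The core idea, disambiguating homographs by linking mentions to Wikipedia entities before topic modeling, can be illustrated with a minimal sketch. This is an assumption about the general wikification preprocessing step, not the paper's actual pipeline; the `wikify` function and its `links` input format are hypothetical stand-ins for an entity linker's output:

```python
# Minimal sketch (assumed preprocessing, not the paper's exact method):
# replace linked mention spans with Wikipedia entity IDs so that, e.g.,
# "Amazon" the company and "Amazon" the river become distinct vocabulary
# items in the bag-of-words fed to a (neural) topic model.

def wikify(tokens, links):
    """Replace linked mention spans with their Wikipedia entity ID.

    `links` maps (start, end) token spans to entity identifiers,
    as a hypothetical entity linker (wikifier) might produce.
    """
    out, i = [], 0
    while i < len(tokens):
        span = next(((s, e) for (s, e) in links if s == i), None)
        if span:
            out.append(links[span])   # one vocabulary item per entity
            i = span[1]               # skip past the whole mention
        else:
            out.append(tokens[i].lower())
            i += 1
    return out

doc = "Amazon reported earnings while the Amazon river flooded".split()
links = {(0, 1): "ENTITY/Amazon_(company)", (5, 7): "ENTITY/Amazon_River"}
print(wikify(doc, links))
# → ['ENTITY/Amazon_(company)', 'reported', 'earnings', 'while', 'the',
#    'ENTITY/Amazon_River', 'flooded']
```

With the two mentions mapped to distinct entity tokens, a topic model can assign them to different topics even though they share a surface form.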
- Publication: arXiv e-prints
- Pub Date: October 2024
- DOI: 10.48550/arXiv.2410.02441
- arXiv: arXiv:2410.02441
- Bibcode: 2024arXiv241002441S
- Keywords: Computer Science - Computation and Language
- E-Print: Accepted at EMNLP 2024 Workshop NLP for Wikipedia