Published November 4, 2019 | Version v1
Conference paper | Open Access

Neural Topic Model with Reinforcement Learning

  • 1. University of Warwick
  • 2. Harbin Institute of Technology

Description

In recent years, advances in neural variational inference have led to many successes in text processing. Examples include neural topic models, which are typically built upon a variational autoencoder (VAE) with the objective of minimising the error of reconstructing original documents from the learned latent topic vectors. However, minimising reconstruction error does not necessarily lead to high-quality topics. In this paper, we borrow the idea of reinforcement learning and incorporate topic coherence measures as reward signals to guide the learning of a VAE-based topic model. Furthermore, our proposed model is able to automatically separate background words from topic words dynamically, thus eliminating the pre-processing step of filtering infrequent and/or highly frequent words that is typically required for learning traditional topic models. Experimental results on the 20 Newsgroups and NIPS datasets show superior performance on both perplexity and topic coherence compared to state-of-the-art neural topic models.
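To make the idea concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' released code) of a VAE-based topic model whose training objective adds a REINFORCE-style term driven by a topic-coherence reward. The class and function names (NeuralTopicModel, coherence_reward, training_step), the simple co-occurrence-based reward, and all hyperparameters are illustrative assumptions; the coherence measures and update rules used in the paper may differ.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class NeuralTopicModel(nn.Module):
        """VAE over bag-of-words documents with a latent topic vector."""

        def __init__(self, vocab_size, num_topics=50, hidden=256):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden), nn.ReLU())
            self.mu = nn.Linear(hidden, num_topics)
            self.logvar = nn.Linear(hidden, num_topics)
            # Topic-word weights; softmax of each row gives a topic's word distribution.
            self.decoder = nn.Linear(num_topics, vocab_size, bias=False)

        def forward(self, bow):
            h = self.encoder(bow)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterise
            theta = F.softmax(z, dim=-1)                              # topic proportions
            log_word_probs = F.log_softmax(self.decoder(theta), dim=-1)
            return log_word_probs, mu, logvar


    def coherence_reward(word_ids, cooc):
        """Crude coherence proxy: mean pairwise co-occurrence of the given words.
        A faithful implementation would use NPMI or another standard coherence score."""
        pair = cooc[word_ids][:, word_ids]
        return pair.float().mean()


    def training_step(model, bow, cooc, optimizer, rl_weight=1.0, top_k=10):
        # Standard VAE objective: reconstruction error plus KL divergence.
        log_probs, mu, logvar = model(bow)
        nll = -(bow * log_probs).sum(-1).mean()
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()

        # REINFORCE-style term: sample candidate topic words from each topic's
        # word distribution, score them with the (non-differentiable) coherence
        # reward, and push up the log-probability of high-reward samples.
        beta = F.softmax(model.decoder.weight.t(), dim=-1)   # (K, V)
        dist = torch.distributions.Categorical(probs=beta)
        words = dist.sample((top_k,))                        # (top_k, K)
        log_prob = dist.log_prob(words).sum(0)               # (K,)
        with torch.no_grad():
            rewards = torch.stack([coherence_reward(words[:, k], cooc)
                                   for k in range(beta.size(0))])
        rl_loss = -(rewards * log_prob).mean()

        loss = nll + kl + rl_weight * rl_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Because the coherence reward is computed on sampled topic words and is not differentiable, the sketch uses the score-function (REINFORCE) estimator to propagate it into the topic-word weights; in practice the co-occurrence proxy would be replaced by NPMI or a similar coherence measure computed against a reference corpus, as the abstract indicates.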

Files

topic_model_with_rl-10.pdf (296.5 kB, md5:9a200b45b546d3035ae6540680d32012)

Additional details

Funding

European Commission
DeepPatient – Deep Understanding of Patient Experience of Healthcare from Social Media (grant no. 794196)