Contextualized Topic Models are based on the Neural-ProdLDA variational autoencoding approach by Srivastava and Sutton (2017).
This approach trains an encoding neural network to map pre-trained contextualized word embeddings (e.g., BERT) to latent representations. The latent representation is sampled from a Gaussian distribution $N(\mu, \sigma^2)$ whose parameters are produced by the encoder, and is then passed to a decoder network that reconstructs the document's bag-of-words representation.
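The encode-sample-decode cycle above can be sketched in a few lines of NumPy. This is a minimal illustration of the forward pass only, assuming a 768-dimensional contextual embedding, 10 topics, and a 2000-word vocabulary; the weight matrices are random placeholders standing in for a trained encoder and decoder, not the actual CTM implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM, K, V = 768, 10, 2000  # embedding dim, number of topics, vocab size (illustrative)

# Encoder weights: map the contextual embedding to Gaussian parameters (placeholders).
W_mu = rng.normal(scale=0.01, size=(EMB_DIM, K))
W_logvar = rng.normal(scale=0.01, size=(EMB_DIM, K))
# Decoder weights: map a latent topic vector to a distribution over the vocabulary.
W_dec = rng.normal(scale=0.01, size=(K, V))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def forward(doc_embedding):
    """One VAE forward pass: encode, reparameterize, decode."""
    mu = doc_embedding @ W_mu                # mean of q(z|x)
    logvar = doc_embedding @ W_logvar        # log-variance of q(z|x)
    eps = rng.standard_normal(K)
    z = mu + np.exp(0.5 * logvar) * eps      # reparameterization trick
    theta = softmax(z)                       # document-topic proportions
    word_probs = softmax(theta @ W_dec)      # reconstructed bag-of-words distribution
    return theta, word_probs

doc_emb = rng.standard_normal(EMB_DIM)       # stand-in for a BERT document embedding
theta, word_probs = forward(doc_emb)
print(theta.shape, word_probs.shape)         # (10,) (2000,)
```

Training would additionally minimize the reconstruction loss against the true bag-of-words counts plus the KL divergence between $N(\mu, \sigma^2)$ and the prior, which this sketch omits.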
Source: Cross-lingual Contextualized Topic Models with Zero-shot Learning
| Task | Papers | Share |
|---|---|---|
| Topic Models | 7 | 33.33% |
| Bayesian Optimization | 2 | 9.52% |
| Clustering | 1 | 4.76% |
| Document Embedding | 1 | 4.76% |
| Document Classification | 1 | 4.76% |
| Bayesian Optimisation | 1 | 4.76% |
| Classification | 1 | 4.76% |
| Cross-Lingual Transfer | 1 | 4.76% |
| General Classification | 1 | 4.76% |