Abstract:
We propose a generalized probabilistic topic model of text corpora that can incorporate heuristics of Bayesian regularization, sampling, frequent parameter updates, and robustness in any combination. Well-known models such as PLSA, LDA, CVB0, SWB, and many others can be viewed as special cases of the proposed broad family of models. We also propose the robust PLSA model and show that it is sparser and performs better than regularized models such as LDA.
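As context for the family of models described above, the following is a minimal sketch of plain PLSA fitted by EM, the baseline that the regularized and robust variants extend. The toy counts, topic number, and iteration count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical toy term-document count matrix n_dw[d, w] (8 docs, 20 terms).
rng = np.random.default_rng(0)
n_dw = rng.integers(0, 5, size=(8, 20)).astype(float)
D, W = n_dw.shape
T = 3  # assumed number of topics

# Random column-stochastic initialization:
# phi[w, t] = p(w | t), theta[t, d] = p(t | d).
phi = rng.random((W, T)); phi /= phi.sum(axis=0, keepdims=True)
theta = rng.random((T, D)); theta /= theta.sum(axis=0, keepdims=True)

for _ in range(50):
    # E-step folded into the M-step: p(t | d, w) ∝ phi[w, t] * theta[t, d].
    p_wd = phi @ theta                 # (W, D): model p(w | d)
    p_wd = np.maximum(p_wd, 1e-12)     # guard against division by zero
    r = n_dw.T / p_wd                  # (W, D): n_dw / p(w | d)
    # Unnormalized topic-term and doc-topic counts n_wt and n_td.
    phi_new = phi * (r @ theta.T)      # (W, T)
    theta_new = theta * (phi.T @ r)    # (T, D)
    phi = phi_new / phi_new.sum(axis=0, keepdims=True)
    theta = theta_new / theta_new.sum(axis=0, keepdims=True)
```

The regularized and robust variants studied in the paper modify the M-step counts (e.g. by adding Dirichlet-style pseudocounts or a background component) before renormalizing, which is why these models fit naturally into one generalized EM scheme.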
Citation:
K. V. Vorontsov, A. A. Potapenko, “Regularization, robustness and sparsity of probabilistic topic models”, Computer Research and Modeling, 4:4 (2012), 693–706