Zapiski Nauchnykh Seminarov POMI, 2023, Volume 529, Pages 54–71
(Mi znsl7419)
Monolingual and cross-lingual knowledge transfer for topic classification
D. Karpov (a), M. Burtsev (b)
(a) Moscow Institute of Physics and Technology, Dolgoprudny, Russia
(b) London Institute for Mathematical Sciences, London, United Kingdom
Abstract:
In this work, we investigate knowledge transfer from the RuQTopics dataset. This Russian topical dataset combines a large number of data points (361,560 single-label, 170,930 multi-label) with extensive class coverage (76 classes). We have prepared this dataset from the “Yandex Que” raw data. By evaluating models trained on RuQTopics on the six matching classes of the Russian MASSIVE subset, we show that the dataset is suitable for real-world conversational tasks: Russian-only models trained on it consistently yield an accuracy of around 85% on this subset. We have also found that for multilingual BERT trained on RuQTopics and evaluated on the same six MASSIVE classes across all MASSIVE languages, the language-wise accuracy closely correlates (Spearman correlation 0.773, p-value 2.997e-11) with the approximate size of the BERT pretraining data for the corresponding language. At the same time, the correlation of language-wise accuracy with the linguistic distance from Russian is not statistically significant.
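The correlation analysis mentioned above can be reproduced in outline with a rank-correlation test. The sketch below is not the authors' code; the per-language numbers are placeholders standing in for the paper's measured accuracies and the approximate BERT pretraining data sizes.

```python
# Minimal sketch of the abstract's analysis: Spearman correlation between
# per-language accuracy of multilingual BERT (trained on RuQTopics, evaluated
# on six MASSIVE classes) and approximate BERT pretraining data size.
# All numbers below are hypothetical placeholders, not the paper's results.
from scipy.stats import spearmanr

# language -> (accuracy on the six MASSIVE classes, approx. pretraining data size)
results = {
    "ru": (0.85, 100.0),
    "en": (0.80, 300.0),
    "de": (0.72, 60.0),
    "sw": (0.40, 1.5),
}

accuracies = [acc for acc, _ in results.values()]
data_sizes = [size for _, size in results.values()]

rho, p_value = spearmanr(accuracies, data_sizes)
print(f"Spearman rho = {rho:.3f}, p-value = {p_value:.3g}")
```

The same test, run against linguistic distance from Russian instead of pretraining data size, would correspond to the second (statistically insignificant) correlation reported in the abstract.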
Key words and phrases:
dataset, topic classification, knowledge transfer, cross-lingual knowledge transfer.
Received: 06.09.2023
Citation:
D. Karpov, M. Burtsev, “Monolingual and cross-lingual knowledge transfer for topic classification”, Investigations on applied mathematics and informatics. Part II–1, Zap. Nauchn. Sem. POMI, 529, POMI, St. Petersburg, 2023, 54–71
Linking options:
https://www.mathnet.ru/eng/znsl7419
https://www.mathnet.ru/eng/znsl/v529/p54