
Informatics and Automation, 2022, Volume 21, Issue 3, Pages 521–542
DOI: https://doi.org/10.15622/ia.21.3.3
(Mi trspy1199)
 

Artificial Intelligence, Knowledge and Data Engineering

Experimental study of "transformer" language models in the problem of finding the answer to a question in a Russian-language text

D. Galeev, V. Panishchev

Southwest State University (SWSU)
Abstract: The aim of the study is to obtain a more lightweight language model that is comparable, in terms of EM and F1, with the best modern language models on the task of finding the answer to a question in a Russian-language text. The results of the work can be used in various question answering systems for which response time is important. Since the lighter model has fewer parameters than the original, it can be used on less powerful computing devices, including mobile devices. The paper applies methods of natural language processing, machine learning, and the theory of artificial neural networks. The neural network is configured and trained using the Torch and Hugging Face machine learning libraries. The DistilBERT model was trained on the SberQUAD dataset both with and without distillation, and the resulting models are compared. The distilled DistilBERT model (EM 58.57, F1 78.42) outperformed the larger ruGPT-3-medium generative network (EM 57.60, F1 77.73), even though ruGPT-3-medium has 6.5 times more parameters. The distilled model also achieved better EM and F1 scores than the same model trained conventionally, without distillation (EM 55.65, F1 76.51). However, the resulting model still lags behind the larger ruBERT discriminative model (EM 66.83, F1 84.95), which has 3.2 times more parameters. The application of the DistilBERT model in Russian-language question answering systems is substantiated, and directions for further research are proposed.
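The distillation setup the abstract describes can be illustrated with a short sketch. The following is a minimal illustration, not the authors' code: the checkpoint names, the temperature T, the mixing weight ALPHA, and the gold span positions are all assumptions. It trains a DistilBERT student for extractive question answering with a combined loss: cross-entropy against the gold answer span plus a KL-divergence term that pulls the student's start/end distributions toward those of a larger BERT teacher (PyTorch and Hugging Face transformers):

import torch
import torch.nn.functional as F
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Placeholder checkpoints (assumptions): in practice the teacher would be a
# larger model already fine-tuned on SberQUAD; teacher and student must share
# a tokenizer so the token positions of their start/end logits line up.
TEACHER = "bert-base-multilingual-cased"
STUDENT = "distilbert-base-multilingual-cased"

tok = AutoTokenizer.from_pretrained(STUDENT)
teacher = AutoModelForQuestionAnswering.from_pretrained(TEACHER).eval()
student = AutoModelForQuestionAnswering.from_pretrained(STUDENT)

T = 2.0      # softmax temperature for soft targets (assumed value)
ALPHA = 0.5  # weight between hard-label CE and soft-label KL (assumed value)

def distillation_loss(batch):
    # Hard-label loss: the model's built-in start/end cross-entropy
    # against the gold answer span.
    out_s = student(input_ids=batch["input_ids"],
                    attention_mask=batch["attention_mask"],
                    start_positions=batch["start_positions"],
                    end_positions=batch["end_positions"])
    with torch.no_grad():
        out_t = teacher(input_ids=batch["input_ids"],
                        attention_mask=batch["attention_mask"])
    # Soft-label loss: KL divergence between the temperature-scaled
    # start/end distributions of the student and the teacher.
    kl = F.kl_div(F.log_softmax(out_s.start_logits / T, dim=-1),
                  F.softmax(out_t.start_logits / T, dim=-1),
                  reduction="batchmean")
    kl += F.kl_div(F.log_softmax(out_s.end_logits / T, dim=-1),
                   F.softmax(out_t.end_logits / T, dim=-1),
                   reduction="batchmean")
    return ALPHA * out_s.loss + (1 - ALPHA) * (T ** 2) * kl

# One illustrative training step on a toy question/context pair;
# the gold span positions below are made up for the example.
batch = tok("Кто написал роман?", "Роман написал Лев Толстой.",
            return_tensors="pt", truncation=True)
batch["start_positions"] = torch.tensor([7])
batch["end_positions"] = torch.tensor([9])

optim = torch.optim.AdamW(student.parameters(), lr=3e-5)
loss = distillation_loss(batch)
loss.backward()
optim.step()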
Keywords: machine learning, deep learning, neural networks, natural language processing, transformer.
Received: 09.02.2022
Document Type: Article
UDC: 004.912
Language: Russian
Citation: D. Galeev, V. Panishchev, “Experimental study of "transformer" language models in the problem of finding the answer to a question in a Russian-language text”, Informatics and Automation, 21:3 (2022), 521–542
Citation in format AMSBIB
\Bibitem{GalPan22}
\by D.~Galeev, V.~Panishchev
\paper Experimental study of "transformer" language models in the problem of finding the answer to a question in a Russian-language text
\jour Informatics and Automation
\yr 2022
\vol 21
\issue 3
\pages 521--542
\mathnet{http://mi.mathnet.ru/trspy1199}
\crossref{https://doi.org/10.15622/ia.21.3.3}
Linking options:
  • https://www.mathnet.ru/eng/trspy1199
  • https://www.mathnet.ru/eng/trspy/v21/i3/p521