This article is cited in 2 scientific papers.
Architecture of a machine translation system
V. A. Nuriev
Institute of Informatics Problems, Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
Abstract:
The paper describes the architecture of a Neural Machine Translation (NMT) system. The subject is relevant because NMT, i.e., translation by means of artificial neural networks, is now the leading machine translation paradigm. NMT systems deliver considerably better output quality than the machine translators of the previous generation (statistical translation systems) do. Yet the translations they produce may still contain various errors and remain relatively inaccurate compared with human translations. Therefore, to improve output quality, it is important to understand more clearly how an NMT system is built and how it works. Commonly, its architecture consists of two recurrent neural networks: one encodes the input text sequence, and the other generates the translated output sequence. The NMT system often includes an attention mechanism that helps it cope with long input sequences. Google's NMT system is taken as an example, since the Google Translate service is one of the most widely used today: it processes around 143 billion words in more than 100 languages per day. The paper concludes with some perspectives for future research.
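The attention mechanism mentioned above can be illustrated with a minimal sketch: the decoder state is scored against each encoder state, the scores are softmax-normalized into weights, and the weighted sum of encoder states forms a context vector. This is an illustrative toy (plain dot-product attention with hand-made vectors), not the specific formulation used in Google's NMT system.

```python
import math

def attention(decoder_state, encoder_states):
    """Dot-product attention over a list of encoder hidden states."""
    # Score each encoder state by its dot product with the decoder state.
    scores = [sum(d * e for d, e in zip(decoder_state, enc))
              for enc in encoder_states]
    # Softmax normalization (shifted by the max score for numerical stability)
    # turns the scores into weights that sum to 1.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [x / total for x in exps]
    # The context vector is the weighted average of the encoder states;
    # it lets the decoder focus on the most relevant input positions.
    dim = len(encoder_states[0])
    context = [sum(w * enc[i] for w, enc in zip(weights, encoder_states))
               for i in range(dim)]
    return weights, context

# Toy example: the first encoder state is most similar to the decoder state,
# so it receives the largest attention weight.
weights, context = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
```

In a full NMT system the scores are usually computed by a small learned network rather than a raw dot product, but the normalize-and-average pattern is the same.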
Keywords:
neural machine translation, artificial neural networks, recurrent neural networks, attention mechanism, architecture of a machine translation system, Google's Neural Machine Translation system.
Received: 30.06.2019
Citation:
V. A. Nuriev, “Architecture of a machine translation system”, Inform. Primen., 13:3 (2019), 90–96
Linking options:
https://www.mathnet.ru/eng/ia614
https://www.mathnet.ru/eng/ia/v13/i3/p90
Statistics & downloads: Abstract page: 823; Full-text PDF: 528; References: 45