Vestnik Yuzhno-Ural'skogo Universiteta. Seriya Matematicheskoe Modelirovanie i Programmirovanie
Vestnik Yuzhno-Ural'skogo Universiteta. Seriya Matematicheskoe Modelirovanie i Programmirovanie, 2020, Volume 13, Issue 1, Pages 118–128
DOI: https://doi.org/10.14529/mmp200109
(Mi vyuru535)
 

This article is cited in 4 scientific papers.

Programming and Computer Software

Special aspects of matrix operation implementations for low-precision neural network model on the Elbrus platform

E. E. Limonovaab, M. I. Neiman-zadec, V. L. Arlazarova

a Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences, Moscow, Russian Federation
b Smart Engines Service LLC, Moscow, Russian Federation
c JSC “MCST”, Moscow, Russian Federation
Full-text PDF (463 kB)
Abstract: This paper investigates the possibility of efficiently implementing the calculations of low-precision neural network models on the Elbrus platform with the VLIW architecture. Such models are widely used in practice to increase the computational efficiency of recognition and are well suited to computers with the x86 and ARM architectures. In this paper, we consider an 8-bit neural network model, in which matrix multiplication is the most resource-intensive part of the implementation. This paper presents an efficient implementation of matrix multiplication that takes into account the features of the Elbrus architecture: several computational channels with various arithmetic and logic devices, an array prefetch buffer, and its own SIMD extension. We carry out theoretical and experimental comparisons of the computational efficiency of low-precision and classical neural network models, which show that Elbrus processors have considerably greater capabilities for fast floating-point calculations, so new approaches are required to increase the computational efficiency of neural network models on this platform.
Keywords: low-precision neural networks, computational efficiency, Elbrus architecture, matrix operations.
Received: 07.10.2019
Document Type: Article
UDC: 004.93
MSC: 68T10
Language: English
Citation: E. E. Limonova, M. I. Neiman-zade, V. L. Arlazarov, “Special aspects of matrix operation implementations for low-precision neural network model on the Elbrus platform”, Vestnik YuUrGU. Ser. Mat. Model. Progr., 13:1 (2020), 118–128
Citation in format AMSBIB
\Bibitem{LimNeiArl20}
\by E.~E.~Limonova, M.~I.~Neiman-zade, V.~L.~Arlazarov
\paper Special aspects of matrix operation implementations for low-precision neural network model on the Elbrus platform
\jour Vestnik YuUrGU. Ser. Mat. Model. Progr.
\yr 2020
\vol 13
\issue 1
\pages 118--128
\mathnet{http://mi.mathnet.ru/vyuru535}
\crossref{https://doi.org/10.14529/mmp200109}
Linking options:
  • https://www.mathnet.ru/eng/vyuru535
  • https://www.mathnet.ru/eng/vyuru/v13/i1/p118