Informatika i Ee Primeneniya [Informatics and its Applications]
Informatika i Ee Primeneniya [Informatics and its Applications], 2020, Volume 14, Issue 4, Pages 55–62
DOI: https://doi.org/10.14357/19922264200408
(Mi ia697)
 

This article is cited in 1 scientific paper (total in 1 paper)

Deep learning neural network structure optimization

M. S. Potanin (a), K. O. Vayser (a), V. A. Zholobov (a), V. V. Strijov (b, a)

(a) Moscow Institute of Physics and Technology, 9 Institutskiy Per., Dolgoprudny, Moscow Region 141700, Russian Federation
(b) A. A. Dorodnicyn Computing Center, Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 40 Vavilov Str., Moscow 119333, Russian Federation
Full-text PDF (349 kB)
Abstract: The paper investigates the problem of optimal model structure selection. The model is a superposition of generalized linear models; its elements are linear regression, logistic regression, principal component analysis, an autoencoder, and a neural network. The model structure refers to the values of the structural parameters that determine the form of the final superposition. The paper analyzes the model structure selection method and investigates how the accuracy, complexity, and stability of the model depend on it. The paper proposes an algorithm for selecting the optimal structure of a neural network. The proposed method was tested on real and synthetic data. The experiment resulted in a significant reduction of the structural complexity of the model while maintaining the accuracy of approximation.
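The abstract describes structure selection as a search over structural parameters that balances accuracy against complexity, driven by a genetic algorithm (see the keywords below). A minimal illustrative sketch of that idea, not the authors' implementation: here the structure is a hypothetical binary mask selecting which terms enter a linear-regression superposition, and fitness is validation error plus a complexity penalty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: only the first 3 of 10 features are informative.
n, d = 200, 10
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:3] = [1.5, -2.0, 0.7]
y = X @ w_true + 0.1 * rng.normal(size=n)
X_tr, X_va, y_tr, y_va = X[:150], X[150:], y[:150], y[150:]

def fitness(mask):
    """Validation MSE plus a penalty on the number of active structure elements."""
    if not mask.any():
        return np.inf
    w, *_ = np.linalg.lstsq(X_tr[:, mask], y_tr, rcond=None)
    mse = np.mean((X_va[:, mask] @ w - y_va) ** 2)
    return mse + 0.01 * mask.sum()

def mutate(mask, p=0.1):
    # Flip each structural bit with probability p.
    return mask ^ (rng.random(d) < p)

def crossover(a, b):
    # One-point crossover of two structure masks.
    cut = rng.integers(1, d)
    return np.concatenate([a[:cut], b[cut:]])

# Genetic search: keep the elite half, breed the rest from it.
pop = [rng.random(d) < 0.5 for _ in range(20)]
for gen in range(30):
    pop.sort(key=fitness)
    elite = pop[:10]
    children = []
    while len(children) < 10:
        i, j = rng.integers(0, len(elite), size=2)
        children.append(mutate(crossover(elite[i], elite[j])))
    pop = elite + children

best = min(pop, key=fitness)
print(best.astype(int))  # active structural elements of the best model found
```

The same loop applies when the chromosome encodes deeper structural choices (layer types, widths, connections of a neural network) rather than a feature mask; only the fitness evaluation changes.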
Keywords: model selection, linear models, autoencoders, neural networks, structure, genetic algorithm.
Funding agency — Grant number
Russian Foundation for Basic Research — 19-07-01155, 19-07-00885
Ministry of Science and Higher Education of the Russian Federation — 05.Y09.21.0018, 13/1251/2018
This research was supported by RFBR (projects 19-07-01155 and 19-07-00885) and by the Government of the Russian Federation (agreement 05.Y09.21.0018). This paper contains results of the project "Statistical methods of machine learning," carried out within the framework of the Program "Center of Big Data Storage and Analysis" of the National Technology Initiative Competence Center. It is supported by the Ministry of Science and Higher Education of the Russian Federation under the agreement between M. V. Lomonosov Moscow State University and the Foundation for Project Support of the National Technology Initiative of 11.12.2018, No. 13/1251/2018.
Received: 02.12.2019
Document Type: Article
Language: Russian
Citation: M. S. Potanin, K. O. Vayser, V. A. Zholobov, V. V. Strijov, “Deep learning neural network structure optimization”, Inform. Primen., 14:4 (2020), 55–62
Citation in format AMSBIB
\Bibitem{PotVayZho20}
\by M.~S.~Potanin, K.~O.~Vayser, V.~A.~Zholobov, V.~V.~Strijov
\paper Deep learning neural network structure optimization
\jour Inform. Primen.
\yr 2020
\vol 14
\issue 4
\pages 55--62
\mathnet{http://mi.mathnet.ru/ia697}
\crossref{https://doi.org/10.14357/19922264200408}
Linking options:
  • https://www.mathnet.ru/eng/ia697
  • https://www.mathnet.ru/eng/ia/v14/i4/p55