
Vestnik Tomskogo Gosudarstvennogo Universiteta. Matematika i Mekhanika, 2018, Number 55, Pages 22–37
DOI: https://doi.org/10.17223/19988621/55/3
(Mi vtgu668)
 

This article is cited in 3 scientific papers.

MATHEMATICS

On the convergence rate of the subgradient method with metric variation and its applications in neural network approximation schemes

V. N. Krutikov, N. S. Samoilenko

Kemerovo State University
Full-text PDF (479 kB)
Abstract: In this paper, a relaxation subgradient method with rank-2 correction of the metric matrices is studied. It is proven that, on strongly convex functions, if there exists a linear coordinate transformation that reduces the degree of ill-conditioning of the problem, the method has a linear convergence rate determined by that degree. The paper offers a new efficient tool for choosing the initial approximation of an artificial neural network. The use of regularization makes it possible to eliminate overfitting and to efficiently remove low-significance neurons and inter-neuron connections. The ability to solve such problems efficiently is ensured by the subgradient method with rank-2 correction of the metric matrices. It is shown experimentally that on smooth functions the convergence rate of the method under study is virtually the same as that of the quasi-Newton method; the method also converges quickly on nonsmooth functions. These computational capabilities are used to build efficient neural network learning algorithms. The paper describes an artificial neural network learning algorithm which, combined with suppression of redundant neurons, yields reliable approximations in a single run.
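The core iteration of such a scheme can be illustrated with a short sketch: the descent direction is the subgradient scaled by a metric matrix H, and H receives a rank-2 correction after every step. This is a minimal sketch under stated assumptions, not the authors' algorithm: it substitutes a BFGS-style rank-2 update of the inverse metric and a diminishing step size for the correction and step rules analysed in the paper, and all function names are illustrative.

import numpy as np

def subgradient_metric_descent(f, subgrad, x0, steps=200, lr=1.0):
    # Sketch of a subgradient method with metric variation.
    # Assumption: a BFGS-style rank-2 correction of the inverse metric H;
    # the rank-2 correction rule analysed in the paper differs in detail.
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                       # initial metric matrix
    g = subgrad(x)
    best_x, best_f = x.copy(), f(x)
    for k in range(steps):
        d = -H @ g                      # metric-scaled descent direction
        t = lr / (k + 1)                # diminishing step, standard for subgradient schemes
        x_new = x + t * d
        g_new = subgrad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:                  # curvature condition; skip the update otherwise
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)   # rank-2 correction of H
        x, g = x_new, g_new
        fx = f(x)
        if fx < best_f:                 # nonsmooth f need not decrease monotonically
            best_x, best_f = x.copy(), fx
    return best_x, best_f

# Usage on a nonsmooth strongly convex test function f(x) = |x_1| + |x_2| + 0.5 ||x||^2:
f = lambda x: np.abs(x).sum() + 0.5 * (x @ x)
subgrad = lambda x: np.sign(x) + x      # a valid subgradient of f at x
x_best, f_best = subgradient_metric_descent(f, subgrad, np.array([3.0, -2.0]))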
Keywords: method, subgradient, minimization, rate of convergence, neural networks, regularization.
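To make the regularization and neuron-suppression idea from the abstract concrete, the sketch below trains a one-hidden-layer network with an L1 penalty on the output weights; a soft-threshold (proximal) step drives the weights of redundant neurons exactly to zero, after which those neurons are deleted. Everything here is an illustrative assumption: the paper trains the network with the relaxation subgradient method sketched above, not proximal gradient descent, and its regularizer may differ.

import numpy as np

def train_with_pruning(X, Y, hidden=20, lam=1e-3, lr=0.05, epochs=3000):
    # Hypothetical sketch: gradient descent on 0.5*MSE plus an L1 penalty
    # on the output weights w2, applied via a soft-threshold (prox) step.
    rng = np.random.default_rng(0)
    m, n = X.shape
    W1 = rng.normal(scale=0.5, size=(n, hidden))
    b1 = np.zeros(hidden)
    w2 = rng.normal(scale=0.5, size=hidden)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)                     # hidden-layer activations
        err = H @ w2 - Y                             # residuals of the fit
        gH = np.outer(err, w2) * (1.0 - H**2) / m    # backprop through tanh
        W1 -= lr * (X.T @ gH)
        b1 -= lr * gH.sum(axis=0)
        w2 -= lr * (H.T @ err / m)                   # gradient step on the loss
        w2 = np.sign(w2) * np.maximum(np.abs(w2) - lr * lam, 0.0)  # L1 prox
    keep = w2 != 0                                   # low-significance neurons are exactly zeroed
    return W1[:, keep], b1[keep], w2[keep]

# Usage: fit y = sin(x) on a small sample and count the surviving neurons.
X = np.linspace(-3.0, 3.0, 64)[:, None]
Y = np.sin(X).ravel()
W1, b1, w2 = train_with_pruning(X, Y)
print("neurons kept:", w2.size)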
Received: 31.03.2018
Document Type: Article
UDC: 519.6
MSC: 65K05, 90C30, 82C32
Language: Russian
Citation: V. N. Krutikov, N. S. Samoilenko, “On the convergence rate of the subgradient method with metric variation and its applications in neural network approximation schemes”, Vestn. Tomsk. Gos. Univ. Mat. Mekh., 2018, no. 55, 22–37
Citation in format AMSBIB
\Bibitem{KruSam18}
\by V.~N.~Krutikov, N.~S.~Samoilenko
\paper On the convergence rate of the subgradient method with metric variation and its applications in neural network approximation schemes
\jour Vestn. Tomsk. Gos. Univ. Mat. Mekh.
\yr 2018
\issue 55
\pages 22--37
\mathnet{http://mi.mathnet.ru/vtgu668}
\crossref{https://doi.org/10.17223/19988621/55/3}
\elib{https://elibrary.ru/item.asp?id=36387800}
Linking options:
  • https://www.mathnet.ru/eng/vtgu668
  • https://www.mathnet.ru/eng/vtgu/y2018/i55/p22