Trudy Instituta Matematiki i Mekhaniki UrO RAN
Trudy Instituta Matematiki i Mekhaniki UrO RAN, 2019, Volume 25, Number 4, Pages 210–225
DOI: https://doi.org/10.21538/0134-4889-2019-25-4-210-225
(Mi timm1687)
 

This article is cited in 1 scientific paper.

Adaptation to inexactness for some gradient-type optimization methods

F. S. Stonyakin

Crimea Federal University, Simferopol
Abstract: We introduce a notion of an inexact model of a convex objective function that allows for errors both in the function value and in its gradient. For this setting, a gradient method with adaptive adjustment of some parameters of the model is proposed, and an estimate for the convergence rate is obtained; this estimate is optimal on a class of sufficiently smooth problems in the presence of errors. We also consider a special class of convex nonsmooth optimization problems; to apply the proposed technique to this class, an artificial error must be introduced. We show that the method can be modified for such problems to guarantee convergence in the function value at a nearly optimal rate on the class of convex nonsmooth optimization problems. Further, an adaptive gradient method is proposed for objective functions that satisfy the Polyak–Łojasiewicz gradient dominance condition under some relaxation of the Lipschitz condition on the gradient; here, both the objective function and its gradient may be given inexactly. The parameters are chosen adaptively during the operation of the method, with respect to both the Lipschitz constant of the gradient and a quantity corresponding to the errors in the gradient and the objective function. Linear convergence of the method is established up to a value associated with these errors.
Keywords: gradient method, adaptive method, Lipschitz gradient, nonsmooth optimization, gradient dominance condition.
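The adaptive adjustment described in the abstract — tuning an estimate of the Lipschitz constant of the gradient on the fly, with an acceptance test relaxed by the oracle error — can be illustrated by a standard backtracking scheme. This is a minimal sketch, not the paper's algorithm: the function name `adaptive_gradient`, the halving/doubling rule for `L`, and the delta-relaxed descent test are assumptions based on common adaptive gradient methods with an inexact oracle.

```python
import numpy as np

def adaptive_gradient(f, grad, x0, delta=1e-9, L0=1.0, n_iter=100):
    """Gradient descent with adaptive (backtracking) estimation of the
    Lipschitz constant L, tolerant to an oracle error delta.
    Illustrative sketch only; names and rules are not the paper's notation."""
    x = x0.copy()
    L = L0
    for _ in range(n_iter):
        g = grad(x)          # inexact gradient (error up to delta assumed)
        fx = f(x)            # inexact function value
        L = max(L / 2, L0)   # optimistic halving before backtracking
        while True:
            y = x - g / L
            # Accept the step if the quadratic upper model holds,
            # relaxed by delta to absorb the oracle error:
            # f(y) <= f(x) - ||g||^2 / (2L) + delta
            if f(y) <= fx - np.dot(g, g) / (2 * L) + delta:
                break
            L *= 2           # model violated: increase the estimate of L
        x = y
    return x
```

On a smooth convex test function the estimate of `L` settles near the true Lipschitz constant of the gradient, so no global knowledge of it is required; the additive `delta` term is what keeps the inner loop from cycling when the oracle values are noisy.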
This work was supported by the Russian Science Foundation (project no. 18-71-00048).
Received: 08.09.2019
Revised: 21.10.2019
Accepted: 28.10.2019
Document Type: Article
UDC: 519.85
Language: Russian
Citation: F. S. Stonyakin, “Adaptation to inexactness for some gradient-type optimization methods”, Trudy Inst. Mat. i Mekh. UrO RAN, 25, no. 4, 2019, 210–225
Citation in format AMSBIB
\Bibitem{Sto19}
\by F.~S.~Stonyakin
\paper Adaptation to inexactness for some gradient-type optimization methods
\serial Trudy Inst. Mat. i Mekh. UrO RAN
\yr 2019
\vol 25
\issue 4
\pages 210--225
\mathnet{http://mi.mathnet.ru/timm1687}
\crossref{https://doi.org/10.21538/0134-4889-2019-25-4-210-225}
\elib{https://elibrary.ru/item.asp?id=41455538}
Linking options:
  • https://www.mathnet.ru/eng/timm1687
  • https://www.mathnet.ru/eng/timm/v25/i4/p210