This article is cited in 1 scientific paper.
Adaptation to inexactness for some gradient-type optimization methods
F. S. Stonyakin, Crimea Federal University, Simferopol
Abstract:
We introduce a notion of an inexact model of a convex objective function that allows for errors both in the function value and in its gradient. For this setting, a gradient method with adaptive adjustment of some parameters of the model is proposed, and an estimate of its convergence rate is obtained; this estimate is optimal on the class of sufficiently smooth problems in the presence of errors. We also consider a special class of convex nonsmooth optimization problems; in order to apply the proposed technique to this class, an artificial error must be introduced. We show that the method can be modified for such problems so as to guarantee convergence in the objective value at a nearly optimal rate on the class of convex nonsmooth optimization problems. Further, an adaptive gradient method is proposed for objective functions that satisfy a relaxation of the Lipschitz condition on the gradient together with the Polyak–Łojasiewicz gradient dominance condition; here, the objective function and its gradient may be given inexactly. The parameters are chosen adaptively during the run of the method, with respect to both the Lipschitz constant of the gradient and a quantity corresponding to the error in the gradient and the objective function. Linear convergence of the method is established up to a quantity associated with these errors.
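The adaptive scheme described in the abstract can be illustrated by the following minimal sketch (not the author's exact algorithm): at each step the Lipschitz estimate L is halved optimistically and then doubled until an inexact quadratic upper model, relaxed by an error tolerance delta, is satisfied. All names and parameters here (delta, L0, n_iters) are illustrative assumptions.

```python
import numpy as np

def adaptive_inexact_gradient(f, grad, x0, delta=1e-8, L0=1.0, n_iters=100):
    """Gradient method with adaptive adjustment of the Lipschitz estimate L.

    f and grad may be computed inexactly; delta bounds the admissible
    model error (a sketch of a (delta, L)-type model, as an assumption).
    """
    x = np.asarray(x0, dtype=float)
    L = L0
    for _ in range(n_iters):
        g = grad(x)
        L = max(L / 2.0, L0)  # start each step with an optimistic estimate
        while True:
            x_new = x - g / L
            d = x_new - x
            # inexact descent condition: quadratic upper model plus error delta
            if f(x_new) <= f(x) + g @ d + 0.5 * L * (d @ d) + delta:
                break
            L *= 2.0  # the model was too optimistic; increase L and retry
        x = x_new
    return x

# usage: minimize the quadratic f(x) = ||x||^2 / 2 with exact oracle
x_star = adaptive_inexact_gradient(lambda x: 0.5 * (x @ x),
                                   lambda x: x,
                                   np.array([3.0, -4.0]))
```

With an exact oracle and delta small, the inner loop settles near the true Lipschitz constant, so the step size 1/L is chosen without knowing L in advance, which is the point of the adaptive adjustment mentioned in the abstract.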
Keywords:
gradient method, adaptive method, Lipschitz gradient, nonsmooth optimization, gradient dominance condition.
Received: 08.09.2019; Revised: 21.10.2019; Accepted: 28.10.2019
Citation:
F. S. Stonyakin, “Adaptation to inexactness for some gradient-type optimization methods”, Trudy Inst. Mat. i Mekh. UrO RAN, 25, no. 4, 2019, 210–225
Linking options:
https://www.mathnet.ru/eng/timm1687
https://www.mathnet.ru/eng/timm/v25/i4/p210