MATHEMATICAL MODELING AND NUMERICAL SIMULATION
Tensor methods for strongly convex strongly concave saddle point problems and strongly monotone variational inequalities
P. A. Ostroukhov^{a,b}, R. A. Kamalov^{a,c}, P. E. Dvurechenskii^{d}, A. V. Gasnikov^{a,b,e}
a Moscow Institute of Physics and Technology,
9 Institutskiy per., Dolgoprudny, Moscow Region, 141701, Russia
b Institute for Information Transmission Problems of Russian Academy of Sciences,
19/1 Bolshoy Karetny per., Moscow, 127051, Russia
c V. A. Trapeznikov Institute of Control Sciences of Russian Academy of Sciences,
65 Profsoyuznaya st., Moscow, 117997, Russia
d Weierstrass Institute for Applied Analysis and Stochastics,
39 Mohrenstraße, Berlin, 10117, Germany
e Caucasus Mathematical Center, Adyghe State University,
208 Pervomayskaya st., Maykop, Adyghe, 385000, Russia
Abstract:
In this paper we propose high-order (tensor) methods for two types of saddle point problems. First, we consider the classic min-max saddle point problem. Second, we consider the search for a stationary point of the saddle point objective via minimization of its gradient norm. Obviously, a stationary point does not always coincide with the optimal point. However, if we have a linear optimization problem with linear constraints, an algorithm for gradient norm minimization becomes useful: in this case we can reconstruct the solution of the primal optimization problem from the solution of the gradient norm minimization problem for the dual function. In this paper we consider both types of problems without constraints. Additionally, we assume that the objective function is $\mu$-strongly convex in the first argument, $\mu$-strongly concave in the second argument, and that its $p$-th derivative is Lipschitz-continuous.
For min-max problems we propose two algorithms. Since we consider a strongly convex, strongly concave problem, the first algorithm applies an existing tensor method for ordinary convex-concave saddle point problems and accelerates it with the restart technique; the resulting algorithm has a linear convergence rate. If we additionally assume that the objective has Lipschitz-continuous first and second derivatives, we can improve performance even further: we switch to another existing algorithm once we reach its region of quadratic convergence. Thus, we obtain the second algorithm, which has a global linear convergence rate and a local quadratic convergence rate.
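The restart technique mentioned above can be sketched generically. The wrapper below is an illustrative simplification, not the paper's exact scheme: `base_method`, the per-restart iteration count `N`, and the constants are assumptions. The idea is that under $\mu$-strong convexity-concavity, running a base method long enough to halve the distance to the saddle point and then restarting from the new point yields overall linear (geometric) convergence.

```python
import math

def restarted_method(base_method, z0, mu, L_p, p, target_eps):
    """Generic restart wrapper (illustrative sketch only).

    base_method(z, N) is assumed to run N iterations of some tensor
    method for the convex-concave saddle point problem from z and to
    return a point at least twice as close to the saddle point.
    """
    # Iterations per restart; the exponent and constant are purely
    # illustrative stand-ins for the complexity of the base method.
    N = max(1, math.ceil((L_p / mu) ** (2.0 / (p + 1))))
    z = z0
    # Each restart halves the distance to the solution, so
    # O(log(1/eps)) restarts suffice -- a linear convergence rate.
    num_restarts = max(1, math.ceil(math.log2(1.0 / target_eps)))
    for _ in range(num_restarts):
        z = base_method(z, N)
    return z
```

A local quadratic rate, as in the paper's second algorithm, would then be obtained by switching from this wrapper to a locally quadratically convergent method once the iterate enters its region of quadratic convergence.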
Finally, in convex optimization there exists a special framework for solving gradient norm minimization problems with tensor methods. Its main idea is to run existing (near-)optimal algorithms inside this framework. We emphasize that the framework does not require strong convexity, because a convex objective can be regularized in a special way to make it strongly convex. In this article we transfer this framework to convex-concave objective functions and combine it with our aforementioned algorithm with a global linear and a local quadratic convergence rate.
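The regularization step this framework relies on can be sketched as follows (the scaling of $\mu$ below is a standard illustrative choice, not necessarily the paper's exact one). Given a convex-concave $f(x, y)$, a starting point $(x_0, y_0)$, and a target accuracy $\varepsilon$, one works with the regularized objective

```latex
f_\mu(x, y) \;=\; f(x, y) \;+\; \frac{\mu}{2}\,\|x - x_0\|^2 \;-\; \frac{\mu}{2}\,\|y - y_0\|^2,
```

which is $\mu$-strongly convex in $x$ and $\mu$-strongly concave in $y$. Taking $\mu$ proportional to $\varepsilon / R^2$, where $R$ bounds the distance from $(x_0, y_0)$ to the solution, keeps an accurate solution of $f_\mu$ an $O(\varepsilon)$-accurate solution of the original problem.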
Since the saddle point problem is a particular case of the monotone variational inequality problem, the proposed methods also apply to solving strongly monotone variational inequality problems.
Keywords:
variational inequality, saddle point problem, high-order smoothness, tensor methods, gradient norm minimization.
Received: 12.02.2022 Accepted: 13.02.2022
Citation:
P. A. Ostroukhov, R. A. Kamalov, P. E. Dvurechenskii, A. V. Gasnikov, “Tensor methods for strongly convex strongly concave saddle point problems and strongly monotone variational inequalities”, Computer Research and Modeling, 14:2 (2022), 357–376
Linking options:
https://www.mathnet.ru/eng/crm973 https://www.mathnet.ru/eng/crm/v14/i2/p357