Computer Research and Modeling

Computer Research and Modeling, 2022, Volume 14, Issue 2, Pages 357–376
DOI: https://doi.org/10.20537/2076-7633-2022-14-2-357-376
(Mi crm973)
 

MATHEMATICAL MODELING AND NUMERICAL SIMULATION

Tensor methods for strongly convex strongly concave saddle point problems and strongly monotone variational inequalities

P. A. Ostroukhov^{a,b}, R. A. Kamalov^{a,c}, P. E. Dvurechenskii^{d}, A. V. Gasnikov^{a,b,e}

a Moscow Institute of Physics and Technology, 9 Institutskiy per., Dolgoprudny, Moscow Region, 141701, Russia
b Institute for Information Transmission Problems of Russian Academy of Sciences, 19/1 Bolshoy Karetny per., Moscow, 127051, Russia
c V. A. Trapeznikov Institute of Control Sciences of Russian Academy of Sciences, 65 Profsoyuznaya st., Moscow, 117997, Russia
d Weierstrass Institute for Applied Analysis and Stochastics, 39 Mohrenstraße, Berlin, 10117, Germany
e Caucasus Mathematical Center, Adyghe State University, 208 Pervomayskaya st., Maykop, Adyghe, 385000, Russia
Abstract: In this paper we propose high-order (tensor) methods for two types of saddle point problems. First, we consider the classic min-max saddle point problem. Second, we consider the problem of finding a stationary point of the saddle point objective by minimizing the norm of its gradient. Obviously, a stationary point does not always coincide with an optimal point. However, if we deal with a linearly constrained optimization problem, the algorithm for gradient norm minimization becomes useful: in this case we can reconstruct the solution of the primal optimization problem from the solution of the gradient norm minimization problem for the dual function. In this paper we consider both types of problems without constraints. Additionally, we assume that the objective function is $\mu$-strongly convex in the first argument, $\mu$-strongly concave in the second argument, and that its $p$-th derivative is Lipschitz-continuous.
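For concreteness, this setting can be written out as follows (a standard formalization; the symbols $n_x$, $n_y$, $L_p$ and $z$ are our notation, not taken from the text above):
$$\min_{x\in\mathbb{R}^{n_x}}\,\max_{y\in\mathbb{R}^{n_y}} f(x,y),\qquad \|D^p f(z)-D^p f(z')\|\le L_p\|z-z'\|,\quad z=(x,y),$$
where $f(\cdot,y)$ is $\mu$-strongly convex for every $y$ and $f(x,\cdot)$ is $\mu$-strongly concave for every $x$.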
For min-max problems we propose two algorithms. Since we consider a strongly convex-strongly concave problem, the first algorithm uses an existing tensor method for convex-concave saddle point problems and accelerates it with the restart technique; the resulting algorithm has a linear convergence rate. If we additionally assume that the first and second derivatives of the objective are Lipschitz-continuous, we can improve its performance even more: we switch to another existing algorithm once its region of quadratic convergence is reached. Thus, we get the second algorithm, which has a global linear convergence rate and a local quadratic convergence rate.
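The two-phase structure just described (restarts for a global linear rate, then a switch to a locally quadratically convergent method) can be sketched as follows. This is a minimal illustration only: the toy quadratic-bilinear objective, the extragradient inner loop standing in for the actual tensor method, and the concluding Newton step standing in for the locally quadratic method are all assumptions made for the sketch, not the algorithm of the paper.

import numpy as np

rng = np.random.default_rng(0)
n, mu = 5, 1.0
A = rng.standard_normal((n, n))

def F(z):
    # Operator of the toy saddle f(x, y) = mu/2*||x||^2 + x^T A y - mu/2*||y||^2:
    # F(z) = (grad_x f, -grad_y f); its unique zero is the saddle point z = 0.
    x, y = z[:n], z[n:]
    return np.concatenate([mu * x + A @ y, -(A.T @ x) + mu * y])

def inner_solver(z, steps, gamma):
    # Stand-in for the base tensor method: plain extragradient iterations.
    for _ in range(steps):
        z_half = z - gamma * F(z)
        z = z - gamma * F(z_half)
    return z

def two_phase_method(z0, restarts=20, inner_steps=50, gamma=0.1, switch_tol=1e-3):
    # Phase 1 (global): restarts of the inner method; under mu-strong
    # convexity-concavity each restart shrinks the distance to the saddle
    # point geometrically, which gives the linear rate.
    z = z0
    for _ in range(restarts):
        z = inner_solver(z, inner_steps, gamma)
        if np.linalg.norm(F(z)) < switch_tol:
            break  # gradient is small: enter the local phase
    # Phase 2 (local): a quadratically convergent method; here a Newton step
    # on F, which for this linear operator reaches the saddle point exactly.
    J = np.block([[mu * np.eye(n), A], [-A.T, mu * np.eye(n)]])
    return z - np.linalg.solve(J, F(z))

z_hat = two_phase_method(rng.standard_normal(2 * n))
print(np.linalg.norm(F(z_hat)))  # gradient norm at the computed point, ~1e-15

On this toy problem the operator F is linear, so the single Newton step of the local phase solves the optimality system exactly; in general the local phase only takes over once the iterate is close enough to the saddle point.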
Finally, in convex optimization there exists a special methodology for solving gradient norm minimization problems by tensor methods. Its main idea is to use existing (near-)optimal algorithms inside a special framework. We emphasize that inside this framework the assumption of strong convexity is not necessary, because the convex objective can be regularized in a special way to make it strongly convex. In this article we transfer this framework to convex-concave objective functions and use it together with our aforementioned algorithm with a global linear and a local quadratic convergence rate.
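The regularization mentioned above can be illustrated as follows (the anchor point $(x_0,y_0)$ and the choice of $\delta$ are indicative, not the paper's exact construction): a convex-concave objective $f$ becomes $\delta$-strongly convex-strongly concave after adding quadratic terms,
$$f_\delta(x,y)=f(x,y)+\frac{\delta}{2}\|x-x_0\|^2-\frac{\delta}{2}\|y-y_0\|^2,$$
with $\delta$ chosen proportionally to the target accuracy, so that an approximate saddle point of $f_\delta$ is also an approximate saddle point of $f$.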
Since the saddle point problem is a particular case of the monotone variational inequality problem, the proposed methods also apply to strongly monotone variational inequality problems.
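Indeed, for $z=(x,y)$ the operator
$$F(z)=\begin{pmatrix}\nabla_x f(x,y)\\ -\nabla_y f(x,y)\end{pmatrix}$$
is $\mu$-strongly monotone whenever $f$ is $\mu$-strongly convex-strongly concave, that is, $\langle F(z)-F(z'),\,z-z'\rangle\ge\mu\|z-z'\|^2$, and the unconstrained saddle point problem amounts to finding $z^*$ with $F(z^*)=0$.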
Keywords: variational inequality, saddle point problem, high-order smoothness, tensor methods, gradient norm minimization.
Funding agency: Ministry of Science and Higher Education of the Russian Federation, grant number 0714-2020-0005.
The research of P. Ostroukhov and A. Gasnikov is supported by the Ministry of Science and Higher Education of the Russian Federation (Goszadaniye), No. 075-00337-20-03, project No. 0714-2020-0005.
Received: 12.02.2022
Accepted: 13.02.2022
Document Type: Article
UDC: 519.853.62
Language: English
Citation: P. A. Ostroukhov, R. A. Kamalov, P. E. Dvurechenskii, A. V. Gasnikov, “Tensor methods for strongly convex strongly concave saddle point problems and strongly monotone variational inequalities”, Computer Research and Modeling, 14:2 (2022), 357–376
Citation in format AMSBIB
\Bibitem{OstKamDvu22}
\by P.~A.~Ostroukhov, R.~A.~Kamalov, P.~E.~Dvurechenskii, A.~V.~Gasnikov
\paper Tensor methods for strongly convex strongly concave saddle point problems and strongly monotone variational inequalities
\jour Computer Research and Modeling
\yr 2022
\vol 14
\issue 2
\pages 357--376
\mathnet{http://mi.mathnet.ru/crm973}
\crossref{https://doi.org/10.20537/2076-7633-2022-14-2-357-376}
Linking options:
  • https://www.mathnet.ru/eng/crm973
  • https://www.mathnet.ru/eng/crm/v14/i2/p357