Abstract:
In this paper, we consider convex stochastic optimization problems under different assumptions on the properties of the available stochastic subgradients. It is known that, if the value of the objective function is available, one can obtain in parallel several independent approximate solutions in terms of the expected objective residual. Then, choosing the solution with the minimum function value, one can control the probability of large deviations of the objective residual. In contrast, in this short paper we address the situation when the objective function value is unavailable or too expensive to calculate. Under the “light-tail” assumption on the stochastic subgradients, as well as in the general case with a moderate probability of large deviations, we show that parallelization combined with averaging gives bounds for the probability of large deviations similar to those of a serial method. Thus, in these cases one can benefit from parallel computations and reduce the computation time without any loss in the solution quality.
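The scheme the abstract alludes to can be sketched in a few lines. The snippet below is a hedged illustration, not the authors' implementation: it runs several independent copies of a projected stochastic subgradient method (mirror descent with the Euclidean prox-function) in parallel and returns the average of their outputs, so no objective function values are ever computed. The quadratic test problem, step-size rule, and all names (`stochastic_subgradient`, `single_run`, `N_RUNS`, etc.) are assumptions made for this example only.

```python
# Minimal sketch (assumed setup, not the paper's code): parallel independent
# stochastic subgradient runs combined by averaging, with no access to
# objective function values.

import numpy as np
from concurrent.futures import ProcessPoolExecutor

DIM = 10          # problem dimension (illustrative)
RADIUS = 1.0      # feasible set: Euclidean ball ||x|| <= RADIUS
N_STEPS = 10_000  # iterations per independent run
N_RUNS = 8        # number of parallel independent runs

def stochastic_subgradient(x, rng):
    """Noisy subgradient of f(x) = 0.5*||x - x_star||^2 (illustrative problem)."""
    x_star = np.ones(DIM) / np.sqrt(DIM)
    return (x - x_star) + 0.1 * rng.standard_normal(DIM)

def project_ball(x):
    """Euclidean projection onto the ball of radius RADIUS."""
    norm = np.linalg.norm(x)
    return x if norm <= RADIUS else x * (RADIUS / norm)

def single_run(seed):
    """One independent stochastic subgradient run; returns the average of iterates."""
    rng = np.random.default_rng(seed)
    x = np.zeros(DIM)
    avg = np.zeros(DIM)
    for k in range(1, N_STEPS + 1):
        step = RADIUS / np.sqrt(k)                  # 1/sqrt(k) step size
        x = project_ball(x - step * stochastic_subgradient(x, rng))
        avg += (x - avg) / k                        # running average of iterates
    return avg

if __name__ == "__main__":
    # Parallelization + averaging: no objective values are compared;
    # the final answer is simply the mean of the independent outputs.
    with ProcessPoolExecutor() as pool:
        solutions = list(pool.map(single_run, range(N_RUNS)))
    x_hat = np.mean(solutions, axis=0)
    print("averaged solution (first 3 coordinates):", x_hat[:3])
```

In the regime described in the abstract, averaging the independent outputs plays the role that "pick the run with the smallest function value" plays when function values are available, which is why the wall-clock time drops with the number of parallel runs without degrading the large-deviation guarantee.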
Key words:
stochastic convex optimization, probability of large deviation, mirror descent, parallel algorithm.
Citation:
P. Dvurechensky, A. Gasnikov, A. Lagunovskaya, “Parallel algorithms and probability of large deviation for stochastic convex optimization problems”, Sib. Zh. Vychisl. Mat., 21:1 (2018), 47–53; Num. Anal. Appl., 11:1 (2018), 33–37
\Bibitem{DvuGasLag18}
\by P.~Dvurechensky, A.~Gasnikov, A.~Lagunovskaya
\paper Parallel algorithms and probability of large deviation for stochastic convex optimization problems
\jour Sib. Zh. Vychisl. Mat.
\yr 2018
\vol 21
\issue 1
\pages 47--53
\mathnet{http://mi.mathnet.ru/sjvm667}
\crossref{https://doi.org/10.15372/SJNM20180103}
\elib{https://elibrary.ru/item.asp?id=32466478}
\transl
\jour Num. Anal. Appl.
\yr 2018
\vol 11
\issue 1
\pages 33--37
\crossref{https://doi.org/10.1134/S1995423918010044}
\isi{https://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=Publons&SrcAuth=Publons_CEL&DestLinkType=FullRecord&DestApp=WOS_CPL&KeyUT=000427431900003}
\scopus{https://www.scopus.com/record/display.url?origin=inward&eid=2-s2.0-85043721267}
Linking options:
https://www.mathnet.ru/eng/sjvm667
https://www.mathnet.ru/eng/sjvm/v21/i1/p47
This publication is cited in the following 8 articles:
Eduard Gorbunov, Pavel Dvurechensky, Alexander Gasnikov, “An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization”, SIAM J. Optim., 32:2 (2022), 1210
Ivanova A., Dvurechensky P., Gasnikov A., Kamzolov D., “Composite Optimization for the Resource Allocation Problem”, Optim. Methods Softw., 36:4 (2021), 720–754
D. Dvinskikh, A. Gasnikov, “Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems”, J. Inverse Ill-Posed Probl., 29:3 (2021), 385–405
P. Dvurechensky, E. Gorbunov, A. Gasnikov, “An accelerated directional derivative method for smooth stochastic convex optimization”, Eur. J. Oper. Res., 290:2 (2021), 601–621
Alexander Rogozin, Mikhail Bochko, Pavel Dvurechensky, Alexander Gasnikov, Vladislav Lukoshkin, 2021 60th IEEE Conference on Decision and Control (CDC), 2021, 3367
Artem Agafonov, Pavel Dvurechensky, Gesualdo Scutari, Alexander Gasnikov, Dmitry Kamzolov, Aleksandr Lukashevich, Amir Daneshmand, 2021 60th IEEE Conference on Decision and Control (CDC), 2021, 2407
Darina Dvinskikh, Aleksandr Ogaltsov, Alexander Gasnikov, Pavel Dvurechensky, Vladimir Spokoiny, “On the line-search gradient methods for stochastic optimization”, IFAC-PapersOnLine, 53:2 (2020), 1715
Darina Dvinskikh, Eduard Gorbunov, Alexander Gasnikov, Pavel Dvurechensky, Cesar A. Uribe, 2019 IEEE 58th Conference on Decision and Control (CDC), 2019, 7435