Sibirskii Zhurnal Vychislitel'noi Matematiki, 2018, Volume 21, Number 1, Pages 47–53
DOI: https://doi.org/10.15372/SJNM20180103
(Mi sjvm667)
 

This article is cited in 8 scientific papers.

Parallel algorithms and probability of large deviation for stochastic convex optimization problems

P. Dvurechensky^{a,b}, A. Gasnikov^{b,c}, A. Lagunovskaya^{c}

a Weierstrass Institute for Applied Analysis and Stochastics, 39 Mohrenstr., Berlin, 10117, Germany
b Institute for Information Transmission Problems RAS, 19, build. 1, Bolshoy Karetny per., Moscow, 127051, Russia
c Moscow Institute of Physics and Technology, 9 Institutskiy per., Dolgoprudny, 141700, Russia
Abstract: In this paper, we consider convex stochastic optimization problems under different assumptions on the properties of the available stochastic subgradients. It is known that, if the value of the objective function is available, one can obtain, in parallel, several independent approximate solutions in terms of the expected objective residual. Then, choosing the solution with the minimum function value, one can control the probability of large deviations of the objective residual. In contrast, in this short paper we address the situation when the objective function value is unavailable or too expensive to calculate. Under the "light-tail" assumption for the stochastic subgradients, and, in the general case, with a moderate probability of large deviations, it is shown that parallelization combined with averaging gives bounds for the probability of large deviations similar to those of a serial method. Thus, in these cases one can benefit from parallel computations and reduce the computation time without any loss in solution quality.
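For context, here is a sketch of the standard argument behind the first claim above (the classical "best of N runs" reasoning, well known in the literature and not a result specific to this paper): if each of $N$ independent runs returns a point $x_k$ with expected residual $\mathbb{E}[f(x_k)] - f^* \le \varepsilon/2$, then Markov's inequality bounds each run's failure probability by $1/2$, and the minimum over the runs fails only if every run fails:

\[
\mathbb{P}\Big( \min_{1 \le k \le N} f(x_k) - f^* \ge \varepsilon \Big)
= \prod_{k=1}^{N} \mathbb{P}\big( f(x_k) - f^* \ge \varepsilon \big)
\le 2^{-N}.
\]

The failure probability thus decays geometrically in the number of parallel runs, but selecting the minimizing run requires evaluating $f$, which is precisely what is assumed unavailable in this paper's setting; this is why the authors turn to averaging instead.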
Key words: stochastic convex optimization, probability of large deviation, mirror descent, parallel algorithm.
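To make the parallel-averaging strategy concrete, below is a minimal Python sketch. It is an illustration under assumed ingredients (a toy quadratic objective, Gaussian "light-tailed" gradient noise, 1/k step sizes), not the paper's exact algorithm: several independent stochastic subgradient runs execute in parallel, and their outputs are combined either by best-of-N selection (when f can be evaluated) or by plain averaging (the setting of this paper).

import numpy as np
from concurrent.futures import ProcessPoolExecutor

DIM = 10                 # problem dimension (illustrative choice)
X_STAR = np.ones(DIM)    # known minimizer of the toy objective
NOISE_STD = 1.0          # std of the additive Gaussian gradient noise

def f(x):
    """Toy objective f(x) = 0.5 * ||x - X_STAR||^2, 1-strongly convex."""
    return 0.5 * float(np.sum((x - X_STAR) ** 2))

def sgd_run(seed, n_steps=10_000):
    """One independent stochastic gradient run; returns the averaged iterate."""
    rng = np.random.default_rng(seed)
    x = np.zeros(DIM)
    x_avg = np.zeros(DIM)
    for k in range(1, n_steps + 1):
        # unbiased stochastic gradient: exact gradient plus light-tailed noise
        g = (x - X_STAR) + NOISE_STD * rng.standard_normal(DIM)
        x = x - g / k              # 1/k step size, valid for 1-strongly convex f
        x_avg += (x - x_avg) / k   # running average of the iterates
    return x_avg

if __name__ == "__main__":
    n_runs = 8  # independent runs, executed in parallel
    with ProcessPoolExecutor() as pool:
        solutions = list(pool.map(sgd_run, range(n_runs)))

    # Strategy 1 (objective values computable): keep the best run.
    best = min(solutions, key=f)
    # Strategy 2 (objective unavailable, this paper's setting): average the runs.
    averaged = np.mean(solutions, axis=0)

    print(f"best-of-{n_runs} residual: {f(best):.3e}")
    print(f"averaged residual:  {f(averaged):.3e}")

Since the runs are independent, the wall-clock time equals that of a single run; the paper's point is that, under the light-tail assumption, the averaged point retains large-deviation bounds comparable to those of a serial method, so the speedup comes without loss in solution quality.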
Funding:
Russian Science Foundation (grant 14-50-00150)
Ministry of Education and Science of the Russian Federation (grant МК-1806.2017.9)
Received: 24.01.2017
Revised: 07.07.2017
English version:
Numerical Analysis and Applications, 2018, Volume 11, Issue 1, Pages 33–37
DOI: https://doi.org/10.1134/S1995423918010044
Document Type: Article
UDC: 519.856+519.856.3
Language: Russian
Citation: P. Dvurechensky, A. Gasnikov, A. Lagunovskaya, “Parallel algorithms and probability of large deviation for stochastic convex optimization problems”, Sib. Zh. Vychisl. Mat., 21:1 (2018), 47–53; Num. Anal. Appl., 11:1 (2018), 33–37
Citation in AMSBIB format:
\Bibitem{DvuGasLag18}
\by P.~Dvurechensky, A.~Gasnikov, A.~Lagunovskaya
\paper Parallel algorithms and probability of large deviation for stochastic convex optimization problems
\jour Sib. Zh. Vychisl. Mat.
\yr 2018
\vol 21
\issue 1
\pages 47--53
\mathnet{http://mi.mathnet.ru/sjvm667}
\crossref{https://doi.org/10.15372/SJNM20180103}
\elib{https://elibrary.ru/item.asp?id=32466478}
\transl
\jour Num. Anal. Appl.
\yr 2018
\vol 11
\issue 1
\pages 33--37
\crossref{https://doi.org/10.1134/S1995423918010044}
\isi{https://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=Publons&SrcAuth=Publons_CEL&DestLinkType=FullRecord&DestApp=WOS_CPL&KeyUT=000427431900003}
\scopus{https://www.scopus.com/record/display.url?origin=inward&eid=2-s2.0-85043721267}
Linking options:
  • https://www.mathnet.ru/eng/sjvm667
  • https://www.mathnet.ru/eng/sjvm/v21/i1/p47
This publication is cited in the following 8 articles:
1. Eduard Gorbunov, Pavel Dvurechensky, Alexander Gasnikov, “An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization”, SIAM J. Optim., 32:2 (2022), 1210
2. A. Ivanova, P. Dvurechensky, A. Gasnikov, D. Kamzolov, “Composite optimization for the resource allocation problem”, Optim. Methods Softw., 36:4 (2021), 720–754
3. D. Dvinskikh, A. Gasnikov, “Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems”, J. Inverse Ill-Posed Probl., 29:3 (2021), 385–405
4. P. Dvurechensky, E. Gorbunov, A. Gasnikov, “An accelerated directional derivative method for smooth stochastic convex optimization”, Eur. J. Oper. Res., 290:2 (2021), 601–621
5. Alexander Rogozin, Mikhail Bochko, Pavel Dvurechensky, Alexander Gasnikov, Vladislav Lukoshkin, 2021 60th IEEE Conference on Decision and Control (CDC), 2021, 3367
6. Artem Agafonov, Pavel Dvurechensky, Gesualdo Scutari, Alexander Gasnikov, Dmitry Kamzolov, Aleksandr Lukashevich, Amir Daneshmand, 2021 60th IEEE Conference on Decision and Control (CDC), 2021, 2407
7. Darina Dvinskikh, Aleksandr Ogaltsov, Alexander Gasnikov, Pavel Dvurechensky, Vladimir Spokoiny, “On the line-search gradient methods for stochastic optimization”, IFAC-PapersOnLine, 53:2 (2020), 1715
8. Darina Dvinskikh, Eduard Gorbunov, Alexander Gasnikov, Pavel Dvurechensky, Cesar A. Uribe, 2019 IEEE 58th Conference on Decision and Control (CDC), 2019, 7435