35 citations to https://www.mathnet.ru/rus/at14682
-
Aleksandr Lobanov, Lecture Notes in Computer Science, 14395, Optimization and Applications, 2023, 60
-
Nikita Kornilov, Alexander Gasnikov, Pavel Dvurechensky, Darina Dvinskikh, “Gradient-free methods for non-smooth convex stochastic optimization with heavy-tailed noise on convex compact”, Comput Manag Sci, 20:1 (2023)
-
Raghu Bollapragada, Stefan M. Wild, “Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization”, Math. Prog. Comp., 15:2 (2023), 327
-
Krishnakumar Balasubramanian, Saeed Ghadimi, “Zeroth-Order Nonconvex Stochastic Optimization: Handling Constraints, High Dimensionality, and Saddle Points”, Found. Comput. Math., 22:1 (2022), 35–76
-
A. I. Bazarova, A. N. Beznosikov, A. V. Gasnikov, “Linearly convergent gradient-free methods for minimization of parabolic approximation”, Computer Research and Modeling, 14:2 (2022), 239–255
-
Abhishek Roy, Lingqing Shen, Krishnakumar Balasubramanian, Saeed Ghadimi, “Stochastic zeroth-order discretizations of Langevin diffusions for Bayesian inference”, Bernoulli, 28:3 (2022)
-
Vasilii Novitskii, Alexander Gasnikov, “Improved exploitation of higher order smoothness in derivative-free optimization”, Optim Lett, 16:7 (2022), 2059
-
Darina Dvinskikh, Vladislav Tominin, Iaroslav Tominin, Alexander Gasnikov, Lecture Notes in Computer Science, 13367, Mathematical Optimization Theory and Operations Research, 2022, 18
-
Eduard Gorbunov, Pavel Dvurechensky, Alexander Gasnikov, “An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization”, SIAM J. Optim., 32:2 (2022), 1210
-
Yan Zhang, Yi Zhou, Kaiyi Ji, Michael M. Zavlanos, “A new one-point residual-feedback oracle for black-box learning and control”, Automatica, 136 (2022), 110006