Успехи математических наук, 2024, Volume 79, Issue 6(480), Pages 5–38
DOI: https://doi.org/10.4213/rm10206
(Mi rm10206)
 

Accelerated Stochastic ExtraGradient: Mixing Hessian and gradient similarity to reduce communication in distributed and federated learning

D. A. Bylinkin^{a,b}, K. D. Degtyarev^{a}, A. N. Beznosikov^{b,c,d}

a Moscow Institute of Physics and Technology (National Research University), Moscow, Russia
b Ivannikov Institute for System Programming of the Russian Academy of Sciences, Moscow, Russia
c Sber AI Lab, Moscow, Russia
d Innopolis University, Innopolis, Russia
Abstract: Modern trends in machine learning demand ever greater generalization ability from models, which leads to growth in both model size and training sample size. Such problems are already hard to solve on a single device, which is why distributed and federated learning approaches are becoming more popular every day. Distributed computing involves communication between devices, which requires solving two key problems: efficiency and privacy. One of the best-known ways to combat communication costs is to exploit the similarity of local data. Both Hessian similarity and gradient homogeneity have been studied in the literature, but separately. In this paper we combine both of these assumptions in the analysis of a new method that incorporates the ideas of exploiting data similarity and client sampling. Moreover, to address privacy concerns, we apply the technique of additional noise and analyze its impact on the convergence of the proposed method. The theory is confirmed by training on real datasets.
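For orientation, the two similarity assumptions mentioned in the abstract are usually formalized as follows in the data-similarity literature (a standard formulation; the exact constants and norms used in the paper may differ). With the global objective $f(x)=\frac{1}{n}\sum_{i=1}^{n}f_i(x)$, where $f_i$ is the local loss on device $i$:
$\delta$-Hessian similarity: $\|\nabla^2 f_i(x)-\nabla^2 f(x)\|\le\delta$ for all $x$ and all $i$;
gradient homogeneity: $\|\nabla f_i(x)-\nabla f(x)\|\le D$ for all $x$ and all $i$.
The sketch below (not the authors' ASEG algorithm; all names and constants are illustrative assumptions) shows how the three ingredients named in the abstract fit together on a toy quadratic problem: a classical extragradient step, client sampling, and additional Gaussian noise for privacy.

import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 10, 5

# Local quadratic losses f_i(x) = 0.5 x^T A_i x - b_i^T x, similar across clients.
A = []
for _ in range(n_clients):
    M = np.eye(dim) + 0.01 * rng.standard_normal((dim, dim))
    A.append(0.5 * (M + M.T) + dim * np.eye(dim))  # symmetric positive definite
b = [rng.standard_normal(dim) for _ in range(n_clients)]

def sampled_noisy_grad(x, batch, sigma=1e-3):
    # Average local gradients over a sampled subset of clients (client sampling)
    # and add Gaussian noise (the "additional noise" technique for privacy).
    g = np.mean([A[i] @ x - b[i] for i in batch], axis=0)
    return g + sigma * rng.standard_normal(dim)

x, gamma = np.zeros(dim), 0.05
for k in range(300):
    batch = rng.choice(n_clients, size=3, replace=False)
    x_half = x - gamma * sampled_noisy_grad(x, batch)      # extrapolation step
    x = x - gamma * sampled_noisy_grad(x_half, batch)      # extragradient update
# x now approximates the minimizer of the average loss f = (1/n) sum_i f_i.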
Bibliography: 45 titles.
Keywords: ASEG, distributed learning, federated learning, communication costs, Hessian similarity, homogeneous gradients, technique of additional noise.
Funding: Ministry of Science and Higher Education of the Russian Federation, grant no. 075-03-2024-214.
The work was done in the Laboratory of Federated Learning Problems of the Ivannikov Institute for System Programming of the Russian Academy of Sciences and supported by the Ministry of Science and Higher Education of the Russian Federation under appendix no. 2 to agreement no. 075-03-2024-214.
Received: 15.08.2024
Publication type: Article
UDC: 519.853.3+519.853.62
Publication language: English
Citation example: D. A. Bylinkin, K. D. Degtyarev, A. N. Beznosikov, “Accelerated Stochastic ExtraGradient: Mixing Hessian and gradient similarity to reduce communication in distributed and federated learning”, УМН, 79:6(480) (2024), 5–38
Citation in AMSBIB format:
\RBibitem{BylDegBez24}
\by D.~A.~Bylinkin, K.~D.~Degtyarev, A.~N.~Beznosikov
\paper Accelerated Stochastic ExtraGradient: Mixing Hessian and gradient similarity to reduce communication in distributed and federated learning
\jour УМН
\yr 2024
\vol 79
\issue 6(480)
\pages 5--38
\mathnet{http://mi.mathnet.ru/rm10206}
\crossref{https://doi.org/10.4213/rm10206}
Linking options for this page:
  • https://www.mathnet.ru/rus/rm10206
  • https://doi.org/10.4213/rm10206
  • https://www.mathnet.ru/rus/rm/v79/i6/p5