Успехи математических наук (Russian Mathematical Surveys)
Успехи математических наук, 2024, Volume 79, Issue 6(480), Pages 117–158
DOI: https://doi.org/10.4213/rm10209
(Mi rm10209)
 

Local methods with adaptivity via scaling

S. A. Chezhegovabc, S. N. Skorikb, N. Khachaturovd, D. S. Shalagine, A. A. Avetisyanbf, M. Takáčg, Y. A. Kholodove, A. N. Beznosikovbcf

a Moscow Institute of Physics and Technology (National Research University), Moscow, Russia
b Ivannikov Institute for System Programming of the Russian Academy of Sciences, Moscow, Russia
c Sber AI Lab, Moscow, Russia
d Russian-Armenian University, Yerevan, Armenia
e Innopolis University, Innopolis, Russia
f Russian Presidential Academy of National Economy and Public Administration, Moscow, Russia
g Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
Abstract: The rapid development of machine learning and deep learning has introduced increasingly complex optimization challenges that must be addressed. Indeed, training modern, advanced models has become difficult to implement without leveraging multiple computing nodes in a distributed environment. Distributed optimization is also fundamental to emerging fields such as federated learning. Specifically, there is a need to organize the training process to minimize the time lost due to communication. A widely used and extensively researched technique to mitigate the communication bottleneck involves performing local training before communication. This approach is the focus of our paper. Concurrently, adaptive methods that incorporate scaling, notably led by Adam, have gained significant popularity in recent years. Therefore, this paper aims to merge the local training technique with the adaptive approach to develop efficient distributed learning methods. We consider the classical Local SGD method and enhance it with a scaling feature. A crucial aspect is that the scaling is described generically, allowing us to analyze various approaches, including Adam, RMSProp, and OASIS, in a unified manner. In addition to the theoretical analysis, we validate the performance of our methods in practice by training a neural network.
Bibliography: 49 titles.
Keywords: convex optimization, distributed optimization, adaptive methods, preconditioning.
Funding: Ministry of Science and Higher Education of the Russian Federation, grant no. 075-03-2024-214.
The work was done in the Laboratory of Federated Learning Problems of the Ivannikov Institute for System Programming of the Russian Academy of Sciences and supported by the Ministry of Science and Higher Education of the Russian Federation under appendix no. 2 to agreement no. 075-03-2024-214. The work was partially conducted while S. A. Chezhegov was a visiting research assistant in Mohamed bin Zayed University of Artificial Intelligence (MBZUAI).
Received: 15.08.2024
Publication type: Article
UDC: 519.853.62
Publication language: English
Citation example: S. A. Chezhegov, S. N. Skorik, N. Khachaturov, D. S. Shalagin, A. A. Avetisyan, M. Takáč, Y. A. Kholodov, A. N. Beznosikov, “Local methods with adaptivity via scaling”, УМН, 79:6(480) (2024), 117–158
Citation in AMSBIB format:
\RBibitem{CheSkoKha24}
\by S.~A.~Chezhegov, S.~N.~Skorik, N.~Khachaturov, D.~S.~Shalagin, A.~A.~Avetisyan, M.~Tak\'a{\v c}, Y.~A.~Kholodov, A.~N.~Beznosikov
\paper Local methods with adaptivity via scaling
\jour УМН
\yr 2024
\vol 79
\issue 6(480)
\pages 117--158
\mathnet{http://mi.mathnet.ru/rm10209}
\crossref{https://doi.org/10.4213/rm10209}
Linking options for this page:
  • https://www.mathnet.ru/rus/rm10209
  • https://doi.org/10.4213/rm10209
  • https://www.mathnet.ru/rus/rm/v79/i6/p117