Seminar "Mathematical Foundations of Artificial Intelligence"
June 4, 2024, 16:00. Joint session of the S. I. Adian seminar and the seminar "Mathematical Foundations of Artificial Intelligence", Moscow, Steklov Mathematical Institute (MIAN), room 110 (Gubkina St., 8) + Zoom
 


An introduction to Kolmogorov complexity with applications to reinforcement learning

Bruno Bauwens

HSE University, Moscow
Video recording:
MP4, 3294.4 MB





Abstract: Ray Solomonoff considered a general version of the sequence prediction problem: how should one predict the next bit of a sequence when given unlimited computational power? He proposed applying Bayesian reasoning to a prior distribution defined by the output of a randomized universal Turing machine. In this talk, a gentle introduction to Kolmogorov complexity is given. We prove its incomputability and present elegant examples of Gödel and Rosser sentences. We also prove that Kolmogorov complexity is approximately equal to the negative logarithm of Solomonoff's prior distribution, which implies that Solomonoff induction is biased towards 'simple' explanations.

Afterwards, Hutter's solution to reinforcement learning, a generalization of Solomonoff induction, will be discussed. He also provides a time-bounded version (which still does not seem practical) that relies on a proof system for optimizing computational resources. Finally, I briefly make some philosophical comments on Vitányi's work on the information distance and on work that aims to understand the implicit Bayesian bias of stochastic gradient descent in neural networks. This talk requires only basic skills in discrete mathematics and is intended for people interested in computability theory or the foundations of machine learning.
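Since Kolmogorov complexity is incomputable, applied work on the information distance, such as Cilibrasi and Vitányi's clustering by compression (reference 5 below), substitutes a real compressor for the ideal one. A minimal Python sketch of the normalized compression distance, using zlib as the stand-in compressor (the function names here are illustrative, not from the talk):

```python
import zlib

def c(x: bytes) -> int:
    """Length of the zlib-compressed string: a crude, computable
    upper bound on the Kolmogorov complexity K(x)."""
    return len(zlib.compress(x, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance (Cilibrasi-Vitanyi), a
    practical approximation of the normalized information distance."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Two near-identical repetitive strings compress well together,
# so their distance is much smaller than to an unrelated string.
a = b"abab" * 200
b = b"abab" * 199 + b"abba"
unrelated = bytes(range(256)) * 4
assert ncd(a, b) < ncd(a, unrelated)
```

A better compressor gives a tighter upper bound on K and hence a better distance estimate; the clustering results in the cited paper use exactly this substitution.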

Language of the talk: English

References
  1. A. Shen, V. A. Uspensky, N. Vereshchagin, Kolmogorov Complexity and Algorithmic Randomness, Math. Surveys Monogr., 220, American Mathematical Society, Providence, RI, 2017, xviii+511 pp.
  2. M. Li, P. Vitányi, An Introduction to Kolmogorov Complexity and Its Applications, Texts Comput. Sci., Springer, Cham, 2019, xxiii+834 pp.
  3. M. Hutter, Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability, Springer, Berlin, 2005, xx+278 pp.
  4. M. Hutter, D. Quarel, E. Catt, An Introduction to Universal Artificial Intelligence, CRC Press, 2024, https://www.routledge.com/.../book/9781032607023
  5. R. Cilibrasi, P. M. B. Vitányi, "Clustering by compression", IEEE Trans. Inform. Theory, 51:4 (2005), 1523–1545
  6. D. A. Roberts, S. Yaida, The Principles of Deep Learning Theory, Cambridge University Press, 2022
 
© Steklov Mathematical Institute of the Russian Academy of Sciences, 2024