This article is cited in 26 scientific papers.
Complete statistical theory of learning
V. N. Vapnik
a Columbia University, New York, USA
b Facebook AI Research, New York, USA
Abstract:
The existing mathematical model of learning requires, using training data, finding in a given subset of admissible functions the function that minimizes the expected loss. In this paper, that setting is called the Second selection problem. The mathematical model of learning presented here requires, along with the Second selection problem, solving the so-called First selection problem: using training data, one first selects, from a wide set of functions in a Hilbert space, an admissible subset of functions that includes the desired function, and then selects in this admissible subset a good approximation to the desired function. The existence of two selection problems reflects a fundamental property of Hilbert space: the existence of two different concepts of convergence of functions, weak convergence (which leads to the solution of the First selection problem) and strong convergence (which leads to the solution of the Second selection problem). The paper describes a simultaneous solution of both selection problems for functions that belong to a reproducing kernel Hilbert space. The solution is obtained in closed form.
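As a point of reference for the closed-form RKHS solution mentioned in the abstract, the sketch below shows the standard closed-form minimizer of a regularized squared loss over a reproducing kernel Hilbert space (via the representer theorem). This is an illustrative, generic RKHS estimator, not a reproduction of the paper's method; the RBF kernel, the regularization parameter `reg`, and the toy data are all assumptions chosen for the example.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian (RBF) kernel matrix between rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_rkhs(X, y, reg=1e-3, gamma=1.0):
    """Closed-form coefficients alpha = (K + reg*I)^{-1} y.

    By the representer theorem, the minimizer of the regularized
    empirical squared loss over the RKHS has the form
    f(x) = sum_i alpha_i K(x_i, x)."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + reg * np.eye(len(X)), y)

def predict(X_train, alpha, X_new, gamma=1.0):
    """Evaluate f(x) = sum_i alpha_i K(x_i, x) at the new points."""
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Toy data (hypothetical): noisy samples of sin(x) on [0, pi].
rng = np.random.default_rng(0)
X = rng.uniform(0, np.pi, size=(40, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=40)

alpha = fit_rkhs(X, y)
y_hat = predict(X, alpha, X)
```

Note that the whole fit reduces to a single linear solve: this is the sense in which such RKHS estimators admit a solution in closed form.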
Keywords:
statistical learning theory, first selection problem, second selection problem, reproducing kernel Hilbert space, training data.
Received: 13.07.2018; Revised: 05.09.2018; Accepted: 08.11.2018
Citation:
V. N. Vapnik, “Complete statistical theory of learning”, Avtomat. i Telemekh., 2019, no. 11, 24–58; Autom. Remote Control, 80:11 (2019), 1949–1975
Linking options:
https://www.mathnet.ru/eng/at15378 https://www.mathnet.ru/eng/at/y2019/i11/p24