Sibirskii Zhurnal Vychislitel'noi Matematiki, 1998, Volume 1, Number 1, Pages 11–24
(Mi sjvm289)
This article is cited in 19 scientific papers.
Generalized approximation theorem and computational capabilities of neural networks
A. N. Gorban'
Institute of Computational Modelling, Siberian Branch of the Russian Academy of Sciences, Krasnoyarsk
Abstract:
The computational capabilities of artificial neural networks are studied. In this connection, the classical problem of representing a function of several variables by superpositions and sums of functions of one variable arises, together with a new version of this problem: using only one arbitrarily chosen nonlinear function of one variable.
It is shown that an arbitrarily exact approximation of any continuous function of several variables can be obtained using the operations of summation, multiplication by a number, and superposition of functions, together with linear functions and one arbitrary continuous nonlinear function of one variable. For polynomials, an algebraic variant of the theorem is proved.
For neural networks, these results mean that the only requirement on the activation function of a neuron is nonlinearity, and nothing else.
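The abstract's claim can be illustrated numerically. The sketch below (an assumption for illustration, not the paper's construction) builds a one-hidden-layer network from the permitted ingredients only: linear functions of the inputs, one fixed continuous nonlinear function `sigma`, multiplication by numbers, and summation. The hidden weights are drawn at random and the output weights are fitted by least squares; the target function `target` and all parameter values are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigma(t):
    # One arbitrary continuous nonlinear function of one variable.
    # Any continuous nonlinear choice would do per the theorem.
    return np.tanh(t)

def target(x, y):
    # A hypothetical continuous function of two variables to approximate.
    return np.sin(x) * np.cos(y)

# Sample points on [-1, 1]^2.
X = rng.uniform(-1.0, 1.0, size=(2000, 2))
f = target(X[:, 0], X[:, 1])

# Hidden layer: random linear functions of the inputs passed through sigma.
n_hidden = 200
W = rng.normal(size=(2, n_hidden))
b = rng.normal(size=n_hidden)
H = sigma(X @ W + b)

# Output: a linear combination (sums and multiplications by numbers),
# fitted by least squares.
coef, *_ = np.linalg.lstsq(H, f, rcond=None)

approx = H @ coef
max_err = np.max(np.abs(approx - f))
print(f"max abs error on sample: {max_err:.4f}")
```

Increasing `n_hidden` drives the error down, in line with the theorem's statement that nonlinearity of `sigma` is the only requirement; the random-weight fitting here is just one convenient way to find the coefficients.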
Received: 16.09.1997
Citation:
A. N. Gorban', “Generalized approximation theorem and computational capabilities of neural networks”, Sib. Zh. Vychisl. Mat., 1:1 (1998), 11–24
Linking options:
https://www.mathnet.ru/eng/sjvm289
https://www.mathnet.ru/eng/sjvm/v1/i1/p11