General numerical methods
Reduced-order modeling of deep neural networks
J. V. Gusak (a), T. K. Daulbaev (a), I. V. Oseledets (a,b), E. S. Ponomarev (a), A. S. Cichocki (a)
a Skolkovo Institute of Science and Technology, Moscow, Russia
b Marchuk Institute of Numerical Mathematics of the Russian Academy of Sciences, Moscow
Abstract:
We introduce a new method for speeding up the inference of deep neural networks, loosely inspired by reduced-order modeling techniques for dynamical systems. The cornerstone of the proposed method is the maximum volume (MaxVol) algorithm. We demonstrate its efficiency on neural networks pre-trained on different datasets and show that, in many practical cases, convolutional layers can be replaced by much smaller fully-connected layers with a relatively small drop in accuracy.
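The abstract names two ingredients: a maximum-volume (MaxVol) selection step and the replacement of convolutional layers by much smaller fully-connected ones. The sketch below is only a minimal illustration of that general idea on synthetic data, not the authors' implementation: the greedy maxvol routine, the tolerance tol=1.05, and the toy activation matrix Z are all illustrative assumptions.

```python
import numpy as np

def maxvol(A, tol=1.05, max_iters=200):
    """Greedy maximum-volume row selection for a tall n-by-r matrix A.

    Returns r row indices whose r-by-r submatrix has (locally) maximal
    volume.  For brevity it starts from the first r rows, which are
    assumed to be non-degenerate; a pivoted LU start is more robust.
    """
    n, r = A.shape
    idx = np.arange(r)
    B = A @ np.linalg.inv(A[idx])              # all rows expressed in the current basis
    for _ in range(max_iters):
        i, j = np.unravel_index(np.abs(B).argmax(), B.shape)
        if abs(B[i, j]) <= tol:                # no swap enlarges the volume enough
            break
        e_j = (np.arange(r) == j).astype(A.dtype)
        B -= np.outer(B[:, j], (B[i] - e_j) / B[i, j])   # rank-one basis swap
        idx[j] = i
    return idx

# Toy illustration on synthetic low-rank "activations": pick a few skeleton
# feature coordinates with maxvol and fit one small linear (fully-connected)
# map that reconstructs the full feature vector from them.
rng = np.random.default_rng(0)
Z = rng.standard_normal((4096, 50)) @ rng.standard_normal((50, 512))  # (samples, features)
rank = 50
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
cols = maxvol(Vt[:rank].T)                                # representative feature indices
W = np.linalg.lstsq(Z[:, cols], Z, rcond=None)[0]         # small linear reconstruction map
print(np.linalg.norm(Z[:, cols] @ W - Z) / np.linalg.norm(Z))
```

On this exactly low-rank example the relative reconstruction error printed at the end is near machine precision; on real activations it would depend on how quickly their singular values decay.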
Key words:
acceleration of neural networks, MaxVol, machine learning, component analysis.
Received: 24.12.2020; Revised: 24.12.2020; Accepted: 14.01.2021
Citation:
J. V. Gusak, T. K. Daulbaev, I. V. Oseledets, E. S. Ponomarev, A. S. Cichocki, “Reduced-order modeling of deep neural networks”, Zh. Vychisl. Mat. Mat. Fiz., 61:5 (2021), 800–812; Comput. Math. Math. Phys., 61:5 (2021), 774–785
Linking options:
https://www.mathnet.ru/eng/zvmmf11239
https://www.mathnet.ru/eng/zvmmf/v61/i5/p800