NUMERICAL METHODS AND DATA ANALYSIS
Towards energy-efficient neural network calculations
E. S. Noskova, I. E. Zakharov, Yu. N. Shkandybin, S. G. Rykovanov (Skolkovo Institute of Science and Technology)
Abstract:
Creating high-performance, energy-efficient hardware for Artificial Intelligence tasks is currently a pressing problem. The most popular solution is to run neural networks on deep learning accelerators such as GPUs and Tensor Processing Units (TPUs). Recently, NVIDIA announced the NVDLA project, which makes it possible to design neural network accelerators based on open-source code. This work describes the full cycle of creating a prototype NVDLA accelerator, as well as testing the resulting solution by running the ResNet-50 neural network on it. Finally, the performance and power efficiency of the prototype NVDLA accelerator are assessed in comparison with a GPU and a CPU; the results show the superiority of NVDLA in many characteristics.
Keywords:
NVDLA, FPGA, inference, deep learning accelerators
Received: 24.04.2021 Accepted: 04.09.2021
Citation:
E. S. Noskova, I. E. Zakharov, Yu. N. Shkandybin, S. G. Rykovanov, “Towards energy-efficient neural network calculations”, Computer Optics, 46:1 (2022), 160–166
Linking options:
https://www.mathnet.ru/eng/co1003
https://www.mathnet.ru/eng/co/v46/i1/p160