SPECIAL ISSUE: ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING TECHNOLOGIES
Adaptive spectral normalization for generative models
E. A. Egorov, A. I. Rogachev HSE University, Moscow, Russia
Abstract:
When the Wasserstein GAN loss function is used to train generative adversarial networks (GANs), it is theoretically necessary to limit the discriminator's expressive power (so-called discriminator normalization). This limitation increases the stability of GAN training at the expense of a less expressive final model. Spectral normalization is one such normalization algorithm; it applies a fixed operation independently to each discriminator layer. However, the optimal strength of the discriminator limitation varies across tasks, which calls for a parameterized normalization method. This paper proposes modifications to the spectral normalization algorithm that allow the strength of the discriminator limitation to be varied. In addition to being parameterized, the proposed methods can change the degree of limitation during training, unlike the original algorithm. The quality of the resulting models is evaluated for each of the proposed methods.
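The idea described in the abstract can be illustrated with a minimal sketch: standard spectral normalization divides a layer's weight matrix by an estimate of its largest singular value (obtained via power iteration), and a parameterized variant could additionally rescale the result by a tunable `strength` factor. The function names and the `strength` parameterization below are hypothetical illustrations, not the authors' actual method.

```python
import numpy as np

def estimate_spectral_norm(W, n_iters=50, seed=0):
    """Estimate the largest singular value of W via power iteration."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=W.shape[0])
    v = np.zeros(W.shape[1])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    # Rayleigh-quotient-style estimate of the top singular value
    return float(u @ W @ v)

def normalize_weight(W, strength=1.0):
    """Rescale W so that its spectral norm equals `strength`.

    strength=1.0 corresponds to the original spectral normalization
    (unit Lipschitz bound per layer); other values of `strength` are a
    hypothetical knob for how strongly the discriminator is limited.
    """
    sigma = estimate_spectral_norm(W)
    return (strength / sigma) * W
```

In a GAN discriminator, such a normalization would be applied to each layer's weight matrix before the forward pass; since `strength` is an ordinary scalar, it could in principle be adjusted over the course of training, which is the kind of flexibility the abstract motivates.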
Keywords:
generative adversarial network, Wasserstein GAN, spectral normalization, high energy physics.
Citation:
E. A. Egorov, A. I. Rogachev, “Adaptive spectral normalization for generative models”, Dokl. RAN. Math. Inf. Proc. Upr., 514:2 (2023), 49–59; Dokl. Math., 108:suppl. 2 (2023), S205–S214
Linking options:
https://www.mathnet.ru/eng/danma450 https://www.mathnet.ru/eng/danma/v514/i2/p49