This article is cited in 2 scientific papers.
Effect of transformations on the success of adversarial attacks for Clipped BagNet and ResNet image classifiers
E. O. Kurdenkova (a), M. S. Cherepnina (b), A. S. Chistyakova (a,c), K. V. Arkhipenko (a)
(a) Ivannikov Institute for System Programming of the RAS
(b) Technical University of Munich
(c) Lomonosov Moscow State University
Abstract:
Our paper compares the accuracy of the vanilla ResNet-18 model with that of the Clipped BagNet-33 and BagNet-33 models with adversarial training under different conditions. We performed experiments on images attacked with an adversarial sticker under various image transformations. The adversarial sticker is a small region of the attacked image within which pixel values may be changed arbitrarily, which can cause errors in the model's predictions. The transformations of the attacked images in this paper simulate distortions that arise in the physical world when changes in perspective, scale, or lighting alter the image. Our experiments show that models of the BagNet family perform poorly on low-quality images. We also analyze how different types of transformations affect the models' robustness to adversarial attacks and the success of these attacks.
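The sticker attack described above can be sketched as a projected-gradient-descent patch attack: only pixels inside a small mask are optimized to increase the classifier's loss, and after each step they are projected back into the valid pixel range. The sketch below is a minimal illustration with a toy linear-softmax stand-in for the classifier (hypothetical; the paper attacks ResNet-18 and BagNet-33), and the patch location, size, step count, and step size are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in classifier: linear softmax over flattened pixels.
n_classes, h, w = 3, 8, 8
W = rng.normal(size=(n_classes, h * w))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(img):
    return softmax(W @ img.ravel())

def patch_attack(img, true_label, top, left, size=3, steps=40, lr=0.5):
    """PGD-style sticker attack: optimize only the pixels inside a
    small square region; values there are unconstrained except [0, 1]."""
    mask = np.zeros((h, w), dtype=bool)
    mask[top:top + size, left:left + size] = True
    adv = img.copy()
    for _ in range(steps):
        p = predict(adv)
        # Gradient of cross-entropy loss w.r.t. input pixels for a
        # linear-softmax model: W^T (p - onehot(y)).
        grad = (W.T @ (p - np.eye(n_classes)[true_label])).reshape(h, w)
        adv[mask] += lr * grad[mask]          # ascend the loss
        adv[mask] = np.clip(adv[mask], 0, 1)  # project to valid pixels
    return adv

img = rng.uniform(size=(h, w))
y = int(np.argmax(predict(img)))
adv = patch_attack(img, y, top=2, left=2)
```

Pixels outside the sticker are untouched, which is what distinguishes a patch attack from a global perturbation; a physical-world version would additionally apply the transformations (perspective, scale, lighting) studied in the paper during optimization.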
Keywords:
adversarial attack, adversarial patch, BagNet architecture, adversarial training, projected gradient descent
Citation:
E. O. Kurdenkova, M. S. Cherepnina, A. S. Chistyakova, K. V. Arkhipenko, “Effect of transformations on the success of adversarial attacks for Clipped BagNet and ResNet image classifiers”, Proceedings of ISP RAS, 34:6 (2022), 101–116
Linking options:
https://www.mathnet.ru/eng/tisp741
https://www.mathnet.ru/eng/tisp/v34/i6/p101
Statistics & downloads:
Abstract page: 12
Full-text PDF: 15