SPECIAL ISSUE: ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING TECHNOLOGIES
Deep metric learning: loss functions comparison
R. L. Vasilev (a), A. G. Dyakonov (b)
(a) Yandex, Moscow, Russia
(b) Central University, Moscow, Russia
Abstract:
An overview of deep metric learning methods is presented. These methods have appeared in recent years but have so far been compared only with their predecessors, using neural networks of now-obsolete architectures to learn the embeddings on which the metric is computed. Here the described methods are compared on datasets from several domains, using pre-trained neural networks of near-state-of-the-art (SotA) quality: ConvNeXt for images and DistilBERT for texts. Labeled datasets were split into two parts (train and test) so that the classes do not overlap (i.e., all objects of each class lie entirely in train or entirely in test). Such a large-scale fair comparison is performed for the first time and leads to unexpected conclusions: some “older” methods, for example Tuplet Margin Loss, outperform both their modern modifications and methods proposed in very recent works.
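The class-disjoint split described above can be sketched as follows. This is a minimal illustration, not the authors' code; the function name, the 50/50 class ratio, and the toy labels are assumptions for the example.

```python
import random

def class_disjoint_split(labels, test_fraction=0.5, seed=0):
    """Split object indices into (train, test) so that no class
    appears in both parts: each class goes wholly to one side."""
    classes = sorted(set(labels))
    rng = random.Random(seed)
    rng.shuffle(classes)                      # randomize class assignment
    n_test = int(len(classes) * test_fraction)
    test_classes = set(classes[:n_test])      # whole classes held out
    train_idx = [i for i, y in enumerate(labels) if y not in test_classes]
    test_idx = [i for i, y in enumerate(labels) if y in test_classes]
    return train_idx, test_idx

# Toy example: 4 classes, 2 objects each.
labels = [0, 0, 1, 1, 2, 2, 3, 3]
train_idx, test_idx = class_disjoint_split(labels)
# No class is seen both in training and at evaluation time.
assert {labels[i] for i in train_idx}.isdisjoint({labels[i] for i in test_idx})
```

Evaluating on entirely unseen classes is what makes the comparison a test of the learned metric itself rather than of per-class memorization.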
Keywords:
Machine learning, deep learning, metric, similarity.
Citation:
R. L. Vasilev, A. G. Dyakonov, “Deep metric learning: loss functions comparison”, Dokl. RAN. Math. Inf. Proc. Upr., 514:2 (2023), 60–71; Dokl. Math., 108:suppl. 2 (2023), S215–S225
Linking options:
https://www.mathnet.ru/eng/danma451
https://www.mathnet.ru/eng/danma/v514/i2/p60