
    The objective of deep metric learning (DML) is to learn embeddings that capture semantic similarity/dissimilarity information among data points. Existing pairwise or tripletwise loss functions used in DML are known to suffer from slow convergence, because trivial pairs or triplets make up a growing proportion of the training signal as the model improves. To address this, ranking-motivated structured losses have recently been proposed to incorporate multiple examples and exploit the structured information among them. In this work, we unveil two limitations of existing ranking-motivated structured losses and propose a novel ranked list loss to solve both of them. First, given a query, only a fraction of data points is incorporated to build the similarity structure. To address this, we propose to build a set-based similarity structure by exploiting all instances in the gallery. Second, previous methods aim to pull positive pairs as close as possible in the embedding space; as a result, the intraclass data distribution tends to be extremely compressed. In contrast, we propose to learn a hypersphere for each class to preserve useful similarity structure inside it, which functions as regularisation. Extensive experiments demonstrate the superiority of our proposal in comparison with state-of-the-art methods on the fine-grained image retrieval task.
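    The two ideas in the abstract can be sketched concretely: treat every other instance in the batch as the gallery for a query (set-based structure), pull non-trivial positives only up to a per-class hypersphere boundary rather than to a point, and push non-trivial negatives beyond a larger margin. The sketch below is a hypothetical simplification under those assumptions (the function name, the `alpha`/`margin` parameters, and the uniform weighting are illustrative, not the authors' implementation, which weights violating negatives):

    ```python
    import numpy as np

    def ranked_list_loss(embeddings, labels, alpha=1.2, margin=0.4):
        """Simplified ranked-list-style loss (illustrative, not the paper's code).

        For each query, all other batch instances form the gallery:
          * positives farther than (alpha - margin) are pulled toward the
            class hypersphere of diameter (alpha - margin);
          * negatives closer than alpha are pushed beyond the boundary alpha.
        Positives already inside the hypersphere contribute no loss, which
        preserves intraclass structure instead of collapsing each class.
        """
        embeddings = np.asarray(embeddings, dtype=float)
        labels = np.asarray(labels)
        n = len(embeddings)
        # pairwise Euclidean distances (small epsilon keeps sqrt differentiable-safe)
        diff = embeddings[:, None, :] - embeddings[None, :, :]
        dist = np.sqrt((diff ** 2).sum(-1) + 1e-12)
        total = 0.0
        for i in range(n):
            others = np.arange(n) != i
            pos = dist[i][others & (labels == labels[i])]
            neg = dist[i][others & (labels != labels[i])]
            # hinge on positives outside the hypersphere boundary
            total += np.maximum(0.0, pos - (alpha - margin)).sum()
            # hinge on negatives inside the margin boundary
            total += np.maximum(0.0, alpha - neg).sum()
        return total / n
    ```

    With `alpha=1.2` and `margin=0.4`, a same-class pair at distance 2.0 incurs loss 1.2 per query, while a batch whose classes are already separated by more than `alpha` (and whose positives sit within `alpha - margin`) incurs zero loss.
    
    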

    Citation

    Xinshao Wang, Yang Hua, Elyor Kodirov, Neil M. Robertson. Ranked List Loss for Deep Metric Learning. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2021 Mar 24 (epub ahead of print).


    PMID: 33760730
