Search paper: Augmentation Invariant and Instance Spreading Feature for Softmax Embedding
Search URL: http://www.studyai.com/search/whole-site/?q=Augmentation+Invariant+and+Instance+Spreading+Feature+for+Softmax+Embedding
Task analysis; Visualization; Testing; Training; Unsupervised learning; Data mining; instance feature; softmax embedding; embedding learning; data augmentation
Machine learning; Computer vision
Supervised learning; Unsupervised learning; Fine-grained vision; Data augmentation; (Deep) embedding learning
Deep embedding learning plays a key role in learning discriminative feature representations, where the visually similar samples are pulled closer and dissimilar samples are pushed away in the low-dimensional embedding space.
This paper studies the unsupervised embedding learning problem by learning such a representation without using any category labels.
This task faces two primary challenges: mining reliable positive supervision from highly similar fine-grained classes, and generalizing to unseen testing categories.
To approximate the positive concentration and negative separation properties of category-wise supervised learning, we introduce a data augmentation invariant and instance spreading feature using instance-wise supervision.
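As a rough illustration of how instance-wise supervision can encode these two properties, a common formulation treats every instance in a mini-batch as its own class: the augmented view of an instance should be classified back to that instance, while every other instance should not. The sketch below is written under that assumption; the symbols f_i, \hat{f}_i, the temperature \tau, and the batch size m are introduced here for illustration and are not spelled out in this abstract.

```latex
% Illustrative instance-wise softmax probabilities (sketch, not verbatim from the paper).
% f_i: L2-normalized embedding of instance i; \hat{f}_i: embedding of its augmented view;
% \tau: temperature; m: number of instances in the mini-batch.
P(i \mid \hat{x}_i) = \frac{\exp(f_i^{\top} \hat{f}_i / \tau)}{\sum_{k=1}^{m} \exp(f_k^{\top} \hat{f}_i / \tau)},
\qquad
P(i \mid x_j) = \frac{\exp(f_i^{\top} f_j / \tau)}{\sum_{k=1}^{m} \exp(f_k^{\top} f_j / \tau)} \quad (j \neq i)

% Augmentation invariance: maximize P(i | \hat{x}_i).
% Instance spreading: keep P(i | x_j) small for all j != i, e.g. via
\mathcal{L}_i = -\log P(i \mid \hat{x}_i) - \sum_{j \neq i} \log\bigl(1 - P(i \mid x_j)\bigr)
```

The first term pulls the augmented view toward its own instance (invariance); the second pushes other instances away from it (spreading).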
We also design two novel domain-agnostic augmentation strategies to further extend the supervision in feature space, which simulate large-batch training using a small batch size and the augmented features.
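The abstract does not detail the two strategies, but one way to picture "simulating a large batch with augmented features" is to synthesize extra features directly in the embedding space and let them enter the instance-wise softmax as additional samples. The snippet below is purely illustrative under that assumption; the function name, interpolation, and noise schemes are hypothetical and are not the paper's strategies.

```python
# Hypothetical feature-space augmentation: enlarge the set of features seen by the
# instance-wise softmax without enlarging the image batch. Interpolation and noise
# are illustrative assumptions, not the paper's exact strategies.
import torch
import torch.nn.functional as F

def augment_features(f, f_aug, noise_std=0.05, alpha=0.5):
    """f, f_aug: (m, d) L2-normalized embeddings of a batch and its augmented views.
    Returns (2m, d) synthetic features that can be appended to the batch."""
    interpolated = alpha * f + (1.0 - alpha) * f_aug   # stay close to the instance
    noisy = f + noise_std * torch.randn_like(f)        # small random perturbation
    extra = torch.cat([interpolated, noisy], dim=0)
    return F.normalize(extra, dim=1)                   # back onto the unit sphere
```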
To learn such a representation, we propose a novel instance-wise softmax embedding, which directly performs the optimization over the augmented instance features with a binary discrimination softmax encoding.
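To make the binary discrimination idea concrete, here is a minimal sketch of an instance-wise softmax embedding loss, assuming L2-normalized features, a temperature hyper-parameter tau, and one augmented view per instance in the batch; the function name and implementation details are illustrative rather than the paper's reference code.

```python
# Minimal sketch of an instance-wise softmax embedding loss (illustrative only).
import torch
import torch.nn.functional as F

def instance_softmax_loss(f, f_aug, tau=0.1):
    """f, f_aug: (m, d) embeddings of m instances and their augmented views."""
    f = F.normalize(f, dim=1)
    f_aug = F.normalize(f_aug, dim=1)
    m = f.size(0)

    # Augmentation invariance: each augmented view should be classified
    # back to its own instance, i.e. maximize P(i | x_hat_i).
    p_aug = F.softmax(f_aug @ f.t() / tau, dim=1)            # (m, m)
    invariance = -torch.log(p_aug.diagonal() + 1e-12).mean()

    # Instance spreading: another instance j should NOT be classified as
    # instance i, i.e. maximize 1 - P(i | x_j) for j != i.
    p = F.softmax(f @ f.t() / tau, dim=1)                     # (m, m)
    off_diag = ~torch.eye(m, dtype=torch.bool, device=f.device)
    spreading = -torch.log(1.0 - p[off_diag] + 1e-12).mean()

    return invariance + spreading
```

Usage is the obvious one: embed a batch and its augmented copies with the same network, e.g. `loss = instance_softmax_loss(encoder(images), encoder(augmented_images))`; a full implementation would also fold in the feature-space augmentation sketched above and the sampling details of the paper.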
It significantly accelerates learning and achieves much higher accuracy than existing methods, under both seen and unseen testing categories.
The unsupervised embedding performs well on samples from fine-grained categories even without a pre-trained network.
We also develop a variant using category-wise supervision, namely category-wise softmax embedding, which achieves competitive performance over the state of the art without using any auxiliary information or restricted sample mining…
Authors: Mang Ye, Jianbing Shen, Xu Zhang, Pong C. Yuen, Shih-Fu Chang