
Self-supervised generative approach to generalized zero-shot learning (reading notes)

  • Abstract
  • Motivation
  • Method: SDGN
    • 3.3. Cross-Domain Feature Generating Module (CDFGM)
    • 3.4. Self-supervised Learning Module (SLM)
    • 3.5. Model Training and Prediction
    • 3.6. Discussions
  • Experiments
    • Comparison experiments
    • Ablation experiments
  • Questions

CVPR 2020 Self-supervised Domain-aware Generative Network for Generalized Zero-shot Learning

Abstract

The paper introduces self-supervised learning to perform domain (seen vs. unseen) discrimination on the test data.

Motivation

[Figure: motivation illustration from the paper]
The idea is to separate samples of the two domains via self-supervised learning.

Method: SDGN

[Figure: SDGN framework overview from the paper]
My current understanding: at test time, both test_seen and test_unseen are certainly used. During training, a portion of test_unseen is likely picked out; that is, test_unseen is split into two parts, one part used (without labels) during training and the other mixed with test_seen as the test data. A minimal sketch of this split is given below.
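A minimal sketch of this data usage as I read it (the arrays, dimensions, and 50/50 split ratio are placeholders of mine, not from the paper):

```python
import numpy as np

# Illustrative split only; the contents and the split ratio are assumptions.
rng = np.random.default_rng(0)

test_unseen = rng.normal(size=(1000, 2048))     # unlabeled unseen-class visual features
test_seen = rng.normal(size=(800, 2048))        # held-out seen-class visual features

idx = rng.permutation(len(test_unseen))
split = len(test_unseen) // 2                   # assumed split ratio

unseen_for_training = test_unseen[idx[:split]]  # used (without labels) during training
unseen_for_eval = test_unseen[idx[split:]]      # reserved for evaluation

# GZSL evaluation pool: seen-class and unseen-class test samples mixed together.
gzsl_eval_pool = np.concatenate([test_seen, unseen_for_eval], axis=0)
```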

3.3. Cross-Domain Feature Generating Module (CDFGM)

[Equations and figure for the CDFGM from the paper]

3.4. Self-supervised Learning Module (SLM)

Reference Anchor Learning.
We notice that each class can be represented as an attribute vector, where each dimension encodes a high-level visual property. This gives rise to the idea of training an attribute classifier and extracting its parameters as anchors.

For a series of attribute classifiers $\{g_1, g_2, \cdots, g_{d_e}\}$ w.r.t. the $d_e$ attributes, we extract their weight parameters as anchors $\{A_1, A_2, \cdots, A_{d_e}\}$, $A_i \in \mathbb{R}^{d_v}$.
Each $A_i$ acts as the mapped center of its attribute in the visual feature space.
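A minimal sketch of how such anchors could be obtained, assuming each attribute classifier is a linear scorer over the visual features (dimensions and training details are my assumptions):

```python
import torch.nn as nn

d_v, d_e = 2048, 85   # assumed visual feature dim and number of attributes

# One linear score per attribute: row k of the weight matrix is the
# parameter vector of attribute classifier g_k.
attribute_classifiers = nn.Linear(d_v, d_e, bias=False)

# ... train with a binary attribute-prediction loss on seen-class features ...

# Anchors {A_1, ..., A_{d_e}}: each A_k lives in the visual space R^{d_v}.
anchors = attribute_classifiers.weight.detach()   # shape: (d_e, d_v)
```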

Domain Feature Reconstruction.
Soft multi-attribute label for each feature:
$$\mathcal{M}_i^{(k)} = \frac{\exp\!\left(\langle A_k, \tilde{x}_i \rangle\right)}{\sum_j \exp\!\left(\langle A_j, \tilde{x}_i \rangle\right)}$$

[Domain feature reconstruction loss equation from the paper]
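The soft label above is simply a softmax over anchor–feature inner products; a minimal sketch (shapes follow the previous snippet, the function name is my own):

```python
import torch
import torch.nn.functional as F

def soft_attribute_labels(x_tilde: torch.Tensor, anchors: torch.Tensor) -> torch.Tensor:
    """M_i^(k) = exp(<A_k, x~_i>) / sum_j exp(<A_j, x~_i>).

    x_tilde: (N, d_v) features, anchors: (d_e, d_v). Returns (N, d_e) soft labels.
    """
    logits = x_tilde @ anchors.t()   # inner products <A_k, x~_i>
    return F.softmax(logits, dim=1)  # each row sums to 1 over the d_e attributes
```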

Cross-Domain Triplet Mining.

[Figure and triplet loss formulation from the paper]
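As a reference point, a generic triplet margin loss looks like the sketch below; the paper's exact cross-domain mining and weighting may differ, so treat this only as the basic form, with anchor/positive drawn from one domain and negative from the other:

```python
import torch
import torch.nn.functional as F

def cross_domain_triplet_loss(anchor: torch.Tensor,
                              positive: torch.Tensor,
                              negative: torch.Tensor,
                              margin: float = 1.0) -> torch.Tensor:
    # anchor/positive: features from the same domain; negative: the other domain.
    d_ap = F.pairwise_distance(anchor, positive)   # pull same-domain pairs together
    d_an = F.pairwise_distance(anchor, negative)   # push cross-domain pairs apart
    return F.relu(d_ap - d_an + margin).mean()
```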

3.5. Model Training and Prediction

Total loss:
[Total loss equation from the paper]
For the final classification, a classifier network is trained on the generated unseen-class (target-domain) features together with the real seen-class (source-domain) features, as sketched below.
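A minimal sketch of this last step, with placeholder tensors standing in for the real seen-class features and the generator's unseen-class features (all names, sizes, and hyper-parameters are assumptions):

```python
import torch
import torch.nn as nn

d_v, n_seen_cls, n_unseen_cls = 2048, 40, 10      # assumed dims and class counts

# Placeholders: real seen-class features and synthesized unseen-class features.
seen_feats = torch.randn(4000, d_v)
seen_labels = torch.randint(0, n_seen_cls, (4000,))
gen_unseen_feats = torch.randn(2000, d_v)
gen_unseen_labels = torch.randint(n_seen_cls, n_seen_cls + n_unseen_cls, (2000,))

features = torch.cat([seen_feats, gen_unseen_feats])
labels = torch.cat([seen_labels, gen_unseen_labels])

classifier = nn.Linear(d_v, n_seen_cls + n_unseen_cls)   # simple softmax classifier
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)

for _ in range(10):                                       # a few epochs for illustration
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(classifier(features), labels)
    loss.backward()
    optimizer.step()
```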

3.6. Discussions

The paper discusses three relevant methods: CEWGANOD [22], f-VAEGAN [45], and SABR-T [27].
(1) [22] requires an extra classifier.
(2) Compared with [45] and [27], the target-domain discriminator here uses multi-attribute labels, which serve as supervision analogous to that available in the source domain.
(3) Self-supervision is introduced.

Experiments

Comparison experiments

[Comparison results from the paper]

Ablation experiments

[Ablation study figures and tables from the paper]

Questions

How exactly is the test data used under the generalized zero-shot setting?
