Lifelong Unsupervised Domain Adaptive Person Re-identification with Coordinated Anti-forgetting and Adaptation

CVPR 2022


Overall architecture:

[Figure: overall framework of the method]

The original reservoir sampling method:

[Figure: the original reservoir sampling algorithm]

[Figures: the ID-wise reservoir sampling variant]

K is set to 8, and the memory size is 512.
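For reference, the classic instance-wise algorithm that the paper modifies can be sketched as follows (a minimal sketch; the function and variable names are my own, not from the paper):

```python
import random

def reservoir_sample(stream, capacity, seed=0):
    """Classic instance-wise reservoir sampling: keep a uniform random
    subset of `capacity` items from a stream of unknown length."""
    rng = random.Random(seed)
    buffer = []
    for t, item in enumerate(stream):
        if len(buffer) < capacity:
            buffer.append(item)
        else:
            # Keep the new item with probability capacity / (t + 1),
            # overwriting a uniformly chosen slot.
            j = rng.randrange(t + 1)
            if j < capacity:
                buffer[j] = item
    return buffer
```

With a capacity of 512, every sample in the stream ends up in the buffer with equal probability, regardless of the stream's length.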

Loss:

The total loss consists of two parts:

[Equations: the two loss terms, L_Adap and L_AntiF]

The loss that is actually used to update the network parameters and the classifier parameters:

This is the original form of the Taylor expansion:

[Equation: Taylor expansion of the combined loss]
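Based on the surrounding description, the expansion most likely takes the standard MAML first-order form (my reconstruction, not copied from the paper). Expanding L_AntiF around θ after the virtual step θ′ = θ − α∇L_Adap(θ) gives:

```latex
\mathcal{L}_{\mathrm{AntiF}}\bigl(\theta - \alpha \nabla_{\theta}\mathcal{L}_{\mathrm{Adap}}(\theta)\bigr)
\approx
\mathcal{L}_{\mathrm{AntiF}}(\theta)
- \alpha \, \nabla_{\theta}\mathcal{L}_{\mathrm{AntiF}}(\theta)^{\top}
\nabla_{\theta}\mathcal{L}_{\mathrm{Adap}}(\theta)
```

So minimizing L_Adap(θ) + L_AntiF(θ′) not only minimizes both losses but also maximizes the inner product of their gradients, i.e. it pushes the update toward a direction on which adaptation and anti-forgetting agree.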

 

Results:

[Tables: benchmark results]

Questions:

1. Why does this paper use the ID-wise reservoir sampling?

Here, in order to retain information from as many distinct identities as possible, the reservoir sampling algorithm is changed from instance-wise to ID-wise. The memory queue has length 512, and each element of the queue corresponds to K features; if an identity actually has fewer than K features, the remainder is filled with augmented features. The 512 slots stand for 512 identities, which is what makes the sampling ID-wise.
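A minimal sketch of this ID-wise variant, based only on the description above (the names and the keep-first-K policy within an identity are my assumptions, not the official implementation):

```python
import random

class IDWiseReservoir:
    """Memory of up to `max_ids` identities, each holding up to `k` features.
    Reservoir sampling is applied at the identity level rather than the
    instance level, so diverse IDs are retained."""

    def __init__(self, max_ids=512, k=8, seed=0):
        self.max_ids, self.k = max_ids, k
        self.memory = {}   # identity -> list of features
        self.seen = set()  # identities already offered to the reservoir
        self.rng = random.Random(seed)

    def add(self, identity, feature):
        if identity in self.memory:
            # Assumption: simply keep the first k features per identity.
            if len(self.memory[identity]) < self.k:
                self.memory[identity].append(feature)
            return
        if identity in self.seen:
            return  # this identity was already dropped by the reservoir
        self.seen.add(identity)
        n = len(self.seen)
        if len(self.memory) < self.max_ids:
            self.memory[identity] = [feature]
        elif self.rng.randrange(n) < self.max_ids:
            # Evict a uniformly chosen identity to admit the new one.
            evicted = self.rng.choice(sorted(self.memory))
            del self.memory[evicted]
            self.memory[identity] = [feature]

    def padded(self, augment):
        """Pad identities with fewer than k features using `augment`
        (standing in for the augmented features mentioned above)."""
        out = {}
        for pid, feats in self.memory.items():
            feats = list(feats)
            while len(feats) < self.k:
                feats.append(augment(self.rng.choice(feats)))
            out[pid] = feats
        return out
```

The key design point is that eviction decisions are made per identity (the reservoir counter `n` counts distinct IDs), so a frequently photographed identity cannot crowd out the others the way instance-wise sampling would allow.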

2. The paper borrows from MAML; how exactly does it use MAML's idea?

The paper borrows from Model-Agnostic Meta-Learning (MAML), which splits optimization into a meta-training and a meta-testing process so that the model is updated along a direction on which meta-training and meta-testing agree. To coordinate adaptation and anti-forgetting, at every parameter-update iteration the paper treats these two tasks (the anti-forgetting task and the task of adapting to new data) as meta-training and meta-testing, instead of splitting the samples into meta-train and meta-test sets as the original MAML paper does. Suppose the initial network parameters are θ. The anti-forgetting loss L_AntiF is computed on the network after it has been updated with the gradient of L_Adap (giving parameters θ′), and is summed with the previously computed L_Adap to form the final loss used to update the network.
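On a toy scalar problem, the coordinated update can be sketched as follows (a first-order sketch with hand-written gradients and plain SGD; the actual paper differentiates through the inner step on real networks):

```python
def coordinated_update(theta, grad_adap, grad_antif, alpha=0.5, lr=0.1):
    """One iteration: meta-train = virtual step on L_Adap, meta-test =
    anti-forgetting gradient evaluated at the virtual parameters.
    First-order approximation: the second-order term from differentiating
    through theta_prime is dropped."""
    theta_prime = theta - alpha * grad_adap(theta)  # update with L_Adap's gradient
    g = grad_adap(theta) + grad_antif(theta_prime)  # combined update direction
    return theta - lr * g
```

For example, with L_Adap(θ) = (θ − 1)² and L_AntiF(θ) = (θ + 1)², iterating this update converges to a compromise between the two minimizers, so both objectives shape the final parameters.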

 

3. When computing L_kl, i ranges over the intersection of the old batch and the new batch. I suspect this is a typo in the paper and it should be the union: the old batch and the new batch sample different examples, so taking their intersection makes little sense. Taking the union would also be consistent with L_rel.
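Under the union reading, L_kl would be computed roughly like this (a sketch with hypothetical helper names; `old_model` and `new_model` here map a sample to class logits, and the paper's exact formulation may differ):

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl(p, q, eps=1e-12):
    # KL(p || q) with a small epsilon for numerical safety.
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def kl_consistency_loss(batch_old, batch_new, old_model, new_model):
    """Average KL between the old and new models' predictions over the
    union of the replayed old batch and the current new batch. Because
    the two batches contain different samples, the union is simply their
    concatenation, whereas their intersection would be empty."""
    union = list(batch_old) + list(batch_new)
    total = 0.0
    for x in union:
        total += kl(softmax(old_model(x)), softmax(new_model(x)))
    return total / len(union)
```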
