| Paper | Proactive Privacy-preserving Learning for Retrieval |
| --- | --- |
| Author | Peng-Fei Zhang (University of Queensland) |
| Venue | AAAI 2021 |
| Online PDF | |
| Code | none available |
Summary:
Background:
Method:
Train a generator that produces adversarial data
Loss function
Generation process
$$z_{i} = x_{i} + \mathcal{G}\left(x_{i}; \vartheta_{g}\right)$$
$$\text{s.t.}\quad \left\|\mathcal{G}\left(x_{i}; \vartheta_{g}\right)\right\|_{p} \leq \epsilon,\quad p \in \{1, 2, \infty\}$$
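The constrained generation step above can be sketched in numpy. This is a minimal illustration, not the paper's implementation: the function names and the simple rescaling used to satisfy the $p \in \{1, 2\}$ constraint are my own assumptions.

```python
import numpy as np

def project_perturbation(delta, eps, p):
    """Force the perturbation to satisfy ||delta||_p <= eps."""
    if p == np.inf:
        # L_inf: elementwise clipping keeps every entry within [-eps, eps].
        return np.clip(delta, -eps, eps)
    # p in {1, 2}: rescale the perturbation when its norm exceeds eps
    # (simple rescaling, not an exact Euclidean projection for p = 1).
    norm = np.linalg.norm(delta.ravel(), ord=p)
    if norm > eps:
        delta = delta * (eps / norm)
    return delta

def generate_adversarial(x, generator, eps, p=np.inf):
    """z_i = x_i + G(x_i), with ||G(x_i)||_p <= eps enforced by projection."""
    delta = project_perturbation(generator(x), eps, p)
    return x + delta
```

Here `generator` stands in for $\mathcal{G}(\cdot; \vartheta_g)$; any callable returning a perturbation of the same shape as `x` works.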
Affinity-preserving Loss
Privacy-preserving Loss
$$\mathcal{J}_{p} = \alpha \widehat{\mathcal{D}}_{s} + \beta \mathcal{J}_{s} + \gamma \mathcal{J}_{e}$$
See the paper for the details; part of the rationale behind this loss is still unclear to me, but its main purpose is to reduce the discrepancy between the two domains.
Training process
Objective Function
The training here is adversarial: the generator is updated to enlarge the gap between the adversarial data and the original data, while the surrogate model is trained with the opposing objective of maintaining search performance.
$$\hat{\vartheta}_{g} = \arg\min_{\vartheta_{g}} \mathcal{J}_{o}\left(\vartheta_{g}, \vartheta_{f}\right) - \mathcal{J}_{p}\left(\vartheta_{g}, \vartheta_{f}\right),$$
$$\hat{\vartheta}_{f} = \arg\min_{\vartheta_{f}} \mathcal{J}_{o}\left(\vartheta_{g}, \vartheta_{f}\right) + \mathcal{J}_{p}\left(\vartheta_{g}, \vartheta_{f}\right).$$
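The alternating min-max updates can be mimicked with a toy scalar example. The losses `J_o` and `J_p` below are invented stand-ins (not the paper's losses); only the update structure, generator minimizing $\mathcal{J}_o - \mathcal{J}_p$ and surrogate minimizing $\mathcal{J}_o + \mathcal{J}_p$, matches the equations above.

```python
import numpy as np

def numeric_grad(fn, x, h=1e-5):
    """Central-difference gradient of a scalar function."""
    return (fn(x + h) - fn(x - h)) / (2 * h)

# Toy scalar losses standing in for J_o (task loss) and J_p (privacy loss).
def J_o(tg, tf):
    return (tg - tf) ** 2

def J_p(tg, tf):
    return np.tanh(tg * tf)

tg, tf, lr = 0.5, -0.3, 0.1  # generator / surrogate parameters, step size
for _ in range(100):
    # Generator step: minimize J_o - J_p (enlarge the gap).
    g = numeric_grad(lambda t: J_o(t, tf) - J_p(t, tf), tg)
    tg -= lr * g
    # Surrogate step: minimize J_o + J_p (maintain search performance).
    g = numeric_grad(lambda t: J_o(tg, t) + J_p(tg, t), tf)
    tf -= lr * g
```

In the actual method both parameter sets are network weights updated by backpropagation; the alternation pattern is the point of this sketch.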
The adversarial training procedure uses a Gradient Reversal Layer (GRL),
introduced in the paper "Unsupervised Domain Adaptation by Backpropagation", which addresses domain adaptation.
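A GRL is the identity in the forward pass and multiplies the incoming gradient by a negative factor in the backward pass, which lets a single optimizer realize the opposing objectives above. A minimal sketch of that semantics, written as a plain class with manual forward/backward methods (a real implementation would hook into an autograd framework):

```python
class GradReverse:
    """Gradient Reversal Layer: identity forward, gradient scaled by -lam backward."""

    def __init__(self, lam=1.0):
        self.lam = lam  # reversal strength, often annealed during training

    def forward(self, x):
        # Forward pass: pass the features through unchanged.
        return x

    def backward(self, grad_out):
        # Backward pass: flip the sign (and scale) of the gradient, so the
        # layers below are updated against the objective of the layers above.
        return -self.lam * grad_out
```

For example, with `lam=0.5`, a downstream gradient of `2.0` arrives below the layer as `-1.0`, turning gradient descent into ascent for everything beneath the GRL.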