Reptile: On First-Order Meta-Learning Algorithms

Paper: https://arxiv.org/pdf/1803.02999.pdf
Code: https://github.com/openai/supervised-reptile
Tips: a meta-learning paper from OpenAI, closely related to MAML.
(Reading notes)

1. Main idea

  • The goal is fast learning from few samples on a family of tasks drawn from a common distribution. This paper considers meta-learning problems, where there is a distribution of tasks, and we would like to obtain an agent that performs well (i.e., learns quickly) when presented with a previously unseen task sampled from this distribution.
  • Like first-order MAML, Reptile ignores the second-order partial derivatives, and the paper points out that it is even simpler to implement.
  • As a meta-learning method, Reptile trains much like conventional training. Reptile is so similar to joint training that it is especially surprising that it works as a meta-learning algorithm.
  • The paper gives a theoretical analysis of both first-order MAML and Reptile.

2. MAML recap

A brief recap of MAML and related work.
The goal is to solve the following, where $\tau$ is a task drawn from the task distribution, $\phi$ is the initial parameter vector, $L$ is the loss function, and $U_{\tau}^{k}$ denotes the operator that performs $k$ parameter-update steps on data sampled from task $\tau$:
$$\min_{\phi}\;\mathbb{E}_{\tau}\left[L_{\tau}\!\left(U_{\tau}^{k}(\phi)\right)\right]$$
Here $A$ denotes the training (support) samples of a task and $B$ the test (query) samples. MAML's inner-loop updates are still performed on the training samples $A$, but the loss being minimized is evaluated on $B$:
$$\min_{\phi}\;\mathbb{E}_{\tau}\left[L_{\tau,B}\!\left(U_{\tau,A}(\phi)\right)\right]$$
To obtain the gradient, differentiate with respect to $\phi$ via the chain rule (differentiation of a composite function):
$$g=\frac{\partial L_{\tau,B}(U_{\tau,A}(\phi))}{\partial \phi}
=L_{\tau,B}'\!\left(U_{\tau,A}(\phi)\right)\,U_{\tau,A}'(\phi)
=\frac{\partial L_{\tau,B}(U_{\tau,A}(\phi))}{\partial U_{\tau,A}(\phi)}\cdot\frac{\partial U_{\tau,A}(\phi)}{\partial \phi}$$
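For concreteness, with a single inner gradient step of size $\alpha$ we have $U_{\tau,A}(\phi)=\phi-\alpha\nabla L_{\tau,A}(\phi)$, so the second factor is a Jacobian containing the Hessian of the training loss; this is exactly where the second-order derivatives that first-order MAML drops come from:
$$U_{\tau,A}'(\phi)=I-\alpha\,\nabla^{2}L_{\tau,A}(\phi),\qquad
g=\left(I-\alpha\,\nabla^{2}L_{\tau,A}(\phi)\right)\nabla L_{\tau,B}\!\left(U_{\tau,A}(\phi)\right)$$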
Applying the identity approximation (the second factor, the Jacobian of the inner update, is replaced by the identity), first-order MAML becomes:
$$g=\frac{\partial L_{\tau,B}(U_{\tau,A}(\phi))}{\partial U_{\tau,A}(\phi)}$$
In other words, the outer-loop update direction is just the gradient of the loss on the test samples $B$, evaluated at the parameters obtained by adapting $\phi$ on the training samples $A$. A sketch of this update follows.
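As an illustration, here is a minimal FOMAML sketch in PyTorch under the same first-order approximation. The helpers `task.sample_batch_A()` and `task.sample_batch_B()` are hypothetical stand-ins for the train/test splits of a sampled task; this is a sketch of the update rule, not the paper's reference implementation.

```python
import copy
import torch

def fomaml_step(model, task, loss_fn, k=1, inner_lr=0.01, meta_lr=0.1):
    """One first-order MAML (FOMAML) outer step: adapt a copy of the model on
    the task's train split A, then apply the gradient of the B-loss at the
    adapted parameters directly to the initialization phi."""
    adapted = copy.deepcopy(model)                  # inner loop runs on a copy of phi
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _ in range(k):                              # U_{tau,A}: k SGD steps on split A
        x, y = task.sample_batch_A()                # hypothetical helper
        opt.zero_grad()
        loss_fn(adapted(x), y).backward()
        opt.step()
    x, y = task.sample_batch_B()                    # hypothetical helper: test split B
    adapted.zero_grad()
    loss_fn(adapted(x), y).backward()               # g = dL_B / d(adapted params)
    with torch.no_grad():                           # apply g to phi directly; the
        for p, q in zip(model.parameters(), adapted.parameters()):
            p.sub_(meta_lr * q.grad)                # Jacobian of U is treated as identity
```

Because the gradient of the $B$-loss is applied to $\phi$ without backpropagating through the inner-loop updates, no second-order derivatives are ever computed.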

3. Reptile

  • The algorithm proceeds as follows:
    [Figure: pseudocode of the Reptile algorithm]
    Note that within a single outer iteration, $\widetilde{\phi}$ is first taken through $k$ inner-loop steps, and only then is the direction of the meta-update $(\widetilde{\phi}-\phi)$ determined; see the sketch after this list.
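Below is a minimal sketch of one Reptile outer iteration in PyTorch. It assumes hypothetical helpers `sample_task()` (draws a task $\tau$ from the distribution) and `task.sample_batch()` (draws a minibatch from that task); the actual training loop lives in the official repo linked above.

```python
import copy
import torch

def reptile_step(model, sample_task, loss_fn, k=5, inner_lr=0.01, eps=0.1):
    """One Reptile outer iteration: run k SGD steps on a sampled task to get
    phi_tilde, then move the initialization phi toward phi_tilde."""
    task = sample_task()                            # sample tau from the distribution
    adapted = copy.deepcopy(model)                  # phi_tilde starts at phi
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _ in range(k):                              # phi_tilde = U_tau^k(phi)
        x, y = task.sample_batch()                  # hypothetical helper
        opt.zero_grad()
        loss_fn(adapted(x), y).backward()
        opt.step()
    with torch.no_grad():                           # outer update:
        for p, q in zip(model.parameters(), adapted.parameters()):
            p.add_(eps * (q - p))                   # phi <- phi + eps*(phi_tilde - phi)
```

The difference $\widetilde{\phi}-\phi$ plays the role of the meta-gradient, so `eps` corresponds to the outer-loop step size $\epsilon$ in the pseudocode.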
