Notes on "Meta-Learning in Neural Networks: A Survey"

Which of my questions does this survey answer?

  1. What is the motivation of meta-learning? Improving the generalization performance or the learning speed of the inner learning algorithm.

  2. How should the feed-forward model (FFM) be understood? Here ω is a hypernetwork that synthesises θ from the source dataset in a single feed-forward pass, $\theta = g_\omega(\mathcal{D}^{train})$ (see the sketch after this list).
  3. Main challenges and opportunities of meta-learning: diverse task distributions. In vanilla multi-task learning this phenomenon is relatively well studied, e.g., with methods that group tasks into clusters or subspaces, but it is only just beginning to be explored in meta-learning.

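As a rough illustration of the FFM idea, below is a minimal PyTorch sketch under my own assumptions (the class name `HyperNetFFM`, the mean-pooled set encoder, and the linear classifier head are not from the survey): the support set is pooled into a task embedding, and $g_\omega$ maps that embedding directly to the classifier weights $\theta$ in one forward pass, with no inner-loop optimisation.

```python
# Minimal sketch of a feed-forward (hypernetwork) meta-learner.
# Assumption: tasks are 5-way classification over pre-extracted 64-d features.
import torch
import torch.nn as nn

class HyperNetFFM(nn.Module):
    def __init__(self, feat_dim=64, n_classes=5):
        super().__init__()
        self.feat_dim, self.n_classes = feat_dim, n_classes
        # g_omega: maps a task embedding to the weights theta of a linear classifier
        self.g_omega = nn.Sequential(
            nn.Linear(feat_dim + n_classes, 128), nn.ReLU(),
            nn.Linear(128, n_classes * feat_dim + n_classes),  # -> weights + biases
        )

    def forward(self, support_x, support_y, query_x):
        # Encode the support set D_train: concatenate features with one-hot labels,
        # then mean-pool over examples (a permutation-invariant set embedding).
        one_hot = torch.nn.functional.one_hot(support_y, self.n_classes).float()
        task_emb = torch.cat([support_x, one_hot], dim=-1).mean(dim=0)
        # Synthesise theta = g_omega(D_train) in a single feed-forward pass.
        theta = self.g_omega(task_emb)
        W = theta[: self.n_classes * self.feat_dim].view(self.n_classes, self.feat_dim)
        b = theta[self.n_classes * self.feat_dim:]
        # Apply the synthesised classifier theta to the query set.
        return query_x @ W.t() + b

# Usage: a 5-way 1-shot episode with random features (illustrative only).
model = HyperNetFFM()
support_x, support_y = torch.randn(5, 64), torch.arange(5)
query_logits = model(support_x, support_y, torch.randn(10, 64))
```

The point of the sketch is the contrast with optimisation-based methods: adaptation to a new task is just one forward pass through $g_\omega$, so at meta-test time no gradient steps on $\theta$ are taken.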
My remaining question:

Did meta-learning give rise to few-shot learning, or did few-shot learning drive the rise of meta-learning?
