Continual Learning: "Automatic Recall Machines: Internal Replay, Continual Learning and the Brain" (arXiv, 2020-06)


Abstract

A replay-based method in which the auxiliary (replay) samples are generated on the fly. The starting point is to reduce memory overhead, and the authors also bring in neuroscience inspiration to strengthen the motivation.

Introduction

The intro contrasts humans and neural networks in their ability to learn from sequential or non-stationary data, then surveys replay-based methods. The goal of this work, Automatic Recall Machines, is to optimally exploit the implicit memory in the task's model for not forgetting, by using its parameters for both inference and generation. The core idea is to generate replay samples from the current task model's parameters alone: earlier generative-replay methods need a separate generative model that is hard to train, while directly storing samples for replay incurs a large memory cost. For each batch, the most dissonant related samples are generated conditioned on the current real samples, as sketched below.
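To make "most dissonant related samples" concrete, here is a minimal PyTorch sketch of my reading of that step: starting from the real batch, do gradient ascent on the inputs to maximize the divergence between the model before and after a virtual update. All function names, the sign-gradient ascent, and the hyperparameters (`lr`, `steps`, `step_size`) are my own assumptions, not the authors' exact procedure.

```python
import copy

import torch
import torch.nn.functional as F

def virtual_update(model, x, y, lr=0.01):
    """One hypothetical SGD step on the incoming real batch; used only to
    measure which inputs that update would disturb the most."""
    updated = copy.deepcopy(model)
    loss = F.cross_entropy(updated(x), y)
    loss.backward()
    with torch.no_grad():
        for p in updated.parameters():
            if p.grad is not None:
                p -= lr * p.grad
    return updated

def generate_dissonant_samples(model, updated, x_real, steps=10, step_size=0.1):
    """Gradient ascent on the inputs: starting from the current real batch,
    perturb x to maximize the KL divergence between the pre-update and
    post-(virtual-)update predictions -- the 'most dissonant related samples'."""
    x = x_real.clone().detach().requires_grad_(True)
    for _ in range(steps):
        log_p_old = F.log_softmax(model(x), dim=1)  # frozen, pre-update model
        p_new = F.softmax(updated(x), dim=1)        # virtually updated model
        dissonance = F.kl_div(log_p_old, p_new, reduction="batchmean")
        grad, = torch.autograd.grad(dissonance, x)  # gradient w.r.t. the inputs only
        x = (x + step_size * grad.sign()).detach().requires_grad_(True)
    return x.detach()
```

Note the generator here is the task model itself: no stored samples and no separate generative network, which is exactly the memory argument the abstract makes.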
The paper provides a formal explanation for why training with the most dissonant related samples is optimal for not forgetting, building on the intuition used for buffer selection in "Online Continual Learning with Maximal Interfered Retrieval" (NeurIPS 2019).
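For concreteness, that MIR-style selection criterion can be written out as follows (notation mine, not copied from either paper): after a virtual update on the incoming batch, the samples worth replaying are those whose loss the update would increase the most.

```latex
% Virtual update of the parameters on the incoming batch (x_t, y_t):
\theta' = \theta - \alpha \, \nabla_\theta \, \ell\big(f_\theta(x_t), y_t\big)

% Interference (dissonance) score of a candidate sample x:
% how much the virtual update would increase its loss.
s(x) = \ell\big(f_{\theta'}(x), y\big) - \ell\big(f_\theta(x), y\big),
\qquad
x^\star = \operatorname*{arg\,max}_x \; s(x)
```

MIR retrieves the maximizers of s(x) from a stored buffer; this paper's twist, per the summary above, is to synthesize such samples from the model itself instead of retrieving them.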

Method

The method section itself is very short, about one page.
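Since this note does not reproduce the method, here is a rough sketch of how one training step could be wired up, reusing the helpers from the sketch above. The distillation loss on the generated samples and all hyperparameters are my assumptions about the overall shape of the procedure, not the authors' exact objective.

```python
import copy

import torch
import torch.nn.functional as F

def arm_style_step(model, opt, x_real, y_real):
    """One hypothetical training step (reusing virtual_update and
    generate_dissonant_samples defined earlier):
    (1) snapshot the model and virtually update it on the real batch,
    (2) generate the most dissonant related samples from the model itself,
    (3) train on the real batch while distilling the snapshot's predictions
        on the generated samples so that old behavior is not forgotten."""
    snapshot = copy.deepcopy(model)                 # frozen pre-update knowledge
    updated = virtual_update(model, x_real, y_real)
    x_gen = generate_dissonant_samples(snapshot, updated, x_real)

    opt.zero_grad()
    loss = F.cross_entropy(model(x_real), y_real)   # learn the new batch
    with torch.no_grad():                           # targets for recalled samples
        soft_targets = F.softmax(snapshot(x_gen), dim=1)
    loss = loss + F.kl_div(F.log_softmax(model(x_gen), dim=1),
                           soft_targets, reduction="batchmean")
    loss.backward()
    opt.step()
    return loss.item()
```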

Conclusion

The conclusion mentions conditional replay.
Key points: the paper's writing is mediocre. The idea is somewhat similar to "Dreaming to Distill: Data-free Knowledge Transfer via Deep Inversion". I skimmed it once following the three-pass reading method. The most important insight is that training with the most dissonant related samples is optimal for not forgetting; the authors' replay then amounts to designing a procedure that can generate such samples from the current model alone.
