Concrete dropout

A variant of the dropout method based on Bayesian learning

Dropout is used as a practical tool to obtain uncertainty estimates in large vision models and reinforcement learning (RL) tasks. But to obtain well-calibrated uncertainty estimates, a grid search over the dropout probabilities is necessary: a prohibitive operation with large models, and an impossible one in RL.

We propose a new dropout variant which gives improved performance and better-calibrated uncertainties. Relying on recent developments in Bayesian deep learning, we use a continuous relaxation of dropout’s discrete masks.
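
As a sketch of what that relaxation looks like in practice (a minimal NumPy illustration; the function name, temperature, and eps values are my assumptions, not fixed by the paper), a uniform sample is pushed through a tempered sigmoid so the mask is near 0 with probability p and near 1 otherwise, while staying differentiable in p:

```python
import numpy as np

def concrete_dropout_mask(p, shape, temperature=0.1, eps=1e-7, rng=None):
    """Continuous (Concrete) relaxation of a Bernoulli dropout mask.

    With a low temperature the mask is ~0 (drop) with probability ~p
    and ~1 (keep) otherwise, but unlike a sampled Bernoulli mask it is
    differentiable with respect to p, so p can be learned by gradient
    descent alongside the weights.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=shape)  # uniform noise, one sample per unit
    logit = (np.log(p + eps) - np.log(1.0 - p + eps)
             + np.log(u + eps) - np.log(1.0 - u + eps))
    drop = 1.0 / (1.0 + np.exp(-logit / temperature))  # soft drop indicator
    return 1.0 - drop  # soft keep mask

# Applied like standard dropout, rescaling by the keep probability:
x = np.ones((4, 3))
p = 0.2
y = x * concrete_dropout_mask(p, x.shape) / (1.0 - p)
```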

Together with a principled optimisation objective, this allows for automatic tuning of the dropout probability in large models, and as a result faster experimentation cycles.
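
The per-layer penalty that makes this tuning possible comes from the KL term of the variational objective: an L2 term whose strength depends on p, plus the negative entropy of the Bernoulli drop distribution. A rough sketch under assumed scale constants (weight_reg and dropout_reg fold in the prior length-scale and dataset size; the values below are placeholders):

```python
import numpy as np

def concrete_dropout_regulariser(p, W, input_dim,
                                 weight_reg=1e-6, dropout_reg=1e-5):
    """Per-layer penalty added to the data loss (sketch).

    The entropy term pulls p toward 0.5, while the data-fit term of the
    full objective pulls it down as more data is observed; this tension
    is what tunes p automatically instead of by grid search.
    """
    l2_term = weight_reg * np.sum(W ** 2) / (1.0 - p)
    neg_entropy = p * np.log(p) + (1.0 - p) * np.log(1.0 - p)
    return l2_term + dropout_reg * input_dim * neg_entropy
```

The total training loss is then the data loss plus the sum of these terms over all layers, with one learnable p per layer.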

In RL this allows the agent to adapt its uncertainty dynamically as more data is observed.

We analyse the proposed variant extensively on a range of tasks, and give insights into common practice in the field, where larger dropout probabilities are often used in deeper model layers.
