Hung-yi Lee (李宏毅) Reinforcement Learning: Introductory Notes

Table of Contents

      • Concepts in Reinforcement Learning
      • Difficulties in RL
      • A3C Method Brief Introduction
      • Policy-based Approach - Learn an Actor (Policy Gradient Method)
          • 1. Decide Function of Actor Model (NN? ...)
          • 2. Decide Goodness of this Function
          • 3. Choose the best function
        • On-Policy v.s. Off-Policy
        • Importance Sampling (On-Policy → Off-Policy)
        • PPO Algorithm —— Proximal Policy Optimization
      • Value-based Approach - Learn a Critic
        • Q-learning
        • Double DQN
        • Other Advanced Structure of Q-Learning
      • A3C Method - Asynchronous Advantage Actor-Critic
        • Advantage Actor-Critic (A2C Method)
        • Asynchronous Advantage Actor-Critic (A3C Method)
      • Sparse Reward
        • Reward Shaping
        • Curriculum Learning
        • Hierarchical Reinforcement Learning
        • ICM —— Intrinsic Curiosity Module

Concepts in Reinforcement Learning

  1. The main goal of Reinforcement Learning is to maximize the Total Reward.
  2. The total reward is the sum of all rewards in one episode, so the model does not know which steps in the episode were good and which were bad.
  3. Only a few actions yield a positive reward (e.g., firing and destroying an enemy in a space-shooter game gives a positive reward, while moving gives none), so getting the model to discover these rewarding actions is very important.

Difficulties in RL

  1. Reward Delay
    • Only “Fire” obtains a reward, but moving before firing is also important (even though moving itself has no reward). How do we get the model to learn to move properly?
    • In a chess game, it may be better to sacrifice immediate reward to gain more long-term reward.
  2. The agent’s actions may affect the environment
    • How to explore the world (observations) as much as possible.
    • How to explore the combinations of actions as much as possible.

A3C Method Brief Introduction

The A3C method is the most popular model combining the policy-based and value-based methods; its structure is shown below. To learn the A3C model, we first need to understand the policy-based and value-based concepts. The details of A3C are given [here](#A3C Method - Asynchronous Advantage Actor-Critic).

[Figure: A3C combines a policy-based actor with a value-based critic]

Policy-based Approach - Learn an Actor (Policy Gradient Method)

This approach tries to learn a policy (also called an actor). It accepts an observation as input and outputs an action. The policy (actor) can be any model; if you use a neural network as your actor, you are doing Deep Reinforcement Learning.

$Input(Observation) \rightarrow Actor/Policy \rightarrow Output(Action)$

There are three steps to build DRL:

1. Decide Function of Actor Model (NN? …)

Here we use the NN as our Actor, so:

  • The input of this NN is the machine's observation, represented as a vector or a matrix (e.g., image pixels as a matrix).
  • The output of this NN is a probability distribution over actions. The most important point is that we should not always choose the action with the highest probability; the decision should be stochastic, sampled according to the probability distribution (see the sketch after the figure below).
  • The advantage of an NN over a Q-table is that in complex scenes we cannot enumerate all observations (for example, we cannot list all pixel combinations of a game), whereas a neural network guarantees that we always obtain an output even for observations that never appeared in the training data.
[Figure: a neural network actor maps the observation to action probabilities]
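As a minimal illustration of these three points (not part of the original notes), here is a sketch of such an NN actor in PyTorch; the layer sizes, `obs_dim`, and `n_actions` are arbitrary assumptions:

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Maps an observation vector to a probability distribution over actions."""
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Softmax turns raw scores into action probabilities.
        return torch.softmax(self.net(obs), dim=-1)

actor = Actor(obs_dim=4, n_actions=2)
probs = actor(torch.randn(4))                              # probabilities for a dummy observation
action = torch.distributions.Categorical(probs).sample()   # sample stochastically, not argmax
```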
2. Decide Goodness of this Function

Since we use a neural network as our function model, we need to decide what the goodness of this model is (a standard to judge the performance of the current model). We use $\overline{R(\theta)}$ to express this standard, where $\theta$ denotes the parameters of the current model.

  • Given an actor $\pi_\theta(s)$ with network parameters $\theta$, where $s$ is the observation (input).
  • Use the actor $\pi_\theta$ to play the video game until the game finishes.
  • Sum all rewards in this episode and denote it $R(\theta) = \sum_{t=1}^T r_t$.
    Note: $R(\theta)$ is a random variable, because even if we use the same actor $\pi_\theta$ to play the same game many times, we can get different values of $R(\theta)$ (randomness in the game and in action sampling). So we want to maximize $\overline{R(\theta)}$, the expectation of $R(\theta)$.
  • Use $\overline{R(\theta)}$ to express the goodness of $\pi_\theta$.

How to calculate $\overline{R(\theta)}$?

  • An episode is considered as a trajectory $\tau$

    • $\tau = \{s_1, a_1, r_1, s_2, a_2, r_2, ..., s_T, a_T, r_T\}$ → all the history in this episode
    • $R(\tau) = \sum_{t=1}^T r_t$
  • Different $\tau$ have different probabilities of appearing, and the probability of $\tau$ depends on the parameters $\theta$ of the actor $\pi_\theta$. So we define the probability of $\tau$ as $P(\tau|\theta)$:
    $\overline{R(\theta)} = \sum_\tau P(\tau|\theta) R(\tau)$

  • We use the actor $\pi_\theta$ to play the game N times and obtain the list $\{\tau^1, \tau^2, ..., \tau^N\}$. Each $\tau^n$ has a reward $R(\tau^n)$, and the mean of these $R(\tau^n)$ approximately equals the expectation $\overline{R(\theta)}$ (see the sketch below):
    $\overline{R(\theta)} \approx \frac{1}{N}\sum_{n=1}^N R(\tau^n)$
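A minimal sketch of this sampling-based estimate; `play_one_episode` is a hypothetical function (not defined in the notes) that runs the actor until the game ends and returns the episode's total reward $R(\tau)$:

```python
def estimate_expected_return(actor, play_one_episode, N=100):
    """Approximate the expectation of R(theta) by averaging N sampled episode returns."""
    returns = [play_one_episode(actor) for _ in range(N)]  # R(tau^1), ..., R(tau^N)
    return sum(returns) / N
```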

3. Choose the best function

Now we need to know how to find the best $\theta$; here we use the gradient ascent method.

  • Problem statement:
    $\theta^* = \arg\max_\theta \overline{R(\theta)}$, where $\overline{R(\theta)} = \sum_{\tau} P(\tau|\theta) R(\tau)$
  • Gradient ascent:
    • Start with $\theta^0$.
    • $\theta^1 = \theta^0 + \eta\nabla\overline{R(\theta^0)}$
    • $\theta^2 = \theta^1 + \eta\nabla\overline{R(\theta^1)}$
  • $\theta$ contains all the parameters of the current neural network, $\theta = \{w_1, w_2, w_3, ..., b_1, b_2, b_3, ...\}$, and $\nabla R(\theta) = \left[\frac{\partial R(\theta)}{\partial w_1}, \frac{\partial R(\theta)}{\partial w_2}, ..., \frac{\partial R(\theta)}{\partial b_1}, \frac{\partial R(\theta)}{\partial b_2}, ...\right]^\top$.

Now let's calculate the gradient of $\overline{R(\theta)} = \sum_{\tau} P(\tau|\theta) R(\tau)$. Since $R(\tau)$ does not depend on $\theta$, the gradient can be expressed as:

$\nabla\overline{R(\theta)} = \sum_\tau R(\tau)\nabla P(\tau|\theta) = \sum_\tau R(\tau) P(\tau|\theta) \frac{\nabla P(\tau|\theta)}{P(\tau|\theta)} = \sum_\tau R(\tau) P(\tau|\theta) \nabla\log P(\tau|\theta)$

Note: $\frac{d\log f(x)}{dx} = \frac{1}{f(x)}\frac{df(x)}{dx}$

Use the policy $\theta$ to play the game N times, obtaining $\{\tau^1, \tau^2, \tau^3, ...\}$; then
$\nabla\overline{R(\theta)} \approx \frac{1}{N}\sum_{n=1}^N R(\tau^n)\nabla\log P(\tau^n|\theta)$

How to calculate $\nabla\log P(\tau|\theta)$?

Since $\tau$ is the history of one episode, its probability factorizes as:
$P(\tau|\theta) = p(s_1)\prod_{t=1}^T p(a_t|s_t, \theta)\, p(r_t, s_{t+1}|s_t, a_t)$

Ignoring the terms that do not depend on $\theta$:

$\nabla\log P(\tau|\theta) = \sum_{t=1}^T \nabla\log P(a_t|s_t, \theta)$

So the final result for $\nabla\overline{R(\theta)}$ is:

$\nabla\overline{R(\theta)} = \frac{1}{N}\sum_{n=1}^N\sum_{t=1}^T R(\tau^n)\nabla\log P(a_t^n|s_t^n, \theta)$

The meaning of this equation is very clear:

  • if $R(\tau^n)$ is positive → tune $\theta$ to increase $P(a_t^n|s_t^n)$.
  • if $R(\tau^n)$ is negative → tune $\theta$ to decrease $P(a_t^n|s_t^n)$.

This method resolves the [Reward Delay Problem](#Difficulties in RL) from the Difficulties in RL chapter, because we use the cumulative reward of the entire episode $R(\tau^n)$, not just the immediate reward after taking one action.
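In practice this gradient is usually obtained by differentiating a surrogate loss rather than computing it by hand. A minimal sketch of one REINFORCE-style update, assuming the log-probabilities have been collected with the `Actor` sketched earlier (everything here is an illustrative assumption, not the notes' code):

```python
import torch

def reinforce_loss(log_probs, episode_return):
    """Surrogate loss whose gradient is -R(tau) * sum_t grad log P(a_t|s_t, theta).

    log_probs: list of log pi_theta(a_t|s_t) tensors collected during one episode
    episode_return: the total reward R(tau) of that episode (a plain float)
    """
    return -episode_return * torch.stack(log_probs).sum()

# One update over N sampled episodes (actor, optimizer and the collected data assumed to exist):
# loss = sum(reinforce_loss(lp, R) for lp, R in zip(all_log_probs, all_returns)) / N
# optimizer.zero_grad(); loss.backward(); optimizer.step()
```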

Add a Baseline - b

To avoid all $R(\tau^n)$ being positive (there should be some negative weights to tell the model not to take certain actions in certain states), we can subtract a baseline. The equation becomes:
$\nabla\overline{R(\theta)} = \frac{1}{N}\sum_{n=1}^N\sum_{t=1}^T (R(\tau^n) - b)\nabla\log P(a_t^n|s_t^n, \theta)$

Assign a Suitable Weight to Each Action

Using the total reward $R(\tau)$ to tune the probabilities of all actions in an episode also has a disadvantage, shown below:

[Figure: two example episodes where every action is weighted by the same total reward]

The left picture shows an episode whose total reward R is 5, so the probabilities of all actions in this episode are increased (weighted by 5). However, the positive reward mainly comes from $a_1$, while $a_2$ and $a_3$ give no positive reward, yet the probabilities of $a_2$ and $a_3$ are also increased in this example. Similarly, in the right picture $a_1$ is a bad action, but $a_2$ may not be, so the probability of $a_2$ should not be decreased.

[Figure: weighting each action by the rewards obtained after it]

To avoid this problem, we assign a different weight $R$ to each $a_t$: the discounted sum of the rewards obtained after $a_t$. The equation becomes:

$\nabla\overline{R(\theta)} = \frac{1}{N}\sum_{n=1}^N\sum_{t=1}^T \left(\sum_{t'=t}^T \gamma^{t'-t} r_{t'}^n - b\right)\nabla\log P(a_t^n|s_t^n, \theta)$

Note: $\gamma$ is called the discount factor, with $\gamma < 1$.

We can use $A^\theta(s_t, a_t)$ to denote the term $\left(\sum_{t'=t}^T \gamma^{t'-t} r_{t'}^n - b\right)$ in the equation above, which is called the Advantage Function. It evaluates how good it is to take $a_t$ in state $s_t$ rather than the other actions.
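A minimal sketch of computing these per-step weights $\sum_{t'=t}^T \gamma^{t'-t} r_{t'} - b$ from the rewards of one episode; passing the baseline in as a plain number is a simplifying assumption:

```python
def advantage_weights(rewards, gamma=0.99, baseline=0.0):
    """For each step t, return the discounted sum of rewards obtained from t onward, minus a baseline."""
    weights = [0.0] * len(rewards)
    running = 0.0
    # Walk backwards so each step reuses the discounted tail already computed.
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        weights[t] = running - baseline
    return weights

print(advantage_weights([0.0, 0.0, 1.0], gamma=0.9))  # [0.81, 0.9, 1.0]
```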

On-Policy v.s. Off-Policy

On-Policy and Off-Policy are two different modes of learning:

  • On-Policy: the agent learns by interacting with the environment itself (learns from its own experience).
  • Off-Policy: the agent learns by watching another policy interact with the environment (learns from others).

Our policy gradient method is an on-policy method, so why do we need the off-policy mode? Because we sample N times and use the mean value to approximate the expectation $\overline{R(\theta)} = \sum_\tau P(\tau|\theta) R(\tau)$. But once we update $\theta$, $P(\tau|\theta)$ changes, so we have to do the N samplings all over again, which costs a lot of time after every update of $\theta$. The solution is to build a model $\pi_\theta$ that accepts training data collected by another model $\pi_{\theta'}$: use $\pi_{\theta'}$ to collect data and train $\theta$ with it. Since $\theta'$ does not change, the sampled data can be reused.

Importance Sampling (On-Policy → Off-Policy)

Importance sampling is a method to estimate the expectation $E_{x\sim p}[f(x)]$ under a distribution $p(x)$ by sampling from another distribution $q(x)$. We already know that, with samples $\{x^i\}$ drawn from $p(x)$:

$E_{x\sim p}[f(x)] \approx \frac{1}{N}\sum_{i=1}^N f(x^i)$

But if we only have $\{x^i\}$ sampled from $q(x)$, how can we use these samples to estimate $E_{x\sim p}[f(x)]$? We can rewrite the expectation:

$E_{x\sim p}[f(x)] = \int p(x) f(x)\,dx = \int f(x)\frac{p(x)}{q(x)} q(x)\,dx = E_{x\sim q}\!\left[f(x)\frac{p(x)}{q(x)}\right]$

That means we can get the expectation under the distribution $p(x)$ by sampling $\{x^i\}$ from another distribution $q(x)$; we only need a correction, where $\frac{p(x)}{q(x)}$ is called the correction term (importance weight). Now we can regard our $\pi_\theta$ model as $p(x)$ and $\pi_{\theta'}$ as $q(x)$, and use $q(x)$ to sample data to tune $p(x)$.
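Before going back to policies, here is a small NumPy demo of this correction, estimating $E_{x\sim p}[f(x)]$ with samples drawn from a different distribution $q$; the two Gaussians and the choice $f(x) = x^2$ are arbitrary assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

f = lambda x: x ** 2
x = rng.normal(0.5, 1.5, size=100_000)                     # samples from q(x) = N(0.5, 1.5)
weights = gauss_pdf(x, 0.0, 1.0) / gauss_pdf(x, 0.5, 1.5)  # correction term p(x)/q(x), with p = N(0, 1)
print(np.mean(f(x) * weights))                             # close to E_{x~p}[x^2] = 1.0
```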

$\nabla\overline{R(\theta)} = E_{\tau\sim p_\theta(\tau)}[R(\tau)\nabla\log p_\theta(\tau)] = E_{\tau\sim p_{\theta'}(\tau)}\!\left[\frac{p_\theta(\tau)}{p_{\theta'}(\tau)} R(\tau)\nabla\log p_\theta(\tau)\right]$

Then we can use $\theta'$ to sample data and train $\theta$ with it many times; after many iterations, we update $\theta'$. Continuing to transform the equation:

$E_{(s_t, a_t)\sim\pi_\theta}[A^\theta(s_t, a_t)\nabla\log p_\theta(a_t^n|s_t^n)] = E_{(s_t, a_t)\sim\pi_{\theta'}}\!\left[\frac{P_\theta(s_t, a_t)}{P_{\theta'}(s_t, a_t)} A^{\theta'}(s_t, a_t)\nabla\log p_\theta(a_t^n|s_t^n)\right]$

Let $P_{\theta'}(s_t, a_t) = P_{\theta'}(a_t|s_t)P_{\theta'}(s_t)$ and $P_\theta(s_t, a_t) = P_\theta(a_t|s_t)P_\theta(s_t)$. We assume the environment observation $s$ does not depend on the actor $\theta$ (ignoring how actions change the environment), so $P_\theta(s_t) = P_{\theta'}(s_t)$ and the equation becomes:

$E_{(s_t, a_t)\sim\pi_{\theta'}}\!\left[\frac{P_\theta(a_t|s_t)}{P_{\theta'}(a_t|s_t)} A^{\theta'}(s_t, a_t)\nabla\log p_\theta(a_t^n|s_t^n)\right]$

Here we define:

$J^{\theta'}(\theta) = E_{(s_t, a_t)\sim\pi_{\theta'}}\!\left[\frac{P_\theta(a_t|s_t)}{P_{\theta'}(a_t|s_t)} A^{\theta'}(s_t, a_t)\right]$

Note: since we use $\theta'$ to sample data for $\theta$, the distribution of $\theta$ cannot be very different from that of $\theta'$. How do we measure the difference between the two distributions and constrain training when $\theta$ drifts away from $\theta'$? This is what the PPO algorithm addresses.

PPO Algorithm —— Proximal Policy Optimization

PPO is the solution to the question above: it avoids the problem caused by the difference between $\theta$ and $\theta'$. The objective function is:
$J_{PPO}^{\theta'}(\theta) = J^{\theta'}(\theta) - \beta\, KL(\theta, \theta')$
where $KL(\theta, \theta')$ is the KL divergence between the action distributions of policy $\theta$ and policy $\theta'$. The algorithm flow is:

  • Initialize the policy parameters $\theta$

  • In each iteration:

    • Use $\theta^k$ to interact with the environment and collect $\{s_t, a_t\}$ to compute $A^{\theta^k}(s_t, a_t)$
    • Update $J_{PPO}^{\theta^k}(\theta) = J^{\theta^k}(\theta) - \beta\, KL(\theta, \theta^k)$ several times (a sketch of this adaptive-KL update follows the list)
    • If $KL(\theta, \theta^k) > KL_{max}$, the new policy has drifted too far from $\theta^k$, so increase $\beta$ (strengthen the penalty)
    • If $KL(\theta, \theta^k) < KL_{min}$, the penalty is too strong, so decrease $\beta$
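A minimal sketch of this adaptive-KL variant for a discrete-action policy in PyTorch. The `policy`/`old_policy` networks are assumed to return action probabilities (as in the earlier actor sketch), `states`/`actions`/`advantages` are assumed to come from rollouts with $\theta^k$, and the thresholds and update counts are illustrative, not values from the notes:

```python
import torch

def ppo_kl_update(policy, old_policy, optimizer, states, actions, advantages,
                  beta=1.0, kl_min=0.003, kl_max=0.03, inner_steps=5):
    """One PPO iteration maximizing J^{theta^k}(theta) - beta * KL(theta, theta^k)."""
    with torch.no_grad():
        old_probs = old_policy(states)                            # pi_{theta^k}(.|s)
        old_p_a = old_probs.gather(1, actions.unsqueeze(1)).squeeze(1)

    for _ in range(inner_steps):                                  # reuse the same samples several times
        probs = policy(states)                                    # pi_theta(.|s)
        p_a = probs.gather(1, actions.unsqueeze(1)).squeeze(1)
        ratio = p_a / old_p_a                                     # importance weight P_theta / P_theta'
        kl = (old_probs * (old_probs.log() - probs.log())).sum(dim=1).mean()
        loss = -(ratio * advantages).mean() + beta * kl           # minimize the negative objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Adapt beta depending on how far theta drifted from theta^k.
    if kl.item() > kl_max:
        beta *= 2.0
    elif kl.item() < kl_min:
        beta *= 0.5
    return beta
```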

Value-based Approach - Learn a Critic

A critic does not choose actions (unlike an actor); it evaluates the performance of a given actor. An actor can then be derived from a critic.

Q-learning

Q-learning is a classical value-based method. It evaluates the value of an observation under an actor $\pi$; this function is called the state value function $V^\pi(s)$. The value is the total reward accumulated from the current observation until the end of the episode.

[Figure: the state value function $V^\pi(s)$ scores an observation under actor $\pi$]

How to estimate $V^\pi(s)$?

We need this total reward to express the performance of the current actor $\pi_\theta$, but how do we obtain it?

  • Monte-Carlo based approach

In the current state $S_a$ (observation), the cumulative reward until the end of the episode is $G_a$; in the current state $S_b$, it is $G_b$. In this way we can estimate the value of an observation $s_a$ under an actor $\pi_\theta$; a low value can be explained by two possibilities:

a) the current observation is bad: even a good actor cannot get a high value from it;

b) the actor performs badly.

In many cases we cannot enumerate all observations to compute all the returns $G_i$. The solution is to use a neural network to fit the mapping from observation to value $G$.

Fit the NN with pairs $(S_a, G_a)$, minimizing the difference between the NN output $V^\pi(S_a)$ and the Monte-Carlo return $G_a$.

  • Temporal-Difference approach

The MC approach works, but it requires the total reward, which is only available at the end of an episode. In some cases it takes a very long time to reach the end state; the Temporal-Difference approach addresses this problem.

[Figure: temporal-difference learning uses consecutive states $s_t$, $s_{t+1}$ and the reward $r_t$]

Given a trajectory $\{..., s_t, a_t, r_t, s_{t+1}, ...\}$, the following should hold:

$V^\pi(s_t) = V^\pi(s_{t+1}) + r_t$

So we can fit the NN by minimizing the difference between $V^\pi(s_t) - V^\pi(s_{t+1})$ and $r_t$.

Here is a practical tip: we are training a single model $V^\pi$, so the two outputs $V^\pi(s_t)$ and $V^\pi(s_{t+1})$ are generated from the same parameter group $\theta$. When we update $\theta$ after one iteration, both $V^\pi(s_t)$ and $V^\pi(s_{t+1})$ change in the next iteration, which makes training unstable.

The tip is: fix a parameter group $\theta'$ to generate $V^\pi(s_{t+1})$ and only update $\theta$ for $V^\pi(s_t)$. After N iterations, set $\theta'$ equal to $\theta$. The fixed-parameter network (on the right in the figure below) is called the Target Network.

[Figure: the target network with fixed parameters provides $V^\pi(s_{t+1})$]
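A minimal sketch of this TD update with a frozen target network in PyTorch; `value_net` and `target_net` are assumed to map a batch of states to one value each, and the sync-every-N-updates rule is the one described above (the discount is kept at 1 to match $V^\pi(s_t) = V^\pi(s_{t+1}) + r_t$):

```python
import torch
import torch.nn.functional as F

def td_update(value_net, target_net, optimizer, s_t, r_t, s_next, gamma=1.0):
    """Move V(s_t) toward r_t + gamma * V_target(s_{t+1}); the target network stays fixed."""
    with torch.no_grad():
        td_target = r_t + gamma * target_net(s_next)   # computed with the fixed parameters theta'
    loss = F.mse_loss(value_net(s_t), td_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Every N updates, copy the online parameters into the target network:
# target_net.load_state_dict(value_net.state_dict())
```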
  • MC v.s. TD

    • Monte-Carlo has larger variance. This comes from the randomness of $G_a$: since $G_a$ is the sum of all rewards $r_t$ and each $r_t$ is a random variable, their sum has a larger variance. Playing one game N times with the same policy, the set of returns $\{G_a, G_b, G_c, ...\}$ has a large variance.
    • Temporal-Difference also has a problem: $V^\pi(s_{t+1})$ may be estimated incorrectly (unlike the Monte-Carlo approach, it does not accumulate rewards until the end of the episode), so even if $r_t$ is correct, $V^\pi(s_t) - V^\pi(s_{t+1})$ may not be.

    In practice, people prefer the TD method.

  • Q-value approach → $Q^\pi(s, a)$

In the current state (observation), enumerate all valid actions and compute the Q-value of each action.

Note: in the current state we force the actor to take the specified action in order to evaluate that action's value, but in the following steps actions are chosen according to the actor $\pi_\theta$ until the end of the episode.

Use Q-value to learn an actor

We can learn an actor $\pi$ from the Q-value function; here is the algorithm flow:

[Figure: policy improvement loop, estimate $Q^\pi$, derive a better policy $\pi'$, repeat]
The question is: how do we guarantee that $\pi'$ is better than $\pi$?

If $\pi'$ is better than $\pi$, then:

$V^{\pi'}(s_i) \geqslant V^{\pi}(s_i), \qquad \forall s_i \in S$

We can use the equation below to derive $\pi'$ from $\pi$:
$\pi'(s) = \arg\max_a Q^\pi(s, a)$
Note: this approach is not suitable for continuous actions, only for discrete actions.

But if we always choose the best action according to $Q^\pi$, we may never discover other, possibly better actions. So we use exploration methods when choosing actions.

Epsilon Greedy

Set a probability $\varepsilon$: with probability $1-\varepsilon$ take the action with the maximum Q-value, otherwise take a random action, as shown below. Typically, $\varepsilon$ decreases as time goes by.
$$a = \begin{cases} \arg\max_a Q(s, a), & \text{with probability } 1-\varepsilon \\ \text{random action}, & \text{otherwise} \end{cases}$$
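A minimal sketch of $\varepsilon$-greedy selection over a vector of Q-values:

```python
import numpy as np

def epsilon_greedy(q_values, epsilon, rng=np.random.default_rng()):
    """With probability 1 - epsilon take the best action, otherwise a random one."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))   # explore
    return int(np.argmax(q_values))               # exploit
```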

Boltzmann Exploration

Since $Q^\pi$ is a neural network, we can turn its Q-values into a probability distribution over actions and sample the action from it, as shown below:
$P(a_i|s) = \frac{\exp(Q(s, a_i))}{\sum_a \exp(Q(s, a))}$
Q-values may be negative, so we apply the exponential function to make them positive.
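And the corresponding sketch of Boltzmann exploration; subtracting the maximum before the exponential is a standard numerical-stability trick, not something from the notes:

```python
import numpy as np

def boltzmann(q_values, rng=np.random.default_rng()):
    """Sample an action with probability proportional to exp(Q(s, a))."""
    z = np.exp(q_values - np.max(q_values))   # subtract the max for numerical stability
    probs = z / z.sum()
    return int(rng.choice(len(q_values), p=probs))

print(boltzmann(np.array([1.0, 2.0, -0.5])))
```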

Replay Buffer

A replay buffer stores a large amount of experience data. When training the Q-network, we randomly sample a batch from the buffer to fit it.

  • An experience is a tuple $\{s_t, a_t, r_t, s_{t+1}\}$.

  • The experiences in the buffer may come from different policies $\{\pi_{\theta_1}, \pi_{\theta_2}, \pi_{\theta_3}, ...\}$.

  • Drop the oldest experiences when the buffer is full.

[Figure: a replay buffer filled by interaction and sampled in batches]

Typical Q-Learning Algorithm

Here is the main algorithm flow of Q-learning (a code sketch follows the list):

  • Initialize the Q-function $Q$ and the target Q-function $\hat{Q} = Q$
  • In each episode:
    • For each step $t$:
      • Given state $s_t$, take an action $a_t$ based on $Q$ ($\varepsilon$-greedy exploration)
      • Obtain the reward $r_t$ and the next state $s_{t+1}$
      • Store the experience $\{s_t, a_t, r_t, s_{t+1}\}$ in the replay buffer
      • Sample a batch of experiences $\{(s_i, a_i, r_i, s_{i+1}), (s_j, a_j, r_j, s_{j+1}), ...\}$ from the buffer
      • Compute the target $y = r_i + \max_a \hat{Q}(s_{i+1}, a)$
      • Update the parameters of $Q$ to make $Q(s_i, a_i)$ close to $y$
      • Every N steps, set $\hat{Q} = Q$
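A condensed sketch of the inner update of this loop in PyTorch; the replay buffer is assumed to hold `(s, a, r, s_next)` tuples of tensors, `q_net`/`target_net` map a state batch to one Q-value per action, and the discount factor defaults to 1 to match the target written above:

```python
import random
import torch
import torch.nn.functional as F

def dqn_update(q_net, target_net, optimizer, replay_buffer, batch_size=32, gamma=1.0):
    """Sample a batch and move Q(s_i, a_i) toward y = r_i + max_a Q_hat(s_{i+1}, a)."""
    batch = random.sample(replay_buffer, batch_size)        # each item: (s, a, r, s_next)
    s = torch.stack([e[0] for e in batch])
    a = torch.tensor([e[1] for e in batch])
    r = torch.tensor([e[2] for e in batch], dtype=torch.float32)
    s_next = torch.stack([e[3] for e in batch])

    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)    # Q(s_i, a_i)
    with torch.no_grad():
        y = r + gamma * target_net(s_next).max(dim=1).values  # target computed with Q_hat
    loss = F.mse_loss(q_sa, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Every N environment steps: target_net.load_state_dict(q_net.state_dict())
```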

Double DQN

Double DQN is designed to solve a problem of DQN, shown below:

[Figure: estimated Q-values versus true cumulative rewards for DQN and Double DQN]

Q-values are consistently over-estimated during DQN training (the orange curve is the Q-value estimated by the DQN network, the blue curve is the estimate of the Double DQN network; the orange line is the real cumulative reward of DQN, the blue line is the real cumulative reward of Double DQN). Note that the blue lines are above the orange lines, which means Double DQN achieves a higher true value than DQN.

Why does DQN always over-estimate Q-values?

This is because when we compute the target $y = r_t + \max_a Q^\pi(s_{t+1}, a)$, we always choose the action with the highest Q-value. This tends to over-estimate the target, so the real cumulative reward is usually lower than the target value. Since the Q-function is trained to approach this target, the output of the Q-network ends up higher than the actual cumulative reward.
$Q(s_t, a_t) \qquad \Longleftrightarrow \qquad r_t + \max_a Q(s_{t+1}, a)$
Double DQN resolution

To avoid this problem, Double DQN uses two Q-networks during training: one is in charge of choosing the best action and the other estimates its Q-value.
$Q(s_t, a_t) \qquad \Longleftrightarrow \qquad r_t + Q'(s_{t+1}, \arg\max_a Q(s_{t+1}, a))$
Here $Q$ selects the best action in each state, but $Q'$ estimates the Q-value of that action. This method has two advantages:

  • If $Q$ over-estimates the Q-value of action $a$, then even if this action is selected, its final Q-value will not be over-estimated (because $Q'$ estimates the Q-value of this action).
  • If $Q'$ over-estimates some action $a$, it is also safe, because $Q$ will not select action $a$ (it is not the best action according to $Q$).

In the DQN algorithm we already have two networks: the original network $\theta$ and the target network $\theta'$ (which is kept fixed). So here we use the original network $\theta$ to select the action and the target network $\theta'$ to estimate its Q-value.
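A minimal sketch of how the Double DQN target differs from the plain DQN target, reusing the online/target networks assumed above:

```python
import torch

def double_dqn_target(q_net, target_net, r, s_next, gamma=1.0):
    """r + Q'(s_{t+1}, argmax_a Q(s_{t+1}, a)): the online net picks the action, the target net scores it."""
    with torch.no_grad():
        best_a = q_net(s_next).argmax(dim=1, keepdim=True)                    # selection by Q
        return r + gamma * target_net(s_next).gather(1, best_a).squeeze(1)    # evaluation by Q'
```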


Other Advanced Structure of Q-Learning

  • Dueling DQN

Split the output into two parts: $Q^\pi(s_t, a_t) = V(s_t) + A(s_t, a_t)$, i.e. the final Q-value is the sum of a state value and an action advantage (see the sketch after this list).

  • Prioritized Replay

When sampling a batch of experiences from the replay buffer, we do not sample uniformly at random. Prioritized replay marks the experiences that produced a high loss in the previous iteration and increases their probability of being selected in the next batch.

  • Multi-Step

Change the experience format in the replay buffer: instead of storing a single step $\{s_t, a_t, r_t, s_{t+1}\}$, store N steps $\{s_t, a_t, r_t, s_{t+1}, ..., s_{t+N}, a_{t+N}, r_{t+N}, s_{t+N+1}\}$.

  • Noise Net

This method is used to explore more actions: add some noise to the parameters of the current network $Q$ at the beginning of each episode.
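As a concrete example of the Dueling DQN output mentioned above, here is a minimal sketch of a dueling head in PyTorch; subtracting the mean advantage is a common identifiability trick that goes beyond what the notes state:

```python
import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    """Q(s, a) = V(s) + A(s, a), computed from a shared feature vector."""
    def __init__(self, feat_dim: int, n_actions: int):
        super().__init__()
        self.value = nn.Linear(feat_dim, 1)                # state value V(s)
        self.advantage = nn.Linear(feat_dim, n_actions)    # action advantage A(s, a)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        v = self.value(features)                           # [B, 1]
        a = self.advantage(features)                       # [B, n_actions]
        # Subtracting the mean advantage keeps V and A identifiable (extra assumption, not in the notes).
        return v + a - a.mean(dim=1, keepdim=True)
```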

Here is the comparison of different algorithms:

[Figure: performance comparison of the Q-learning variants above]

A3C Method - Asynchronous Advantage Actor-Critic

Why do we need the A3C method? It is designed to solve the variance problem of policy gradient. In the policy gradient method, even if we are in the same state $s_t$ and take the same action $a_t$ N times, we may get very different total rewards $G$. This is because of the randomness in the cumulative reward of the equation below:
$\nabla\overline{R(\theta)} = \frac{1}{N}\sum_{n=1}^N\sum_{t=1}^T \left(\sum_{t'=t}^T \gamma^{t'-t} r_{t'}^n - b\right)\nabla\log P(a_t^n|s_t^n, \theta)$
The term $\left(\sum_{t'=t}^T \gamma^{t'-t} r_{t'}^n - b\right)$ can vary a lot, because each $r_t$ is a random variable with large variance; the result may look like this:

[Figure: the sampled cumulative reward varies widely across rollouts]

Only if we sample enough times to cover all possible rewards would the estimate be stable, but that is hard to do. If we could replace $\sum_{t'=t}^T \gamma^{t'-t} r_{t'}^n$ with its expectation, the problem would be solved.

Advantage Actor-Critic (A2C Method)

We have already introduced value-based methods. The definition of $Q^{\pi_\theta}(s_t, a_t)$ is the expected total reward of taking action $a_t$ in the current state $s_t$. The definition of $V^{\pi_\theta}(s_t)$ is the expected reward of the current state $s_t$ (just the state value, without specifying which action to take). Now we change the equation:
$\nabla\overline{R(\theta)} = \frac{1}{N}\sum_{n=1}^N\sum_{t=1}^T \left(Q^{\pi_\theta}(s_t^n, a_t^n) - V^{\pi_\theta}(s_t^n)\right)\nabla\log P(a_t^n|s_t^n, \theta)$
Note: here the state value replaces the baseline $b$.

We can express $Q^{\pi_\theta}(s_t, a_t)$ in terms of $V^{\pi_\theta}(s_t)$:
$Q^\pi(s_t, a_t) = E[r_t + V^\pi(s_{t+1})] \quad \rightarrow \quad r_t + V^\pi(s_{t+1})$
Strictly, we should keep the expectation because $r_t$ is a random variable, but it is hard to compute, so we drop it. The equation becomes:
$\nabla\overline{R(\theta)} = \frac{1}{N}\sum_{n=1}^N\sum_{t=1}^T \left(r_t^n + V^{\pi_\theta}(s_{t+1}^n) - V^{\pi_\theta}(s_t^n)\right)\nabla\log P_\theta(a_t^n|s_t^n)$
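A minimal sketch of this advantage-weighted update for one batch of transitions; `actor` returns action probabilities and `critic` returns one value per state (assumptions carried over from the earlier sketches), and the discount defaults to 1 to match the equation above:

```python
import torch
import torch.nn.functional as F

def a2c_loss(actor, critic, s, a, r, s_next, gamma=1.0):
    """Actor loss weighted by r_t + V(s_{t+1}) - V(s_t), plus a TD loss for the critic."""
    v_s = critic(s).squeeze(-1)
    with torch.no_grad():
        v_next = critic(s_next).squeeze(-1)
    advantage = r + gamma * v_next - v_s                        # r_t + V(s_{t+1}) - V(s_t)

    log_p = torch.log(actor(s).gather(1, a.unsqueeze(1)).squeeze(1))
    actor_loss = -(advantage.detach() * log_p).mean()           # policy-gradient term
    critic_loss = F.mse_loss(v_s, r + gamma * v_next)           # fit V toward the TD target
    return actor_loss + critic_loss
```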
The algorithm flow of the Advantage Actor-Critic method is shown below:

[Figure: Advantage Actor-Critic training loop, the critic provides the advantage used to update the actor]

Tips

  1. There are two networks to train in this algorithm: the actor $\pi_\theta$ and the critic $V^\pi(s)$. But the two networks take the same input $s$ and differ only in the output: a scalar $V(s)$ for the critic network and a probability distribution $P(a|s)$ for the actor network. So the two networks can share some of their front layers, as in the figure and the sketch below:
[Figure: actor and critic sharing their front layers]
  2. Use the entropy of the actor's output $\pi(s)$ as a regularizer; this keeps the action probabilities more even so the model does more exploration.
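A minimal sketch of such a shared network with two heads; the sizes are arbitrary assumptions:

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """Actor and critic sharing the front layers; only the output heads differ."""
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.policy_head = nn.Linear(hidden, n_actions)   # -> probability distribution P(a|s)
        self.value_head = nn.Linear(hidden, 1)            # -> scalar V(s)

    def forward(self, s: torch.Tensor):
        h = self.shared(s)
        return torch.softmax(self.policy_head(h), dim=-1), self.value_head(h)
```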

Asynchronous Advantage Actor-Critic (A3C Method)

A3C is designed to speed up A2C. It maintains one global network and creates N workers; each worker interacts with its own environment, computes gradients, and updates the global network.

[Figure: A3C global network with parallel workers]
  • Copy the global parameters $\theta_1$
  • Sample some data
  • Compute the gradients
  • Update the global model

Note: all workers run in parallel, which means that by the time $\nabla\theta_1$ is computed and sent back to the global model, $\theta$ may already have changed (updated by another worker, so it may no longer be $\theta_1$). We still use $\nabla\theta_1$ to update the current parameters: $\theta \rightarrow \theta + \eta\nabla\theta_1$.

Sparse Reward

In reinforcement learning, reward is what tells the agent which actions are good. But only a few actions obtain a positive reward (e.g., only firing and destroying an enemy yields a positive reward in the spaceship game); most actions yield no reward at all (e.g., moving left or right). This phenomenon is called Sparse Reward.

Reward Shaping

Typically only a few states yield a positive reward during training, so we can create extra rewards to guide the agent toward useful actions in the current state. For example, if we want to train a plane agent to destroy the enemy plane, the real reward comes from “fire and destroy the enemy”. But at the start of the game our plane does not know how to find the enemy, so we can add an extra positive reward whenever our plane flies toward the enemy plane.

Note: this method requires domain knowledge to decide which behaviors deserve a positive reward and how much reward to assign.

Curriculum Learning

Typically, a hard task can be split into many simple tasks. The Curriculum Learning algorithm starts from simple training examples and then makes them harder and harder.

A common technique is Reverse Curriculum Learning, explained below:

  • Given a goal state $s_g$.
  • Sample some states $\{s_1, s_2, ...\}$ close to $s_g$.
  • Starting from each state, compute the reward of reaching the goal state $s_g$: $\{R(s_1), R(s_2), ...\}$.
  • Delete the states whose reward is too large (reaching the goal from them is too easy) or too small (too difficult).
  • Sample more states near the remaining $\{s_1, s_2, ...\}$ and repeat.
[Figure: reverse curriculum learning, sampling start states progressively farther from the goal]

Hierarchical Reinforcement Learning

An entire model can be split into different hierarchies: the top-level model only gives high-level orders, while the low-level model chooses the actual actions. For example, to train a plane agent, the top-level model only gives the next waypoint, while the low-level model controls the plane to fly toward that target (turn left or turn right).

[Figure: hierarchical RL, a high-level policy sets sub-goals for a low-level policy]

Here is a game example: the blue point is the agent, which is asked to reach the yellow point. The pink point is a temporary target given by the high-level model, and the low-level model follows this instruction and controls the agent to reach the pink point.

ICM —— Intrinsic Curiosity Module

The ICM algorithm encourages the model to explore more. It adds an extra reward function $r^i_t$ that takes three parameters $(s_t, a_t, s_{t+1})$. The network then has to maximize the sum $\sum_{t=1}^N (r^i_t + r_t)$.

[Figure: ICM adds an intrinsic reward $r^i_t$ on top of the environment reward $r_t$]

Now let's see how ICM computes the reward $r^i_t$ at each step $t$:

[Figure: internal structure of the ICM module]

There are two networks in the ICM module (a sketch follows the list):

  • Network 1 predicts the next state $s_{t+1}$ after taking action $a_t$; if $s_{t+1}$ is hard to predict, the intrinsic reward $r^i$ is high.
  • Network 2: many features of the state are unrelated to the actions (e.g., the position of the sun in the spaceship game). If we simply maximized the reward for states that are hard to predict, the agent would just stand still and watch the sun move. So we need to extract the features that matter for choosing actions; this is the job of Network 2, which predicts the action $a_t$ from $s_t$ and $s_{t+1}$.
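A minimal sketch of these two networks, with the intrinsic reward taken as the forward model's prediction error in feature space; the feature extractor and all sizes are illustrative assumptions, not the notes' design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICM(nn.Module):
    """Forward model (Network 1) predicts the next features; inverse model (Network 2) predicts the action."""
    def __init__(self, obs_dim: int, n_actions: int, feat_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, feat_dim), nn.ReLU())  # keeps action-relevant features
        self.forward_model = nn.Linear(feat_dim + n_actions, feat_dim)         # Network 1
        self.inverse_model = nn.Linear(2 * feat_dim, n_actions)                # Network 2
        self.n_actions = n_actions

    def forward(self, s_t, a_t, s_next):
        phi_t, phi_next = self.encoder(s_t), self.encoder(s_next)
        a_onehot = F.one_hot(a_t, self.n_actions).float()
        phi_pred = self.forward_model(torch.cat([phi_t, a_onehot], dim=1))
        # Intrinsic reward: the harder s_{t+1} is to predict, the larger r^i_t.
        r_intrinsic = 0.5 * (phi_pred - phi_next.detach()).pow(2).sum(dim=1)
        # Inverse model: predicting a_t from (s_t, s_{t+1}) forces the encoder to keep action-relevant features.
        inverse_loss = F.cross_entropy(self.inverse_model(torch.cat([phi_t, phi_next], dim=1)), a_t)
        return r_intrinsic, inverse_loss
```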
