Policy Gradient Explained (with Code)

1 Introduction

Policy gradient is a family of reinforcement learning methods that works directly with a stochastic policy. At each step the agent observes the current state through its interaction with the environment and outputs a probability for every available action; the next action is then sampled from this distribution, so every action can be selected, only with different probabilities. The agent learns this state-conditioned action distribution directly: in practice the policy is represented by a neural network that, given a state, outputs a distribution over actions. The learning algorithm then optimizes the policy itself so that it obtains the maximum reward.

2 How Policy Gradient Works

Consider a stochastic parameterized policy $\pi_\theta$. The main objective of reinforcement learning is to maximize the expected return
$$J(\pi_\theta)=\mathbb{E}_{\tau \sim \pi_\theta}[R(\tau)]$$
where $\tau=(s_0,a_0,s_1,a_1,\cdots,s_{T+1})$ is a trajectory, and $s_i$ and $a_i$ denote the state and action at step $i$. $R(\tau)=\sum_{t=0}^T r_t$ is the return accumulated over the $T$ steps, with $r_t$ the reward received at step $t$. Optimizing the policy by gradient ascent gives
$$\theta_{k+1}=\theta_{k}+\alpha \nabla_\theta J(\pi_\theta)\big|_{\theta_k}$$
where $\nabla_\theta J(\pi_\theta)$ is the policy gradient. Its explicit form is derived as follows. Given the policy $\pi_\theta$, the probability of a trajectory $\tau$ is
$$P(\tau|\theta)=\rho_0(s_0)\prod_{t=0}^T P(s_{t+1}|s_t,a_t)\,\pi_\theta(a_t|s_t)$$
where $\rho_0(\cdot)$ is the initial-state distribution. By the log-derivative trick (an application of the chain rule),
$$\nabla_\theta P(\tau|\theta)=P(\tau|\theta)\,\nabla_\theta \log P(\tau|\theta)$$
and the log-probability of a trajectory is
$$\log P(\tau|\theta)=\log \rho_0 (s_0)+\sum_{t=0}^T\left(\log P(s_{t+1}|s_t,a_t)+\log \pi_\theta(a_t|s_t)\right).$$
Since $\rho_0(s_0)$ and $P(s_{t+1}|s_t,a_t)$ do not depend on the policy parameters $\theta$, their gradients are $0$, so the gradient of the trajectory's log-probability reduces to
$$\begin{aligned}\nabla_\theta \log P(\tau|\theta)&=\nabla_\theta \log \rho_0(s_0)+\sum_{t=0}^T\left(\nabla_\theta \log P(s_{t+1}|s_t,a_t)+\nabla_\theta \log \pi_\theta(a_t|s_t)\right)\\&=\sum_{t=0}^T\nabla_\theta \log \pi_\theta(a_t|s_t).\end{aligned}$$
Putting everything together,
$$\begin{aligned}\nabla_\theta J(\pi_\theta)&=\nabla_\theta \mathbb{E}_{\tau\sim \pi_\theta}[R(\tau)]\\&=\nabla_\theta \int_\tau P(\tau|\theta)R(\tau)\\&=\int_\tau \nabla_\theta P(\tau|\theta)R(\tau)\\&=\int_\tau P(\tau|\theta)\,\nabla_\theta \log P(\tau|\theta)R(\tau)\\&=\mathbb{E}_{\tau\sim \pi_\theta}[\nabla_\theta \log P(\tau|\theta)R(\tau)]\\&=\mathbb{E}_{\tau\sim \pi_\theta}\left[\sum_{t=0}^T\nabla_\theta \log \pi_\theta(a_t|s_t)R(\tau)\right].\end{aligned}$$
The last expression is an expectation, so it can be estimated by a Monte Carlo sample mean. Given a set of trajectories $\mathcal{D}=\{\tau_i\}_{i=1,\cdots,N}$, each collected by letting the agent interact with the environment under the policy $\pi_\theta$, the estimated policy gradient is
$$\hat{g}=\frac{1}{|\mathcal{D}|}\sum_{\tau\in\mathcal{D}}\sum_{t=0}^T \nabla_\theta \log \pi_\theta(a_t|s_t)R(\tau)$$
where $|\mathcal{D}|$ is the number of trajectories in $\mathcal{D}$.
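
In code, this estimator is usually implemented by building a surrogate loss whose gradient equals $\hat{g}$ and letting automatic differentiation do the rest. The following is a minimal PyTorch sketch of that idea (it is not part of the original post); the names naive_pg_loss, log_probs_per_traj, and total_returns are illustrative assumptions.

import torch


def naive_pg_loss(log_probs_per_traj, total_returns):
    """Surrogate loss for the naive estimator g_hat.

    log_probs_per_traj: list of 1-D tensors, one per trajectory, holding
        log pi_theta(a_t|s_t) for every step of that trajectory.
    total_returns: list of floats, the total return R(tau) of each trajectory.
    Minimizing this loss with gradient descent performs gradient ascent on J.
    """
    losses = []
    for log_probs, R in zip(log_probs_per_traj, total_returns):
        # every step of the trajectory is weighted by the same total return R(tau)
        losses.append(-log_probs.sum() * R)
    return torch.stack(losses).mean()  # average over the |D| trajectories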

3 The EGLP Lemma

A useful intermediate result that falls out of the policy-gradient derivation is the Expected Grad-Log-Prob (EGLP) lemma.

EGLP lemma: Let $P_\theta$ be a parameterized probability distribution over a random variable $x$. Then
$$\mathbb{E}_{x\sim P_\theta}[\nabla_\theta \log P_\theta(x)]=0.$$

Proof: Since $P_\theta$ is a probability distribution,
$$\int_x P_\theta(x)=1.$$
Taking the gradient of both sides gives
$$\nabla_\theta \int_x P_\theta(x)=\nabla_\theta 1 = 0.$$
Applying the log-derivative trick,
$$\begin{aligned}0&=\nabla_\theta \int_x P_\theta (x)\\&=\int_x \nabla_\theta P_\theta(x)\\&=\int_x P_\theta(x)\, \nabla_\theta\log P_\theta(x),\end{aligned}$$
and therefore
$$\mathbb{E}_{x\sim P_\theta}[\nabla_\theta\log P_\theta(x)]=0.$$
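
As a quick numerical sanity check (not part of the original post), the lemma can be verified for a categorical distribution whose logits play the role of $\theta$:

import torch
from torch.distributions import Categorical

torch.manual_seed(0)
logits = torch.randn(4, requires_grad=True)   # the parameters theta of P_theta
dist = Categorical(logits=logits)

# Monte Carlo estimate of E_{x ~ P_theta}[grad_theta log P_theta(x)]
samples = dist.sample((100000,))
mean_log_prob = dist.log_prob(samples).mean()
grad_estimate = torch.autograd.grad(mean_log_prob, logits)[0]
print(grad_estimate)   # every entry should be close to 0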

4 An Improved Policy Gradient

The policy gradient derived above is
$$\nabla_\theta J(\pi_\theta)=\mathbb{E}_{\tau\sim \pi_\theta}\left[\sum_{t=0}^T\nabla_\theta \log \pi_\theta(a_t|s_t)R(\tau)\right].$$
In this expression, every action along a trajectory is weighted by the same fixed value, namely the total return $R(\tau)$, which runs against intuition. An agent should reinforce an action only according to the consequences that follow it; rewards obtained before the action was taken are irrelevant. This leads to the improved ("reward-to-go") form of the policy gradient:
$$\nabla_\theta J(\pi_\theta)=\mathbb{E}_{\tau\sim \pi_\theta}\left[\sum_{t=0}^T\nabla_\theta \log \pi_\theta(a_t|s_t)\sum_{t'=t}^T R(s_{t'},a_{t'},s_{t'+1})\right].$$
Here an action is reinforced only by the rewards collected after it is taken, where
$$\hat{R}_t=\sum_{t'=t}^T R(s_{t'},a_{t'},s_{t'+1})$$
is the reward-to-go from step $t$ onward.
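
A minimal sketch of computing the reward-to-go $\hat{R}_t$ for one episode is shown below (the helper name rewards_to_go is an assumption; with gamma=1.0 it matches the undiscounted sum above, while the code in Section 6 additionally applies a discount factor):

def rewards_to_go(rewards, gamma=1.0):
    """Return [R_hat_0, ..., R_hat_T] for a list of per-step rewards."""
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# example: rewards_to_go([1, 0, 2]) == [3.0, 2.0, 2.0]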

By the EGLP lemma, for any function $b(\cdot)$ that depends only on the state,
$$\mathbb{E}_{a_t \sim\pi_\theta}[\nabla_\theta \log \pi_\theta (a_t|s_t)\,b(s_t)]=0.$$
Such a function can therefore be added to or subtracted from the improved policy gradient without changing its expectation:
$$\nabla_\theta J(\pi_\theta)=\mathbb{E}_{\tau\sim \pi_\theta}\left[\sum_{t=0}^T\nabla_\theta \log \pi_\theta(a_t|s_t)\left(\sum_{t'=t}^T R(s_{t'},a_{t'},s_{t'+1})-b(s_t)\right)\right].$$
Here $b(s)$ is called the baseline. A common choice of baseline is the state-value function $V^\pi(s)$.
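
Below is a hedged sketch of what subtracting a learned state-value baseline could look like; value_net, states, log_probs, and rewards_to_go are illustrative assumptions and not part of the implementation in Section 6.

import torch
import torch.nn.functional as F


def pg_loss_with_baseline(value_net, states, log_probs, rewards_to_go):
    """Policy loss with a state-value baseline, plus a regression loss for the baseline.

    value_net: any module mapping a batch of states to scalar value estimates V(s).
    states, log_probs, rewards_to_go: tensors with one entry per sampled step.
    """
    baselines = value_net(states).squeeze(-1)          # b(s_t) = V(s_t)
    # detach so the baseline is not updated through the policy loss
    advantages = rewards_to_go - baselines.detach()
    policy_loss = -(log_probs * advantages).sum()
    # the value network is regressed onto the reward-to-go targets
    value_loss = F.mse_loss(baselines, rewards_to_go)
    return policy_loss, value_loss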

5 Other Forms of the Policy Gradient

The general form of the policy gradient is
$$\nabla_\theta J(\pi_\theta)=\mathbb{E}_{\tau \sim \pi_\theta}\left[\sum_{t=0}^T\nabla_\theta \log \pi_\theta (a_t|s_t)\,\Phi_t\right].$$
Different choices of the weight $\Phi_t$ give the following variants (a short sketch of how $\Phi_t$ enters the implementation follows the list):

  • Total return of the trajectory $\tau$: $\Phi_t=R(\tau)$
  • Reward-to-go of the trajectory $\tau$ after step $t$: $\Phi_t=\sum_{t'=t}^T R(s_{t'},a_{t'},s_{t'+1})$
  • Reward-to-go minus a state-value baseline: $\Phi_t=\sum_{t'=t}^T R(s_{t'},a_{t'},s_{t'+1})-V^\pi(s_t)$, where $V^\pi(s)=\mathbb{E}_{\tau\sim \pi}[R(\tau)\mid s_0=s]$
  • Action-value function: $\Phi_t=Q^\pi(s_t,a_t)=\mathbb{E}_{\tau\sim \pi}[R(\tau)\mid s_0=s_t,a_0=a_t]$
  • Advantage function: $\Phi_t=A^\pi (s_t,a_t)=Q^\pi(s_t,a_t)-V^\pi(s_t)$
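
Whichever variant is chosen, only the per-step weight multiplying $\nabla_\theta \log \pi_\theta(a_t|s_t)$ changes; the rest of the implementation stays the same. Below is a minimal, hedged sketch of this idea (the tensors log_probs and phi are illustrative assumptions, not part of the original code):

def pg_surrogate_loss(log_probs, phi):
    """Generic surrogate loss whose gradient is the policy gradient.

    log_probs: 1-D tensor of log pi_theta(a_t|s_t) for the sampled steps.
    phi: 1-D tensor of the chosen weights Phi_t (total return, reward-to-go,
         Q-values, or advantages), treated as constants with no gradient.
    """
    return -(log_probs * phi.detach()).sum()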

6 Code

A PyTorch implementation of policy gradient is given below. It implements the reward-to-go form
$$\nabla_\theta J(\pi_\theta)=\mathbb{E}_{\tau\sim \pi_\theta}\left[\sum_{t=0}^T\nabla_\theta \log \pi_\theta(a_t|s_t)\sum_{t'=t}^T R(s_{t'},a_{t'},s_{t'+1})\right]$$
with two practical additions: the reward-to-go is discounted by a factor $\gamma$, and the returns are normalized before being used as weights. The implementation lives in the file RL_template.py.

import numpy as np  
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.distributions import Categorical


class Policy(nn.Module):
    """Policy network: maps a state to a probability distribution over actions."""

    def __init__(self, in_features=4, hid_features=128, out_features=2, dropout_p=0.6):
        super(Policy, self).__init__()
        self.fc1 = nn.Linear(in_features, hid_features)
        self.dropout = nn.Dropout(p=dropout_p)
        self.fc2 = nn.Linear(hid_features, out_features)

    def forward(self, x):
        x = self.fc1(x)
        x = self.dropout(x)
        x = F.relu(x)
        action_scores = self.fc2(x)
        # softmax over the action dimension gives pi_theta(a|s)
        return F.softmax(action_scores, dim=1)


class PolicyGradient(object):
    """REINFORCE with discounted, normalized reward-to-go weights."""

    def __init__(self, policy_net, learning_rate=0.01, reward_decay=0.95):
        self.policy_net = policy_net
        self.lr = learning_rate
        self.gamma = reward_decay
        self.optimizer = optim.Adam(self.policy_net.parameters(), lr=self.lr)

        # per-episode buffers: states, actions, rewards, log-probabilities
        self.ep_ss = []
        self.ep_as = []
        self.ep_rs = []
        self.ep_log_pros = []

    def choose_action(self, state):
        # sample an action from the categorical distribution pi_theta(.|state)
        state = torch.from_numpy(state).float().unsqueeze(0)
        probs = self.policy_net(state)
        m = Categorical(probs)
        action = m.sample()
        # m.log_prob(action) <===> probs.log()[0][action.item()].unsqueeze(0)
        self.ep_log_pros.append(m.log_prob(action))
        return action.item()

    def store_transition(self, s, a, r):
        self.ep_ss.append(s)
        self.ep_as.append(a)
        self.ep_rs.append(r)

    def episode_learning(self):
        """Update the policy once per episode (REINFORCE)."""
        eps = np.finfo(np.float32).eps.item()
        R = 0
        policy_loss = []
        returns = []

        # discounted reward-to-go: R_hat_t = sum_{t'>=t} gamma^(t'-t) * r_t'
        for r in self.ep_rs[::-1]:
            R = r + self.gamma * R
            returns.insert(0, R)

        # normalizing the returns (zero mean, unit variance) acts like a baseline
        # and reduces the variance of the gradient estimate
        returns = torch.tensor(returns)
        returns = (returns - returns.mean()) / (returns.std() + eps)

        # surrogate loss: -sum_t log pi_theta(a_t|s_t) * R_hat_t
        for log_prob, R in zip(self.ep_log_pros, returns):
            policy_loss.append(-log_prob * R)

        policy_loss = torch.cat(policy_loss).sum()

        self.optimizer.zero_grad()
        policy_loss.backward()
        self.optimizer.step()

        # clear the episode buffers
        del self.ep_rs[:]
        del self.ep_log_pros[:]
        del self.ep_as[:]
        del self.ep_ss[:]

The following script trains the agent on different gym environments.

import argparse

import gym
import torch

from RL_template import PolicyGradient, Policy


parser = argparse.ArgumentParser(description='PyTorch REINFORCE example')
# pass --render to visualize the environment while training
parser.add_argument('--render', action='store_true')
parser.add_argument('--episodes', type=int, default=1000)
parser.add_argument('--steps_per_episode', type=int, default=100)
parser.add_argument('--gamma', type=float, default=0.99)
parser.add_argument('--seed', type=int, default=543)
args = parser.parse_args()

# env = gym.make('CartPole-v1')
env = gym.make('MountainCar-v0')
env.seed(args.seed)            # old gym (<0.26) seeding API
torch.manual_seed(args.seed)


policy_net = Policy(
    in_features=env.observation_space.shape[0],
    out_features=env.action_space.n,
)

print(env.action_space)
print(env.observation_space)
print(env.observation_space.high)
print(env.observation_space.low)


RL = PolicyGradient(
    policy_net=policy_net,
    learning_rate=0.002,
    reward_decay=args.gamma,
)

for episode in range(args.episodes):

    # old gym (<0.26) API: reset() returns only the observation
    state, ep_reward = env.reset(), 0

    while True:
    # for t in range(args.steps_per_episode):
        if args.render:
            env.render()

        action = RL.choose_action(state)

        # old gym (<0.26) API: step() returns (obs, reward, done, info)
        state, reward, done, info = env.step(action)

        # note: this stores the post-step state; ep_ss is not used by episode_learning
        RL.store_transition(state, action, reward)

        ep_reward += reward

        if done:
            break

    RL.episode_learning()

    print('Episode {}\tLast reward: {:.2f}'.format(episode, ep_reward))
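
Assuming the training script above is saved as, say, main.py next to RL_template.py (the file name is an assumption, not given in the original post), it could be run with something like: python main.py --episodes 1000 --render. Note that plain REINFORCE tends to struggle on MountainCar-v0, whose reward is a constant -1 per step until the goal is reached; the commented-out CartPole-v1 environment is an easier first test.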
