Trust-region optimization is a classic family of algorithms in numerical optimization, with a history going back at least to 1970. The starting point is: if directly optimizing the objective $J(\theta)$ is too difficult, we can instead construct a surrogate function $L(\theta|\theta_{now})$ that is required to closely approximate $J(\theta)$ within a neighborhood $\mathcal{N}(\theta_{now})$ of the current parameter value $\theta_{now}$. We update $\theta$ once by optimizing $L(\theta|\theta_{now})$ over this local region, and repeat this process until convergence.
Here $\mathcal{N}(\theta_{now})$ is called the **trust region**: as the name suggests, within this neighborhood of $\theta_{now}$ we trust $L(\theta|\theta_{now})$ and can use it in place of the objective $J(\theta)$.
Concretely, each iteration consists of two steps: (1) approximation, construct the surrogate $L(\theta|\theta_{now})$ that approximates $J(\theta)$ near $\theta_{now}$; (2) maximization, maximize $L(\theta|\theta_{now})$ within the trust region $\mathcal{N}(\theta_{now})$ to obtain the updated $\theta$.
Note that the trust-region radius caps how much $\theta$ can change in each iteration; we usually let this radius shrink as the optimization proceeds in order to avoid overstepping.
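To make the two-step loop above concrete, here is a minimal 1D sketch (an illustration of mine, not from the original text): it assumes a quadratic second-order Taylor surrogate and a simple interval trust region whose radius shrinks every iteration.

```python
import numpy as np

def trust_region_maximize(J, dJ, d2J, theta0, radius=1.0, shrink=0.9, iters=50):
    ''' Toy 1D trust-region maximization of J via a quadratic surrogate. '''
    theta = theta0
    for _ in range(iters):
        g, h = dJ(theta), d2J(theta)
        # surrogate L(t | theta) = J(theta) + g*(t - theta) + 0.5*h*(t - theta)^2,
        # maximized only over the trust region [theta - radius, theta + radius]
        cand = np.linspace(theta - radius, theta + radius, 201)
        L = J(theta) + g * (cand - theta) + 0.5 * h * (cand - theta) ** 2
        theta = cand[np.argmax(L)]
        radius *= shrink  # shrink the radius over time to avoid overstepping
    return theta

# maximize J(theta) = -(theta - 3)^2; the iterates approach the maximizer theta = 3
print(trust_region_maximize(lambda t: -(t - 3) ** 2,
                            lambda t: -2 * (t - 3),
                            lambda t: -2.0, theta0=0.0))
```

With the shrinking radius the iterates settle near the maximizer instead of overshooting it.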
The trust-region method is an algorithmic framework rather than one specific algorithm; there are many ways to instantiate it:
In each iteration, solve the following constrained optimization problem:
$$\max_\theta L(\theta|\theta_{now}) \quad \text{s.t.} \quad \theta\in\mathcal{N}(\theta_{now}).$$
We assume that within the trust region $\mathcal{N}(\theta_{now})$ the state distribution $d^{\pi_{\theta_{now}}}$ approximates $d^{\pi_{\theta}}$; the tighter this constraint, the better we avoid the overstepping problem of Section 1.1.
There are usually two ways to choose the neighborhood (trust region) $\mathcal{N}(\theta_{now})$: a Euclidean ball $\{\theta: \|\theta-\theta_{now}\|_2 \le \triangle\}$, or the set of parameters whose policy stays close to the current one in average KL divergence, $\{\theta: \frac{1}{n}\sum_{t} D_\text{KL}[\pi_{\theta_{now}}(\cdot|s_t)\,\|\,\pi_{\theta}(\cdot|s_t)] \le \triangle\}$; TRPO uses the KL-divergence version.
Putting this together, the constrained optimization problem solved in each iteration is
$$
\begin{aligned}
&\max_\theta && \frac{1}{n}\sum_{t=1}^n \frac{\pi_{\theta}(a_t|s_t)}{\pi_{\theta_{now}}(a_t|s_t)}\cdot u_t \\
&\text{s.t.} && \frac{1}{n}\sum_{t=1}^n D_\text{KL}\big[\pi_{\theta_{now}}(\cdot|s_t)\,\|\,\pi_{\theta}(\cdot|s_t)\big] \le \triangle \\
&\text{where} && s_t\sim d^{\pi_{\theta_{now}}},\quad a_t\sim \pi_{\theta_{now}}(\cdot|s_t)
\end{aligned}
$$
This problem is awkward to solve directly. The rough idea is to take a first-order Taylor approximation of the objective and a second-order Taylor approximation of the KL constraint around $\theta_{now}$, and then solve the resulting approximate problem.
The Hessian matrix introduced by the second-order Taylor expansion is very large, so in code it is handled with the conjugate gradient method; moreover, because the Taylor approximations do not yield an exact solution, a line search is also needed to make sure the constraint is actually satisfied. These issues make TRPO complicated to implement, and it never became widely popular.
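To illustrate the conjugate-gradient step mentioned above (a sketch of mine, not the original TRPO code): it solves $Hx = g$ using only Hessian-vector products, so the full Hessian never has to be materialized. In TRPO the product $Hv$ would be obtained by differentiating the KL term with automatic differentiation rather than from an explicit matrix.

```python
import numpy as np

def conjugate_gradient(hvp, g, iters=10, tol=1e-10):
    ''' Solve H x = g given only a Hessian-vector product function hvp(v) = H @ v. '''
    x = np.zeros_like(g)
    r = g.copy()          # residual g - H x (x = 0 initially)
    p = r.copy()          # search direction
    rs_old = r @ r
    for _ in range(iters):
        Hp = hvp(p)
        alpha = rs_old / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# toy check with an explicit positive-definite H
H = np.array([[4.0, 1.0], [1.0, 3.0]])
g = np.array([1.0, 2.0])
x = conjugate_gradient(lambda v: H @ v, g)
print(np.allclose(H @ x, g))  # True
```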
Approximating $d^{\pi_{\theta}}$ with the state distribution $d^{\pi_{\theta_{now}}}$ induced by the current policy $\pi_{\theta_{now}}$, the original optimization objective is approximated as
$$\mathbb{E}_{S\sim d^{\pi_{\theta_{now}}}}\left[\mathbb{E}_{A\sim\pi_{\theta_{now}}(\cdot|S)}\left[\frac{\pi_{\theta}(A|S)}{\pi_{\theta_{now}}(A|S)}\cdot A_{\pi_{\theta_{now}}}(S,A)\right]\right]$$
Both expectations above are then removed with Monte Carlo (MC) approximation. Concretely, first use the current policy $\pi_{\theta_{now}}$ to interact with the environment and collect one trajectory
$$s_1, a_1, r_1, s_2, a_2, r_2, \ldots, s_n, a_n, r_n$$
This trajectory satisfies $s_t\sim d^{\pi_{\theta_{now}}},\ a_t\sim \pi_{\theta_{now}}(\cdot|s_t)$, so every pair $(s_t,a_t)$ yields an unbiased MC estimate
$$\frac{\pi_{\theta}(a_t|s_t)}{\pi_{\theta_{now}}(a_t|s_t)}\cdot A_{\pi_{\theta_{now}}}(s_t,a_t)$$
Using the mean of these unbiased estimates to approximate the original objective gives
$$\frac{1}{n}\sum_{t=1}^n \frac{\pi_{\theta}(a_t|s_t)}{\pi_{\theta_{now}}(a_t|s_t)}\cdot A_{\pi_{\theta_{now}}}(s_t,a_t)$$
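As a quick sanity check of why weighting samples drawn from $\pi_{\theta_{now}}$ by the ratio $\pi_\theta/\pi_{\theta_{now}}$ recovers an expectation under $\pi_\theta$, here is a toy example of mine (not part of the original derivation) with a discrete "policy" over three actions:

```python
import numpy as np

rng = np.random.default_rng(0)
pi_now = np.array([0.5, 0.3, 0.2])   # behavior policy pi_theta_now over 3 actions
pi_new = np.array([0.2, 0.3, 0.5])   # target policy pi_theta
f = np.array([1.0, 2.0, 3.0])        # any per-action quantity, e.g. an advantage

# sample actions from the behavior policy and reweight each sample by the ratio
a = rng.choice(3, size=200_000, p=pi_now)
print(np.mean((pi_new[a] / pi_now[a]) * f[a]))  # importance-sampled estimate, ~2.3
print((pi_new * f).sum())                       # exact E_{pi_theta}[f] = 2.3
```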
Finally, we consider how to estimate the advantage function $A_{\pi_{\theta_{now}}}(s_t,a_t)$. The most commonly used method today is **Generalized Advantage Estimation (GAE)**; here is a brief introduction.
First write the TD error as $\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)$, where $V$ is an already learned state-value function. Following the multi-step TD idea, we have
$$\begin{array}{ll}
A_{t}^{(1)}=\delta_{t} & =-V(s_{t})+r_{t}+\gamma V(s_{t+1}) \\
A_{t}^{(2)}=\delta_{t}+\gamma \delta_{t+1} & =-V(s_{t})+r_{t}+\gamma r_{t+1}+\gamma^{2} V(s_{t+2}) \\
A_{t}^{(3)}=\delta_{t}+\gamma \delta_{t+1}+\gamma^{2} \delta_{t+2} & =-V(s_{t})+r_{t}+\gamma r_{t+1}+\gamma^{2} r_{t+2}+\gamma^{3} V(s_{t+3}) \\
\vdots & \vdots \\
A_{t}^{(k)}=\sum_{l=0}^{k-1} \gamma^{l} \delta_{t+l} & =-V(s_{t})+r_{t}+\gamma r_{t+1}+\ldots+\gamma^{k-1} r_{t+k-1}+\gamma^{k} V(s_{t+k})
\end{array}$$
GAE takes an exponentially weighted average of these multi-step advantage estimates:
$$\begin{aligned}
A_{t}^{GAE} & =(1-\lambda)\left(A_{t}^{(1)}+\lambda A_{t}^{(2)}+\lambda^{2} A_{t}^{(3)}+\cdots\right) \\
& =(1-\lambda)\left(\delta_{t}+\lambda\left(\delta_{t}+\gamma \delta_{t+1}\right)+\lambda^{2}\left(\delta_{t}+\gamma \delta_{t+1}+\gamma^{2} \delta_{t+2}\right)+\cdots\right) \\
& =(1-\lambda)\left(\delta_{t}\left(1+\lambda+\lambda^{2}+\cdots\right)+\gamma \delta_{t+1}\left(\lambda+\lambda^{2}+\lambda^{3}+\cdots\right)+\gamma^{2} \delta_{t+2}\left(\lambda^{2}+\lambda^{3}+\lambda^{4}+\cdots\right)+\cdots\right) \\
& =(1-\lambda)\left(\delta_{t} \frac{1}{1-\lambda}+\gamma \delta_{t+1} \frac{\lambda}{1-\lambda}+\gamma^{2} \delta_{t+2} \frac{\lambda^{2}}{1-\lambda}+\cdots\right) \\
& =\sum_{l=0}^{\infty}(\gamma \lambda)^{l} \delta_{t+l}
\end{aligned}$$
Here $\lambda \in[0,1]$ is an extra hyperparameter introduced by GAE:
- When $\lambda=0$, $A_{t}^{GAE} = \delta_t = r_t + \gamma V(s_{t+1})-V(s_t)$ is the advantage obtained from a single one-step TD difference.
- When $\lambda=1$, $A_{t}^{GAE}=\sum_{l=0}^{\infty} \gamma^{l} \delta_{t+l}=\sum_{l=0}^{\infty} \gamma^{l} r_{t+l}-V\left(s_{t}\right)$ is the fully averaged estimate that takes every step's TD difference into account (the Monte Carlo return minus the value baseline).
When using GAE to estimate the advantage $A_{\pi_{\theta_{now}}}(s_t,a_t)$, we need the TD error $\delta_t$ at every timestep of the trajectory collected with $\pi_{\theta_{now}}$. This requires introducing a value network (critic) to estimate $V$; once all the $\delta_t$ are available, we simply substitute them into the GAE formula $A_{t}^{GAE}=\sum_{l=0}^{\infty}(\gamma \lambda)^{l} \delta_{t+l}$.
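In practice the infinite sum is truncated at the end of the trajectory and evaluated with the backward recursion $A_t^{GAE} = \delta_t + \gamma\lambda\, A_{t+1}^{GAE}$. A minimal sketch of this computation (the `compute_advantage` method in the full code below implements the same recursion):

```python
import numpy as np

def gae(deltas, gamma=0.98, lmbda=0.95):
    ''' Compute A_t^GAE for every timestep of one trajectory from its TD errors. '''
    advantages = np.zeros_like(deltas)
    running = 0.0
    for t in reversed(range(len(deltas))):
        running = deltas[t] + gamma * lmbda * running  # A_t = delta_t + gamma*lambda*A_{t+1}
        advantages[t] = running
    return advantages

print(gae(np.array([0.5, -0.2, 1.0])))  # the last advantage equals its own TD error
```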
PPO keeps the spirit of this trust-region constraint but is much simpler to implement than TRPO; it comes in two common variants.

**PPO-Penalty**: uses the method of Lagrange multipliers to move the KL-divergence constraint directly into the objective, turning the original problem into an unconstrained one. During the iterations, the Lagrange multiplier in front of the KL term (which controls the constraint strength) is repeatedly updated according to the measured KL divergence (how well the constraint is being respected). The objective for round $k$ is:
$$\arg\max_\theta\ \mathbb{E}_{s\sim d^{\pi_{\theta_k}}}\,\mathbb{E}_{a\sim\pi_{\theta_k}(\cdot|s)}\left[\frac{\pi_\theta(a|s)}{\pi_{\theta_k}(a|s)}\, A_{\pi_{\theta_k}}(s,a) - \beta\, D_\text{KL}\big[\pi_{\theta_k}(\cdot|s)\,\|\,\pi_\theta(\cdot|s)\big]\right]$$
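A minimal sketch of what the PPO-Penalty update looks like in code (my own illustration, assuming the per-state KL values and log-probabilities are already computed; the adaptive-$\beta$ rule is the one described in the PPO paper, while the full implementation below uses the clipped variant instead):

```python
import torch

def ppo_penalty_loss(log_probs, old_log_probs, advantages, kl_per_state, beta):
    ''' Importance-weighted advantage minus a KL penalty (to be minimized). '''
    ratio = torch.exp(log_probs - old_log_probs)
    return -(ratio * advantages - beta * kl_per_state).mean()

def adapt_beta(beta, mean_kl, kl_target):
    ''' Double beta if the measured KL overshoots the target, halve it if it undershoots. '''
    if mean_kl > 1.5 * kl_target:
        return beta * 2.0
    if mean_kl < kl_target / 1.5:
        return beta / 2.0
    return beta

# dummy numbers just to show the call signatures
lp = torch.log(torch.tensor([0.6, 0.3]))
old_lp = torch.log(torch.tensor([0.5, 0.4]))
print(ppo_penalty_loss(lp, old_lp, torch.tensor([1.0, -0.5]), torch.tensor([0.01, 0.02]), beta=1.0))
print(adapt_beta(1.0, mean_kl=0.05, kl_target=0.01))  # -> 2.0
```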
**PPO-Clip**: imposes the restriction directly inside the objective function, to guarantee that the new parameters do not drift too far from the old ones. The objective for round $k$ is:
$$\arg\max_\theta\ \mathbb{E}_{s\sim d^{\pi_{\theta_k}}}\,\mathbb{E}_{a\sim\pi_{\theta_k}(\cdot|s)}\left[\min\left(\frac{\pi_\theta(a|s)}{\pi_{\theta_k}(a|s)}\, A_{\pi_{\theta_k}}(s,a),\ \operatorname{clip}\left(\frac{\pi_\theta(a|s)}{\pi_{\theta_k}(a|s)},\, 1-\epsilon,\, 1+\epsilon\right) A_{\pi_{\theta_k}}(s,a)\right)\right]$$
The following is a complete PPO-Clip implementation on CartPole:
import gym
import os
import torch
import random
import torch.nn.functional as F
import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm
from gym.utils.env_checker import check_env
from gym.wrappers import TimeLimit
class PolicyNet(torch.nn.Module):
    ''' Policy network: a two-layer MLP '''
def __init__(self, input_dim, hidden_dim, output_dim):
super(PolicyNet, self).__init__()
self.fc1 = torch.nn.Linear(input_dim, hidden_dim)
self.fc2 = torch.nn.Linear(hidden_dim, output_dim)
def forward(self, x):
        x = F.relu(self.fc1(x))            # (batch_size, hidden_dim)
        x = F.softmax(self.fc2(x), dim=1)  # (batch_size, output_dim) action probabilities
return x
class VNet(torch.nn.Module):
    ''' Value network: a two-layer MLP '''
def __init__(self, input_dim, hidden_dim):
super(VNet, self).__init__()
self.fc1 = torch.nn.Linear(input_dim, hidden_dim)
self.fc2 = torch.nn.Linear(hidden_dim, 1)
def forward(self, x):
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
class PPO(torch.nn.Module):
def __init__(self, state_dim, hidden_dim, action_range, actor_lr, critic_lr, lmbda, epochs, eps, gamma, device):
super().__init__()
self.actor = PolicyNet(state_dim, hidden_dim, action_range).to(device)
self.critic = VNet(state_dim, hidden_dim).to(device)
self.actor_optimizer = torch.optim.Adam(self.actor.parameters(), lr=actor_lr)
self.critic_optimizer = torch.optim.Adam(self.critic.parameters(), lr=critic_lr)
self.device = device
self.gamma = gamma
        self.lmbda = lmbda    # GAE parameter
        self.epochs = epochs  # number of training epochs per collected trajectory
        self.eps = eps        # clipping range parameter in PPO
def take_action(self, state):
state = torch.tensor(state, dtype=torch.float).to(self.device)
state = state.unsqueeze(0)
probs = self.actor(state)
action_dist = torch.distributions.Categorical(probs)
action = action_dist.sample()
return action.item()
def compute_advantage(self, gamma, lmbda, td_delta):
        ''' Generalized Advantage Estimation (GAE) '''
td_delta = td_delta.detach().numpy()
advantage_list = []
advantage = 0.0
for delta in td_delta[::-1]:
advantage = gamma * lmbda * advantage + delta
advantage_list.append(advantage)
advantage_list.reverse()
return torch.tensor(np.array(advantage_list), dtype=torch.float)
def update(self, transition_dict):
states = torch.tensor(np.array(transition_dict['states']), dtype=torch.float).to(self.device)
actions = torch.tensor(transition_dict['actions']).view(-1, 1).to(self.device)
rewards = torch.tensor(transition_dict['rewards'], dtype=torch.float).view(-1, 1).to(self.device)
next_states = torch.tensor(np.array(transition_dict['next_states']), dtype=torch.float).to(self.device)
dones = torch.tensor(transition_dict['dones'], dtype=torch.float).view(-1, 1).to(self.device)
td_target = rewards + self.gamma * self.critic(next_states) * (1-dones)
td_delta = td_target - self.critic(states)
advantage = self.compute_advantage(self.gamma, self.lmbda, td_delta.cpu()).to(self.device)
old_log_probs = torch.log(self.actor(states).gather(1, actions)).detach()
        # train on the freshly collected trajectory for self.epochs epochs
for _ in range(self.epochs):
log_probs = torch.log(self.actor(states).gather(1, actions))
ratio = torch.exp(log_probs - old_log_probs)
surr1 = ratio * advantage
            surr2 = torch.clamp(ratio, 1 - self.eps, 1 + self.eps) * advantage  # clipped ratio
            actor_loss = torch.mean(-torch.min(surr1, surr2))  # PPO-Clip loss
critic_loss = torch.mean(F.mse_loss(self.critic(states), td_target.detach()))
            # update network parameters
self.actor_optimizer.zero_grad()
self.critic_optimizer.zero_grad()
actor_loss.backward()
critic_loss.backward()
self.actor_optimizer.step()
self.critic_optimizer.step()
if __name__ == "__main__":
def moving_average(a, window_size):
        ''' Moving average of sequence a with the given window size '''
cumulative_sum = np.cumsum(np.insert(a, 0, 0))
middle = (cumulative_sum[window_size:] - cumulative_sum[:-window_size]) / window_size
r = np.arange(1, window_size-1, 2)
begin = np.cumsum(a[:window_size-1])[::2] / r
end = (np.cumsum(a[:-window_size:-1])[::2] / r)[::-1]
return np.concatenate((begin, middle, end))
def set_seed(env, seed=42):
        ''' Set random seeds for reproducibility '''
env.action_space.seed(seed)
env.reset(seed=seed)
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
    state_dim = 4     # dimension of the environment observation
    action_range = 2  # size of the discrete action space
actor_lr = 1e-3
critic_lr = 1e-2
num_episodes = 500
hidden_dim = 128
gamma = 0.98
lmbda = 0.95
epochs = 10
eps = 0.2
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
# build environment
env_name = 'CartPole-v0'
env = gym.make(env_name, render_mode='rgb_array')
    check_env(env.unwrapped)  # check that the environment follows the gym API
set_seed(env, 0)
# build agent
agent = PPO(state_dim, hidden_dim, action_range, actor_lr, critic_lr, lmbda, epochs, eps, gamma, device)
# start training
return_list = []
for i in range(10):
with tqdm(total=int(num_episodes / 10), desc='Iteration %d' % i) as pbar:
for i_episode in range(int(num_episodes / 10)):
episode_return = 0
transition_dict = {
'states': [],
'actions': [],
'next_states': [],
'rewards': [],
'dones': []
}
state, _ = env.reset()
                # roll out one trajectory with the current policy
while True:
action = agent.take_action(state)
next_state, reward, terminated, truncated, _ = env.step(action)
transition_dict['states'].append(state)
transition_dict['actions'].append(action)
transition_dict['next_states'].append(next_state)
transition_dict['rewards'].append(reward)
transition_dict['dones'].append(terminated or truncated)
state = next_state
episode_return += reward
if terminated or truncated:
break
#env.render()
                # on-policy update with the data just collected by the current policy
agent.update(transition_dict)
                # update the progress bar
return_list.append(episode_return)
pbar.set_postfix({
'episode':
'%d' % (num_episodes / 10 * i + i_episode + 1),
'return':
'%.3f' % episode_return,
'ave return':
'%.3f' % np.mean(return_list[-10:])
})
pbar.update(1)
    # show policy performance
mv_return_list = moving_average(return_list, 29)
episodes_list = list(range(len(return_list)))
plt.figure(figsize=(12,8))
plt.plot(episodes_list, return_list, label='raw', alpha=0.5)
plt.plot(episodes_list, mv_return_list, label='moving ave')
plt.xlabel('Episodes')
plt.ylabel('Returns')
    plt.title(f'{agent._get_name()} on {env_name}')
plt.legend()
    os.makedirs('./result', exist_ok=True)  # make sure the output directory exists
    plt.savefig(f'./result/{agent._get_name()}.png')
plt.show()