RL: part1: key_concepts_in_RL (Reinforcement Learning)

Key concepts in reinforcement learning


States and Observations

  • A state is a complete description of the state of the world, with no information hidden from it
  • An observation is a partial description of a state, which may omit information (a small sketch follows this list)
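
A minimal sketch of the distinction, using a hypothetical toy state made of a position and a velocity where the agent only observes the position:

```python
import numpy as np

# Hypothetical toy setup: the full state is [position, velocity], but the
# agent only sees the position, so the observation is a partial description.
state = np.array([1.5, -0.3])   # complete state of the world
observation = state[:1]         # partial observation: position only
print(state, observation)
```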

Action Spaces

  • Discrete, e.g. some games, where only a finite number of actions is available
  • Continuous, e.g. a robot's movement velocities and joint angles, which are real-valued (a sampling sketch follows this list)
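
A small NumPy sketch of sampling from the two kinds of action spaces; the space sizes and bounds here are assumptions chosen for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete action space (assumed size 4, e.g. up/down/left/right in a game):
n_actions = 4
discrete_action = rng.integers(n_actions)

# Continuous action space (assumed 2-D, bounded in [-1, 1], e.g. velocities):
continuous_action = rng.uniform(low=-1.0, high=1.0, size=2)

print(discrete_action, continuous_action)
```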

Policies

  • The rule the agent uses to decide which action to take (see the sketch after this list)
  • Can be deterministic: a_t=\mu(s_t)
  • Can be stochastic, with actions sampled from a probability distribution \pi: a_{t} \sim \pi(\cdot |s_t)
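
A minimal sketch of the two cases, assuming a linear deterministic policy and a Gaussian stochastic policy; the weights and the std value are made-up parameters for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def mu(state, weights):
    """Deterministic policy: a_t = mu(s_t), here a simple linear map."""
    return weights @ state

def pi_sample(state, weights, std=0.1):
    """Stochastic policy: a_t ~ pi(.|s_t), here a Gaussian centred on mu(s_t)."""
    mean = mu(state, weights)
    return rng.normal(loc=mean, scale=std)

state = np.array([0.5, -1.0])
weights = np.eye(2)   # assumed 2x2 weight matrix for illustration
print(mu(state, weights), pi_sample(state, weights))
```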

Trajectories

  • A trajectory is a sequence of states and actions: \tau=(s_0,a_0,s_1,a_1,\ldots)
  • The initial state is sampled from the start-state distribution: s_0 \sim \rho_0(\cdot)
  • State transitions can be deterministic: s_{t+1}=f(s_t,a_t)
  • or stochastic: s_{t+1} \sim P(\cdot|s_t,a_t) (a rollout sketch follows this list)
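 
A rollout sketch under assumed toy dynamics: `step` stands in for a stochastic transition P(.|s_t, a_t), the start state plays the role of a draw from \rho_0, and the policy is a hypothetical one-liner:

```python
import numpy as np

rng = np.random.default_rng(0)

def step(state, action):
    """Stochastic transition s_{t+1} ~ P(.|s_t, a_t): made-up noisy linear dynamics."""
    return state + 0.1 * action + rng.normal(scale=0.01, size=state.shape)

def rollout(policy, s0, horizon=5):
    """Collect a trajectory tau = (s_0, a_0, s_1, a_1, ...) of fixed length."""
    tau, state = [], s0
    for _ in range(horizon):
        action = policy(state)
        tau.append((state, action))
        state = step(state, action)
    tau.append((state, None))           # final state
    return tau

s0 = rng.normal(size=2)                 # s_0 ~ rho_0, here a standard Gaussian
trajectory = rollout(lambda s: -s, s0)  # toy policy: act against the state
```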

Reward and Return

  • Reward at the current step: r_t=R(s_t,a_t,s_{t+1}), often simplified to depend only on the current state, r_t=R(s_t), or state-action pair, r_t=R(s_t,a_t)
  • Return is the cumulative reward over a trajectory, e.g. the finite-horizon undiscounted return R(\tau)=\sum_{t=0}^{T}r_t or the infinite-horizon discounted return R(\tau)=\sum_{t=0}^{\infty}\gamma^{t}r_t with discount factor \gamma\in(0,1) (a computation sketch follows this list)
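
A short sketch of the discounted return R(\tau)=\sum_t \gamma^t r_t for a finite list of rewards; the reward values and \gamma are made up:

```python
def discounted_return(rewards, gamma=0.99):
    """R(tau) = sum_t gamma^t * r_t for a finite list of rewards."""
    ret = 0.0
    for t, r in enumerate(rewards):
        ret += (gamma ** t) * r
    return ret

print(discounted_return([1.0, 0.0, 1.0], gamma=0.9))  # 1 + 0 + 0.81 = 1.81
```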

The RL Problem

  • The goal of RL: select a policy which maximizes expected return when the agent acts according to it.
  • First, the probability of a particular T-step trajectory \tau under a policy \pi: P(\tau | \pi)=\rho_0(s_0)\prod_{t=0}^{T-1}P(s_{t+1}|s_{t},a_{t})\pi(a_t|s_t)
  • The expected return of a policy \pi: J(\pi)=\int\limits_{\tau}P(\tau|\pi)R(\tau)=\mathop{E}\limits_{\tau\sim\pi}[R(\tau)]
  • The RL optimization problem is then \pi^* =\arg\mathop{\max}\limits_{\pi}J(\pi), where \pi^* is the optimal policy (a Monte Carlo sketch follows this list)
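
A sketch of estimating J(\pi) by Monte Carlo: average the discounted return over sampled rollouts. The callables `sample_s0`, `step`, and `reward` are hypothetical stand-ins for \rho_0, P, and R, and the horizon/episode counts are arbitrary:

```python
import numpy as np

def estimate_J(policy, sample_s0, step, reward,
               horizon=50, n_episodes=100, gamma=0.99):
    """Monte Carlo estimate of J(pi) = E_{tau~pi}[R(tau)]."""
    returns = []
    for _ in range(n_episodes):
        state, ret = sample_s0(), 0.0
        for t in range(horizon):
            action = policy(state)
            next_state = step(state, action)
            ret += (gamma ** t) * reward(state, action, next_state)
            state = next_state
        returns.append(ret)
    return float(np.mean(returns))
```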

Value Functions

  • A value is the expected return if you start in a given state, or state-action pair, and then act according to a particular policy forever after
  • The On-Policy Value Function: V^{\pi}(s)=\mathop{E}\limits_{\tau\sim\pi}[R(\tau)|s_0=s]
  • The On-Policy Action-Value Function: Q^{\pi}(s,a)=\mathop{E}\limits_{\tau\sim\pi}[R(\tau)|s_0=s,a_0=a]
  • The Optimal Value Function: V^{*}(s)=\mathop{\max}\limits_{\pi}\mathop{E}\limits_{\tau\sim\pi}[R(\tau)|s_0=s]
  • The Optimal Action-Value Function: Q^{*}(s,a)=\mathop{\max}\limits_{\pi}\mathop{E}\limits_{\tau\sim\pi}[R(\tau)|s_0=s,a_0=a]
  • Relationship between them: V^{\pi}(s)=\mathop{E}\limits_{a\sim\pi}[Q^{\pi}(s,a)]=\int_{a}\pi(a|s)Q^{\pi}(s,a)\,da, and similarly V^{*}(s)=\mathop{\max}\limits_{a}Q^{*}(s,a) (a tabular sketch follows this list)
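 
A tabular sketch of the V–Q relationship, using made-up numbers for a hypothetical MDP with 2 states and 3 actions:

```python
import numpy as np

# Hypothetical tabular example (values are made up):
Q_pi = np.array([[1.0, 0.5, 0.0],
                 [0.2, 0.8, 0.4]])      # Q^pi(s, a)
pi = np.array([[0.6, 0.3, 0.1],
               [0.2, 0.5, 0.3]])        # pi(a | s), rows sum to 1

V_pi = (pi * Q_pi).sum(axis=1)          # V^pi(s) = E_{a~pi}[Q^pi(s, a)]
V_star = Q_pi.max(axis=1)               # V*(s) = max_a Q*(s, a), if Q_pi were Q*
print(V_pi, V_star)
```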

Bellman Equations
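
The value functions above obey standard self-consistency conditions, the Bellman equations, where \gamma is the discount factor, s'\sim P(\cdot|s,a) is the next state, and a'\sim\pi(\cdot|s') is the next action:

  • V^{\pi}(s)=\mathop{E}\limits_{a\sim\pi,\,s'\sim P}[r(s,a)+\gamma V^{\pi}(s')]
  • Q^{\pi}(s,a)=\mathop{E}\limits_{s'\sim P}[r(s,a)+\gamma\mathop{E}\limits_{a'\sim\pi}[Q^{\pi}(s',a')]]
  • V^{*}(s)=\mathop{\max}\limits_{a}\mathop{E}\limits_{s'\sim P}[r(s,a)+\gamma V^{*}(s')]
  • Q^{*}(s,a)=\mathop{E}\limits_{s'\sim P}[r(s,a)+\gamma\mathop{\max}\limits_{a'}Q^{*}(s',a')]

A minimal sketch of using the optimality equation as a backup operator (value iteration) on a hypothetical random tabular MDP; the sizes, rewards, and iteration count are assumptions for illustration:

```python
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P(s'|s,a)
R = rng.normal(size=(n_states, n_actions))                        # r(s,a)

V = np.zeros(n_states)
for _ in range(100):
    # Bellman optimality backup: V(s) <- max_a [ r(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]
    V = (R + gamma * P @ V).max(axis=1)
print(V)
```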
