In the previous post, "A Brief Analysis of Gym's env", we gave a short introduction to the Pendulum environment. In this post we look at the mountain-car environment, following the continuous_mountain_car.py source on GitHub.
As with the Pendulum analysis, we start by looking at the state, observation, and action.
Although the Python file is named continuous_mountain_car.py, the id passed to gym.make is "MountainCarContinuous-v0", so for brevity we will refer to this environment as MCC.
For MCC, let's go straight to the _step() function:
def _step(self, action):
    position = self.state[0]
    velocity = self.state[1]
    force = min(max(action[0], -1.0), 1.0)

    velocity += force * self.power - 0.0025 * math.cos(3 * position)
    if velocity > self.max_speed:
        velocity = self.max_speed
    if velocity < -self.max_speed:
        velocity = -self.max_speed
    position += velocity
    if position > self.max_position:
        position = self.max_position
    if position < self.min_position:
        position = self.min_position
    if position == self.min_position and velocity < 0:
        velocity = 0

    done = bool(position >= self.goal_position)

    reward = 0
    if done:
        reward = 100.0
    reward -= math.pow(action[0], 2) * 0.1

    self.state = np.array([position, velocity])
    return self.state, reward, done, {}
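To see the dynamics in isolation, here is a standalone sketch of the same update rule (plain Python, not the gym class; the constants are the values declared in the environment's __init__()):

```python
import math

# Constants from the environment's __init__().
POWER = 0.0015
MAX_SPEED = 0.07
MIN_POSITION, MAX_POSITION = -1.2, 0.6
GOAL_POSITION = 0.45

def step(position, velocity, action):
    """Standalone sketch of _step(): clamp the force, add engine power and
    gravity along the slope, then clip velocity and position."""
    force = min(max(action, -1.0), 1.0)
    velocity += force * POWER - 0.0025 * math.cos(3 * position)
    velocity = max(-MAX_SPEED, min(MAX_SPEED, velocity))
    position += velocity
    position = max(MIN_POSITION, min(MAX_POSITION, position))
    if position == MIN_POSITION and velocity < 0:
        velocity = 0.0
    done = position >= GOAL_POSITION
    # Note: the energy penalty uses the raw action, not the clamped force,
    # exactly as in the source.
    reward = (100.0 if done else 0.0) - 0.1 * action ** 2
    return position, velocity, reward, done

# Full throttle from rest near the valley bottom: tiny velocity gain, -0.1 reward.
print(step(-0.5, 0.0, 1.0))
```

Pushing right at full throttle from rest near the bottom gains only about 0.0013 in velocity per step, which is why the car must rock back and forth to build momentum.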
The final self.state = np.array([position, velocity]) shows that MCC's state consists of a position and a velocity. In addition, __init__() contains the following statements:
self.action_space = spaces.Box(self.min_action, self.max_action, shape=(1,))
self.observation_space = spaces.Box(self.low_state, self.high_state)
As we can see, the observation space coincides with the state space, also consisting of position and velocity, while the action space is one-dimensional: a single force that drives the car forward or in reverse.
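A spaces.Box is essentially a pair of bound arrays plus a containment check. A minimal stdlib-only sketch of what these bounds mean (the contains helper here is hypothetical, mimicking what spaces.Box provides):

```python
# Bounds copied from the environment's __init__().
min_position, max_position = -1.2, 0.6
max_speed = 0.07

low_state = [min_position, -max_speed]
high_state = [max_position, max_speed]

def contains(x, low, high):
    """Hypothetical helper mimicking a Box containment check:
    elementwise lower/upper bound test."""
    return all(l <= xi <= h for xi, l, h in zip(x, low, high))

print(contains([0.0, 0.05], low_state, high_state))  # a valid observation
print(contains([0.7, 0.0], low_state, high_state))   # past max_position
```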
Both the observation space and the action space have upper and lower bounds, declared in __init__():
self.min_action = -1.0
self.max_action = 1.0
self.min_position = -1.2
self.max_position = 0.6
self.max_speed = 0.07
self.goal_position = 0.45 # was 0.5 in gym, 0.45 in Arnaud de Broissia's version
self.power = 0.0015
self.low_state = np.array([self.min_position, -self.max_speed])
self.high_state = np.array([self.max_position, self.max_speed])
self.action_space = spaces.Box(self.min_action, self.max_action, shape=(1,))
self.observation_space = spaces.Box(self.low_state, self.high_state)
This is easy to understand: the task is to get the car up the mountain on the right, so goal_position is 0.45. Note that positions are absolute coordinates along the track rather than offsets from the start: the track spans min_position = -1.2 to max_position = 0.6 (left negative, right positive), the valley bottom lies near -0.52, and _reset() places the car at a random position drawn uniformly from [-0.6, -0.4], close to the bottom.
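The valley bottom can be located numerically from the track's height function (the renderer in the same source file draws the track as sin(3x) * 0.45 + 0.55):

```python
import math

def height(position):
    """Track height as drawn by the renderer in continuous_mountain_car.py."""
    return math.sin(3 * position) * 0.45 + 0.55

# Scan the track from min_position to max_position for its lowest point.
positions = [-1.2 + i * 0.0001 for i in range(18001)]
bottom = min(positions, key=height)
print(round(bottom, 4))  # ≈ -pi/6 ≈ -0.5236
```

The minimum lies at position = -pi/6, squarely inside the [-0.6, -0.4] reset range, so the car always starts near the bottom of the valley.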
done = bool(position >= self.goal_position)

reward = 0
if done:
    reward = 100.0
reward -= math.pow(action[0], 2) * 0.1
This fragment of _step() means that after every step we check whether the car has reached or passed the goal on the right-hand mountain, and set done accordingly. If it has not, the reward for that step is -math.pow(action[0], 2)*0.1, a penalty proportional to the energy spent on that time step, since we naturally do not want to burn too much fuel. As soon as the car does pass the goal (done = True), that time step immediately earns a reward of 100 - math.pow(action[0], 2)*0.1.
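A quick sanity check on the scale of the return: for a hypothetical episode of 80 full-throttle steps that ends at the goal, the total would be:

```python
# Hypothetical episode: 80 steps at full throttle, goal reached on the last one.
actions = [1.0] * 80
energy_cost = sum(0.1 * a ** 2 for a in actions)  # 0.1 penalty per full-throttle step
total_return = 100.0 - energy_cost
print(total_return)  # ≈ 92.0
```

So any policy that wastes less than 100 full-throttle steps' worth of energy on the way up clears reward_threshold = 90.0.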
As with the Pendulum environment, there is a max_episode_steps limit: 200 for Pendulum versus 999 here. In addition, MCC defines a reward_threshold of 90.0, the episode return at which the task counts as solved. (Both are set in gym/envs/__init__.py.)
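For reference, the corresponding registration entry in gym/envs/__init__.py looks roughly like this (the entry_point string is reconstructed from memory of the gym source and may differ slightly between versions):

```python
register(
    id='MountainCarContinuous-v0',
    entry_point='gym.envs.classic_control:Continuous_MountainCarEnv',
    max_episode_steps=999,
    reward_threshold=90.0,
)
```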
That's all for the MCC environment analysis~