DQN: a foundation of AlphaGo
Q-learning:
1 It is mainly used when the solution (the action space) is discrete.
2 It mainly relies on the value function; that is, the policy is derived directly from the value function (pick the action with the largest Q-value).
3 Its core lies in the Bellman equation and the cost function.
The core of the Bellman equation is that, when using the reward, you must take the future into account rather than only the present. Considering only the immediate reward is like a person who lives only for the moment and never plans ahead: it does not get you far, and in a game it means you will die very quickly. Both Q-learning and policy gradient make use of the Bellman equation.
The cost function, on the other hand, represents your objective, i.e. what you want the agent to achieve.
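Written out explicitly (my own notation, not taken from the original post), for a transition (s, a, r, s') these two pieces are roughly:

y = r + \gamma \max_{a'} Q(s', a')            (the Bellman one-step target)
L = \mathbb{E}\big[\,(y - Q(s, a))^2\,\big]   (the cost function to be minimized)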
In the code below, the Bellman equation shows up at line 93 of the original listing (the target computation in train_Q_network):
y_batch.append(reward_batch[i] + GAMMA * np.max(Q_value_batch[i]))
Here Q_value_batch[i] is the vector of Q-values for the next state, and np.max picks out the largest of them, i.e. the best value obtainable from that next state; this value, discounted by GAMMA, is used as the future part of the target for the current step.
I wonder whether one could keep going: compute the reward of the step after next, and so on until the episode ends, and then accumulate all of them with discount weights. Or is the point that the state two steps ahead cannot be estimated, so the Q-value two steps ahead cannot be computed either?
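As a side note on that question, here is a minimal sketch (my own illustration, not part of the original code) contrasting the one-step bootstrap target used in the listing with the full discounted return; the latter is only computable once a whole episode trajectory has been stored, whereas the replay buffer here stores single transitions, so the code bootstraps with the network's own estimate of the next state instead:

import numpy as np

GAMMA = 0.9  # same discount factor as in the listing

def one_step_target(reward, next_q_values, done):
    # Target actually used in train_Q_network: immediate reward plus the
    # discounted best Q-value of the next state (bootstrapping).
    return reward if done else reward + GAMMA * np.max(next_q_values)

def full_discounted_return(rewards):
    # Monte-Carlo alternative: sum every future reward with discount weights.
    # This needs the complete list of rewards until the episode terminates.
    g = 0.0
    for r in reversed(rewards):
        g = r + GAMMA * g
    return g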
Cost function:
The cost function is given by the create_training_method function (around line 49 of the original listing):
def create_training_method(self):
    self.action_input = tf.placeholder("float",[None,self.action_dim]) # one hot presentation
    self.y_input = tf.placeholder("float",[None])
    Q_action = tf.reduce_sum(tf.multiply(self.Q_value,self.action_input),reduction_indices = 1)
    self.cost = tf.reduce_mean(tf.square(self.y_input - Q_action))
    self.optimizer = tf.train.AdamOptimizer(0.0001).minimize(self.cost)
In this cost function, self.y_input is the target fed in from outside: the reward that already contains the future-reward information.
Q_action is the Q-value of the action actually taken, obtained by multiplying the Q-value output with the one-hot action input and summing over the action dimension.
The core of the cost function is to minimize the difference between self.y_input and Q_action.
When the input self.y_input is large, Q_action also needs to become large. Since
Q_action = tf.reduce_sum(tf.multiply(self.Q_value,self.action_input),reduction_indices = 1)
depends only on the Q-value output and on the input self.action_input, and self.action_input is a fixed input that cannot be optimized, only the Q-values can be adjusted. The theoretical picture is therefore: when self.y_input is large, i.e. the reward is large, the action was a good one, so the Q-value needs a larger adjustment so that its output moves toward this action; when self.y_input is small, i.e. the reward is small, only a small adjustment toward this action is needed.
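This intuition can be made precise by differentiating the squared error (my own derivation, not part of the original post). For a single sample,

L = \big(y - Q(s,a)\big)^2, \qquad \frac{\partial L}{\partial Q(s,a)} = -2\,\big(y - Q(s,a)\big),

so the size of the adjustment to Q(s,a) is proportional to the gap between the target self.y_input and the current estimate Q_action: the larger the gap, the stronger the push toward that action.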
Since the role of the value function is to estimate the maximum achievable reward in the current state (in other words, the value of the current state), Q-learning essentially derives the policy directly from the value function, or, put differently, it merges the value function and the policy function into one. If the value function and the policy function are kept separate, the typical example is the policy gradient algorithm: it uses a value function to estimate the current state, and uses the difference between the actual reward and the estimated reward as the direction and magnitude of the gradient update of the policy function (for the current action). See this article for more: http://blog.csdn.net/liyuan123zhouhui/article/details/78656231
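As a rough illustration of that contrast (my own sketch; the function and argument names are hypothetical and do not come from the code or the linked article):

import numpy as np

# Value-based (DQN): regress the Q-value of the taken action toward the Bellman target.
def q_learning_loss(q_taken, y_target):
    # q_taken: Q(s, a) for the actions actually taken
    # y_target: r + GAMMA * max_a' Q(s', a')
    return np.mean((y_target - q_taken) ** 2)

# Policy-based (policy gradient): weight log pi(a|s) by how much better the actual
# return was than the value estimate of the state (the advantage).
def policy_gradient_loss(log_prob_taken, advantage):
    return -np.mean(log_prob_taken * advantage)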
Here is the full code with the key comments.
DQN for CartPole in roughly 150 lines of code (https://zhuanlan.zhihu.com/p/21477488):
import gym
import tensorflow as tf
import numpy as np
import random
from collections import deque
# Hyper Parameters for DQN
GAMMA = 0.9 # discount factor for target Q
INITIAL_EPSILON = 0.5 # starting value of epsilon
FINAL_EPSILON = 0.01 # final value of epsilon
REPLAY_SIZE = 10000 # experience replay buffer size
BATCH_SIZE = 32 # size of minibatch
#agent needs to train this DQN network
class DQN():
    # DQN Agent
    def __init__(self, env):
        # init experience replay
        self.replay_buffer = deque()
        # init some parameters
        self.time_step = 0
        self.epsilon = INITIAL_EPSILON
        self.state_dim = env.observation_space.shape[0]
        self.action_dim = env.action_space.n
        #create q network and training method when init
        self.create_Q_network()
        self.create_training_method()
        # Init session
        self.session = tf.InteractiveSession()
        #self.session.run(tf.initialize_all_variables())
        self.session.run(tf.global_variables_initializer())
    def create_Q_network(self):
        # network weights
        W1 = self.weight_variable([self.state_dim,20])
        b1 = self.bias_variable([20])
        W2 = self.weight_variable([20,self.action_dim])
        b2 = self.bias_variable([self.action_dim])
        # input layer
        self.state_input = tf.placeholder("float",[None,self.state_dim])
        # hidden layers
        h_layer = tf.nn.relu(tf.matmul(self.state_input,W1) + b1)
        # Q Value layer
        self.Q_value = tf.matmul(h_layer,W2) + b2
    def create_training_method(self):
        self.action_input = tf.placeholder("float",[None,self.action_dim]) # one hot presentation
        self.y_input = tf.placeholder("float",[None])
        Q_action = tf.reduce_sum(tf.multiply(self.Q_value,self.action_input),reduction_indices = 1)
        # When self.y_input is large (the reward is large), the action was good, so the Q-value
        # gets a larger push toward this action; when self.y_input is small, the push is smaller.
        self.cost = tf.reduce_mean(tf.square(self.y_input - Q_action))
        self.optimizer = tf.train.AdamOptimizer(0.0001).minimize(self.cost)
    # perceive: store the transition and train the Q network once enough samples are buffered
    def perceive(self,state,action,reward,next_state,done):
        one_hot_action = np.zeros(self.action_dim)
        one_hot_action[action] = 1
        self.replay_buffer.append((state,one_hot_action,reward,next_state,done))
        if len(self.replay_buffer) > REPLAY_SIZE:
            self.replay_buffer.popleft()
        if len(self.replay_buffer) > BATCH_SIZE:
            self.train_Q_network()
    def train_Q_network(self):
        self.time_step += 1
        # Step 1: obtain random minibatch from replay memory
        minibatch = random.sample(self.replay_buffer,BATCH_SIZE)
        state_batch = [data[0] for data in minibatch]
        action_batch = [data[1] for data in minibatch]
        reward_batch = [data[2] for data in minibatch]
        next_state_batch = [data[3] for data in minibatch]
        # Step 2: calculate y
        y_batch = []
        Q_value_batch = self.Q_value.eval(feed_dict={self.state_input:next_state_batch})
        for i in range(0,BATCH_SIZE):
            done = minibatch[i][4]
            if done:
                y_batch.append(reward_batch[i])
            else:
                # Add the discounted future reward (Bellman equation): take the future into
                # account, not just the immediate reward, otherwise the agent only lives for
                # the moment and, in a game, dies quickly.
                y_batch.append(reward_batch[i] + GAMMA * np.max(Q_value_batch[i]))
        self.optimizer.run(feed_dict={
            self.y_input:y_batch,
            self.action_input:action_batch,
            self.state_input:state_batch
            })
    #agent action when training,action with noise
    def egreedy_action(self,state):
        Q_value = self.Q_value.eval(feed_dict = {
            self.state_input:[state]
            })[0]
        self.epsilon -= (INITIAL_EPSILON - FINAL_EPSILON)/10000
        if random.random() <= self.epsilon:
            return random.randint(0,self.action_dim - 1)
        else:
            return np.argmax(Q_value)

    #agent action without noise
    def action(self,state):
        return np.argmax(self.Q_value.eval(feed_dict = {
            self.state_input:[state]
            })[0])
    def weight_variable(self,shape):
        initial = tf.truncated_normal(shape)
        return tf.Variable(initial)

    def bias_variable(self,shape):
        initial = tf.constant(0.01, shape = shape)
        return tf.Variable(initial)
# ---------------------------------------------------------
# Hyper Parameters
ENV_NAME = 'CartPole-v0'
EPISODE = 10000 # Episode limitation
STEP = 300 # Step limitation in an episode
TEST = 10 # The number of test episodes run every 100 training episodes
def main():
    # initialize OpenAI Gym env and DQN agent
    # 1 init environment
    env = gym.make(ENV_NAME)
    # 2 init agent
    agent = DQN(env)
    for episode in range(EPISODE):
        # initialize task
        state = env.reset()
        # Train
        for step in range(STEP):
            action = agent.egreedy_action(state) # e-greedy action for training (action with noise)
            #print("action:",action)
            next_state,reward,done,_ = env.step(action) # environment feedback for the action: next_state, reward, done
            if reward is None:
                print("action:",action)
                print("reward:",reward)
                print("state:",state)
                print("next_state:",next_state)
            # Define reward for agent
            reward_agent = -1 if done else 0.1
            # hand the transition to the agent; perceive stores it and trains the Q network
            agent.perceive(state,action,reward_agent,next_state,done)
            state = next_state
            if done:
                break
        # Test every 100 episodes
        if episode % 100 == 0:
            total_reward = 0
            for i in range(TEST):
                state = env.reset()
                for j in range(STEP):
                    env.render()
                    action = agent.action(state) # direct action for test
                    state,reward,done,_ = env.step(action)
                    #print("TEST reward:",reward)
                    total_reward += reward
                    if done:
                        break
            ave_reward = total_reward/TEST
            print ('episode: ',episode,'Evaluation Average Reward:',ave_reward)
            #if ave_reward >= 200:
            #    break
if __name__ == '__main__':
    main()