[Chapter 5] Reinforcement Learning (3) Function Approximation and Going Deep

Function Approximation

While learning the Q-functions, how do we represent or record the Q-values? For a discrete and finite state space and action space, we can use a big table of size $|S| \times |A|$ to store the Q-values for all state-action pairs $(s, a)$. However, if the state space or action space is very large, or, as is usually the case, continuous and infinite, a tabular method no longer works.

We need function approximation to represent the utility and Q-functions with a set of parameters $\theta$ to be learnt. Taking the grid environment as our example again, we can represent a state by its pair of coordinates $(x, y)$; then one simple (linear) function approximation can look like this:

$\hat{U}_\theta(x, y) = \theta_0 + \theta_1 x + \theta_2 y$

Of course, you can design more complex functions when you have a much larger state space.
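A minimal sketch of such a linear approximator, assuming the $(1, x, y)$ feature vector above and a hypothetical learning rate chosen purely for illustration:

```python
import numpy as np

def features(x, y):
    """Feature vector for a grid state (x, y): a bias term plus the two coordinates."""
    return np.array([1.0, x, y])

class LinearUtilityApproximator:
    """U_hat_theta(x, y) = theta_0 + theta_1 * x + theta_2 * y."""

    def __init__(self):
        self.theta = np.zeros(3)

    def value(self, x, y):
        return float(self.theta @ features(x, y))

    def update(self, x, y, target, alpha=0.05):
        # Nudge theta to reduce the squared error between the target and the current estimate.
        error = target - self.value(x, y)
        self.theta += alpha * error * features(x, y)

approx = LinearUtilityApproximator()
approx.update(x=2, y=3, target=0.8)   # one observed return for state (2, 3)
print(approx.value(2, 3))
```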

In this case, our reinforcement learning agent turns to learning the parameters $\theta$ that approximate the evaluation functions ($\hat{U}_\theta$ or $\hat{Q}_\theta$).

For Monte Carlo learning, we can collect a set of training samples (trials), where each input is a state (or state-action pair) and each label is the observed return; this turns learning into a supervised learning problem. With a squared-error loss and a linear function, we get a standard linear regression problem.
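As a minimal sketch (the grid states and returns below are made up purely for illustration), the Monte Carlo version reduces to fitting $\theta$ by ordinary least squares:

```python
import numpy as np

# Hypothetical Monte Carlo samples: states (x, y) and the returns observed from them.
states  = np.array([[1, 1], [1, 4], [3, 2], [4, 4]], dtype=float)
returns = np.array([0.2, 0.5, 0.6, 0.95])

# Design matrix with a bias column: each row is the feature vector (1, x, y).
X = np.hstack([np.ones((len(states), 1)), states])

# Least-squares fit of theta minimizing the squared error, i.e. ordinary linear regression.
theta, *_ = np.linalg.lstsq(X, returns, rcond=None)
print(theta)      # learnt parameters (theta_0, theta_1, theta_2)
print(X @ theta)  # approximated utilities for the sampled states
```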

For Temporal Difference learning, the agent aims to adjust the parameters so as to reduce the temporal difference (the TD error). The parameters are updated by gradient descent as follows (a small code sketch of both rules appears after the list):

  • For SARSA (on-policy method):

    $\theta_i \leftarrow \theta_i + \alpha \big[ R(s) + \gamma \hat{Q}_\theta(s', a') - \hat{Q}_\theta(s, a) \big] \frac{\partial \hat{Q}_\theta(s, a)}{\partial \theta_i}$

  • For Q-learning (off-policy method):

    $\theta_i \leftarrow \theta_i + \alpha \big[ R(s) + \gamma \max_{a'} \hat{Q}_\theta(s', a') - \hat{Q}_\theta(s, a) \big] \frac{\partial \hat{Q}_\theta(s, a)}{\partial \theta_i}$
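Here is a minimal sketch of both updates, assuming a linear $\hat{Q}_\theta(s, a) = \theta^\top \phi(s, a)$ with a hypothetical feature function `phi` and placeholder learning rate and discount; it only illustrates the two rules above, not a full agent:

```python
import numpy as np

def phi(state, action):
    """Hypothetical feature vector for a state-action pair (placeholder encoding)."""
    x, y = state
    return np.array([1.0, x, y, float(action)])

def q_value(theta, state, action):
    return theta @ phi(state, action)

def sarsa_update(theta, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    # On-policy: bootstrap with the action a_next actually chosen in s_next.
    td_error = r + gamma * q_value(theta, s_next, a_next) - q_value(theta, s, a)
    return theta + alpha * td_error * phi(s, a)   # gradient of a linear Q is just phi(s, a)

def q_learning_update(theta, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    # Off-policy: bootstrap with the greedy (max) action value in s_next.
    best_next = max(q_value(theta, s_next, a2) for a2 in actions)
    td_error = r + gamma * best_next - q_value(theta, s, a)
    return theta + alpha * td_error * phi(s, a)

theta = np.zeros(4)
theta = sarsa_update(theta, s=(1, 1), a=0, r=0.0, s_next=(1, 2), a_next=1)
theta = q_learning_update(theta, s=(1, 2), a=1, r=1.0, s_next=(2, 2), actions=[0, 1, 2, 3])
```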

Going Deep

One of the greatest advances in reinforcement learning is combining it with deep learning. As stated above, we usually cannot use a tabular method to represent the evaluation functions; we need approximation! And you may already have guessed it: a deep network is a good function approximator. The network takes the observation as input and outputs the Q-values or utilities, and that's it! Using deep networks in RL is called deep reinforcement learning (DRL).

Why do we need deep networks?

  • Firstly, for environments with a nearly infinite state space, a deep network can hold a large set of learnable parameters and map a huge set of states to their expected Q-values.
  • Secondly, some environments have observations so complex that, in practice, only deep networks can handle them. For example, if the observation is an RGB image, we need convolutional neural network (CNN) layers at the front to read it; if the observation is a piece of audio, we may use recurrent neural network (RNN) layers at the front.
  • Nowadays, designing and training a deep neural network has become much easier thanks to advances in hardware and software.

One of the DRL algorithms is the Deep Q-Network (DQN); we show its pseudo code here but will not go into the details:

[Figure: pseudo code of the DQN algorithm]
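For the curious reader, a compressed sketch of the core DQN update (experience replay plus a target network) could look like the following; the 4-dimensional state, 2 actions, and hyperparameters are placeholder assumptions, not part of the original pseudo code:

```python
import random
from collections import deque
import torch
import torch.nn as nn

# Q-network: maps a state vector to one Q-value per action (sizes are placeholders).
q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_net.load_state_dict(q_net.state_dict())   # target network starts as a copy

optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)                     # experience replay buffer
gamma = 0.99

# A transition would be stored as:
# replay.append((state, action, reward, next_state, float(done)))

def train_step(batch_size=32):
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)     # sample uncorrelated transitions
    s, a, r, s2, done = map(torch.tensor, zip(*batch))
    s, s2, r, done = s.float(), s2.float(), r.float(), done.float()

    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)   # Q(s, a) for taken actions
    with torch.no_grad():
        # TD target uses the frozen target network and the greedy next action.
        target = r + gamma * (1 - done) * target_net(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```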
