To better understand Q-Learning, let’s take a simple example:
The reward function goes like this:
To train our agent to learn an optimal policy (that is, a policy that goes right, right, down), we will use the Q-Learning algorithm.
We start by initializing the Q-table, usually with every value set to 0. So, for now, our Q-table is useless; we need to train our Q-function using the Q-Learning algorithm.
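As a minimal sketch (the number of states and the action set below are assumptions for this small grid example, not values taken from the environment above), initializing the Q-table with zeros could look like this:

```python
import numpy as np

# Assumed sizes for a small grid world like this one; the exact counts are
# illustrative, not taken from the example above.
n_states = 6     # one state per cell of the grid
n_actions = 4    # up, down, left, right

# Every Q-value starts at 0, so the table can't yet tell good actions from bad ones.
Q_table = np.zeros((n_states, n_actions))
```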
Let’s do it for 2 training timesteps:
Training timestep 1:
Because epsilon is big (= 1.0), I take a random action. In this case, I go right.
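Concretely, the action choice follows an epsilon-greedy policy. Here is a minimal sketch, reusing the hypothetical `Q_table` and `n_actions` from above:

```python
import random
import numpy as np

def epsilon_greedy_action(Q_table, state, epsilon, n_actions):
    # With probability epsilon we explore: pick a random action.
    if random.random() < epsilon:
        return random.randrange(n_actions)
    # Otherwise we exploit: pick the action with the highest estimated Q-value.
    return int(np.argmax(Q_table[state]))
```

With epsilon = 1.0 this always explores, which is exactly what happens at this first timestep.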
By going right, I get a small cheese, so I receive a positive reward, and I’m in a new state.
We can now update the Q-value of this state-action pair using our formula.
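"Our formula" here is the Q-Learning update,

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \left[ R_{t+1} + \gamma \max_{a} Q(S_{t+1}, a) - Q(S_t, A_t) \right]$$

where α is the learning rate and γ the discount factor. A single update step could be sketched like this; the values of `alpha` and `gamma` below are illustrative assumptions, not course hyperparameters:

```python
alpha = 0.1    # learning rate (illustrative value)
gamma = 0.99   # discount factor (illustrative value)

def q_learning_update(Q_table, state, action, reward, next_state, done):
    # TD target: immediate reward, plus the discounted best Q-value of the
    # next state when the episode is not over.
    best_next = 0.0 if done else Q_table[next_state].max()
    td_target = reward + gamma * best_next
    # TD error: gap between the target and our current estimate.
    td_error = td_target - Q_table[state, action]
    # Nudge the estimate a fraction alpha of the way toward the target.
    Q_table[state, action] += alpha * td_error
```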
Training timestep 2:
I take a random action again, since epsilon = 0.99 is still big. (Notice that we decay epsilon a little bit because, as training progresses, we want less and less exploration; there's a small decay sketch below.)
I take the action ‘down’. This is not a good action, since it leads me to the poison.
Because I ate the poison, I get a negative reward, and I die.
Because we’re dead, we start a new episode. But what we see here is that, with two exploration steps, my agent became smarter.
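Here is a minimal sketch of the kind of epsilon decay mentioned in timestep 2; the floor and the decay factor are assumptions, not course values:

```python
epsilon = 1.0       # start fully exploratory
eps_min = 0.05      # always keep a little exploration (illustrative value)
eps_decay = 0.99    # multiplicative decay applied after each step (illustrative value)

def decay_epsilon(epsilon):
    # Shrink epsilon a little, but never below the floor, so exploration
    # fades as training progresses.
    return max(eps_min, epsilon * eps_decay)
```

Applied once after timestep 1, this particular schedule takes epsilon from 1.0 to 0.99, which is one way to reproduce the values used above.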
As we continue exploring and exploiting the environment and updating the Q-values using the TD target, the Q-table will give us a better and better approximation of the action values. By the end of training, we’ll have an estimate of the optimal Q-function.
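Putting the pieces together, a training loop could look roughly like the sketch below. It reuses `Q_table`, `epsilon_greedy_action`, `q_learning_update`, and `decay_epsilon` from the sketches above, and assumes a hypothetical `env` whose `reset()` returns a start state and whose `step(action)` returns the next state, the reward, and a done flag; none of this is course code.

```python
epsilon = 1.0         # reset exploration before training
n_episodes = 1000     # illustrative number of training episodes

for episode in range(n_episodes):
    state = env.reset()   # start a new episode (hypothetical environment API)
    done = False
    while not done:
        # Explore or exploit according to the current epsilon.
        action = epsilon_greedy_action(Q_table, state, epsilon, n_actions)
        # Act in the environment and observe the outcome.
        next_state, reward, done = env.step(action)
        # Move Q(state, action) toward the TD target.
        q_learning_update(Q_table, state, action, reward, next_state, done)
        state = next_state
        # Reduce exploration a little after every step.
        epsilon = decay_epsilon(epsilon)
```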