The main goal of reinforcement learning is to find the optimal policy $\pi^*$ that will maximize the expected cumulative reward. This follows from the reward hypothesis on which reinforcement learning is based: all goals can be described as the maximization of the expected cumulative reward.
For instance, in a soccer game (where you’re going to train the agents in two units), the goal is to win the game. In reinforcement learning terms, we can describe this goal as maximizing the number of goals scored (when the ball crosses the goal line) in your opponent’s goal and minimizing the number of goals scored in your own goal.
In the first unit, we saw two methods to find (or, most of the time, approximate) this optimal policy $\pi^*$.
In value-based methods, we learn a value function and derive the policy implicitly from it (for instance, by acting greedily with respect to the estimated values).
On the other hand, in policy-based methods, we learn to approximate $\pi^*$ directly, without having to learn a value function.
Consequently, thanks to policy-based methods, we can directly optimize our policy to output a probability distribution over actions that leads to the best cumulative return. To do that, we define an objective function $J(\theta)$, that is, the expected cumulative reward, and we want to find the parameters $\theta$ that maximize this objective function.
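To make this concrete, here is a minimal sketch of such a parameterized policy $\pi_\theta$, written in PyTorch with made-up dimensions (a CartPole-like setting with 4 state features and 2 actions is assumed purely for illustration). The network outputs a probability distribution over actions, and its weights play the role of $\theta$, the parameters we will tune to maximize $J(\theta)$.

```python
import torch
import torch.nn as nn

class PolicyNetwork(nn.Module):
    """A small parameterized policy pi_theta(a|s): state in, action probabilities out."""
    def __init__(self, state_dim=4, n_actions=2, hidden_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_actions),
            nn.Softmax(dim=-1),  # probability distribution over actions
        )

    def forward(self, state):
        return self.net(state)

policy = PolicyNetwork()
state = torch.rand(4)                      # placeholder state, just for the sketch
probs = policy(state)                      # pi_theta(a|s)
action = torch.distributions.Categorical(probs).sample()  # sample an action from the distribution
```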
Policy-gradient methods, which we’re going to study in this unit, are a subclass of policy-based methods. In policy-based methods, the optimization is most of the time on-policy since, for each update, we only use data (trajectories) collected by our most recent version of $\pi_\theta$.
The difference between these two methods lies in how we optimize the parameter $\theta$: in policy-based methods, we search for the optimal parameters indirectly (for example, by maximizing a local approximation of the objective with techniques such as hill climbing), while in policy-gradient methods, we optimize $\theta$ directly by performing gradient ascent on (an estimate of) the objective function $J(\theta)$, as sketched below.
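The sketch below illustrates what one such on-policy, gradient-ascent update could look like. It reuses the `PolicyNetwork` sketch above and assumes a hypothetical `env` following the classic Gym API; the return computation is deliberately simplistic (undiscounted, no baseline) and is meant only to show the shape of the update, not the exact algorithm derived later in the unit.

```python
import torch

# Hypothetical environment (assumed classic Gym API: reset() -> state,
# step(action) -> (state, reward, done, info)).
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

# 1. Collect one trajectory with the current pi_theta (on-policy data).
log_probs, rewards = [], []
state, done = env.reset(), False
while not done:
    probs = policy(torch.as_tensor(state, dtype=torch.float32))
    dist = torch.distributions.Categorical(probs)
    action = dist.sample()
    log_probs.append(dist.log_prob(action))
    state, reward, done, info = env.step(action.item())
    rewards.append(reward)

# 2. Gradient ascent on J(theta), implemented as gradient descent on -J(theta):
#    the higher the return of the trajectory, the more we reinforce the actions taken.
trajectory_return = sum(rewards)                       # crude undiscounted return
loss = -torch.stack(log_probs).sum() * trajectory_return
optimizer.zero_grad()
loss.backward()
optimizer.step()
```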
Before diving deeper into how policy-gradient methods work (the objective function, the policy gradient theorem, gradient ascent, etc.), let’s study the advantages and disadvantages of policy-based methods.