Introduction


In Unit 6, we learned about Advantage Actor-Critic (A2C), a hybrid architecture combining value-based and policy-based methods that helps to stabilize training by reducing the variance with:

- An Actor that controls how our agent behaves (policy-based method).
- A Critic that measures how good the taken action is (value-based method).

Today we’ll learn about Proximal Policy Optimization (PPO), an algorithm that improves our agent’s training stability by avoiding policy updates that are too large. To do that, we use a ratio that indicates the difference between our current and old policy and clip this ratio to a specific range $[1 - \epsilon, 1 + \epsilon]$.

Doing this will ensure that our policy update will not be too large and that the training is more stable.
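To make the clipping idea concrete, here is a minimal sketch of PPO's clipped surrogate objective in PyTorch. The tensor names (`log_probs_new`, `log_probs_old`, `advantages`) and the `clip_epsilon` default of 0.2 are illustrative assumptions, not code from this unit:

```python
import torch

def ppo_clipped_objective(log_probs_new, log_probs_old, advantages, clip_epsilon=0.2):
    """Clipped surrogate objective (to be maximized)."""
    # Probability ratio r(theta) = pi_theta(a|s) / pi_theta_old(a|s),
    # computed in log space for numerical stability
    ratio = torch.exp(log_probs_new - log_probs_old)

    # Unclipped surrogate objective
    unclipped = ratio * advantages

    # Clipped surrogate: the ratio is constrained to [1 - epsilon, 1 + epsilon]
    clipped = torch.clamp(ratio, 1.0 - clip_epsilon, 1.0 + clip_epsilon) * advantages

    # Taking the element-wise minimum keeps the objective pessimistic,
    # so the policy gains nothing from moving too far from the old policy
    return torch.min(unclipped, clipped).mean()
```

Because the objective takes the minimum of the clipped and unclipped terms, the policy has no incentive to push the ratio outside $[1 - \epsilon, 1 + \epsilon]$, which is what keeps each update small and the training stable.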

This Unit is in two parts:

- In the first part, you'll learn the theory behind PPO and code your PPO agent from scratch.
- In the second part, you'll get deeper into PPO optimization by using Sample-Factory to train an agent that plays VizDoom (an open-source version of Doom).

Environment
These are the environments you're going to use to train your agents: VizDoom environments

Sound exciting? Let’s get started! 🚀