PPO-LunarLander / README.md

Commit History

Uploaded PPO model trained for 1,000,000 steps in 16 envs
b5df252

daniel-gordon committed on