---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 254.49 +/- 35.78
      name: mean_reward
      verified: false
---
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2 using the Stable-Baselines3 library.
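The training configuration is not documented in this card. For reference, here is a minimal sketch of how a comparable model can be trained with Stable-Baselines3; the `total_timesteps` budget and the default hyperparameters below are assumptions, not the actual training setup:

```python
import gymnasium as gym
from stable_baselines3 import PPO

# Illustrative training run; hyperparameters and timestep budget are assumptions
env = gym.make("LunarLander-v2")
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=1_000_000)
model.save("ppo-LunarLander-v2")
```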
## Installation
Before running the model, follow these steps to set up a virtual environment and install the dependencies.
### 1️⃣ Create and activate a virtual environment
```bash
# Create a virtual environment named "py_env_lunar_lander"
python3 -m venv py_env_lunar_lander

# Activate the virtual environment (on macOS/Linux)
source py_env_lunar_lander/bin/activate
```
### 2️⃣ Install dependencies
```bash
# Update the package list and install system dependencies
sudo apt-get update
sudo apt-get install -y python3-opengl swig ffmpeg xvfb

# Install the required Python packages
pip install --upgrade pip
pip install pyvirtualdisplay imageio huggingface_sb3 torch
pip install stable-baselines3==2.0.0a5 "gymnasium[box2d]"
```
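The `xvfb` and `pyvirtualdisplay` packages are only needed on headless machines (e.g. a remote server) where no display is available for rendering. In that case, start a virtual display before creating the environment; a minimal sketch:

```python
from pyvirtualdisplay import Display

# Start a virtual framebuffer so env.render() works without a physical display
virtual_display = Display(visible=0, size=(1400, 900))
virtual_display.start()
```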
## Usage (with Stable-Baselines3)
After installing the dependencies, run the following script to load the trained model, play one episode, and save it as a video.
```python
import gymnasium as gym
import imageio
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Define the repo ID and model filename
repo_id = "aiamnoone/ppo-LunarLander-v2"  # Hugging Face repo ID
filename = "ppo-LunarLander-v2.zip"       # Model file name

# Download and load the model from the Hugging Face Hub
model_path = load_from_hub(repo_id, filename)
model = PPO.load(model_path)

# Create the environment
env = gym.make("LunarLander-v2", render_mode="rgb_array")
obs, info = env.reset()
done = False
frames = []

# Run one episode and collect frames
while not done:
    action, _ = model.predict(obs, deterministic=True)  # Use the trained policy
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated  # The episode ends on termination or truncation
    frame = env.render()   # Get the frame (RGB image)
    frames.append(frame)   # Save the frame
env.close()

# Save the frames as a video
video_path = "lunar_lander.mp4"
imageio.mimsave(video_path, frames, fps=30)  # Adjust FPS if needed

print(f"Video saved as '{video_path}'. Open it to watch!")
```