---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 254.49 +/- 35.78
      name: mean_reward
      verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [Stable-Baselines3](https://github.com/DLR-RM/stable-baselines3) library.
## Installation
Follow these steps to set up a **virtual environment** and install the dependencies before running the model.
### 1️⃣ Create and activate a virtual environment
```bash
# Create a virtual environment named "py_env_lunar_lander"
python3 -m venv py_env_lunar_lander
# Activate the virtual environment
# On macOS/Linux:
source py_env_lunar_lander/bin/activate
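# On Windows (PowerShell), activate with:
# py_env_lunar_lander\Scripts\Activate.ps1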
```
### 2️⃣ Install dependencies
```bash
# Update package list and install system dependencies
sudo apt-get update
sudo apt-get install -y python3-opengl swig ffmpeg xvfb
# Install required Python packages
pip install --upgrade pip
pip install pyvirtualdisplay imageio[ffmpeg] huggingface_sb3 torch
pip install stable-baselines3==2.0.0a5 gymnasium[box2d]
```
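To confirm the setup before moving on, you can check that the pinned packages import cleanly and print their versions:
```bash
# Quick sanity check of the installation
python -c "import stable_baselines3, gymnasium; print(stable_baselines3.__version__, gymnasium.__version__)"
```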
## Usage (with Stable-Baselines3)
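If you are working on a headless machine (the reason `xvfb` and `pyvirtualdisplay` are installed above), start a virtual display first so that `rgb_array` rendering works. A minimal sketch:
```python
from pyvirtualdisplay import Display

# Start a virtual display for headless rendering
virtual_display = Display(visible=0, size=(1400, 900))
virtual_display.start()
```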
After installing the dependencies, run the following script to load the trained model, play one episode, and save a video.
```python
import gymnasium as gym
import imageio
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Define the repo ID and model filename
repo_id = "aiamnoone/ppo-LunarLander-v2"  # Hugging Face repo ID
filename = "ppo-LunarLander-v2.zip"       # Model file name

# Download and load the model from the Hugging Face Hub
model_path = load_from_hub(repo_id, filename)
model = PPO.load(model_path)

# Create the environment
env = gym.make("LunarLander-v2", render_mode="rgb_array")
obs, info = env.reset()
done = False
frames = []

# Run one episode and collect frames
while not done:
    action, _ = model.predict(obs, deterministic=True)  # Query the trained policy
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated  # Episode ends on either condition
    frames.append(env.render())    # RGB frame for the video

env.close()

# Save the frames as a video
video_path = "lunar_lander.mp4"
imageio.mimsave(video_path, frames, fps=30)  # Adjust FPS if needed
print(f"Video saved as '{video_path}'. Open it to watch!")
```
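To sanity-check the reported score (254.49 +/- 35.78), you can also evaluate the policy over several episodes with Stable-Baselines3's `evaluate_policy` helper. A minimal sketch; the exact numbers will vary with random seeding:
```python
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from huggingface_sb3 import load_from_hub

# Load the trained model and create a fresh evaluation environment
model = PPO.load(load_from_hub("aiamnoone/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip"))
eval_env = gym.make("LunarLander-v2")

# Average episodic reward over 10 evaluation episodes
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward = {mean_reward:.2f} +/- {std_reward:.2f}")
```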