---
license: mit
task_categories:
- robotics
---
<div align="center">
<div style="margin-bottom: 30px">
<div style="display: flex; flex-direction: column; align-items: center; gap: 8px">
<h1 align="center" style="margin: 0; line-height: 1;">
<span style="font-size: 48px; font-weight: 600;">PSEC</span>
</h1>
</div>
<h2 style="font-size: 32px; margin: 20px 0;">Skill Expansion and Composition in Parameter Space</h2>
<h4 style="color: #666; margin-bottom: 25px;">International Conference on Learning Representations (ICLR), 2025</h4>
<p align="center" style="margin: 20px 0;">
<a href="https://huggingface.co/papers/2502.05932">
<img src="https://img.shields.io/badge/arXiv-2502.05932-b31b1b.svg">
</a>
<!-- -->
<a href="https://ltlhuuu.github.io/PSEC/">
<img src="https://img.shields.io/badge/๐_Project_Page-PSEC-blue.svg">
</a>
<!-- -->
<a href="https://arxiv.org/pdf/2502.05932.pdf">
<img src="https://img.shields.io/badge/๐_Paper-PSEC-green.svg">
</a>
</p>
</div>
</div>
<div align="center">
<p style="font-size: 20px; font-weight: 600; margin-bottom: 20px;">
🔥 Official Implementation
</p>
<p style="font-size: 18px; max-width: 800px; margin: 0 auto;">
<b>PSEC</b> is a novel framework designed to:
</p>
</div>
<div align="center">
<p style="font-size: 15px; font-weight: 600; margin-bottom: 20px;">
🚀 <b>Facilitate</b> efficient and flexible skill expansion and composition <br>
🔄 <b>Iteratively evolve</b> the agents' capabilities<br>
⚡ <b>Efficiently address</b> new challenges
</p>
</div>
<p align="center">
<img src="assets/intro.png" width="800" style="margin: 40px 0;">
</p>
<!-- <div align="center">
<a href="https://github.com/ltlhuuu/PSEC/stargazers">
<img src="https://img.shields.io/github/stars/ltlhuuu/PSEC?style=social" alt="GitHub stars">
</a>
<a href="https://github.com/ltlhuuu/PSEC/network/members">
<img src="https://img.shields.io/github/forks/ltlhuuu/PSEC?style=social" alt="GitHub forks">
</a>
<a href="https://github.com/ltlhuuu/PSEC/issues">
<img src="https://img.shields.io/github/issues/ltlhuuu/PSEC?style=social" alt="GitHub issues">
</a>
</div> -->
## Quick start
Clone this repository and navigate to the PSEC folder:
```bash
git clone https://github.com/ltlhuuu/PSEC.git
cd PSEC
```
## Environment Installation
Environment configuration and dependencies are available in `environment.yaml` and `requirements.txt`.

Create a conda environment for the experiments:
```bash
conda create -n PSEC python=3.9
conda activate PSEC
```
Then install the remaining requirements (MuJoCo must already be installed; if not, see [MuJoCo installation](#mujoco-installation)):
```bash
pip install -r requirements.txt
```
Install the `MetaDrive` environment via:
```bash
pip install git+https://github.com/HenryLHH/metadrive_clean.git@main
```
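As a quick sanity check that the fork installed correctly, you can try importing it. This is a minimal sketch; it assumes the fork keeps the upstream package name `metadrive`:
```python
# Minimal import check for the MetaDrive fork (no environment is created).
# Assumes the fork keeps the upstream package name `metadrive`.
import metadrive

print("MetaDrive imported from", metadrive.__file__)
```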
### MuJoCo installation
Download MuJoCo:
```bash
mkdir ~/.mujoco
cd ~/.mujoco
wget https://github.com/google-deepmind/mujoco/releases/download/2.1.0/mujoco210-linux-x86_64.tar.gz
tar -zxvf mujoco210-linux-x86_64.tar.gz
cd mujoco210
wget https://www.roboti.us/file/mjkey.txt
```
Then add the following line to your `~/.bashrc`:
```bash
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mujoco210/bin
```
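After reloading your shell (e.g. `source ~/.bashrc`), you can verify that Python can see MuJoCo. The check below assumes the `mujoco-py` binding, the usual companion of a `mujoco210` + `mjkey.txt` setup; adjust it if `requirements.txt` pins a different binding:
```python
# Sanity check that MuJoCo 2.1.0 is visible from Python.
# Assumes the mujoco-py binding; the first import compiles its
# Cython extensions, so it can take a few minutes.
import mujoco_py

print("mujoco-py loaded from", mujoco_py.__file__)
```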
## Run experiments
### Pretrain
Pretrain the model with the following command. Pre-trained models are also available; you can download them from [here](https://drive.google.com/drive/folders/1lpcShmYoKVt4YMH66JBiA0MhYEV9aEYy?usp=sharing).
```bash
export XLA_PYTHON_CLIENT_PREALLOCATE=False
CUDA_VISIBLE_DEVICES=0 python launcher/examples/train_pretrain.py --variant 0 --seed 0
```
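`XLA_PYTHON_CLIENT_PREALLOCATE=False` stops JAX from reserving most of the GPU's memory up front, which makes it easier to share a GPU between runs. If you prefer launching from Python, the flags must be set before JAX is first imported; the following is a sketch of an equivalent launch, assuming only the entry-point path shown in the command above:
```python
# Set JAX/GPU environment flags before the first `import jax` anywhere
# in the process; they are ignored once JAX has initialized.
import os

os.environ["XLA_PYTHON_CLIENT_PREALLOCATE"] = "False"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import runpy
import sys

# Mirror the CLI arguments of the command above.
sys.argv = ["train_pretrain.py", "--variant", "0", "--seed", "0"]
runpy.run_path("launcher/examples/train_pretrain.py", run_name="__main__")
```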
### LoRA finetune
Train the skill policies with LoRA to achieve skill expansion. Pre-trained models are also available; you can download them from [here](https://drive.google.com/drive/folders/1lpcShmYoKVt4YMH66JBiA0MhYEV9aEYy?usp=sharing).
```bash
CUDA_VISIBLE_DEVICES=0 python launcher/examples/train_lora_finetune.py --com_method 0 --model_cls 'LoRALearner' --variant 0 --seed 0
```
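At a high level, LoRA keeps the pretrained weights frozen and learns a small low-rank residual per skill, which is what makes later composition in parameter space cheap. The sketch below illustrates that parameterization with NumPy; it is purely illustrative, and all names and shapes are hypothetical rather than taken from the repository:
```python
# Illustrative LoRA parameterization: frozen base weight W plus a
# trainable low-rank update B @ A (rank << d_in, d_out).
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 64, 64, 8

W = rng.normal(size=(d_in, d_out))          # frozen pretrained weight
B = np.zeros((d_in, rank))                  # trainable, zero-initialized
A = rng.normal(size=(rank, d_out)) * 0.01   # trainable low-rank factor

def lora_forward(x):
    # Base path plus low-rank residual: x @ W + x @ (B @ A)
    return x @ W + x @ (B @ A)

x = rng.normal(size=(1, d_in))
print(lora_forward(x).shape)  # (1, 64)
```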
### Context-aware Composition
Train the context-aware module to adaptively leverage different skill knowledge for solving the tasks. You can download the pretrained models and datasets from [here](https://drive.google.com/drive/folders/1lpcShmYoKVt4YMH66JBiA0MhYEV9aEYy?usp=sharing). Then run the following command:
```bash
CUDA_VISIBLE_DEVICES=0 python launcher/examples/train_lora_finetune.py --com_method 0 --model_cls 'LoRASLearner' --variant 0 --seed 0
```
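Conceptually, the context-aware module scores each skill and blends their low-rank updates directly in parameter space. The toy sketch below renders that idea with softmax-normalized weights; it is an illustration of the concept under assumed names, not the code behind `LoRASLearner`:
```python
# Toy parameter-space composition: blend several LoRA skill updates
# with context-dependent weights. Purely illustrative.
import numpy as np

def compose_skills(W0, skill_loras, context_scores):
    """W0: frozen base weight; skill_loras: list of (B_i, A_i) factors;
    context_scores: one logit per skill from a context module."""
    scores = np.asarray(context_scores, dtype=float)
    alphas = np.exp(scores - scores.max())
    alphas /= alphas.sum()                       # softmax weights
    W = W0.copy()
    for alpha, (B, A) in zip(alphas, skill_loras):
        W += alpha * (B @ A)                     # weighted low-rank update
    return W
```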
## Citations
If you find our paper and code useful for your research, please cite:
```bibtex
@inproceedings{
    liu2025psec,
    title={Skill Expansion and Composition in Parameter Space},
    author={Tenglong Liu and Jianxiong Li and Yinan Zheng and Haoyi Niu and Yixing Lan and Xin Xu and Xianyuan Zhan},
    booktitle={The Thirteenth International Conference on Learning Representations},
    year={2025},
    url={https://openreview.net/forum?id=GLWf2fq0bX}
}
```
## Acknowledgements
Parts of this code are adapted from [IDQL](https://github.com/philippe-eecs/IDQL). |