---
license: mit
title: >-
  An End-to-End Reinforcement Learning Approach for Job-Shop Scheduling Problems
  Based on Constraint Programming
sdk: gradio
emoji: 🚀
---

# An End-to-End Reinforcement Learning Approach for Job-Shop Scheduling Problems Based on Constraint Programming

Check out the paper on arXiv: https://arxiv.org/abs/2306.05747

A Reinforcement-Learning-based Job-Shop Scheduling solver that uses an underlying CP model as its environment. For fast inference, check out the cached examples below. Any Job-Shop Scheduling instance following the standard specification is compatible; check out this website for more instances. Increasing the number of workers yields better solutions but slows down solving. This behavior differs from the paper's repository, where agents run sequentially; here, agents run in parallel (a technical limitation of this platform).
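As background, the standard specification is a plain-text layout: the first line gives the number of jobs and machines, and each following line lists one job's operations as alternating machine-id / processing-time pairs. Below is a minimal parsing sketch under that assumption; the function name and return shape are illustrative, not part of this repository.

```python
# Minimal sketch of a parser for standard-format JSSP instances.
# Assumes the common OR-Library layout: "n_jobs n_machines" on the first
# line, then one line per job with (machine id, processing time) pairs.

def parse_jssp_instance(path: str):
    with open(path) as f:
        tokens = list(map(int, f.read().split()))
    n_jobs, n_machines = tokens[0], tokens[1]
    body = tokens[2:]
    jobs = []
    for j in range(n_jobs):
        row = body[2 * n_machines * j : 2 * n_machines * (j + 1)]
        # each job visits every machine once, as (machine, duration) pairs
        jobs.append([(row[2 * i], row[2 * i + 1]) for i in range(n_machines)])
    return n_jobs, n_machines, jobs
```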

For large instances, we recommend running the approach locally rather than through this interface: the interface adds significant overhead, and the resources available on this platform are limited (1 vCPU, no GPU).

Please check out the GitHub repo for more information: https://github.com/ingambe/End2End-Job-Shop-Scheduling-CP. That repository contains the source code for the paper "An End-to-End Reinforcement Learning Approach for Job-Shop Scheduling Problems Based on Constraint Programming". This work proposes an approach to designing a Reinforcement Learning (RL) environment using Constraint Programming (CP), together with a training algorithm that does not rely on any custom reward or observation for the job-shop scheduling (JSS) problem.

## Installation

To use the code, first clone the repository:

```bash
git clone https://github.com/ingambe/End2End-Job-Shop-Scheduling-CP.git
```

Optionally, create a new virtual environment, then install the required dependencies:

```bash
pip install -r requirements.txt
```

## Training the Reinforcement Learning Agent

The `main.py` script trains the agent from scratch:

```bash
python main.py
```

You can train your agent on different instances by replacing the files in the `instances_train/` folder.
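For illustration, a tiny 3-job, 3-machine instance in the standard format might look like this (machine ids are 0-indexed; each pair is a machine id followed by a processing time):

```text
3 3
0 3 1 2 2 2
0 2 2 1 1 4
1 4 2 3 0 1
```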

The pre-trained checkpoint of the neural network is saved in the `checkpoint.pt` file.
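The `.pt` extension suggests a PyTorch artifact; here is a hedged way to inspect it. Whether it holds a raw `state_dict` or a richer dictionary depends on how `main.py` serializes it, so the sketch only prints what it finds.

```python
import torch

# Load the checkpoint on CPU and peek at its structure; the exact
# contents are an assumption -- they depend on how main.py saves it.
ckpt = torch.load("checkpoint.pt", map_location="cpu")
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))
```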

## Solving benchmark instances

The `fast_solve.py` script solves the job-shop scheduling instances stored in the `instances_run/` folder and writes the results to a `results.csv` file. For better performance, run the script with Python's `-O` flag (which disables `assert` statements):

```bash
python -O fast_solve.py
```
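Once the run finishes, the CSV can be inspected with pandas. The schema is not documented here, so the sketch prints the header rather than assuming column names:

```python
import pandas as pd

# Inspect the output of fast_solve.py; column names are whatever the
# script writes, so print them rather than assuming a schema.
df = pd.read_csv("results.csv")
print(df.columns.tolist())
print(df.head())
```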

To obtain solutions from the dispatching heuristics (FIFO, MTWR, etc.), execute the script `static_dispatching/benchmark_static_dispatching.py`.
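As a reference point, dispatching heuristics build a schedule greedily with a priority rule. The sketch below illustrates MTWR (Most Total Work Remaining) in a simplified greedy variant; it is our own illustration, not the repository's actual implementation.

```python
# Simplified greedy MTWR dispatcher: repeatedly start the next operation
# of the job with the most processing time remaining. Illustrative only;
# the static_dispatching code in the repository may differ.

def dispatch_mtwr(jobs):
    """jobs[j] is a list of (machine, duration) operations; returns the makespan."""
    n_jobs = len(jobs)
    next_op = [0] * n_jobs                  # index of each job's next operation
    job_free = [0] * n_jobs                 # time each job's next op may start
    machine_free = {}                       # time each machine becomes idle
    remaining = [sum(d for _, d in ops) for ops in jobs]
    makespan = 0
    while any(next_op[j] < len(jobs[j]) for j in range(n_jobs)):
        # among jobs with operations left, pick the one with max work remaining
        j = max((j for j in range(n_jobs) if next_op[j] < len(jobs[j])),
                key=lambda j: remaining[j])
        machine, duration = jobs[j][next_op[j]]
        start = max(job_free[j], machine_free.get(machine, 0))
        job_free[j] = machine_free[machine] = start + duration
        remaining[j] -= duration
        next_op[j] += 1
        makespan = max(makespan, start + duration)
    return makespan

# Example on the tiny instance shown earlier: prints the MTWR makespan.
jobs = [[(0, 3), (1, 2), (2, 2)],
        [(0, 2), (2, 1), (1, 4)],
        [(1, 4), (2, 3), (0, 1)]]
print(dispatch_mtwr(jobs))
```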

## Looking for the environment only?

The environment alone can be installed as a standalone package using:

```bash
pip install jss_cp
```

For extra performance, the code is compiled using mypyc. Check out the environment repository: https://github.com/ingambe/JobShopCPEnv

## Citation

If you use this environment in your research, please cite the following paper:

```bibtex
@article{Tassel_Gebser_Schekotihin_2023,
  title={An End-to-End Reinforcement Learning Approach for Job-Shop Scheduling Problems Based on Constraint Programming},
  volume={33},
  url={https://ojs.aaai.org/index.php/ICAPS/article/view/27243},
  DOI={10.1609/icaps.v33i1.27243},
  number={1},
  journal={Proceedings of the International Conference on Automated Planning and Scheduling},
  author={Tassel, Pierre and Gebser, Martin and Schekotihin, Konstantin},
  year={2023},
  month={Jul.},
  pages={614-622}
}
```

## License

MIT License