duandaxia and nielsr (HF Staff) committed
Commit f8e8659 · verified · 1 Parent(s): 620384c

Add model card for RTI-DP (#1)


- Add model card for RTI-DP (90a51979c32b4680a0c7483941896e74ffbc386c)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
  1. README.md +49 -0
README.md ADDED
@@ -0,0 +1,49 @@
---
pipeline_tag: robotics
library_name: diffusers
license: mit
---

# Real-Time Iteration Scheme for Diffusion Policy (RTI-DP)

This repository contains the official model weights and code for the paper **"Real-Time Iteration Scheme for Diffusion Policy"**.

- 📚 [Paper](https://huggingface.co/papers/2508.05396)
- 🌐 [Project Page](https://rti-dp.github.io/)
- 💻 [Code](https://github.com/RTI-DP/rti-dp)

Diffusion policies have demonstrated impressive performance in robotic manipulation tasks, but their long inference time, which stems from extensive iterative denoising, limits their use in latency-critical settings. Inspired by the Real-Time Iteration (RTI) scheme from optimal control, RTI-DP substantially reduces inference time without additional training or policy redesign: it accelerates the denoising optimization by reusing the solution from the previous time step as the initial guess. This allows the scheme to be integrated into many pre-trained diffusion-based policies, making them suitable for real-time robotic applications at comparable task performance.

<div align="center">
  <img src="https://github.com/RTI-DP/rti-dp/raw/main/media/rti-dp.gif" alt="RTI-DP Teaser" width="600"/>
</div>

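The warm-start idea described above can be sketched as follows. This is an illustrative, simplified example only (not the official implementation); `policy.denoise`, `env.step`, and all hyperparameters below are hypothetical placeholders for the corresponding pieces of the real codebase.

```python
# Illustrative sketch of the real-time iteration (warm-start) idea.
# NOT the official RTI-DP implementation: `policy` and `env` are assumed to
# expose hypothetical `denoise(traj, obs, num_steps)` and `step(action)` APIs.
import torch


def rti_control_loop(policy, env, obs, horizon=16, action_dim=7, warm_steps=1):
    # A vanilla diffusion policy re-samples noise and runs the full denoising
    # chain at every control step; here we sample noise only once.
    action_traj = torch.randn(horizon, action_dim)

    done = False
    while not done:
        # Warm start: refine the previous solution with a few (often a single)
        # denoising iteration(s) instead of denoising from scratch.
        action_traj = policy.denoise(action_traj, obs, num_steps=warm_steps)

        # Execute the first action of the refined trajectory.
        obs, done = env.step(action_traj[0])

        # Shift the trajectory one step so it stays aligned with the new
        # observation, and use a noisy copy of the last action as the initial
        # guess for the newly exposed tail step.
        tail = action_traj[-1:] + 0.1 * torch.randn(1, action_dim)
        action_traj = torch.cat([action_traj[1:], tail], dim=0)
```

The intuition is that consecutive control steps produce similar action trajectories, so the previous solution is already a good initial guess and only a small amount of additional denoising is needed per step.
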
## Usage

This model is designed to be used with its official codebase. For detailed installation instructions, environment setup, and further information, please refer to the [official GitHub repository](https://github.com/RTI-DP/rti-dp), which is based on [Diffusion Policy](https://github.com/real-stanford/diffusion_policy).

### Evaluation

To evaluate RTI-DP policies with DDPM, you can use the provided script from the repository:

```shell
python ../eval_rti.py --config-name=eval_diffusion_rti_lowdim_workspace.yaml
```

For RTI-DP-scale checkpoints, refer to the [duandaxia/rti-dp-scale](https://huggingface.co/duandaxia/rti-dp-scale) repository on Hugging Face.

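The checkpoints can also be fetched programmatically with the standard `huggingface_hub` client, as in the minimal sketch below (the exact file layout inside the repository is not assumed here, so adjust paths to whatever the repository actually contains):

```python
# Minimal sketch: download RTI-DP-scale checkpoints from the Hugging Face Hub.
# Inspect the returned directory to locate the checkpoint files expected by
# the evaluation config.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="duandaxia/rti-dp-scale")
print(f"Checkpoints downloaded to: {local_dir}")
```
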
## Citation

If you find our work useful, please consider citing our paper:

```bibtex
@misc{duan2025rtidp,
  title={Real-Time Iteration Scheme for Diffusion Policy},
  author={Yufei Duan and Hang Yin and Danica Kragic},
  year={2025},
}
```

## Acknowledgements

We thank the authors of [Diffusion Policy](https://github.com/real-stanford/diffusion_policy), [Consistency Policy](https://github.com/Aaditya-Prasad/Consistency-Policy/), and [Streaming Diffusion Policy](https://github.com/Streaming-Diffusion-Policy/streaming_diffusion_policy/) for sharing their codebases.