Push model using huggingface_hub.
Files changed:
- README.md +9 -83
- config.json +12 -0
- model.safetensors +3 -0
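
The three files in this commit are what `PyTorchModelHubMixin.push_to_hub()` writes: `config.json` serialized from the model's `__init__` arguments, the weights as `model.safetensors`, and a stub model card. A minimal sketch of that workflow, assuming a `Skip_BART` module whose constructor arguments mirror the `config.json` added below (the class body is a placeholder, not the repository's actual implementation):

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin


class Skip_BART(nn.Module, PyTorchModelHubMixin):
    # Constructor arguments assumed from config.json; the real layers live in
    # the GitHub repository (https://github.com/RS2002/Skip-BART).
    def __init__(self, class_num=(180, 256), ffn_dims=2048, heads=8,
                 hidden_size=1024, layers=8, max_position_embeddings=1024,
                 pretrain=False):
        super().__init__()
        # ... BART encoder/decoder with skip connections would be defined here ...


model = Skip_BART()
# Writes config.json (init kwargs), model.safetensors (state dict), and the
# auto-generated README shown in the diff below, then uploads them to the Hub.
model.push_to_hub("RS2002/Skip-BART")
```
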
README.md
CHANGED
@@ -1,83 +1,9 @@
-
-
-
-
-
-
-
-
-
-
-- **Version**: 1.0
-
-- **Release Date**: August 2025
-
-- **Developers**: Zijian Zhao, Dian Jin
-
-- **Organization**: HKUST, PolyU
-
-- **License**: Apache License 2.0
-
-- **Paper**: [Automatic Stage Lighting Control: Is it a Rule-Driven Process or Generative Task?](https://arxiv.org/abs/2506.01482)
-
-- **Citation:**
-
-```
-@article{zhao2025automatic,
-  title={Automatic Stage Lighting Control: Is it a Rule-Driven Process or Generative Task?},
-  author={Zhao, Zijian and Jin, Dian and Zhou, Zijing and Zhang, Xiaoyu},
-  journal={arXiv preprint arXiv:2506.01482},
-  year={2025}
-}
-```
-
-- **Contact**: [email protected]
-
-- **Repository**: https://github.com/RS2002/Skip-BART
-
-## Model Description
-
-Skip-BART is a transformer-based model built on the Bidirectional and Auto-Regressive Transformers (BART) architecture, designed for automatic stage lighting control. It generates lighting sequences synchronized with music input, treating stage lighting as a generative task. The model processes music data in an octuple format and outputs lighting control parameters, leveraging a skip-connection-enhanced BART structure for improved performance.
-
-- **Architecture**: BART with skip connections
-- **Input Format**: Encoder input (batch_size, length, 512), decoder input (batch_size, length, 2), attention masks (batch_size, length)
-- **Output Format**: Hidden states of dimension [batch_size, length, 1024]
-- **Hidden Size**: 1024
-- **Training Objective**: Pre-training on music data, followed by fine-tuning for lighting sequence generation
-- **Tasks Supported**: Stage lighting sequence generation
-
-## Training Data
-
-The model was trained on the **RPMC-L2** dataset:
-
-- **Dataset Source**: [RPMC-L2](https://zenodo.org/records/14854217?token=eyJhbGciOiJIUzUxMiJ9.eyJpZCI6IjM5MDcwY2E5LTY0MzUtNGZhZC04NzA4LTczMjNhNTZiOGZmYSIsImRhdGEiOnt9LCJyYW5kb20iOiI1YWRkZmNiMmYyOGNiYzI4ZWUxY2QwNTAyY2YxNTY4ZiJ9.0Jr6GYfyyn02F96eVpkjOtcE-MM1wt-_ctOshdNGMUyUKI15-9Rfp9VF30_hYOTqv_9lLj-7Wj0qGyR3p9cA5w)
-- **Description**: Contains music and corresponding stage lighting data in a format suitable for training Skip-BART.
-- **Details**: Refer to the [paper](https://arxiv.org/abs/2506.01482) for dataset specifics.
-
-## Usage
-
-### Installation
-
-```shell
-git clone https://huggingface.co/RS2002/Skip-BART
-```
-
-### Example Code
-
-```python
-import torch
-from model import Skip_BART
-
-# Load the model
-model = Skip_BART.from_pretrained("RS2002/Skip-BART")
-
-# Example input
-x_encoder = torch.rand((2, 1024, 512))
-x_decoder = torch.randint(0, 10, (2, 1024, 2))
-encoder_attention_mask = torch.zeros((2, 1024))
-decoder_attention_mask = torch.zeros((2, 1024))
-
-# Forward pass
-output = model(x_encoder, x_decoder, encoder_attention_mask, decoder_attention_mask)
-print(output.last_hidden_state.size())  # Output: [2, 1024, 1024]
-```
+---
+tags:
+- model_hub_mixin
+- pytorch_model_hub_mixin
+---
+
+This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
+- Library: [More Information Needed]
+- Docs: [More Information Needed]
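
The replacement README carries only the auto-generated mixin tags, so the usage details now live in the removed section above. A short sketch of reloading this checkpoint through the mixin, assuming the `Skip_BART` class from the linked GitHub repository is importable locally (as in the removed example):

```python
# Reload the pushed weights via the mixin's from_pretrained; `model` here is
# the module file from https://github.com/RS2002/Skip-BART, assumed to be on
# the Python path (it is not a pip-installable package per the original card).
from model import Skip_BART

model = Skip_BART.from_pretrained("RS2002/Skip-BART")
model.eval()
# Interface per the removed README: encoder input (B, L, 512), decoder input
# (B, L, 2), attention masks (B, L); output last_hidden_state is (B, L, 1024).
```
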
config.json
ADDED
@@ -0,0 +1,12 @@
+{
+  "class_num": [
+    180,
+    256
+  ],
+  "ffn_dims": 2048,
+  "heads": 8,
+  "hidden_size": 1024,
+  "layers": 8,
+  "max_position_embeddings": 1024,
+  "pretrain": false
+}
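
Under the mixin, `config.json` stores the keyword arguments that `from_pretrained` passes back to the model's constructor. A small sketch of reading it directly, assuming only the file layout shown above:

```python
import json

from huggingface_hub import hf_hub_download

# Download just the config file from the repository and inspect the hyperparameters.
config_path = hf_hub_download(repo_id="RS2002/Skip-BART", filename="config.json")
with open(config_path) as f:
    config = json.load(f)

print(config["hidden_size"], config["layers"], config["heads"])  # 1024 8 8
print(config["class_num"])                                       # [180, 256]
# from_pretrained effectively rebuilds the model as Skip_BART(**config).
```
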
model.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bad4aa09fb0d034f7e10494b48a3260be4b727e110e9ebc79bfa5c2397149cf2
+size 894159240
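
`model.safetensors` is tracked with Git LFS, so the diff only shows the pointer file; the actual weights (894,159,240 bytes) resolve when the file is downloaded. A sketch for fetching and inspecting them without building the model, assuming `safetensors` and `huggingface_hub` are installed:

```python
from huggingface_hub import hf_hub_download
from safetensors import safe_open

# Resolves the LFS pointer above and downloads the real weight file (~0.9 GB).
weights_path = hf_hub_download(repo_id="RS2002/Skip-BART", filename="model.safetensors")

# List parameter names and shapes, loading one tensor at a time.
with safe_open(weights_path, framework="pt", device="cpu") as f:
    for name in f.keys():
        print(name, f.get_tensor(name).shape)
```
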