Skip-BART

This description was generated by Grok 3.

Model Details

Model Description

Skip-BART is a transformer-based model built on the Bidirectional and Auto-Regressive Transformers (BART) architecture, designed for automatic stage lighting control. It generates lighting sequences synchronized with music input, treating stage lighting as a generative task. The model processes music data in an octuple format and outputs lighting control parameters, leveraging a skip-connection-enhanced BART structure for improved performance.

  • Architecture: BART with skip connections
  • Parameters: ~224M (F32 safetensors weights)
  • Input Format: encoder input (batch_size, length, 512), decoder input (batch_size, length, 2), attention masks (batch_size, length)
  • Output Format: hidden states of shape (batch_size, length, 1024)
  • Hidden Size: 1024
  • Training Objective: Pre-training on music data, followed by fine-tuning for lighting sequence generation
  • Tasks Supported: Stage lighting sequence generation
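
The exact layer wiring is defined in the repository's model.py. As a rough orientation only, the following is a minimal, self-contained sketch of one common way to add skip connections between the encoder and decoder stacks of a BART-style model. The input and output shapes follow the list above; the layer count, head count, vocabulary size, and the U-Net-style pairing of encoder and decoder layers are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Illustrative sketch only -- the real Skip_BART lives in model.py of the
# repository. Layer/head counts and the U-Net-style encoder-decoder pairing
# are assumptions; only the input/output shapes match the documented model.
class SkipBARTSketch(nn.Module):
    def __init__(self, d_model=1024, n_heads=8, n_layers=4, vocab_size=256):
        super().__init__()
        self.enc_proj = nn.Linear(512, d_model)             # music features -> hidden
        self.dec_embed = nn.Embedding(vocab_size, d_model)  # lighting codes -> hidden
        self.encoder_layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
             for _ in range(n_layers)]
        )
        self.decoder_layers = nn.ModuleList(
            [nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
             for _ in range(n_layers)]
        )

    def forward(self, x_encoder, x_decoder):
        h = self.enc_proj(x_encoder)  # (B, L, 512) -> (B, L, 1024)
        # Keep every encoder layer's output so the decoder can "skip" to it.
        enc_states = []
        for layer in self.encoder_layers:
            h = layer(h)
            enc_states.append(h)
        # Sum the embeddings of the two decoder channels: (B, L, 2) -> (B, L, 1024).
        out = self.dec_embed(x_decoder).sum(dim=2)
        # Pair decoder layer i with encoder layer (n_layers - 1 - i), U-Net style.
        for layer, memory in zip(self.decoder_layers, reversed(enc_states)):
            out = layer(out, memory)
        return out  # (B, L, 1024)
```

With tensors of the shapes listed above (an x_encoder of shape (batch, length, 512) and an integer x_decoder of shape (batch, length, 2)), this sketch returns a (batch, length, 1024) tensor, matching the documented output shape.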

Training Data

The model was trained on the RPMC-L2 dataset:

  • Dataset Source: RPMC-L2
  • Description: Contains music and corresponding stage lighting data in a format suitable for training Skip-BART.
  • Details: Refer to the paper for dataset specifics.

Usage

Installation

```bash
git clone https://huggingface.co/RS2002/Skip-BART
```
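
The example below imports Skip_BART from model.py, so run it from inside the cloned directory. The exact dependency list is an assumption (check the repository); PyTorch is clearly required, and transformers is likely needed if the model builds on the Hugging Face BART implementation:

```bash
cd Skip-BART
pip install torch transformers  # assumed dependencies; see the repo for specifics
```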

Example Code

```python
import torch
from model import Skip_BART  # model.py ships with the cloned repository

# Load the pre-trained weights
model = Skip_BART.from_pretrained("RS2002/Skip-BART")

# Example input: a batch of 2 sequences of length 1024
x_encoder = torch.rand((2, 1024, 512))          # music features, (batch, length, 512)
x_decoder = torch.randint(0, 10, (2, 1024, 2))  # lighting codes, (batch, length, 2)
encoder_attention_mask = torch.zeros((2, 1024))
decoder_attention_mask = torch.zeros((2, 1024))

# Forward pass
output = model(x_encoder, x_decoder, encoder_attention_mask, decoder_attention_mask)
print(output.size())  # torch.Size([2, 1024, 1024])
```
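
For real inputs, sequences typically vary in length, and the attention masks mark which positions are valid. The helper below is hypothetical (not part of the repository) and assumes the common convention that 1 marks a valid position and 0 marks padding; verify the convention Skip_BART actually expects in model.py.

```python
import torch

def make_attention_mask(lengths, max_len):
    """Hypothetical helper: 1 for valid positions, 0 for padding (assumed convention)."""
    mask = torch.zeros((len(lengths), max_len))
    for i, n in enumerate(lengths):
        mask[i, :n] = 1.0
    return mask

# e.g. two sequences of true length 800 and 1024, padded to 1024
encoder_attention_mask = make_attention_mask([800, 1024], 1024)
```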