Training a seq2seq Model

#1
by Shamik - opened

Hello Sanchit,

Can you please tell me how I can train a seq2seq model on a custom audio dataset?
I have only fine-tuned Wav2Vec2 by following the excellent blog post by Patrick: https://huggingface.co/blog/fine-tune-wav2vec2-english

Could you please point me to some resources so that I can learn how to train a seq2seq model like you have done?

Thank you

Hey @Shamik !

To fine-tune a seq2seq model in PyTorch, check out the example script in the Transformers repository: https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#sequence-to-sequence
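
Roughly speaking, the PyTorch fine-tuning boils down to something like the sketch below. This is not the exact example script: the warm-started wav2vec2 + BART checkpoints, the CSV dataset with "audio"/"text" columns, and the hyper-parameters are illustrative assumptions you would swap for your own setup.

```python
# Minimal sketch of fine-tuning a speech seq2seq model with Seq2SeqTrainer.
# Checkpoints, dataset format and hyper-parameters are illustrative assumptions.
from dataclasses import dataclass

from datasets import Audio, load_dataset
from transformers import (
    AutoFeatureExtractor,
    AutoTokenizer,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    SpeechEncoderDecoderModel,
)

encoder_id = "facebook/wav2vec2-large-lv60"  # speech encoder (assumption)
decoder_id = "facebook/bart-large"           # text decoder (assumption)

feature_extractor = AutoFeatureExtractor.from_pretrained(encoder_id)
tokenizer = AutoTokenizer.from_pretrained(decoder_id)

# warm-start an encoder-decoder model from the two pretrained checkpoints
model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id)
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.eos_token_id

# custom audio dataset: assumed here to be CSV files with an "audio" column
# (file paths) and a "text" column (transcriptions) -- adapt to your own data
dataset = load_dataset("csv", data_files={"train": "train.csv", "eval": "eval.csv"})
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

def prepare_example(batch):
    audio = batch["audio"]
    # raw waveform -> model inputs, transcription -> label token ids
    batch["input_values"] = feature_extractor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_values[0]
    batch["labels"] = tokenizer(batch["text"]).input_ids
    return batch

dataset = dataset.map(prepare_example, remove_columns=dataset["train"].column_names)

@dataclass
class DataCollatorSpeechSeq2Seq:
    """Pads the audio inputs and label ids to the longest example in the batch."""

    def __call__(self, features):
        input_features = [{"input_values": f["input_values"]} for f in features]
        label_features = [{"input_ids": f["labels"]} for f in features]

        batch = feature_extractor.pad(input_features, return_tensors="pt")
        labels_batch = tokenizer.pad(label_features, return_tensors="pt")

        # replace padding token ids with -100 so they are ignored by the loss
        batch["labels"] = labels_batch["input_ids"].masked_fill(
            labels_batch["attention_mask"].eq(0), -100
        )
        return batch

training_args = Seq2SeqTrainingArguments(
    output_dir="./wav2vec2-2-bart-custom",  # illustrative output path
    per_device_train_batch_size=8,
    learning_rate=3e-5,
    num_train_epochs=5,
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["eval"],
    data_collator=DataCollatorSpeechSeq2Seq(),
    tokenizer=feature_extractor,
)
trainer.train()
```

In practice the example script wraps all of this behind command-line arguments (model, dataset, training hyper-parameters), so the easiest path is usually to format your custom dataset so the script can load it rather than rewriting the training loop yourself.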

The checkpoint you're seeing here was fine-tuned in JAX. To replicate, you can run the JAX scripts in the Seq2Seq-Speech repo: https://github.com/sanchit-gandhi/seq2seq-speech
These scripts will be integrated into Transformers shortly :-)

sanchit-gandhi changed discussion status to closed
