Nishant24 committed
Commit c42c1fe · 1 Parent(s): 404c6af

Update README.md

Files changed (1): README.md (+4 -14)
README.md CHANGED
@@ -12,21 +12,11 @@ should probably proofread and complete it, then remove this comment. -->
 
  # mbart-finetuned-hi-to-en_Siddha_Yoga_Text_by_Nishant
 
- This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
+ This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt).
 
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
+ This is a checkpoint of mbart-large-50-many-to-many-mmt fine-tuned for Siddha Yoga Hindi-to-English translation. The base model was introduced in the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/pdf/2008.00401.pdf).
+ The model can translate directly between any pair of its supported languages. To translate into a target language, the target language ID must be forced as the first generated token: pass the `forced_bos_token_id` parameter to the `generate` method.
+ This model was fine-tuned as part of a Dissertation project in Data Science at BITS Pilani by Nishant Chhetri. Code to use the model for inference:
 
  ### Training hyperparameters
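The inference snippet referenced in the added README text did not survive in this view. A minimal sketch of what it would look like, assuming the standard `transformers` MBart-50 API with `forced_bos_token_id` as described above; the repo id `Nishant24/mbart-finetuned-hi-to-en_Siddha_Yoga_Text_by_Nishant` is inferred from the model card title and commit author, so verify it before use:

```python
# Hindi -> English inference with the fine-tuned MBart-50 checkpoint.
# NOTE: the repo id below is an assumption inferred from the model card title.
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_name = "Nishant24/mbart-finetuned-hi-to-en_Siddha_Yoga_Text_by_Nishant"
tokenizer = MBart50TokenizerFast.from_pretrained(model_name)
model = MBartForConditionalGeneration.from_pretrained(model_name)

# Tell the tokenizer the source language is Hindi.
tokenizer.src_lang = "hi_IN"
hindi_text = "गुरु की कृपा से साधक को आत्मज्ञान प्राप्त होता है।"
inputs = tokenizer(hindi_text, return_tensors="pt")

# Force English as the first generated token via forced_bos_token_id,
# so the model decodes into the English target language.
generated_tokens = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"],
)
translation = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]
print(translation)
```

The `src_lang`/`lang_code_to_id` pattern is the documented way to drive MBart-50 many-to-many checkpoints; only the checkpoint name changes for this fine-tuned model.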