jaygala24 committed
Commit e21c277 · verified · 1 Parent(s): a1b6d2c

Create README.md

Files changed (1)
  1. README.md +64 -0
README.md ADDED
@@ -0,0 +1,64 @@
---
datasets:
- ai4bharat/indic-instruct-data-v0.1
language:
- en
- hi
license: llama2
tags:
- multilingual
- instruction-tuning
- llama2
---

# Airavata

Airavata is a 7B OpenHathi model fine-tuned on the [IndicInstruct dataset](https://huggingface.co/datasets/ai4bharat/indic-instruct-data-v0.1),
a collection of instruction datasets (Anudesh, wikiHow, Flan v2, Dolly, Anthropic-HHH, OpenAssistant v1, and LMSYS-Chat).
Please refer to the corresponding Hugging Face dataset card for more details.

This model was trained as part of the blog post [Introducing Airavata: Hindi Instruction-tuned Chat Model](https://ai4bharat.github.io/airavata).
The codebase used to train and evaluate this model can be found at [https://github.com/AI4Bharat/IndicInstruct](https://github.com/AI4Bharat/IndicInstruct).

## Usage

Clone [https://github.com/AI4Bharat/IndicInstruct](https://github.com/AI4Bharat/IndicInstruct) and install the required dependencies. Then download or clone this model to the same machine.
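
You can also load the model directly with the `transformers` library. The snippet below is a minimal loading sketch: the Hub ID `ai4bharat/Airavata` is an assumption here, so substitute the path to your local clone (or the actual repository ID) if it differs.

```python
# Minimal loading sketch with Hugging Face transformers.
# NOTE: the model ID below is an assumption; replace it with the path to your
# local clone of this repository or its actual Hub ID.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai4bharat/Airavata"  # assumed Hub ID / local path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()
```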

## Input Format

The model is trained to use a chat format similar to [Wang et al. 2023](https://arxiv.org/abs/2306.04751) ([code repository](https://github.com/allenai/open-instruct)) (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```

For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`; it can affect generation quality quite a bit.**
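
For example, continuing from the loading sketch in the Usage section (reusing `model` and `tokenizer`), a prompt can be assembled and passed to `generate` as follows. This is an illustrative sketch; the example message and generation settings are arbitrary choices, not recommendations.

```python
import torch

# Wrap a user message in the chat format described above. Note the trailing
# newline after <|assistant|>, which matters for generation quality.
def format_prompt(message: str) -> str:
    return f"<|user|>\n{message}\n<|assistant|>\n"

prompt = format_prompt("भारत की राजधानी क्या है?")  # "What is the capital of India?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=250, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(response)
```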

## Hyperparameters

We fine-tune the OpenHathi base model on the aforementioned IndicInstruct dataset with LoRA. The hyperparameters for the LoRA fine-tuning are listed below (a configuration sketch follows the list):
- LoRA Rank: 16
- LoRA Alpha: 32
- LoRA Dropout: 0.05
- LoRA Target Modules: ["q_proj", "v_proj", "down_proj", "gate_proj", "up_proj", "k_proj"]
- Epochs: 4
- Learning Rate: 5e-4
- Batch Size: 128
- Floating Point Precision: bfloat16
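
For reference, these settings map roughly onto a `peft` `LoraConfig` as sketched below; this is an illustration of the listed values, not the exact configuration object used in the IndicInstruct training code.

```python
# Illustrative mapping of the hyperparameters above onto a peft LoraConfig.
# This is a reference sketch only, not the exact training setup.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj", "down_proj", "gate_proj", "up_proj", "k_proj"],
    task_type="CAUSAL_LM",
)
```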

We recommend that readers check out [our official blog post](https://ai4bharat.github.io/airavata) for more details on the model training, ablations, and evaluation results.

## Citation

```bibtex
@misc{airavata2024,
  title = {Introducing Airavata: Hindi Instruction-tuned Chat Model},
  url = {https://ai4bharat.github.io/airavata},
  author = {Jay Gala and Thanmay Jayakumar and Jaavid Aktar Husain and Aswanth Kumar and Mohammed Safi Ur Rahman Khan and Diptesh Kanojia and Ratish Puduppully and Mitesh Khapra and Raj Dabre and Rudra Murthy and Anoop Kunchukuttan},
  month = {January},
  year = {2024}
}
```