audichandra committed · Commit 59c76a2 · verified · 1 Parent(s): 187b35b

Update README.md

Files changed (1): README.md (+163 -1)

README.md CHANGED

---
datasets:
- audichandra/bitext_customer_support_llm_dataset_indonesian
language:
- id
---

![Gajah_7-B](https://huggingface.co/audichandra/Gajah-7B/blob/main/img/gajah_7b.jpg)

## Quick Intro

Gajah-7B is the first iteration of an Indonesian AI chatbot, built on [Merak-7B](https://huggingface.co/Ichsan2895/Merak-7B-v4) as the base model and fine-tuned with the PEFT QLoRA method on the Indonesian version of the [Bitext](https://huggingface.co/datasets/audichandra/bitext_customer_support_llm_dataset_indonesian) customer support dataset for LLMs.

Gajah-7B is licensed under the [MIT](https://opensource.org/license/mit) license to support the open-source initiative and to serve as another example of how to fine-tune a pre-trained model.

You can contact me through my [LinkedIn](www.linkedin.com/in/audichandra) or [GitHub](https://github.com/audichandra/Indonesian_AI_Chatbot_Customer_Support) about this model and its applications.

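The exact training configuration is not included in this card, so the snippet below is only a minimal, illustrative sketch of what a QLoRA fine-tuning setup with PEFT typically looks like on the Merak-7B base; the hyperparameters and target modules are assumptions, not the values used for Gajah-7B.

```python
# Illustrative QLoRA sketch only: hyperparameters and target modules are assumptions,
# not the actual Gajah-7B training recipe.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "Ichsan2895/Merak-7B-v4"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16,
                         bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4")

model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")
model = prepare_model_for_kbit_training(model)  # freeze base weights, prepare for k-bit training

lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, bias="none",
                  task_type="CAUSAL_LM",
                  target_modules=["q_proj", "k_proj", "v_proj", "o_proj"])
model = get_peft_model(model, lora)  # only the LoRA adapters are trainable
model.print_trainable_parameters()
```

In practice the actual run was built with Axolotl (see the badge below), so the fine-tune was driven by an Axolotl configuration rather than a hand-written loop like this.
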
## Installation

You need at least Python 3.10 and PyTorch 2. Install the dependencies from requirements.txt, along with optional extras such as flash attention:

```bash
pip install flash-attn
```

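As a quick, illustrative sanity check, you can verify the Python and PyTorch versions and CUDA availability before loading the model:

```python
# Illustrative environment check: Python >= 3.10, PyTorch 2.x, and a visible CUDA device.
import sys
import torch

assert sys.version_info >= (3, 10), "Python 3.10+ is required"
assert int(torch.__version__.split(".")[0]) >= 2, "PyTorch 2.x is required"
print(f"PyTorch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")
```
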
## GPU requirements

- **Training**: 8x A40
- **Loading**: 1x RTX A500

*Note: the author trained and loaded the model on a cloud GPU platform such as RunPod.*

## Scripts

**Script for loading the model on multiple GPUs**

```python
import torch
import time
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM, AutoConfig, LlamaTokenizer, BitsAndBytesConfig
from peft import PeftModel, PeftConfig

# Optional 4-bit quantization config (see the note after this script):
#BNB_CONFIG = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4")
model_chat = "audichandra/Gajah-7B"
model1 = AutoModelForCausalLM.from_pretrained(
    model_chat,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # shard the model across all visible GPUs
    pad_token_id=0,
    attn_implementation="flash_attention_2",
    cache_dir="/workspace",
    #quantization_config=BNB_CONFIG,
)

tokenizer = LlamaTokenizer.from_pretrained(model_chat)

def generate_response(question: str) -> str:
    chat = [
        {"role": "system", "content": "Ada yang bisa saya bantu?"},
        {"role": "user", "content": question},
    ]

    prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=True)

    with torch.no_grad():
        outputs = model1.generate(
            input_ids=inputs["input_ids"].to("cuda"),
            attention_mask=inputs["attention_mask"].to("cuda"),
            eos_token_id=tokenizer.eos_token_id,
            pad_token_id=tokenizer.eos_token_id,
            max_new_tokens=512,
        )
    response = tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0]

    # Keep only the text that follows the assistant marker in the decoded output.
    assistant_start = f'''{question} \n assistant\n '''
    response_start = response.find(assistant_start)
    return response[response_start + len(assistant_start):].strip()

start_time = time.time()
prompt = "bagaimana saya dapat membatalkan pembelian saya?"
print(generate_response(prompt))

end_time = time.time()
elapsed_time = end_time - start_time
print(f"Elapsed time: {elapsed_time} seconds")
```

*You can uncomment the BitsAndBytesConfig (BNB_CONFIG) lines to load the model with 4-bit quantization and run it with less VRAM, but response quality and speed may suffer.*

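For reference, a minimal sketch of the load call with the commented-out 4-bit config enabled (same values as BNB_CONFIG in the script above):

```python
# 4-bit load using the BNB_CONFIG values shown above; trades quality/speed for lower VRAM.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
)
model1 = AutoModelForCausalLM.from_pretrained(
    "audichandra/Gajah-7B",
    device_map="auto",
    quantization_config=bnb_config,
    cache_dir="/workspace",
)
```
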
**Script for loading the model on a single GPU**

```python
import torch
import time
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM, AutoConfig, LlamaTokenizer, BitsAndBytesConfig
from peft import PeftModel, PeftConfig

#BNB_CONFIG = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4")
#model_save_path1 = "/workspace/axolotl/merged_model"
model_chat = "audichandra/Gajah-7B"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model1 = AutoModelForCausalLM.from_pretrained(
    model_chat,
    torch_dtype=torch.bfloat16,
    #device_map="auto", pad_token_id=0,
    #attn_implementation="flash_attention_2",
    cache_dir="/workspace",
    #quantization_config=BNB_CONFIG,
).to(device)
tokenizer = LlamaTokenizer.from_pretrained(model_chat)

def generate_response(question: str) -> str:
    chat = [
        {"role": "system", "content": "Ada yang bisa saya bantu?"},
        {"role": "user", "content": question},
    ]

    prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=True)

    inputs = inputs.to(device)  # ensure inputs are on the same device as the model

    with torch.no_grad():
        outputs = model1.generate(**inputs, max_new_tokens=512)
    response = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]

    # Keep only the text that follows the assistant marker in the decoded output.
    assistant_start = f'''{question} \n assistant\n '''
    response_start = response.find(assistant_start)
    return response[response_start + len(assistant_start):].strip()


# Use the functions together
start_time = time.time()
prompt = "bagaimana saya dapat membatalkan pembelian saya?"
print(generate_response(prompt))

end_time = time.time()
elapsed_time = end_time - start_time
print(f"Elapsed time: {elapsed_time} seconds")
```

*Some features, such as flash attention, might not work on a single GPU.*

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

## Citation

```bibtex
@article{Merak,
  title={Merak-7B: The LLM for Bahasa Indonesia},
  author={Muhammad Ichsan},
  publisher={Hugging Face},
  journal={Hugging Face Repository},
  year={2023}
}

@article{dettmers2023qlora,
  title = {QLoRA: Efficient Finetuning of Quantized LLMs},
  author = {Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
  journal = {arXiv preprint arXiv:2305.14314},
  year = {2023}
}

@article{axolotl,
  author = {{OpenAccess AI Collective}},
  title = {Axolotl: A Repository for AI Research and Development},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/OpenAccess-AI-Collective/axolotl}}
}
```