1TuanPham committed on
Commit 4832b64 (verified)
1 parent: 0a5391e

Update README.md

Files changed (1)
  1. README.md +5 -2
README.md CHANGED
@@ -13,7 +13,10 @@ language:

  - **Developed by:** Tuan Pham (FPTU HCM Student)
  - **Model type:** Llama2-7B Decoder-only
- - **Finetuned from model :** meta-llama/Llama-2-7b, bkai-foundation-models/vietnamese-llama2-7b-120GB, yeen214/llama2_7b_merge_orcafamily.
+ - **Finetuned from model :**
+   * meta-llama/Llama-2-7b
+   * bkai-foundation-models/vietnamese-llama2-7b-120GB
+   * yeen214/llama2_7b_merge_orcafamily.
  - **Bilingual support :** English and Vietnamese

  ### Model Sources

@@ -49,7 +52,7 @@ Use the code below to get started with the model.
  from torch.cuda.amp import autocast
  from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer, pipeline

- model_name = "1TuanPham/InstructEnVi_llama2-bkai-120GB-Orcafamily_250kx3.37_350kx1.1"
+ model_name = "1TuanPham/T-Llama-v1.1"
  model = AutoModelForCausalLM.from_pretrained(model_name,
                                               torch_dtype=torch.bfloat16,
                                               use_cache=True,
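For context, the updated getting-started code, once completed, might look like the minimal sketch below. Only `model_name`, `torch_dtype=torch.bfloat16`, and `use_cache=True` appear in the diff; the `device_map` argument, the tokenizer/streamer setup, and the example prompt are assumptions added for illustration, not the author's exact README code.

```python
# Minimal sketch of loading the renamed model with transformers.
# Arguments beyond those shown in the diff are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name = "1TuanPham/T-Llama-v1.1"  # new name introduced by this commit

# Load the model in bfloat16 with the KV cache enabled, as in the diff.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    use_cache=True,
    device_map="auto",  # assumption: let accelerate place the weights
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
streamer = TextStreamer(tokenizer, skip_prompt=True)

# Hypothetical bilingual prompt; the README's actual prompt format is not shown here.
prompt = "Xin chào! Hãy giới thiệu ngắn gọn về bản thân bạn."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, streamer=streamer)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```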