wannaphong committed
Commit ab27367 · verified · 1 Parent(s): 9397e4a

Create README.md

Files changed (1)
  1. README.md +57 -0
README.md ADDED
 
---
license: apache-2.0
language:
- en
- th
library_name: transformers
pipeline_tag: text-generation
---
# NumFa v2 (1B)

NumFa v2 1B is a pretrained large language model with 1B parameters.

Base model: TinyLlama

**For testing only**

## Model Details

### Model Description

The model was trained on TPUs.

This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** NumFa
- **Model type:** text-generation
- **Language(s) (NLP):** English, Thai
- **License:** apache-2.0

### Out-of-Scope Use

Math, coding, and languages other than English and Thai.

## Bias, Risks, and Limitations

The model can carry biases from its training data. Use at your own risk!

## How to Get Started with the Model

Use the code below to get started with the model.

**Example**

```python
# !pip install accelerate sentencepiece transformers bitsandbytes
import torch
from transformers import pipeline

# Load the model through the 🤗 Transformers text-generation pipeline.
pipe = pipeline("text-generation", model="numfa/numfa_v2-1b", torch_dtype=torch.bfloat16, device_map="auto")

# This is a base (pretrained) model, so prompt it with plain text rather than a chat template.
outputs = pipe("test is", max_new_tokens=300, do_sample=True, temperature=0.9, top_k=50, top_p=0.95, no_repeat_ngram_size=2, typical_p=1.0)
print(outputs[0]["generated_text"])
```
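
If you prefer to call the tokenizer and model directly rather than through the pipeline helper, the sketch below shows an equivalent flow with `AutoTokenizer` and `AutoModelForCausalLM`. It assumes the checkpoint loads with the standard causal-LM classes (which should hold for a TinyLlama-based model); adjust the generation parameters to taste.

```python
# Minimal sketch (assumption: the checkpoint works with the standard
# AutoTokenizer / AutoModelForCausalLM classes, as a TinyLlama-based model should).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("numfa/numfa_v2-1b")
model = AutoModelForCausalLM.from_pretrained(
    "numfa/numfa_v2-1b", torch_dtype=torch.bfloat16, device_map="auto"
)

# Tokenize a plain-text prompt and sample a continuation.
inputs = tokenizer("test is", return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs, max_new_tokens=300, do_sample=True, temperature=0.9, top_k=50, top_p=0.95
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```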