Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


mptk-1b - bnb 8bits
- Model creator: https://huggingface.co/team-lucid/
- Original model: https://huggingface.co/team-lucid/mptk-1b/

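This repository holds bnb 8-bit weights of the original model. Below is a minimal loading sketch, assuming `transformers` with `bitsandbytes` and `accelerate` installed; the model id points at the original model for illustration (substitute this repository's id to use the pre-quantized weights):

```python
# Minimal sketch (not from the original card): load the model in 8-bit with
# bitsandbytes via transformers. Requires `pip install bitsandbytes accelerate`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "team-lucid/mptk-1b"  # original model; use this repo's id for the pre-quantized weights

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # bnb 8-bit, as in this repo
    device_map="auto",
)
```
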
Original model description:
---
license: apache-2.0
language:
- ko
---
# MPTK-1B

MPTK-1B is a 1.3B-parameter decoder-only transformer language model trained on Korean, English, and code datasets.

This model was trained on Cloud TPUs provided through Google's [TPU Research Cloud (TRC)](https://sites.research.google/trc/about/).

## Model Details

### Model Description

The model is based on MPT, a decoder-only transformer architecture with a few modifications:

- It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) (a short sketch follows the table below).
- It does not use bias terms.

| Hyperparameter  | Value |
|-----------------|-------|
| n_parameters    | 1.3B  |
| n_layers        | 24    |
| n_heads         | 16    |
| d_model         | 2048  |
| vocab size      | 50432 |
| sequence length | 2048  |

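As a rough illustration of the ALiBi bias mentioned above (a minimal sketch, not this model's implementation; the slopes follow the ALiBi paper, the head count comes from the table, and the short sequence length is only for readability):

```python
# Illustrative ALiBi sketch: each attention head adds a fixed linear penalty
# proportional to the distance between query and key positions,
# in place of positional embeddings.
import torch

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    # Head-specific slopes: geometric sequence 2^(-8/n_heads * h), per the ALiBi paper.
    slopes = torch.tensor([2 ** (-8.0 / n_heads * (h + 1)) for h in range(n_heads)])
    # Relative distance between each key position j and query position i.
    pos = torch.arange(seq_len)
    distance = pos[None, :] - pos[:, None]           # (seq_len, seq_len)
    bias = slopes[:, None, None] * distance[None]    # (n_heads, seq_len, seq_len)
    return bias  # added to attention scores before softmax

bias = alibi_bias(n_heads=16, seq_len=8)
print(bias.shape)  # torch.Size([16, 8, 8])
```
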
## Uses

## How to Get Started with the Model

Running the model in fp16 can produce NaNs, so running in fp32 or bf16 is recommended.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("team-lucid/mptk-1b")
model = AutoModelForCausalLM.from_pretrained("team-lucid/mptk-1b")

pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')

# Generate under bf16 autocast to avoid the NaNs that can occur in fp16.
with torch.autocast('cuda', dtype=torch.bfloat16):
    print(
        pipe(
            '대한민국의 수도는',  # "The capital of South Korea is"
            max_new_tokens=100,
            do_sample=True,
        )
    )
```

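As an alternative to autocast, the weights can also be loaded directly in bf16. This variant is not from the original card; it is a minimal sketch assuming a CUDA device and the standard `torch_dtype` argument of `from_pretrained`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the weights in bf16 up front instead of casting at generation time.
tokenizer = AutoTokenizer.from_pretrained("team-lucid/mptk-1b")
model = AutoModelForCausalLM.from_pretrained(
    "team-lucid/mptk-1b",
    torch_dtype=torch.bfloat16,
).to("cuda")
```
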
## Training Details

### Training Data

The model was trained on Korean data such as [OSCAR](https://oscar-project.org/), mC4, Wikipedia, and namuwiki, with portions of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) and [The Stack](https://huggingface.co/datasets/bigcode/the-stack) added.

#### Training Hyperparameters

| **Hyperparameter** | **Value** |
|--------------------|-----------|
| Precision          | bfloat16  |
| Optimizer          | Lion      |
| Learning rate      | 2e-4      |
| Batch size         | 1024      |
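
For reference only, a minimal sketch of how the optimizer row above might be configured with the `lion-pytorch` package; this is an assumption for illustration, not the original TPU training code, and the weight decay value is a placeholder not stated in the card:

```python
# Illustrative only: an optimizer matching the learning rate from the table above,
# using the lion-pytorch package (pip install lion-pytorch).
# The original TPU training code is not published in this card.
from lion_pytorch import Lion
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("team-lucid/mptk-1b")

optimizer = Lion(
    model.parameters(),
    lr=2e-4,           # learning rate from the table
    weight_decay=0.0,  # placeholder; not stated in the card
)
```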