ykhwang committed
Commit 5c179d6 · 1 Parent(s): 6bfdc95

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -22,7 +22,7 @@ As same as 42dot-PLM, the model is built upon a Transformer decoder architecture
 
 | Params | Layers | Attention heads | Hidden size | FFN size | Max. length\* |
 | -- | -- | -- | -- | -- | -- |
-| 1.3B | 24 | 32 | 2,048 | 5,632 | 8,192 |
+| 1.3B | 24 | 32 | 2,048 | 5,632 | 4,096 |
 
 (\* unit: tokens)
 ### Supervised Fine-tuning
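As a rough sanity check that the table's hyperparameters are consistent with the stated 1.3B parameter count, the weights can be tallied with a short sketch. This assumes a LLaMA-style decoder with a SwiGLU FFN (three projection matrices), no biases, and tied input/output embeddings; the vocabulary size below is a placeholder assumption, not a value from the README.

```python
# Approximate parameter count from the README's table.
# Assumptions (not stated in the diff): SwiGLU FFN, no bias terms,
# tied embeddings, and a placeholder vocab size of 50,000.
layers = 24
hidden = 2048
ffn = 5632
vocab = 50_000  # hypothetical; the actual tokenizer size is not in the diff

attn = 4 * hidden * hidden   # Q, K, V, O projections per layer
mlp = 3 * hidden * ffn       # gate, up, down projections (SwiGLU)
per_layer = attn + mlp

total = layers * per_layer + vocab * hidden  # + tied embedding matrix
print(f"~{total / 1e9:.2f}B parameters")
```

Under these assumptions the total lands in the ~1.3B range, matching the table; the exact figure depends on the real vocabulary size and on details such as normalization weights, which are omitted here.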