---
language: vi
tags:
- vi
- vietnamese
- gpt2
- text-generation
- lm
- nlp
datasets:
- oscar
widget:
- text: "Việt Nam là quốc gia có"
---

# GPT-2

Pretrained model on the Vietnamese language using a causal language modeling (CLM) objective. The GPT-2 architecture was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).

# How to use the model

```python
from transformers import GPT2Tokenizer, AutoModelForCausalLM

# Load the pretrained Vietnamese GPT-2 tokenizer and model from the Hugging Face Hub.
tokenizer = GPT2Tokenizer.from_pretrained("NlpHUST/gpt2-vietnamese")

model = AutoModelForCausalLM.from_pretrained("NlpHUST/gpt2-vietnamese")
```
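
A short generation example, continuing from the snippet above (the prompt and decoding parameters are illustrative, not settings recommended by the model authors):

```python
# Generate a continuation of a Vietnamese prompt with the model loaded above.
prompt = "Việt Nam là quốc gia có"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

output_ids = model.generate(
    input_ids,
    max_length=60,                        # total length, prompt included (illustrative)
    do_sample=True,                       # sampling; top_k/top_p values are not tuned
    top_k=40,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token, so reuse EOS
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```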

# Model architecture
A 12-layer, 768-hidden-size transformer-based language model.
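
These dimensions can be read off the published configuration (a minimal check using the standard `AutoConfig` API):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("NlpHUST/gpt2-vietnamese")
print(config.n_layer, config.n_embd)  # expected: 12 layers, 768 hidden size
```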

# Training
The model was trained on the Vietnamese OSCAR dataset (32 GB) with a traditional causal language modeling objective on a v3-8 TPU for around 6 days. It reaches a perplexity of about 13.4 on a held-out validation set drawn from OSCAR.
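
The reported perplexity is the exponential of the mean token-level cross-entropy loss. A rough sketch of computing it for a single sample text (the sentence below is illustrative, not part of the original validation set):

```python
import math

import torch
from transformers import GPT2Tokenizer, AutoModelForCausalLM

tokenizer = GPT2Tokenizer.from_pretrained("NlpHUST/gpt2-vietnamese")
model = AutoModelForCausalLM.from_pretrained("NlpHUST/gpt2-vietnamese")

text = "Hà Nội là thủ đô của Việt Nam."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing the input ids as labels makes the model return the mean cross-entropy loss.
    loss = model(**inputs, labels=inputs["input_ids"]).loss

print(math.exp(loss.item()))  # perplexity = exp(loss)
```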

### GPT-2 Finetuning

The following example fine-tunes GPT-2 on WikiText-2. We're using the raw WikiText-2 (no tokens were replaced before
the tokenization). The loss here is that of causal language modeling.

The fine-tuning script is available [here](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py).

```bash
python run_clm.py \
    --model_name_or_path NlpHUST/gpt2-vietnamese \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 8 \
    --do_train \
    --do_eval \
    --output_dir /tmp/test-clm
```
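
After fine-tuning, the checkpoint written to `--output_dir` can be loaded like any other checkpoint (a minimal sketch; the path below matches the command above):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and tokenizer saved by run_clm.py.
tokenizer = AutoTokenizer.from_pretrained("/tmp/test-clm")
model = AutoModelForCausalLM.from_pretrained("/tmp/test-clm")
```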