---
language: ja
license: cc-by-sa-4.0
datasets:
- Hazumi
---
# ouktlab/Hazumi-AffNeg-Classifier
## Model description
This is a Japanese [BERT](https://github.com/google-research/bert) model fine-tuned on exchange data (Yes/No questions asked by the system and the corresponding user responses) extracted from the multimodal dialogue corpus Hazumi. As its name suggests, the model classifies a user's response to a Yes/No question as affirmative or negative.

The pre-trained model is [cl-tohoku/bert-base-japanese-v3](https://huggingface.co/tohoku-nlp/bert-base-japanese-v3), released by Tohoku University. For fine-tuning, the JNLI script from [JGLUE](https://github.com/yahoojapan/JGLUE) was employed.
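The model can be used through the standard `transformers` sequence-classification API. The sketch below is illustrative rather than part of the original card: it assumes the classifier takes the system's question and the user's response as a sentence pair (mirroring the JNLI input format used for fine-tuning) and that the affirmative/negative label names are stored in the model config's `id2label`.

```python
# Minimal inference sketch (illustrative; input format and labels are assumptions).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "ouktlab/Hazumi-AffNeg-Classifier"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

question = "映画は好きですか？"   # system's Yes/No question
response = "はい、よく見ます。"  # user's response

# Encode the exchange as a sentence pair, as in the JNLI setup.
inputs = tokenizer(question, response, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_id = logits.argmax(dim=-1).item()
print(model.config.id2label[pred_id])
```

Note that the `bert-base-japanese-v3` tokenizer relies on MeCab-based word segmentation, so the `fugashi` and `unidic-lite` packages are likely required.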
## Training procedure
This model was fine-tuned with the following command, which follows the JNLI fine-tuning procedure in [JGLUE](https://github.com/yahoojapan/JGLUE).
```sh
python transformers-4.9.2/examples/pytorch/text-classification/run_glue.py \
  --model_name_or_path tohoku-nlp/bert-base-japanese-v3 \
  --metric_name wnli \
  --do_train --do_eval --do_predict \
  --max_seq_length 128 \
  --per_device_train_batch_size 8 \
  --learning_rate 5e-05 \
  --num_train_epochs 4 \
  --output_dir <output_dir> \
  --train_file <train json file> \
  --validation_file <validation json file> \
  --test_file <test json file> \
  --use_fast_tokenizer False \
  --evaluation_strategy epoch \
  --save_steps 5000 \
  --warmup_ratio 0.1
```
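The `--train_file`, `--validation_file`, and `--test_file` arguments expect JNLI-style JSON Lines files. As a hypothetical illustration only (the `sentence1`/`sentence2`/`label` field names follow JGLUE's JNLI data; the actual label names used for the Hazumi exchanges are not documented here), such a file might be produced like this:

```python
# Hypothetical sketch of the JNLI-style JSON Lines training data.
# Field names follow JGLUE's JNLI; the "yes"/"no" label values are assumptions.
import json

examples = [
    {"sentence1": "映画は好きですか？", "sentence2": "はい、好きです。", "label": "yes"},
    {"sentence1": "映画は好きですか？", "sentence2": "いいえ、あまり見ません。", "label": "no"},
]
with open("train.json", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```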