deberta-base-japanese-aozora

Model Description

This is a DeBERTa(V2) model pre-trained on texts from Aozora Bunko (青空文庫). Training took 36 hours 44 minutes on an NVIDIA A100-SXM4-40GB GPU. You can fine-tune deberta-base-japanese-aozora for downstream tasks such as POS-tagging, dependency parsing, and so on; a token-classification sketch follows below.
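
As a minimal sketch of such fine-tuning, the snippet below attaches a freshly initialized token-classification head for POS-tagging. The three-tag label set is a made-up placeholder for illustration, not part of this model.

from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical label set for illustration only; a real POS-tagger would use a full tagset
labels = ["NOUN", "VERB", "ADJ"]
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-japanese-aozora")
model = AutoModelForTokenClassification.from_pretrained(
    "KoichiYasuoka/deberta-base-japanese-aozora",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)
# The classification head is randomly initialized and must be trained before use.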

How to Use

from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load the tokenizer and the masked-language model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-japanese-aozora")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/deberta-base-japanese-aozora")
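
To sanity-check the loaded model, you can also run it through the fill-mask pipeline. The sentence below is an arbitrary literary example chosen here, and [MASK] assumes the tokenizer's default mask token.

from transformers import pipeline

# Predict the most likely tokens for the masked position
fillmask = pipeline("fill-mask", model="KoichiYasuoka/deberta-base-japanese-aozora")
print(fillmask("国境の長い[MASK]を抜けると雪国であった。"))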

Reference

Koichi Yasuoka (安岡孝一): 青空文庫DeBERTaモデルによる国語研長単位係り受け解析 [NINJAL Long-Unit-Word Dependency Parsing with Aozora Bunko DeBERTa Models], 東洋学へのコンピュータ利用 (Computer Usage for East Asian Studies), 35th Research Seminar (July 2022), pp.29-43.
