bert-base-japanese-char-extended

Model Description

This is a BERT model pre-trained on Japanese Wikipedia texts, derived from bert-base-japanese-char-v2. The character embeddings are enhanced to cover all 常用漢字 (Jōyō kanji) and 人名用漢字 (Jinmeiyō kanji) characters using BertTokenizerFast. You can fine-tune bert-base-japanese-char-extended for downstream tasks such as POS tagging, dependency parsing, and so on.
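For example, a token-classification head can be attached for POS tagging; the minimal sketch below is only illustrative (the head class and the number of labels are assumptions, not part of this release):

# attach a token-classification head for fine-tuning;
# num_labels is a placeholder for the size of your POS tagset
from transformers import AutoModelForTokenClassification
model = AutoModelForTokenClassification.from_pretrained(
    "KoichiYasuoka/bert-base-japanese-char-extended", num_labels=17
)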

How to Use

from transformers import AutoTokenizer, AutoModelForMaskedLM

# load the tokenizer and the masked-language-model head
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-japanese-char-extended")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/bert-base-japanese-char-extended")
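As a quick check of the masked-language-modeling head, a minimal sketch follows (the example sentence is arbitrary and only for illustration):

import torch

# mask a single character and predict it
text = "酸素ボンベを充[MASK]する。"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# locate the [MASK] position and print the highest-scoring character for it
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0].item()
print(tokenizer.decode([logits[0, mask_index].argmax().item()]))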