---
license: apache-2.0
---
# InformBERT
## Introduction
InformBERT is pretrained with a variable masking strategy, in which informative tokens are masked more frequently than other tokens. InformBERT outperforms models pretrained with random masking on the factual recall benchmark LAMA and the extractive question answering benchmark SQuAD.

More details: https://arxiv.org/abs/2210.11771
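The snippet below is a minimal toy sketch of the general idea of informativeness-weighted masking. It is not the InforMask procedure from the paper; the inverse-frequency heuristic and all names here are illustrative stand-ins to show how masking probability can be skewed toward more informative tokens instead of being uniform.

```python
# Toy sketch of informativeness-weighted masking (illustrative only, not the
# InforMask algorithm; see the paper for the actual unsupervised procedure).
import random
from collections import Counter

corpus = [
    "speedweek is an american television program".split(),
    "the program airs on a sports network".split(),
]

# Stand-in heuristic: rarer tokens are treated as more informative.
freq = Counter(tok for sent in corpus for tok in sent)

def informativeness(token):
    return 1.0 / freq[token]

def variable_mask(tokens, mask_ratio=0.15, mask_token="[MASK]"):
    n_mask = max(1, round(mask_ratio * len(tokens)))
    weights = [informativeness(t) for t in tokens]
    # Sample mask positions in proportion to informativeness, not uniformly.
    positions = set()
    while len(positions) < n_mask:
        positions.add(random.choices(range(len(tokens)), weights=weights, k=1)[0])
    return [mask_token if i in positions else t for i, t in enumerate(tokens)]

print(variable_mask(corpus[0]))
```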
## How to load
```python
from transformers import BertTokenizer, AutoModel, pipeline

# Load the tokenizer and base model
tokenizer = BertTokenizer.from_pretrained("nsadeq/InformBERT")
model = AutoModel.from_pretrained("nsadeq/InformBERT")

# Fill-mask pipeline for masked token prediction
unmasker = pipeline("fill-mask", model="nsadeq/InformBERT", tokenizer=tokenizer)
unmasker("SpeedWeek is an American television program on [MASK].")
```
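The pipeline returns a ranked list of candidate fills for the masked position; a quick way to inspect the top predictions (the exact tokens and scores depend on the checkpoint):

```python
# Print each candidate token and its score for the masked position
for prediction in unmasker("SpeedWeek is an American television program on [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```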
## Citation
```bibtex
@misc{https://doi.org/10.48550/arxiv.2210.11771,
doi = {10.48550/ARXIV.2210.11771},
url = {https://arxiv.org/abs/2210.11771},
author = {Sadeq, Nafis and Xu, Canwen and McAuley, Julian},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
title = {InforMask: Unsupervised Informative Masking for Language Model Pretraining},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```