### [Domain-Sensitive Fine-tuning](https://github.com/mykelismyname/MSLM)
The model is built by fine-tuning BERT on the biomedical NER dataset BC2GM using an approach that learns mask-specific losses.
More details are available in the paper cited below.
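The mask-specific loss idea can be sketched as a weighted masked-token cross-entropy, where masked positions belonging to domain entities contribute more to the objective than generic tokens. The function and parameter names below (`mask_specific_loss`, `entity_weight`) and the exact weighting scheme are illustrative assumptions, not the paper's precise formulation:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def mask_specific_loss(logits, targets, is_entity, entity_weight=2.0):
    """Weighted cross-entropy over masked positions.

    logits:        per-position vocabulary logits for each masked token
    targets:       gold vocabulary index for each masked position
    is_entity:     whether each masked position lies inside an entity span
    entity_weight: hypothetical up-weighting factor for entity tokens
    """
    total, norm = 0.0, 0.0
    for row, tgt, ent in zip(logits, targets, is_entity):
        nll = -math.log(softmax(row)[tgt])        # per-token negative log-likelihood
        w = entity_weight if ent else 1.0         # entity positions weighted higher
        total += w * nll
        norm += w
    return total / norm                           # weight-normalized mean loss

# Two masked positions: the first falls inside an entity span.
logits = [[2.0, 0.5, 0.1], [0.2, 1.5, 0.3]]
loss = mask_specific_loss(logits, targets=[0, 1], is_entity=[True, False])
```

Setting `entity_weight` to 1.0 recovers the standard uniform MLM loss, which makes the entity up-weighting the only moving part in this sketch.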
#### Citation
```
@article{abaho2024improving,
  title={Improving Pre-trained Language Model Sensitivity via Mask Specific losses: A case study on Biomedical NER},
  author={Abaho, Micheal and Bollegala, Danushka and Leeming, Gary and Joyce, Dan and Buchan, Iain E},
  journal={arXiv preprint arXiv:2403.18025},
  year={2024}
}
```