---
license: apache-2.0
language:
- es
tags:
- medical
---
# UMLS-KGI-BERT-ES
<!-- Provide a quick summary of what the model is/does. -->
This is a BERT encoder trained on the Spanish-language section of the European Clinical Case Corpus (E3C) as well as the UMLS Metathesaurus knowledge graph, as described in [this paper](https://aclanthology.org/2023.clinicalnlp-1.35/).
The training corpus consists of a custom combination of clinical documents from the E3C and text sequences derived from the Metathesaurus (see our [GitHub repo](https://github.com/ap-mannion/bertify-umls) for more details).
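The model can be loaded directly with the Hugging Face `transformers` library; the snippet below is a minimal usage sketch (the Spanish example sentence is purely illustrative):
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

# Load the tokenizer and masked-language-modelling head from this repository
tokenizer = AutoTokenizer.from_pretrained("ap-mannion/umls-kgi-bert-es")
model = AutoModelForMaskedLM.from_pretrained("ap-mannion/umls-kgi-bert-es")

# Fill-mask prediction on a clinical-style Spanish sentence (illustrative only)
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask(f"El paciente presenta dolor {tokenizer.mask_token} agudo."))
```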
## Model Details
This model was trained using a multi-task approach combining Masked Language Modelling with knowledge-graph-based classification/fill-mask type objectives.
The idea behind this framework is to improve the robustness of specialised biomedical BERT models by having them learn from structured data as well as natural language, while remaining within the cross-entropy-based learning paradigm.
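As a rough illustration of what this means in practice, the sketch below combines a masked-language-modelling loss with a knowledge-graph classification loss as a simple weighted sum of cross-entropy terms. This is a simplified, hypothetical rendering of the idea, not the project's actual training code; see the linked repository and paper for the real objectives.
```python
import torch.nn.functional as F

def multitask_loss(mlm_logits, mlm_labels, kg_logits, kg_labels, kg_weight=1.0):
    """Hypothetical combination of MLM and knowledge-graph objectives.

    Both terms are standard cross-entropy losses, so the combined objective
    stays within the usual BERT pre-training paradigm. The weighting scheme
    here is illustrative only.
    """
    # Masked-language-modelling term: unmasked positions carry the ignore index -100
    mlm_loss = F.cross_entropy(
        mlm_logits.view(-1, mlm_logits.size(-1)),
        mlm_labels.view(-1),
        ignore_index=-100,
    )
    # Knowledge-graph term: e.g. classifying the relation expressed by a text
    # sequence extracted from the UMLS Metathesaurus
    kg_loss = F.cross_entropy(kg_logits.view(-1, kg_logits.size(-1)), kg_labels.view(-1))
    return mlm_loss + kg_weight * kg_loss
```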
- **Developed by:** Aidan Mannion
- **Funded by:** GENCI-IDRIS grant AD011013535R1
- **Model type:** DistilBERT
- **Language(s) (NLP):** Spanish
For further details on the model architecture, training objectives, hardware & software used, as well as the preliminary downstream evaluation experiments carried out, refer to the [arXiv paper](https://arxiv.org/abs/2307.11170).
### UMLS-KGI Models
| **Model** | **Model Repo** | **Dataset Size** | **Base Architecture** | **Base Model** | **Total KGI training steps** |
|:--------------------------:|:--------------------------------------------------------------------------:|:----------------:|:---------------------:|:---------------------------------------------------------------------------------------------:|:----------------------------:|
| UMLS-KGI-BERT-multilingual | [url-multi](https://huggingface.co/ap-mannion/umls-kgi-bert-multilingual) | 940MB | DistilBERT | n/a | 163,904 |
| UMLS-KGI-BERT-FR | [url-fr](https://huggingface.co/ap-mannion/umls-kgi-bert-fr) | 604MB | DistilBERT | n/a | 126,720 |
| UMLS-KGI-BERT-EN | [url-en](https://huggingface.co/ap-mannion/umls-kgi-bert-en) | 174MB | DistilBERT | n/a | 19,008 |
| UMLS-KGI-BERT-ES | [url-es](https://huggingface.co/ap-mannion/umls-kgi-bert-es) | 162MB | DistilBERT | n/a | 18,176 |
| DrBERT-UMLS-KGI | [url-drbert](https://huggingface.co/ap-mannion/drbert-umls-kgi) | 604MB | CamemBERT/RoBERTa | [DrBERT-4GB](https://huggingface.co/Dr-BERT/DrBERT-4GB) | 126,720 |
| PubMedBERT-UMLS-KGI         | [url-pubmedbert](https://huggingface.co/ap-mannion/pubmedbert-umls-kgi)    | 174MB            | BERT                  | [BiomedNLP-PubMedBERT-base-uncased-abstract](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract)  | 19,008                       |
| BioRoBERTa-ES-UMLS-KGI | [url-bioroberta](https://huggingface.co/ap-mannion/bioroberta-es-umls-kgi) | 162MB | RoBERTa | [RoBERTa-base-biomedical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-es) | 18,176 |
### Direct/Downstream Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model is intended for use in experimental clinical/biomedical NLP work, either as part of a larger system requiring text encoding, or fine-tuned on a specific downstream task requiring clinical language modelling.
It has **not** been sufficiently tested for accuracy, robustness, and bias to be used in production settings.
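For the fine-tuning use case, a minimal sketch of attaching a token-classification head for clinical NER follows; the label set here is hypothetical and only serves to illustrate the setup.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical clinical NER label set, for illustration only
labels = ["O", "B-ENFERMEDAD", "I-ENFERMEDAD"]

tokenizer = AutoTokenizer.from_pretrained("ap-mannion/umls-kgi-bert-es")
model = AutoModelForTokenClassification.from_pretrained(
    "ap-mannion/umls-kgi-bert-es",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)
# The resulting model can be fine-tuned with the standard `Trainer` API
# on an annotated clinical corpus.
```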
### Out-of-Scope Use
Experiments on general-domain data suggest that, given its specialised training corpus, this model is **not** suited to out-of-domain NLP tasks; we recommend using it only for processing clinical text.
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
- [European Clinical Case Corpus](https://live.european-language-grid.eu/catalogue/corpus/7618)
- [UMLS Metathesaurus](https://www.nlm.nih.gov/research/umls/index.html)
#### Training Hyperparameters
- sequence length: 256
- learning rate: 7.5e-5
- linear learning rate schedule with 10,770 warmup steps
- effective batch size: 1,500 (15 sequences per batch × 100 gradient accumulation steps)
- MLM masking probability: 0.15
**Training regime:** The model was trained with fp16 non-mixed precision, using the AdamW optimizer with default parameters.
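A rough, hypothetical mapping of these settings onto Hugging Face `TrainingArguments` and the MLM data collator is sketched below; the actual training scripts are in the GitHub repository linked above.
```python
from transformers import (
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("ap-mannion/umls-kgi-bert-es")

# 15% token-masking probability for the MLM objective
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

training_args = TrainingArguments(
    output_dir="umls-kgi-bert-es-pretraining",
    learning_rate=7.5e-5,
    lr_scheduler_type="linear",
    warmup_steps=10_770,
    per_device_train_batch_size=15,
    gradient_accumulation_steps=100,  # effective batch size: 15 x 100 = 1,500
    fp16=True,  # approximation: the card describes non-mixed fp16 precision
    optim="adamw_torch",  # AdamW with default parameters
)
# The sequence length (256) is applied at tokenization time, e.g.
# tokenizer(text, truncation=True, max_length=256)
```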
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
## Citation [BibTeX]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
```bibtex
@inproceedings{mannion-etal-2023-umls,
title = "{UMLS}-{KGI}-{BERT}: Data-Centric Knowledge Integration in Transformers for Biomedical Entity Recognition",
author = "Mannion, Aidan and
Schwab, Didier and
Goeuriot, Lorraine",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.35",
pages = "312--322",
abstract = "Pre-trained transformer language models (LMs) have in recent years become the dominant paradigm in applied NLP. These models have achieved state-of-the-art performance on tasks such as information extraction, question answering, sentiment analysis, document classification and many others. In the biomedical domain, significant progress has been made in adapting this paradigm to NLP tasks that require the integration of domain-specific knowledge as well as statistical modelling of language. In particular, research in this area has focused on the question of how best to construct LMs that take into account not only the patterns of token distribution in medical text, but also the wealth of structured information contained in terminology resources such as the UMLS. This work contributes a data-centric paradigm for enriching the language representations of biomedical transformer-encoder LMs by extracting text sequences from the UMLS. This allows for graph-based learning objectives to be combined with masked-language pre-training. Preliminary results from experiments in the extension of pre-trained LMs as well as training from scratch show that this framework improves downstream performance on multiple biomedical and clinical Named Entity Recognition (NER) tasks. All pre-trained models, data processing pipelines and evaluation scripts will be made publicly available.",
}
```
```bibtex
@misc{mannion2023umlskgibert,
title={UMLS-KGI-BERT: Data-Centric Knowledge Integration in Transformers for Biomedical Entity Recognition},
author={Aidan Mannion and Thierry Chevalier and Didier Schwab and Lorraine Goeuriot},
year={2023},
eprint={2307.11170},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```