Add links to lxyuan's uncased model
README.md CHANGED

```diff
@@ -89,6 +89,9 @@ pinned: true
 
 This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model that can be used for multilingual Named Entity Recognition trained on the [MultiNERD](https://huggingface.co/datasets/Babelscape/multinerd) dataset. In particular, this SpanMarker model uses [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) as the underlying encoder. See [train.py](train.py) for the training script.
 
+Is your data not (always) capitalized correctly? Then consider using the uncased variant of this model by [@lxyuan](https://huggingface.co/lxyuan) for better performance:
+[lxyuan/span-marker-bert-base-multilingual-uncased-multinerd](https://huggingface.co/lxyuan/span-marker-bert-base-multilingual-uncased-multinerd).
+
 ## Metrics
 
 | **Language** | **Precision** | **Recall** | **F1** |
@@ -275,6 +278,7 @@ The following hyperparameters were used during training:
 
 ## See also
 * [lxyuan/span-marker-bert-base-multilingual-cased-multinerd](https://huggingface.co/lxyuan/span-marker-bert-base-multilingual-cased-multinerd) is similar to this model, but trained on 3 epochs instead of 2. It reaches better performance on 7 out of the 10 languages.
+* [lxyuan/span-marker-bert-base-multilingual-uncased-multinerd](https://huggingface.co/lxyuan/span-marker-bert-base-multilingual-uncased-multinerd) is a strong uncased variant of this model, also trained on 3 epochs instead of 2.
 
 ## Contributions
 Many thanks to [Simone Tedeschi](https://huggingface.co/sted97) from [Babelscape](https://babelscape.com) for his insight when training this model and his involvement in the creation of the training dataset.
```
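For context, here is a minimal usage sketch of the model this card describes, using the SpanMarker library's `SpanMarkerModel.from_pretrained` and `predict` API. The repository id `tomaarsen/span-marker-mbert-base-multinerd` and the example sentence are assumptions for illustration; the lxyuan uncased variant linked above can be substituted when the input text is not reliably capitalized.

```python
# Minimal sketch: load a SpanMarker NER model and run inference.
# The repo id below is assumed for illustration; swap in
# "lxyuan/span-marker-bert-base-multilingual-uncased-multinerd" for lowercased
# or inconsistently capitalized input.
from span_marker import SpanMarkerModel

model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-mbert-base-multinerd")

# predict() returns a list of dicts with the span text, label, score, and character offsets.
entities = model.predict(
    "Amelia Earhart flew her single engine Lockheed Vega 5B across the Atlantic to Paris."
)
for entity in entities:
    print(entity["span"], entity["label"], round(entity["score"], 3))
```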