SpanMarker
This is a SpanMarker model trained on the DFKI-SLT/few-nerd dataset that can be used for Named Entity Recognition.
Model Details
Model Description
- Model Type: SpanMarker
- Maximum Sequence Length: 256 tokens
- Maximum Entity Length: 8 words
- Training Dataset: DFKI-SLT/few-nerd
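These settings correspond to the arguments a SpanMarker model is typically initialized with before training. A minimal sketch, assuming a bert-base encoder (suggested by the model id, but not stated explicitly in this card) and the coarse few-nerd label set from the table below:

```python
from span_marker import SpanMarkerModel

# Hypothetical initialization sketch: the encoder name is an assumption,
# not confirmed by this card. The labels are the coarse few-nerd labels.
labels = ["O", "art", "building", "event", "location",
          "organization", "other", "person", "product"]
model = SpanMarkerModel.from_pretrained(
    "bert-base-cased",      # assumed underlying encoder
    labels=labels,
    model_max_length=256,   # Maximum Sequence Length
    entity_max_length=8,    # Maximum Entity Length
)
```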
Model Sources
- Repository: [SpanMarker on GitHub](https://github.com/tomaarsen/SpanMarkerNER)
- Thesis: SpanMarker For Named Entity Recognition
Model Labels
| Label | Examples |
|---|---|
| art | "The Seven Year Itch", "Time", "Imelda de ' Lambertazzi" |
| building | "Henry Ford Museum", "Sheremetyevo International Airport", "Boston Garden" |
| event | "French Revolution", "Iranian Constitutional Revolution", "Russian Revolution" |
| location | "Croatian", "the Republic of Croatia", "Mediterranean Basin" |
| organization | "IAEA", "Church 's Chicken", "Texas Chicken" |
| other | "Amphiphysin", "N-terminal lipid", "BAR" |
| person | "Edmund Payne", "Ellaline Terriss", "Hicks" |
| product | "100EX", "Phantom", "Corvettes - GT1 C6R" |
Evaluation
Metrics
| Label | Precision | Recall | F1 |
|---|---|---|---|
| all | 0.7789 | 0.7634 | 0.7711 |
| art | 0.7610 | 0.7256 | 0.7429 |
| building | 0.6316 | 0.6857 | 0.6575 |
| event | 0.6304 | 0.5346 | 0.5786 |
| location | 0.8114 | 0.8554 | 0.8328 |
| organization | 0.7370 | 0.6800 | 0.7074 |
| other | 0.7407 | 0.6085 | 0.6682 |
| person | 0.8611 | 0.9035 | 0.8818 |
| product | 0.7040 | 0.5966 | 0.6459 |
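These metrics can in principle be reproduced with the library's Trainer on the few-nerd test split. A sketch, assuming the `supervised` configuration of DFKI-SLT/few-nerd (the metric key names follow the transformers `eval_` prefix convention):

```python
from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer

model = SpanMarkerModel.from_pretrained("Pratik-B/span-marker-bert-base-fewnerd-coarse-super")
dataset = load_dataset("DFKI-SLT/few-nerd", "supervised")

# Trainer.evaluate() computes precision/recall/F1 over predicted spans
trainer = Trainer(model=model, eval_dataset=dataset["test"])
print(trainer.evaluate())
```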
Uses
Direct Use for Inference
```python
from span_marker import SpanMarkerModel

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("Pratik-B/span-marker-bert-base-fewnerd-coarse-super")
# Run inference
entities = model.predict("Caretaker manager George Goss led them on a run in the FA Cup, defeating Liverpool in round 4, to reach the semi-final at Stamford Bridge, where they were defeated 2–0 by Sheffield United on 28 March 1925.")
```
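`predict` returns one dictionary per detected entity, containing the span text, its label, a confidence score, and character offsets into the input. A short sketch of inspecting the output:

```python
# Each entity dict holds "span", "label", "score",
# "char_start_index" and "char_end_index"
for entity in entities:
    print(entity["span"], entity["label"], round(entity["score"], 3))
```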
Downstream Use
You can finetune this model on your own dataset.
```python
from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("Pratik-B/span-marker-bert-base-fewnerd-coarse-super")
# Specify a Dataset with "tokens" and "ner_tags" columns
dataset = load_dataset("conll2003")  # For example CoNLL2003
# Initialize a Trainer using the pretrained model & dataset
trainer = Trainer(
    model=model,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("span_marker_model_id-finetuned")
```
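The saved checkpoint can then be reloaded for inference exactly as in the section above, using the local path passed to `save_model`:

```python
# Load the finetuned checkpoint from the local save path
model = SpanMarkerModel.from_pretrained("span_marker_model_id-finetuned")
entities = model.predict("...")
```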
Training Details
Training Set Metrics
| Training set | Min | Median | Max |
|---|---|---|---|
| Sentence length | 1 | 24.4956 | 163 |
| Entities per sentence | 0 | 2.5439 | 35 |
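For reference, a sketch of how the sentence-length statistics could be recomputed from the training split (assuming the `supervised` configuration; the entity counts would additionally require grouping `ner_tags` into contiguous spans):

```python
import statistics
from datasets import load_dataset

train = load_dataset("DFKI-SLT/few-nerd", "supervised", split="train")
lengths = [len(tokens) for tokens in train["tokens"]]
print(min(lengths), statistics.median(lengths), max(lengths))
```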
Training Hyperparameters
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
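These map onto standard 🤗 Transformers `TrainingArguments`, which can be passed to the Trainer shown above. A sketch (the output directory is a placeholder):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="models/span-marker-finetuned",  # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,  # total train batch size: 4 * 2 = 8
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=1,
    seed=42,
)
# e.g. Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
```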
Training Results
| Epoch | Step | Validation Loss | Validation Precision | Validation Recall | Validation F1 | Validation Accuracy |
|---|---|---|---|---|---|---|
| 0.1629 | 200 | 0.0335 | 0.6884 | 0.6223 | 0.6537 | 0.9062 |
| 0.3259 | 400 | 0.0238 | 0.7412 | 0.7193 | 0.7301 | 0.9242 |
| 0.4888 | 600 | 0.0220 | 0.7628 | 0.7378 | 0.7501 | 0.9325 |
| 0.6517 | 800 | 0.0211 | 0.7614 | 0.7677 | 0.7645 | 0.9376 |
| 0.8147 | 1000 | 0.0197 | 0.7839 | 0.7596 | 0.7716 | 0.9384 |
| 0.9776 | 1200 | 0.0194 | 0.7803 | 0.7633 | 0.7717 | 0.9393 |
Framework Versions
- Python: 3.10.12
- SpanMarker: 1.5.0
- Transformers: 4.37.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.17.1
- Tokenizers: 0.15.2
Citation
BibTeX
```bibtex
@software{Aarsen_SpanMarker,
    author = {Aarsen, Tom},
    license = {Apache-2.0},
    title = {{SpanMarker for Named Entity Recognition}},
    url = {https://github.com/tomaarsen/SpanMarkerNER}
}
```