---
license: apache-2.0
datasets:
- Babelscape/multinerd
language:
- en
metrics:
- f1
- precision
- recall
pipeline_tag: token-classification
tags:
- ner
- named-entity-recognition
- token-classification
model-index:
- name: roberta-base on MultiNERD by Jayant Yadav
  results:
  - task:
      type: named-entity-recognition-ner
      name: Named Entity Recognition
    dataset:
      type: Babelscape/multinerd
      name: MultiNERD (English)
      split: test
      revision: 2814b78e7af4b5a1f1886fe7ad49632de4d9dd25
      config: Babelscape/multinerd
      args:
        split: train[:50%]
    metrics:
    - type: f1
      value: 0.943
      name: F1
    - type: precision
      value: 0.939
      name: Precision
    - type: recall
      value: 0.947
      name: Recall
      config: seqeval
    paper: https://aclanthology.org/2022.findings-naacl.60.pdf
base_model: roberta-base
library_name: transformers
---
# Model Card for roberta-base-multinerd

The [RoBERTa-base](https://huggingface.co/roberta-base) model was fine-tuned on 50% of the English-only training split of the MultiNERD dataset and later evaluated on the full test split of the same.  
The fine-tuning script can be fetched from [finetuning.ipynb](https://github.com/jayant-yadav/RISE-NER/blob/main/finetuning.ipynb).

Various other models were tested on the same selection of the dataset, and the best checkpoint was uploaded. A detailed configuration summary can be found in the Appendix of the [report](https://github.com/jayant-yadav/RISE-NER/blob/main/MultiNERD_NER___RISE.pdf).


## Model Details

### Model Description

Head over to the [GitHub repo](https://github.com/jayant-yadav/RISE-NER) for all the scripts used to fine-tune and evaluate the token-classification model.
The model is ready to use on [Kaggle](https://www.kaggle.com/datasets/jayantyadav/multinerd-ner-models/) too!


- **Developed by:** Jayant Yadav

## Uses

Token classification of the following entities is possible:  

| Class | Description | Examples |
|-------|-------------|----------|
| PER (person) | People | Ray Charles, Jessica Alba, Leonardo DiCaprio, Roger Federer, Anna Massey. |
| ORG (organization) | Associations, companies, agencies, institutions, nationalities and religious or political groups | University of Edinburgh, San Francisco Giants, Google, Democratic Party. |
| LOC (location) | Physical locations (e.g. mountains, bodies of water), geopolitical entities (e.g. cities, states), and facilities (e.g. bridges, buildings, airports). | Rome, Lake Paiku, Chrysler Building, Mount Rushmore, Mississippi River. |
| ANIM (animal) | Breeds of dogs, cats and other animals, including their scientific names. | Maine Coon, African Wild Dog, Great White Shark, New Zealand Bellbird. |
| BIO (biological) | Genus of fungus, bacteria and protoctists, families of viruses, and other biological entities. | Herpes Simplex Virus, Escherichia Coli, Salmonella, Bacillus Anthracis. |
| CEL (celestial) | Planets, stars, asteroids, comets, nebulae, galaxies and other astronomical objects. | Sun, Neptune, Asteroid 187 Lamberta, Proxima Centauri, V838 Monocerotis. |
| DIS (disease) | Physical, mental, infectious, non-infectious, deficiency, inherited, degenerative, social and self-inflicted diseases. | Alzheimer’s Disease, Cystic Fibrosis, Dilated Cardiomyopathy, Arthritis. |
| EVE (event) | Sport events, battles, wars and other events. | American Civil War, 2003 Wimbledon Championships, Cannes Film Festival. |
| FOOD (food) | Foods and drinks. | Carbonara, Sangiovese, Cheddar Beer Fondue, Pizza Margherita. |
| INST (instrument) | Technological instruments, mechanical instruments, musical instruments, and other tools. | Spitzer Space Telescope, Commodore 64, Skype, Apple Watch, Fender Stratocaster. |
| MEDIA (media) | Titles of films, books, magazines, songs and albums, fictional characters and languages. | Forbes, American Psycho, Kiss Me Once, Twin Peaks, Disney Adventures. |
| PLANT (plant) | Types of trees, flowers, and other plants, including their scientific names. | Salix, Quercus Petraea, Douglas Fir, Forsythia, Artemisia Maritima. |
| MYTH (mythological) | Mythological and religious entities. | Apollo, Persephone, Aphrodite, Saint Peter, Pope Gregory I, Hercules. |
| TIME (time) | Specific and well-defined time intervals, such as eras, historical periods, centuries, years and important days. No months and days of the week. | Renaissance, Middle Ages, Christmas, Great Depression, 17th Century, 2012. |
| VEHI (vehicle) | Cars, motorcycles and other vehicles. | Ferrari Testarossa, Suzuki Jimny, Honda CR-X, Boeing 747, Fairey Fulmar. |

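The exact B-/I- tag inventory emitted by this checkpoint can be inspected from its config; a quick sketch (the printed mapping below is illustrative, not copied from the checkpoint):

```py
from transformers import AutoConfig

# List the labels baked into the fine-tuned checkpoint
config = AutoConfig.from_pretrained("jayant-yadav/roberta-base-multinerd")
print(config.id2label)  # e.g. {0: 'O', 1: 'B-PER', 2: 'I-PER', ...}
```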

## Bias, Risks, and Limitations

The model was trained only on the English split of the MultiNERD dataset, so it is not expected to perform well on other languages.

## How to Get Started with the Model

Use the code below to get started with the model:

```py
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Load the fine-tuned checkpoint from the Hub
tokenizer = AutoTokenizer.from_pretrained("jayant-yadav/roberta-base-multinerd")
model = AutoModelForTokenClassification.from_pretrained("jayant-yadav/roberta-base-multinerd")

# Build a token-classification (NER) pipeline and tag an example sentence
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "My name is Wolfgang and I live in Berlin"

ner_results = nlp(example)
print(ner_results)
```
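By default the pipeline emits one prediction per sub-token. With the `transformers` version listed under Software below, sub-token predictions can also be merged into whole entity spans via `aggregation_strategy`:

```py
# Merge sub-token predictions into whole entity spans
nlp_grouped = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(nlp_grouped(example))
```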

## Training Details

### Training Data

50% of the English training split of the MultiNERD dataset was used to fine-tune the model.

### Training Procedure 


#### Preprocessing 

The multilingual dataset was filtered down to English: ```train_dataset = train_dataset.filter(lambda x: x['lang'] == 'en')```
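
For reference, a minimal sketch of the data selection described above, using the `datasets` library; the exact subsampling in the original notebook may differ:

```py
from datasets import load_dataset

# Load the multilingual MultiNERD dataset from the Hub
dataset = load_dataset("Babelscape/multinerd")

# Keep only the English examples
train_dataset = dataset["train"].filter(lambda x: x["lang"] == "en")

# Take the first 50% of the filtered English training split
train_dataset = train_dataset.select(range(len(train_dataset) // 2))
```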


#### Training Hyperparameters

The following hyperparameters were used during training (a rough mapping onto `TrainingArguments` is sketched after this list):

- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

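A hedged sketch of how these settings map onto `transformers.TrainingArguments`; the `output_dir` is a placeholder, and the Adam betas/epsilon above are the library defaults:

```py
from transformers import TrainingArguments

# Approximate mapping of the hyperparameters listed above;
# "roberta-base-multinerd" as output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="roberta-base-multinerd",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
```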

## Evaluation

Evaluation during training was performed on 50% of the evaluation (validation) split of the MultiNERD dataset.

### Testing Data & Metrics


#### Testing Data

Tested on the full test split of the MultiNERD dataset.


#### Metrics
Model versions and checkpoints were evaluated using F1, precision and recall.  
The `seqeval` metric was used for this: ```metric = load_metric("seqeval")```.
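`load_metric` comes from the `datasets` library and has since been deprecated in favor of `evaluate` (version 0.4.1 is listed under Software below); an equivalent call, with toy tags for illustration:

```py
import evaluate

# seqeval scores entity spans rather than individual tokens
metric = evaluate.load("seqeval")

# Toy example: one correctly predicted PER span
predictions = [["O", "B-PER", "I-PER", "O"]]
references = [["O", "B-PER", "I-PER", "O"]]

results = metric.compute(predictions=predictions, references=references)
print(results["overall_precision"], results["overall_recall"], results["overall_f1"])
```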

### Results

|Entity | Precision | Recall | F1 score | Support |
|---|---|---|---|---|
|ANIM | 0.71 | 0.77 | 0.739 | 1604 |
|BIO | 0.5 | 0.125 | 0.2 | 8 |
|CEL | 0.738 | 0.756 | 0.746 | 41 |
|DIS | 0.737 | 0.772 | 0.754 | 759 |
|EVE | 0.952 | 0.968 | 0.960 | 352 |
|FOOD | 0.679 | 0.545 | 0.605 | 566 |
|INST | 0.75 | 0.75 | 0.75 | 12 |
|LOC | 0.994 | 0.991 | 0.993 | 12024 |
|MEDIA | 0.940 | 0.969 | 0.954 | 458 |
|ORG | 0.977 | 0.981 | 0.979 | 3309 |
|PER | 0.992 | 0.995 | 0.993 | 5265 |
|PLANT | 0.617 | 0.730 | 0.669 | 894 |
|MYTH | 0.647 | 0.687 | 0.666 | 32 |
|TIME | 0.825 | 0.820 | 0.822 | 289 |
|VEHI | 0.812 | 0.812 | 0.812 | 32 |
|**Overall** | **0.939** | **0.947** | **0.943** | – |

## Technical Specifications

### Model Architecture and Objective
The architecture and objective are the same as [RoBERTa-base](https://huggingface.co/roberta-base), with a token-classification head added for NER.

### Compute Infrastructure

#### Hardware

Kaggle - GPU T4x2  
Google Colab - GPU T4x1

#### Software
pandas==1.5.3  
numpy==1.23.5  
seqeval==1.2.2  
datasets==2.15.0  
huggingface_hub==0.19.4  
transformers[torch]==4.35.2  
evaluate==0.4.1  
matplotlib==3.7.1  
collections  
torch==2.0.0  

## Model Card Contact
[jayant-yadav](https://huggingface.co/jayant-yadav)