|
--- |
|
task_categories: |
|
- text-classification |
|
- token-classification |
|
language: |
|
- en |
|
--- |
|
|
|
# tl;dr: |
|
|
|
This dataset is based largely on CleanCoNLL, with augmentations designed to evaluate competitor-detection guardrails.
|
|
|
# Details: |
|
|
|
## Base: |
|
We started with the CoNLL-2003 dataset, a standard NER benchmark containing English and German text annotated with four entity types: person, location, organization, and miscellaneous. For our evaluation, we focused solely on examples containing the ORG (organization) entity, as these are most relevant to competitor detection. |
|
We then applied corrections from CleanCoNLL, a 2023 revision by Rücker and Akbik that addresses annotation errors in the original CoNLL-2003. CleanCoNLL corrects 7.0% of labels in the English dataset, adds entity linking annotations, and maintains the original four entity types. This improved dataset enables more accurate evaluation, with top NER models achieving F1-scores up to 97.1%. |
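
As an illustration, the ORG-only filtering step could look roughly like the sketch below. This is a minimal sketch, not our exact pipeline: it assumes the data is loaded through the Hugging Face `datasets` library under the `conll2003` path with a `ner_tags` column, and in practice the CleanCoNLL corrections would be applied on top of it.

```python
from datasets import load_dataset

# Assumption: the public CoNLL-2003 dataset on the Hugging Face Hub; in our
# pipeline the CleanCoNLL label corrections are applied on top of these examples.
dataset = load_dataset("conll2003", split="train")

# Map label ids to string tags (e.g. "B-ORG", "I-ORG") via the dataset features.
label_names = dataset.features["ner_tags"].feature.names

def contains_org(example):
    """Keep only sentences that mention at least one ORG entity."""
    return any(label_names[tag].endswith("ORG") for tag in example["ner_tags"])

org_examples = dataset.filter(contains_org)
print(f"{len(org_examples)} of {len(dataset)} sentences contain an ORG entity")
```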
|
|
|
## Augmentations: |
|
|
|
We created two augmented datasets to test specific aspects of competitor detection: |
|
|
|
### Positive Dataset (with typographical errors):
|
We selected random examples and used the ORG entity as the "competitor" to be detected. We then introduced typographical errors into the competitor names using one of the following operations (a code sketch follows the list):
|
- Omission: Removing a letter |
|
- Transposition: Swapping two adjacent letters |
|
- Substitution: Swapping a letter with one found nearby on a US ANSI keyboard layout |
|
- Duplication: Selecting a character at random and doubling it |
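
A minimal sketch of these four operations is shown below. The keyboard-adjacency map is deliberately abbreviated, and the function names are illustrative rather than the exact code used to build the dataset.

```python
import random

# Abbreviated US ANSI keyboard adjacency map (assumption: only a handful of keys shown).
KEYBOARD_NEIGHBORS = {
    "a": "qwsz", "e": "wsdr", "i": "ujko", "o": "iklp",
    "n": "bhjm", "s": "awedxz", "t": "rfgy", "r": "edft",
}

def omit(name: str, i: int) -> str:
    """Omission: remove the letter at position i."""
    return name[:i] + name[i + 1:]

def transpose(name: str, i: int) -> str:
    """Transposition: swap the letter at position i with its right neighbor."""
    if len(name) < 2:
        return name
    i = min(i, len(name) - 2)
    return name[:i] + name[i + 1] + name[i] + name[i + 2:]

def substitute(name: str, i: int) -> str:
    """Substitution: replace the letter at i with a nearby key, if one is known."""
    neighbors = KEYBOARD_NEIGHBORS.get(name[i].lower())
    if not neighbors:
        return name
    return name[:i] + random.choice(neighbors) + name[i + 1:]

def duplicate(name: str, i: int) -> str:
    """Duplication: double the character at position i."""
    return name[:i] + name[i] * 2 + name[i + 1:]

def corrupt(name: str) -> str:
    """Apply one randomly chosen typo operation at a random position."""
    i = random.randrange(len(name))
    op = random.choice([omit, transpose, substitute, duplicate])
    return op(name, i)

print(corrupt("Microsoft"))  # e.g. "Micrsoft", "Micorsoft", "Mivrosoft", "Microssoft"
```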
|
|
|
This dataset tests the guardrail's ability to detect variations of competitor names, which is particularly relevant as our solution does not implement fuzzy matching. |
|
|
|
### Negative Dataset (with distractors):
|
For the negative dataset, we used the original examples containing ORG entities but created a list of "competitors" by randomly selecting companies from the Fortune 500 list (2024), excluding the actual ORG entity in the text. We set the `has_competitor` flag to `false` for all examples in this dataset. This evaluates the guardrail's precision in avoiding false positives when no actual competitors are mentioned.
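
A sketch of how distractor "competitors" can be drawn for each negative example is shown below. The `fortune_500` list is truncated here, and the record layout (`make_negative_example`, `has_competitor`) is illustrative rather than the exact structure of the released files.

```python
import random

# Assumption: fortune_500 holds company names from the 2024 Fortune 500 list (truncated here).
fortune_500 = ["Walmart", "Amazon", "Apple", "UnitedHealth Group", "Berkshire Hathaway"]

def make_negative_example(text: str, org_entities: list[str], n_competitors: int = 5) -> dict:
    """Pair an ORG-containing sentence with 'competitors' that do NOT appear in it."""
    candidates = [c for c in fortune_500 if c not in org_entities]
    return {
        "text": text,
        "competitors": random.sample(candidates, k=min(n_competitors, len(candidates))),
        "has_competitor": False,  # none of the listed competitors is mentioned in the text
    }

example = make_negative_example(
    "Shares of Siemens AG rose after the earnings call.",
    org_entities=["Siemens AG"],
)
print(example)
```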
|
|
|
# Citations |
|
|
|
```bibtex
|
@inproceedings{rucker-akbik-2023-cleanconll, |
|
title = "{C}lean{C}o{NLL}: A Nearly Noise-Free Named Entity Recognition Dataset", |
|
author = {R{\"u}cker, Susanna and Akbik, Alan}, |
|
editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", |
|
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", |
|
month = dec, |
|
year = "2023", |
|
address = "Singapore", |
|
publisher = "Association for Computational Linguistics", |
|
url = "https://aclanthology.org/2023.emnlp-main.533", |
|
doi = "10.18653/v1/2023.emnlp-main.533", |
|
pages = "8628--8645", |
|
} |
|
|
|
@misc{rücker2023cleanconll, |
|
title={{C}lean{C}o{NLL}: A Nearly Noise-Free Named Entity Recognition Dataset}, |
|
author={Susanna R{\"u}cker and Alan Akbik}, |
|
year={2023}, |
|
eprint={2310.16225}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.CL} |
|
} |
|
``` |