JosephCatrambone committed · Commit 96a19ef · verified · 1 Parent(s): 81d1923

Create README.md

README.md ADDED

---
task_categories:
- text-classification
- token-classification
language:
- en
---

# tl;dr:

This is a dataset largely based on CleanCoNLL but with some augmentations.

# Details:

## Base:

We started with the CoNLL-2003 dataset, a standard NER benchmark containing English and German text annotated with four entity types: person, location, organization, and miscellaneous. For our evaluation, we focused solely on examples containing the ORG (organization) entity, as these are the most relevant to competitor detection.

We then applied the corrections from CleanCoNLL, a 2023 revision by Rücker and Akbik that addresses annotation errors in the original CoNLL-2003. CleanCoNLL corrects 7.0% of the labels in the English dataset, adds entity-linking annotations, and keeps the original four entity types. This improved dataset enables more accurate evaluation, with top NER models achieving F1-scores of up to 97.1%.
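
For illustration, the ORG-only filtering step can be sketched as follows. This is a minimal sketch assuming each example carries per-token string BIO tags (as in the standard CoNLL-2003 column format); the `tokens` / `ner_tags` field names are illustrative, not the dataset's actual schema.

```python
# Keep only sentences that contain at least one ORG mention.
# Each example is assumed to be a dict with "tokens" and string BIO
# tags under "ner_tags", e.g. ["B-ORG", "O", "B-MISC", "O"].

def has_org(example: dict) -> bool:
    """True if any token in the sentence belongs to an ORG span."""
    return any(tag in ("B-ORG", "I-ORG") for tag in example["ner_tags"])

def filter_org_examples(examples):
    return [ex for ex in examples if has_org(ex)]

sample = [
    {"tokens": ["EU", "rejects", "German", "call"],
     "ner_tags": ["B-ORG", "O", "B-MISC", "O"]},
    {"tokens": ["Peter", "Blackburn"],
     "ner_tags": ["B-PER", "I-PER"]},
]
print(filter_org_examples(sample))  # keeps only the first sentence
```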

## Augmentations:

We created two augmented datasets to test specific aspects of competitor detection:

### Positive Dataset (with typographical errors):

We selected random examples and used the ORG entity as the "competitor" to be detected. We then introduced typographical errors into the competitor names using one of the following operations (a code sketch follows the list):

- Omission: Removing a letter
- Transposition: Swapping two adjacent letters
- Substitution: Swapping a letter with one found nearby on a US ANSI keyboard layout
- Duplication: Selecting a character at random and doubling it
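
A minimal sketch of these four perturbations is below. The keyboard-neighbor map is a small illustrative subset of a US ANSI layout, not the exact table used to build the dataset.

```python
import random

# Partial US ANSI keyboard neighbor map (illustrative subset only).
KEY_NEIGHBORS = {
    "a": "qwsz", "e": "wrds", "i": "ujko", "o": "iklp",
    "n": "bhjm", "r": "edft", "s": "awedxz", "t": "rfgy",
}

def typo(name: str, rng: random.Random = random) -> str:
    """Apply one random typographical error to a competitor name."""
    if len(name) < 2:
        return name
    i = rng.randrange(len(name))
    op = rng.choice(["omission", "transposition", "substitution", "duplication"])
    if op == "omission":                      # remove a letter
        return name[:i] + name[i + 1:]
    if op == "transposition":                 # swap two adjacent letters
        i = min(i, len(name) - 2)
        return name[:i] + name[i + 1] + name[i] + name[i + 2:]
    if op == "substitution":                  # swap in a letter from a nearby key
        neighbors = KEY_NEIGHBORS.get(name[i].lower(), "")
        if neighbors:
            return name[:i] + rng.choice(neighbors) + name[i + 1:]
        return name
    return name[:i] + name[i] * 2 + name[i + 1:]  # duplication: double a character

print(typo("Reuters", random.Random(0)))  # one randomly perturbed variant
```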

This dataset tests the guardrail's ability to detect variations of competitor names, which is particularly relevant as our solution does not implement fuzzy matching.

### Negative Dataset (with distractors):

For the negative dataset, we used the original examples containing ORG entities, but created each example's list of "competitors" by randomly selecting companies from the Fortune 500 index (2024), excluding the actual ORG entity in the text. We set the `has_competitor` flag to `false` for all examples in this dataset. This evaluates the guardrail's precision in avoiding false positives when no actual competitors are mentioned.
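
A minimal sketch of the distractor sampling for negative examples is below. The `FORTUNE_500` list is an illustrative stub (the full 2024 index was used in practice), and the field names are illustrative rather than the dataset's exact schema.

```python
import random

# Illustrative stub; the actual source is the full 2024 Fortune 500 index.
FORTUNE_500 = ["Walmart", "Amazon", "Apple", "ExxonMobil", "UnitedHealth Group"]

def make_negative_example(text: str, org_entities: set, k: int = 3,
                          rng: random.Random = random) -> dict:
    """Pair a sentence with distractor 'competitors' that exclude its actual ORG entities."""
    # Exact-match exclusion for simplicity; the real pipeline excludes the annotated ORG spans.
    candidates = [c for c in FORTUNE_500 if c not in org_entities]
    return {
        "text": text,
        "competitors": rng.sample(candidates, k=min(k, len(candidates))),
        "has_competitor": False,  # none of the listed competitors appear in the text
    }

example = make_negative_example(
    "EU rejects German call to boycott British lamb.",
    org_entities={"EU"},
)
print(example)
```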

# Citations

```bibtex
@inproceedings{rucker-akbik-2023-cleanconll,
  title = "{C}lean{C}o{NLL}: A Nearly Noise-Free Named Entity Recognition Dataset",
  author = {R{\"u}cker, Susanna and Akbik, Alan},
  editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika",
  booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
  month = dec,
  year = "2023",
  address = "Singapore",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/2023.emnlp-main.533",
  doi = "10.18653/v1/2023.emnlp-main.533",
  pages = "8628--8645",
}

@misc{rücker2023cleanconll,
  title = {{C}lean{C}o{NLL}: A Nearly Noise-Free Named Entity Recognition Dataset},
  author = {Susanna R{\"u}cker and Alan Akbik},
  year = {2023},
  eprint = {2310.16225},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL}
}
```