Commit 747c748
Parent(s): 7933201
Update parquet files

Changed files:
- .gitattributes +0 -29
- README.md +0 -165
- train.jsonl → extraction/semeval2010-test.parquet +2 -2
- test.jsonl → extraction/semeval2010-train.parquet +2 -2
- generation/semeval2010-test.parquet +3 -0
- generation/semeval2010-train.parquet +3 -0
- raw/semeval2010-test.parquet +3 -0
- raw/semeval2010-train.parquet +3 -0
- semeval2010.py +0 -153
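
With this commit each config stores one parquet file per split under `raw/`, `extraction/` and `generation/`, replacing the `train.jsonl`/`test.jsonl` files and the loading script. Besides `load_dataset`, a single split can then be read straight from the parquet files; a minimal sketch, where pandas and the `hf://` fsspec path are assumptions rather than part of this commit:

```python
import pandas as pd

# read one config/split directly from the repo's parquet layout
# (relies on the huggingface_hub fsspec integration for hf:// paths)
df = pd.read_parquet("hf://datasets/midas/semeval2010/raw/semeval2010-train.parquet")
print(len(df), df.columns.tolist())
```
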
.gitattributes
DELETED
@@ -1,29 +0,0 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
test.jsonl filter=lfs diff=lfs merge=lfs -text
train.jsonl filter=lfs diff=lfs merge=lfs -text
README.md
DELETED
@@ -1,165 +0,0 @@

A dataset for benchmarking keyphrase extraction and generation techniques from long-document English scientific articles. For more details about the dataset, please refer to the original paper: [https://dl.acm.org/doi/10.5555/1859664.1859668](https://dl.acm.org/doi/10.5555/1859664.1859668)

Original source of the data: [https://github.com/boudinfl/semeval-2010-pre](https://github.com/boudinfl/semeval-2010-pre)

## Dataset Summary

The SemEval-2010 dataset was originally proposed by *Su Nam Kim et al.* in the paper [SemEval-2010 Task 5: Automatic Keyphrase Extraction from Scientific Articles](https://aclanthology.org/S10-1004.pdf) in 2010. The dataset consists of 284 English scientific papers from the ACM Digital Library (conference and workshop papers). The selected articles belong to the following four 1998 ACM classifications: C2.4 (Distributed Systems), H3.3 (Information Search and Retrieval), I2.11 (Distributed Artificial Intelligence – Multiagent Systems) and J4 (Social and Behavioral Sciences – Economics). Each paper has two sets of keyphrases, annotated by readers and by the authors. The original dataset was divided into trial, training and test splits, evenly distributed across the four domains; these splits had 40, 144 and 100 articles respectively, with the trial split being a subset of the training split. We provide the test and train splits, with 100 and 144 articles respectively.

The dataset shared here categorizes the keyphrases into *extractive* and *abstractive*. **Extractive keyphrases** are those that can be found in the input text, while **abstractive keyphrases** are those that are not present in the input text. For all the metadata about the documents and keyphrases, please refer to the [original source](https://github.com/boudinfl/semeval-2010-pre) from which this dataset was taken. The main motivation behind making this dataset available in this form is to make it easy for researchers to programmatically download it and evaluate their models for the tasks of keyphrase extraction and generation. As treating keyphrase extraction as a sequence tagging task with contextual language models has become popular - [Keyphrase extraction from scholarly articles as sequence labeling using contextualized embeddings](https://arxiv.org/pdf/1910.08840.pdf) - we have also made the token tags available in the BIO tagging format.
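
For illustration, present keyphrases can be recovered from the BIO tags with a few lines of Python (a minimal sketch on a toy example, not actual dataset content):

```python
def decode_bio(tokens, tags):
    """Collect keyphrases from BIO tags: 'B' starts a phrase, 'I' extends it, 'O' closes it."""
    phrases, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":
            if current:
                phrases.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:
            current.append(token)
        else:
            # 'O' (or an orphan 'I') closes any open phrase
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases

# toy example, not actual dataset content
print(decode_bio(["We", "study", "keyphrase", "extraction", "from", "articles"],
                 ["O", "O", "B", "I", "O", "O"]))  # ['keyphrase extraction']
```
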

## Dataset Structure

### Data Fields

- **id**: unique identifier of the document.
- **document**: whitespace-separated list of words in the document.
- **doc_bio_tags**: BIO tags for each word in the document. B marks the beginning of a keyphrase, I marks a word inside a keyphrase, and O marks a word outside any keyphrase.
- **extractive_keyphrases**: list of all the present keyphrases.
- **abstractive_keyphrases**: list of all the absent keyphrases (see the record sketch below).
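
Putting the fields together, a single record has roughly the following shape (illustrative values only, not an actual record from the dataset):

```python
sample = {
    "id": "H-1",  # hypothetical identifier
    "document": ["We", "study", "keyphrase", "extraction", "from", "articles"],
    "doc_bio_tags": ["O", "O", "B", "I", "O", "O"],
    "extractive_keyphrases": ["keyphrase extraction"],            # present in the text
    "abstractive_keyphrases": ["scholarly document processing"],  # absent from the text
}
```
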

### Data Splits

| Split | # datapoints |
|--|--|
| Test | 100 |
| Train | 144 |

Train

- Percentage of keyphrases that are named entities: 63.01% (named entities detected using scispacy - en-core-sci-lg model)
- Percentage of keyphrases that are noun phrases: 82.50% (noun phrases detected using spacy en-core-web-lg after removing determiners)

Test

- Percentage of keyphrases that are named entities: 62.06% (named entities detected using scispacy - en-core-sci-lg model)
- Percentage of keyphrases that are noun phrases: 78.36% (noun phrases detected using spacy after removing determiners)
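
The exact matching procedure behind these percentages is not spelled out here; a rough reproduction sketch with spaCy, assuming lowercased exact matching against determiner-stripped noun chunks, might look like this:

```python
import spacy

nlp = spacy.load("en_core_web_lg")

def noun_phrase_fraction(documents, keyphrase_lists):
    """Fraction of gold keyphrases that exactly match a determiner-stripped noun chunk."""
    matched = total = 0
    for text, keyphrases in zip(documents, keyphrase_lists):
        doc = nlp(text)
        chunks = set()
        for chunk in doc.noun_chunks:
            # drop determiners ("the", "a", ...) before comparing
            chunks.add(" ".join(t.text.lower() for t in chunk if t.pos_ != "DET"))
        for kp in keyphrases:
            total += 1
            matched += kp.lower() in chunks
    return matched / total if total else 0.0
```
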

## Usage

### Full Dataset

```python
from datasets import load_dataset

# get entire dataset
dataset = load_dataset("midas/semeval2010", "raw")

# sample from the train split
print("Sample from train dataset split")
test_sample = dataset["train"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")

# sample from the test split
print("Sample from test dataset split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```

**Output**

```bash
Sample from train dataset split
Fields in the sample: ['id', 'document', 'doc_bio_tags', 'extractive_keyphrases', 'abstractive_keyphrases', 'other_metadata']
Tokenized Document: ['HITS', 'on', 'the', 'Web:', 'How', 'does', 'it', 'Compare?', 'Marc', 'Najork', 'Microsoft', 'Research', '1065', 'La', 'Avenida', 'Mountain', 'View,', 'CA,', 'USA', '[email protected]', 'Hugo', 'Zaragoza', '∗', 'Yahoo!', 'Research', 'Barcelona', 'Ocata', '1', 'Barcelona', '08003,', 'Spain', '[email protected]', 'Michael', 'Taylor', 'Microsoft', 'Research', '7', 'J', 'J', 'Thompson', 'Ave', 'Cambridge', 'CB3', '0FB,', 'UK', '[email protected]', 'ABSTRACT', 'This', 'paper', 'describes', 'a', 'large-scale', 'evaluation', 'of', 'the', 'effectiveness', 'of', 'HITS', 'in', 'comparison', 'with', 'other', 'link-based', 'ranking', 'algorithms,', 'when', 'used', 'in', 'combination', 'with', 'a', 'state-ofthe-art', 'text', 'retrieval', 'algorithm', 'exploiting', 'anchor', 'text.', 'We', 'quantified', 'their', 'effectiveness', 'using', 'three', 'common', 'performance', 'measures:', 'the', 'mean', 'reciprocal', 'rank,', 'the', 'mean', 'average', 'precision,', 'and', 'the', 'normalized', 'discounted', 'cumulative', 'gain', 'measurements.', 'The', 'evaluation', 'is', 'based', 'on', 'two', 'large', 'data', 'sets:', 'a', 'breadth-first', 'search', 'crawl', 'of', '463', 'million', 'web', 'pages', 'containing', '17.6', 'billion', 'hyperlinks', 'and', 'referencing', '2.9', 'billion', 'distinct', 'URLs;', 'and', 'a', 'set', 'of', '28,043', 'queries', 'sampled', 'from', 'a', 'query', 'log,', 'each', 'query', 'having', 'on', 'average', '2,383', 'results,', 'about', '17', 'of', 'which', 'were', 'labeled', 'by', 'judges.', 'We', 'found', 'that', 'HITS', 'outperforms', 'PageRank,', 'but', 'is', 'about', 'as', 'effective', 'as', 'web-page', 'in-degree.', 'The', 'same', 'holds', 'true', 'when', 'any', 'of', 'the', 'link-based', 'features', 'are', 'combined', 'with', 'the', 'text', 'retrieval', 'algorithm.', 'Finally,', 'we', 'studied', 'the', 'relationship', 'between', 'query', 'specificity', 'and', 'the', 'effectiveness', 'of', 'selected', 'features,', 'and', 'found', 'that', 'link-based', 'features', 'perform', 'better', 'for', 'general', 'queries,', 'whereas', 'BM25F', 'performs', 'better', 'for', 'specific', 'queries.', 'Categories', 'and', 'Subject', 'Descriptors', 'H.3.3', '[Information', 'Search', 'and', 'Retrieval]:', 'Information', 'Storage', 'and', 'Retrieval-search', 'process,', 'selection', 'process', 'General', 'Terms', 'Algorithms,', 'Measurement,', 'Experimentation', '1.', 'INTRODUCTION', 'Link', 'graph', 'features', 'such', 'as', 'in-degree', 'and', 'PageRank', 'have', 'been', 'shown', 'to', 'significantly', 'improve', 'the', 'performance', 'of', 'text', 'retrieval', 'algorithms', 'on', 'the', 'web.', 'The', 'HITS', 'algorithm', 'is', 'also', 'believed', 'to', 'be', 'of', 'interest', 'for', 'web', 'search;', 'to', 'some', 'degree,', 'one', 'may', 'expect', 'HITS', 'to', 'be', 'more', 'informative', 'that', 'other', 'link-based', 'features', 'because', 'it', 'is', 'query-dependent:', 'it', 'tries', 'to', 'measure', 'the', 'interest', 'of', 'pages', 'with', 'respect', 'to', 'a', 'given', 'query.', 'However,', 'it', 'remains', 'unclear', 'today', 'whether', 'there', 'are', 'practical', 'benefits', 'of', 'HITS', 'over', 'other', 'link', 'graph', 'measures.', 'This', 'is', 'even', 'more', 'true', 'when', 'we', 'consider', 'that', 'modern', 'retrieval', 'algorithms', 'used', 'on', 'the', 'web', 'use', 'a', 'document', 'representation', 'which', 'incorporates', 'the', 'document"s', 'anchor', 'text,', 'i.e.', 'the', 'text', 'of', 'incoming', 'links.', 'This,', 'at', 'least', 'to', 'some', 
'degree,', 'takes', 'the', 'link', 'graph', 'into', 'account,', 'in', 'a', 'query-dependent', 'manner.', 'Comparing', 'HITS', 'to', 'PageRank', 'or', 'in-degree', 'empirically', 'is', 'no', 'easy', 'task.', 'There', 'are', 'two', 'main', 'difficulties:', 'scale', 'and', 'relevance.', 'Scale', 'is', 'important', 'because', 'link-based', 'features', 'are', 'known', 'to', 'improve', 'in', 'quality', 'as', 'the', 'document', 'graph', 'grows.', 'If', 'we', 'carry', 'out', 'a', 'small', 'experiment,', 'our', 'conclusions', 'won"t', 'carry', 'over', 'to', 'large', 'graphs', 'such', 'as', 'the', 'web.', 'However,', 'computing', 'HITS', 'efficiently', 'on', 'a', 'graph', 'the', 'size', 'of', 'a', 'realistic', 'web', 'crawl', 'is', 'extraordinarily', 'difficult.', 'Relevance', 'is', 'also', 'crucial', 'because', 'we', 'cannot', 'measure', 'the', 'performance', 'of', 'a', 'feature', 'in', 'the', 'absence', 'of', 'human', 'judgments:', 'what', 'is', 'crucial', 'is', 'ranking', 'at', 'the', 'top', 'of', 'the', 'ten', 'or', 'so', 'documents', 'that', 'a', 'user', 'will', 'peruse.', 'To', 'our', 'knowledge,', 'this', 'paper', 'is', 'the', 'first', 'attempt', 'to', 'evaluate', 'HITS', 'at', 'a', 'large', 'scale', 'and', 'compare', 'it', 'to', 'other', 'link-based', 'features', 'with', 'respect', 'to', 'human', 'evaluated', 'judgment.', 'Our', 'results', 'confirm', 'many', 'of', 'the', 'intuitions', 'we', 'have', 'about', 'link-based', 'features', 'and', 'their', 'relationship', 'to', 'text', 'retrieval', 'methods', 'exploiting', 'anchor', 'text.', 'This', 'is', 'reassuring:', 'in', 'the', 'absence', 'of', 'a', 'theoretical', 'model', 'capable', 'of', 'tying', 'these', 'measures', 'with', 'relevance,', 'the', 'only', 'way', 'to', 'validate', 'our', 'intuitions', 'is', 'to', 'carry', 'out', 'realistic', 'experiments.', 'However,', 'we', 'were', 'quite', 'surprised', 'to', 'find', 'that', 'HITS,', 'a', 'query-dependent', 'feature,', 'is', 'about', 'as', 'effective', 'as', 'web', 'page', 'in-degree,', 'the', 'most', 'simpleminded', 'query-independent', 'link-based', 'feature.', 'This', 'continues', 'to', 'be', 'true', 'when', 'the', 'link-based', 'features', 'are', 'combined', 'with', 'a', 'text', 'retrieval', 'algorithm', 'exploiting', 'anchor', 'text.', 'The', 'remainder', 'of', 'this', 'paper', 'is', 'structured', 'as', 'follows:', 'Section', '2', 'surveys', 'related', 'work.', 'Section', '3', 'describes', 'the', 'data', 'sets', 'we', 'used', 'in', 'our', 'study.', 'Section', '4', 'reviews', 'the', 'performance', 'measures', 'we', 'used.', 'Sections', '5', 'and', '6', 'describe', 'the', 'PageRank', 'and', 'HITS', 'algorithms', 'in', 'more', 'detail,', 'and', 'sketch', 'the', 'computational', 'infrastructure', 'we', 'employed', 'to', 'carry', 'out', 'large', 'scale', 'experiments.', 'Section', '7', 'presents', 'the', 'results', 'of', 'our', 'evaluations,', 'and', 'Section', '8', 'offers', 'concluding', 'remarks.', '2.', 'RELATED', 'WORK', 'The', 'idea', 'of', 'using', 'hyperlink', 'analysis', 'for', 'ranking', 'web', 'search', 'results', 'arose', 'around', '1997,', 'and', 'manifested', 'itself', 'in', 'the', 'HITS', '[16,', '17]', 'and', 'PageRank', '[5,', '21]', 'algorithms.', 'The', 'popularity', 'of', 'these', 'two', 'algorithms', 'and', 'the', 'phenomenal', 'success', 'of', 'the', 'Google', 'search', 'engine,', 'which', 'uses', 'PageRank,', 'have', 'spawned', 'a', 'large', 'amount', 'of', 'subsequent', 'research.', 'There', 'are', 'numerous', 'attempts', 'at', 'improving', 'the', 'effectiveness', 'of', 
'HITS', 'and', 'PageRank.', 'Query-dependent', 'link-based', 'ranking', 'algorithms', 'inspired', 'by', 'HITS', 'include', 'SALSA', '[19],', 'Randomized', 'HITS', '[20],', 'and', 'PHITS', '[7],', 'to', 'name', 'a', 'few.', 'Query-independent', 'link-based', 'ranking', 'algorithms', 'inspired', 'by', 'PageRank', 'include', 'TrafficRank', '[22],', 'BlockRank', '[14],', 'and', 'TrustRank', '[11],', 'and', 'many', 'others.', 'Another', 'line', 'of', 'research', 'is', 'concerned', 'with', 'analyzing', 'the', 'mathematical', 'properties', 'of', 'HITS', 'and', 'PageRank.', 'For', 'example,', 'Borodin', 'et', 'al.', '[3]', 'investigated', 'various', 'theoretical', 'properties', 'of', 'PageRank,', 'HITS,', 'SALSA,', 'and', 'PHITS,', 'including', 'their', 'similarity', 'and', 'stability,', 'while', 'Bianchini', 'et', 'al.', '[2]', 'studied', 'the', 'relationship', 'between', 'the', 'structure', 'of', 'the', 'web', 'graph', 'and', 'the', 'distribution', 'of', 'PageRank', 'scores,', 'and', 'Langville', 'and', 'Meyer', 'examined', 'basic', 'properties', 'of', 'PageRank', 'such', 'as', 'existence', 'and', 'uniqueness', 'of', 'an', 'eigenvector', 'and', 'convergence', 'of', 'power', 'iteration', '[18].', 'Given', 'the', 'attention', 'that', 'has', 'been', 'paid', 'to', 'improving', 'the', 'effectiveness', 'of', 'PageRank', 'and', 'HITS,', 'and', 'the', 'thorough', 'studies', 'of', 'the', 'mathematical', 'properties', 'of', 'these', 'algorithms,', 'it', 'is', 'somewhat', 'surprising', 'that', 'very', 'few', 'evaluations', 'of', 'their', 'effectiveness', 'have', 'been', 'published.', 'We', 'are', 'aware', 'of', 'two', 'studies', 'that', 'have', 'attempted', 'to', 'formally', 'evaluate', 'the', 'effectiveness', 'of', 'HITS', 'and', 'of', 'PageRank.', 'Amento', 'et', 'al.', '[1]', 'employed', 'quantitative', 'measures,', 'but', 'based', 'their', 'experiments', 'on', 'the', 'result', 'sets', 'of', 'just', '5', 'queries', 'and', 'the', 'web-graph', 'induced', 'by', 'topical', 'crawls', 'around', 'the', 'result', 'set', 'of', 'each', 'query.', 'A', 'more', 'recent', 'study', 'by', 'Borodin', 'et', 'al.', '[4]', 'is', 'based', 'on', '34', 'queries,', 'result', 'sets', 'of', '200', 'pages', 'per', 'query', 'obtained', 'from', 'Google,', 'and', 'a', 'neighborhood', 'graph', 'derived', 'by', 'retrieving', '50', 'in-links', 'per', 'result', 'from', 'Google.', 'By']
Document BIO Tags: ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'I', 'O', 'B', 'I', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 
'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']
Extractive/present Keyphrases: ['ranking', 'pagerank', 'mean reciprocal rank', 'mean average precision', 'query specificity', 'link graph', 'scale and relevance', 'hyperlink analysis', 'rank', 'bm25f', 'mrr', 'map', 'ndcg']
Abstractive/absent Keyphrases: ['normalized discounted cumulative gain measurement', 'breadth-first search crawl', 'feature selection', 'link-based feature', 'quantitative measure', 'crawled web page', 'hit']

-----------

Sample from test dataset split
Fields in the sample: ['id', 'document', 'doc_bio_tags', 'extractive_keyphrases', 'abstractive_keyphrases', 'other_metadata']
Tokenized Document: ['Live', 'Data', 'Center', 'Migration', 'across', 'WANs:', 'A', 'Robust', 'Cooperative', 'Context', 'Aware', 'Approach', 'K.K.', 'Ramakrishnan,', 'Prashant', 'Shenoy', ',', 'Jacobus', 'Van', 'der', 'Merwe', 'AT&T', 'Labs-Research', '/', 'University', 'of', 'Massachusetts', 'ABSTRACT', 'A', 'significant', 'concern', 'for', 'Internet-based', 'service', 'providers', 'is', 'the', 'continued', 'operation', 'and', 'availability', 'of', 'services', 'in', 'the', 'face', 'of', 'outages,', 'whether', 'planned', 'or', 'unplanned.', 'In', 'this', 'paper', 'we', 'advocate', 'a', 'cooperative,', 'context-aware', 'approach', 'to', 'data', 'center', 'migration', 'across', 'WANs', 'to', 'deal', 'with', 'outages', 'in', 'a', 'non-disruptive', 'manner.', 'We', 'specifically', 'seek', 'to', 'achieve', 'high', 'availability', 'of', 'data', 'center', 'services', 'in', 'the', 'face', 'of', 'both', 'planned', 'and', 'unanticipated', 'outages', 'of', 'data', 'center', 'facilities.', 'We', 'make', 'use', 'of', 'server', 'virtualization', 'technologies', 'to', 'enable', 'the', 'replication', 'and', 'migration', 'of', 'server', 'functions.', 'We', 'propose', 'new', 'network', 'functions', 'to', 'enable', 'server', 'migration', 'and', 'replication', 'across', 'wide', 'area', 'networks', '(e.g.,', 'the', 'Internet),', 'and', 'finally', 'show', 'the', 'utility', 'of', 'intelligent', 'and', 'dynamic', 'storage', 'replication', 'technology', 'to', 'ensure', 'applications', 'have', 'access', 'to', 'data', 'in', 'the', 'face', 'of', 'outages', 'with', 'very', 'tight', 'recovery', 'point', 'objectives.', 'Categories', 'and', 'Subject', 'Descriptors', 'C.2.4', '[Computer-Communication', 'Networks]:', 'Distributed', 'Systems', 'General', 'Terms', 'Design,', 'Reliability', '1.', 'INTRODUCTION', 'A', 'significant', 'concern', 'for', 'Internet-based', 'service', 'providers', 'is', 'the', 'continued', 'operation', 'and', 'availability', 'of', 'services', 'in', 'the', 'face', 'of', 'outages,', 'whether', 'planned', 'or', 'unplanned.', 'These', 'concerns', 'are', 'exacerbated', 'by', 'the', 'increased', 'use', 'of', 'the', 'Internet', 'for', 'mission', 'critical', 'business', 'and', 'real-time', 'entertainment', 'applications.', 'A', 'relatively', 'minor', 'outage', 'can', 'disrupt', 'and', 'inconvenience', 'a', 'large', 'number', 'of', 'users.', 'Today', 'these', 'services', 'are', 'almost', 'exclusively', 'hosted', 'in', 'data', 'centers.', 'Recent', 'advances', 'in', 'server', 'virtualization', 'technologies', '[8,', '14,', '22]', 'allow', 'for', 'the', 'live', 'migration', 'of', 'services', 'within', 'a', 'local', 'area', 'network', '(LAN)', 'environment.', 'In', 'the', 'LAN', 'environment,', 'these', 'technologies', 'have', 'proven', 'to', 'be', 'a', 'very', 'effective', 'tool', 'to', 'enable', 'data', 'center', 'management', 'in', 'a', 'non-disruptive', 'fashion.', 'Not', 'only', 'can', 'it', 'support', 'planned', 'maintenance', 'events', '[8],', 'but', 'it', 'can', 'also', 'be', 'used', 'in', 'a', 'more', 'dynamic', 'fashion', 'to', 'automatically', 'balance', 'load', 'between', 'the', 'physical', 'servers', 'in', 'a', 'data', 'center', '[22].', 'When', 'using', 'these', 'technologies', 'in', 'a', 'LAN', 'environment,', 'services', 'execute', 'in', 'a', 'virtual', 'server,', 'and', 'the', 'migration', 'services', 'provided', 'by', 'the', 'underlying', 'virtualization', 'framework', 'allows', 'for', 'a', 'virtual', 'server', 'to', 'be', 'migrated', 'from', 'one', 'physical', 'server', 'to', 'another,', 
'without', 'any', 'significant', 'downtime', 'for', 'the', 'service', 'or', 'application.', 'In', 'particular,', 'since', 'the', 'virtual', 'server', 'retains', 'the', 'same', 'network', 'address', 'as', 'before,', 'any', 'ongoing', 'network', 'level', 'interactions', 'are', 'not', 'disrupted.', 'Similarly,', 'in', 'a', 'LAN', 'environment,', 'storage', 'requirements', 'are', 'normally', 'met', 'via', 'either', 'network', 'attached', 'storage', '(NAS)', 'or', 'via', 'a', 'storage', 'area', 'network', '(SAN)', 'which', 'is', 'still', 'reachable', 'from', 'the', 'new', 'physical', 'server', 'location', 'to', 'allow', 'for', 'continued', 'storage', 'access.', 'Unfortunately', 'in', 'a', 'wide', 'area', 'environment', '(WAN),', 'live', 'server', 'migration', 'is', 'not', 'as', 'easily', 'achievable', 'for', 'two', 'reasons:', 'First,', 'live', 'migration', 'requires', 'the', 'virtual', 'server', 'to', 'maintain', 'the', 'same', 'network', 'address', 'so', 'that', 'from', 'a', 'network', 'connectivity', 'viewpoint', 'the', 'migrated', 'server', 'is', 'indistinguishable', 'from', 'the', 'original.', 'While', 'this', 'is', 'fairly', 'easily', 'achieved', 'in', 'a', 'shared', 'LAN', 'environment,', 'no', 'current', 'mechanisms', 'are', 'available', 'to', 'efficiently', 'achieve', 'the', 'same', 'feat', 'in', 'a', 'WAN', 'environment.', 'Second,', 'while', 'fairly', 'sophisticated', 'remote', 'replication', 'mechanisms', 'have', 'been', 'developed', 'in', 'the', 'context', 'of', 'disaster', 'recovery', '[20,', '7,', '11],', 'these', 'mechanisms', 'are', 'ill', 'suited', 'to', 'live', 'data', 'center', 'migration,', 'because', 'in', 'general', 'the', 'available', 'technologies', 'are', 'unaware', 'of', 'application/service', 'level', 'semantics.', 'In', 'this', 'paper', 'we', 'outline', 'a', 'design', 'for', 'live', 'service', 'migration', 'across', 'WANs.', 'Our', 'design', 'makes', 'use', 'of', 'existing', 'server', 'virtualization', 'technologies', 'and', 'propose', 'network', 'and', 'storage', 'mechanisms', 'to', 'facilitate', 'migration', 'across', 'a', 'WAN.', 'The', 'essence', 'of', 'our', 'approach', 'is', 'cooperative,', 'context', 'aware', 'migration,', 'where', 'a', 'migration', 'management', 'system', 'orchestrates', 'the', 'data', 'center', 'migration', 'across', 'all', 'three', 'subsystems', 'involved,', 'namely', 'the', 'server', 'platforms,', 'the', 'wide', 'area', 'network', 'and', 'the', 'disk', 'storage', 'system.', 'While', 'conceptually', 'similar', 'in', 'nature', 'to', 'the', 'LAN', 'based', 'work', 'described', 'above,', 'using', 'migration', 'technologies', 'across', 'a', 'wide', 'area', 'network', 'presents', 'unique', 'challenges', 'and', 'has', 'to', 'our', 'knowledge', 'not', 'been', 'achieved.', 'Our', 'main', 'contribution', 'is', 'the', 'design', 'of', 'a', 'framework', 'that', 'will', 'allow', 'the', 'migration', 'across', 'a', 'WAN', 'of', 'all', 'subsystems', 'involved', 'with', 'enabling', 'data', 'center', 'services.', 'We', 'describe', 'new', 'mechanisms', 'as', 'well', 'as', 'extensions', 'to', 'existing', 'technologies', 'to', 'enable', 'this', 'and', 'outline', 'the', 'cooperative,', 'context', 'aware', 'functionality', 'needed', 'across', 'the', 'different', 'subsystems', 'to', 'enable', 'this.', '262', '2.', 'LIVE', 'DATA', 'CENTER', 'MIGRATION', 'ACROSS', 'WANS', 'Three', 'essential', 'subsystems', 'are', 'involved', 'with', 'hosting', 'services', 'in', 'a', 'data', 'center:', 'First,', 'the', 'servers', 'host', 'the', 'application', 'or', 'service', 
'logic.', 'Second,', 'services', 'are', 'normally', 'hosted', 'in', 'a', 'data', 'center', 'to', 'provide', 'shared', 'access', 'through', 'a', 'network,', 'either', 'the', 'Internet', 'or', 'virtual', 'private', 'networks', '(VPNs).', 'Finally,', 'most', 'applications', 'require', 'disk', 'storage', 'for', 'storing', 'data', 'and', 'the', 'amount', 'of', 'disk', 'space', 'and', 'the', 'frequency', 'of', 'access', 'varies', 'greatly', 'between', 'different', 'services/applications.', 'Disruptions,', 'failures,', 'or', 'in', 'general,', 'outages', 'of', 'any', 'kind', 'of', 'any', 'of', 'these', 'components', 'will', 'cause', 'service', 'disruption.', 'For', 'this', 'reason,', 'prior', 'work', 'and', 'current', 'practices', 'have', 'addressed', 'the', 'robustness', 'of', 'individual', 'components.', 'For', 'example,', 'data', 'centers', 'typically', 'have', 'multiple', 'network', 'connections', 'and', 'redundant', 'LAN', 'devices', 'to', 'ensure', 'redundancy', 'at', 'the', 'networking', 'level.', 'Similarly,', 'physical', 'servers', 'are', 'being', 'designed', 'with', 'redundant', 'hot-swappable', 'components', '(disks,', 'processor', 'blades,', 'power', 'supplies', 'etc).', 'Finally,', 'redundancy', 'at', 'the', 'storage', 'level', 'can', 'be', 'provided', 'through', 'sophisticated', 'data', 'mirroring', 'technologies.', 'The', 'focus', 'of', 'our', 'work,', 'however,', 'is', 'on', 'the', 'case', 'where', 'such', 'local', 'redundancy', 'mechanisms', 'are', 'not', 'sufficient.', 'Specifically,', 'we', 'are', 'interested', 'in', 'providing', 'service', 'availability', 'when', 'the', 'data', 'center', 'as', 'a', 'whole', 'becomes', 'unavailable,', 'for', 'example', 'because', 'of', 'data', 'center', 'wide', 'maintenance', 'operations,', 'or', 'because', 'of', 'catastrophic', 'events.', 'As', 'such,', 'our', 'basic', 'approach', 'is', 'to', 'migrate', 'services', 'between', 'data', 'centers', 'across', 'the', 'wide', 'are', 'network', '(WAN).', 'By', 'necessity,', 'moving', 'or', 'migrating', 'services', 'from', 'one', 'data', 'center', 'to', 'another', 'needs', 'to', 'consider', 'all', 'three', 'of', 'these', 'components.', 'Historically,', 'such', 'migration', 'has', 'been', 'disruptive', 'in', 'nature,', 'requiring', 'downtime', 'of', 'the', 'actual', 'services', 'involved,', 'or', 'requiring', 'heavy', 'weight', 'replication', 'techniques.', 'In', 'the', 'latter', 'case', 'concurrently', 'running', 'replicas', 'of', 'a', 'service', 'can', 'be', 'made', 'available', 'thus', 'allowing', 'a', 'subset', 'of', 'the', 'service', 'to', 'be', 'migrated', 'or', 'maintained', 'without', 'impacting', 'the', 'service']
Document BIO Tags: ['O', 'B', 'I', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'I', 
'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']
Extractive/present Keyphrases: ['data center migration', 'wan', 'lan', 'virtual server', 'storage replication', 'synchronous replication', 'asynchronous replication', 'network support', 'storage', 'voip']
Abstractive/absent Keyphrases: ['internet-based service', 'voice-over-ip', 'database']

-----------
```

### Keyphrase Extraction

```python
from datasets import load_dataset

# get the dataset only for keyphrase extraction
dataset = load_dataset("midas/semeval2010", "extraction")

print("Samples for Keyphrase Extraction")

# sample from the train split
print("Sample from train data split")
test_sample = dataset["train"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("\n-----------\n")

# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("\n-----------\n")
```

### Keyphrase Generation

```python
from datasets import load_dataset

# get the dataset only for keyphrase generation
dataset = load_dataset("midas/semeval2010", "generation")

print("Samples for Keyphrase Generation")

# sample from the train split
print("Sample from train data split")
test_sample = dataset["train"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")

# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```

## Citation Information

```
@inproceedings{10.5555/1859664.1859668,
  author = {Kim, Su Nam and Medelyan, Olena and Kan, Min-Yen and Baldwin, Timothy},
  title = {SemEval-2010 Task 5: Automatic Keyphrase Extraction from Scientific Articles},
  year = {2010},
  publisher = {Association for Computational Linguistics},
  address = {USA},
  abstract = {This paper describes Task 5 of the Workshop on Semantic Evaluation 2010 (SemEval-2010). Systems are to automatically assign keyphrases or keywords to given scientific articles. The participating systems were evaluated by matching their extracted keyphrases against manually assigned ones. We present the overall ranking of the submitted systems and discuss our findings to suggest future directions for this task.},
  booktitle = {Proceedings of the 5th International Workshop on Semantic Evaluation},
  pages = {21–26},
  numpages = {6},
  location = {Los Angeles, California},
  series = {SemEval '10}
}
```

## Contributions

Thanks to [@debanjanbhucs](https://github.com/debanjanbhucs), [@dibyaaaaax](https://github.com/dibyaaaaax) and [@ad6398](https://github.com/ad6398) for adding this dataset.
train.jsonl → extraction/semeval2010-test.parquet
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:1d748faf67d9d4098d88311f3abb8d8cd7766f1cf3b2003f3276038d793633cd
+size 1867271
test.jsonl → extraction/semeval2010-train.parquet
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:5bd5f2b1805e85d82e422314b0105d41b7c0f3ef24d5b7d6eb330244f216bddc
+size 2935362
generation/semeval2010-test.parquet
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e959541a1bf70e5f5e3f6a45f1dc410235c2d2a900357b6dcb691f2379a63288
+size 1858115
generation/semeval2010-train.parquet
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:650180ded673872dd98c3cd6bf68554eea38b83fcf8f1bcd90075f6c09a19c09
+size 2920496
raw/semeval2010-test.parquet
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:20b528da6c9630e3575d58282311e12c8c3239c2ea92bbb2fdf68706653614f5
+size 1890903
raw/semeval2010-train.parquet
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e2d07cb86b4ef309f079c235695df0b7d3967bc22e3b4621781498da2575bc1d
+size 2967832
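
Each of the entries above is a Git LFS pointer: the repository itself stores only an `oid` (the SHA-256 of the file's contents) and a `size`, while the parquet bytes live in LFS storage. A downloaded file can be checked against its pointer with a short sketch like this (the path is illustrative):

```python
import hashlib

def lfs_oid(path, chunk_size=1 << 20):
    """SHA-256 of a file's contents, as recorded in the 'oid sha256:...' pointer line."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# compare with the pointer above, e.g.:
# lfs_oid("raw/semeval2010-train.parquet") == "e2d07cb86b4ef309f079c235695df0b7d3967bc22e3b4621781498da2575bc1d"
```
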
semeval2010.py
DELETED
@@ -1,153 +0,0 @@
import json
import datasets

# _SPLIT = ['test']
_CITATION = """\
@inproceedings{10.5555/1859664.1859668,
author = {Kim, Su Nam and Medelyan, Olena and Kan, Min-Yen and Baldwin, Timothy},
title = {SemEval-2010 Task 5: Automatic Keyphrase Extraction from Scientific Articles},
year = {2010},
publisher = {Association for Computational Linguistics},
address = {USA},
abstract = {This paper describes Task 5 of the Workshop on Semantic Evaluation 2010 (SemEval-2010). Systems are to automatically assign keyphrases or keywords to given scientific articles. The participating systems were evaluated by matching their extracted keyphrases against manually assigned ones. We present the overall ranking of the submitted systems and discuss our findings to suggest future directions for this task.},
booktitle = {Proceedings of the 5th International Workshop on Semantic Evaluation},
pages = {21–26},
numpages = {6},
location = {Los Angeles, California},
series = {SemEval '10}
}

"""

_DESCRIPTION = """\

"""

_HOMEPAGE = ""

# TODO: Add the licence for the dataset here if you can find it
_LICENSE = ""

# TODO: Add link to the official dataset URLs here

_URLS = {
    "test": "test.jsonl",
    "train": "train.jsonl"
}


# TODO: Name of the dataset usually match the script name with CamelCase instead of snake_case
class SemEval2010(datasets.GeneratorBasedBuilder):
    """TODO: Short description of my dataset."""

    VERSION = datasets.Version("0.0.1")

    BUILDER_CONFIGS = [
        datasets.BuilderConfig(name="extraction", version=VERSION,
                               description="This part of my dataset covers extraction"),
        datasets.BuilderConfig(name="generation", version=VERSION,
                               description="This part of my dataset covers generation"),
        datasets.BuilderConfig(name="raw", version=VERSION, description="This part of my dataset covers the raw data"),
    ]

    DEFAULT_CONFIG_NAME = "extraction"

    def _info(self):
        if self.config.name == "extraction":  # This is the name of the configuration selected in BUILDER_CONFIGS above
            features = datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "document": datasets.features.Sequence(datasets.Value("string")),
                    "doc_bio_tags": datasets.features.Sequence(datasets.Value("string"))
                }
            )
        elif self.config.name == "generation":
            features = datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "document": datasets.features.Sequence(datasets.Value("string")),
                    "extractive_keyphrases": datasets.features.Sequence(datasets.Value("string")),
                    "abstractive_keyphrases": datasets.features.Sequence(datasets.Value("string"))
                }
            )
        else:
            features = datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "document": datasets.features.Sequence(datasets.Value("string")),
                    "doc_bio_tags": datasets.features.Sequence(datasets.Value("string")),
                    "extractive_keyphrases": datasets.features.Sequence(datasets.Value("string")),
                    "abstractive_keyphrases": datasets.features.Sequence(datasets.Value("string")),
                    "other_metadata": datasets.features.Sequence(
                        {
                            "text": datasets.features.Sequence(datasets.Value("string")),
                            "bio_tags": datasets.features.Sequence(datasets.Value("string"))
                        }
                    )
                }
            )
        return datasets.DatasetInfo(
            # This is the description that will appear on the datasets page.
            description=_DESCRIPTION,
            # This defines the different columns of the dataset and their types
            features=features,
            homepage=_HOMEPAGE,
            # License for the dataset if available
            license=_LICENSE,
            # Citation for the dataset
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        data_dir = dl_manager.download_and_extract(_URLS)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "filepath": data_dir['train'],
                    "split": "train",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "filepath": data_dir['test'],
                    "split": "test"
                },
            ),
        ]

    # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
    def _generate_examples(self, filepath, split):
        with open(filepath, encoding="utf-8") as f:
            for key, row in enumerate(f):
                data = json.loads(row)
                if self.config.name == "extraction":
                    # Yields examples as (key, example) tuples
                    yield key, {
                        "id": data['paper_id'],
                        "document": data["document"],
                        "doc_bio_tags": data.get("doc_bio_tags")
                    }
                elif self.config.name == "generation":
                    yield key, {
                        "id": data['paper_id'],
                        "document": data["document"],
                        "extractive_keyphrases": data.get("extractive_keyphrases"),
                        "abstractive_keyphrases": data.get("abstractive_keyphrases")
                    }
                else:
                    yield key, {
                        "id": data['paper_id'],
                        "document": data["document"],
                        "doc_bio_tags": data.get("doc_bio_tags"),
                        "extractive_keyphrases": data.get("extractive_keyphrases"),
                        "abstractive_keyphrases": data.get("abstractive_keyphrases"),
                        "other_metadata": data.get("other_metadata")
                    }
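
This commit replaces the script-plus-jsonl layout above with pre-built parquet files, one per config and split. The conversion could be done along these lines (an illustrative sketch; the actual tooling used for this commit is not recorded here):

```python
import json
import pandas as pd

# columns kept per config, mirroring the features defined in the deleted script
CONFIG_COLUMNS = {
    "extraction": ["id", "document", "doc_bio_tags"],
    "generation": ["id", "document", "extractive_keyphrases", "abstractive_keyphrases"],
    "raw": ["id", "document", "doc_bio_tags", "extractive_keyphrases",
            "abstractive_keyphrases", "other_metadata"],
}

def jsonl_to_parquet(jsonl_path, config, out_path):
    rows = []
    with open(jsonl_path, encoding="utf-8") as f:
        for line in f:
            data = json.loads(line)
            data["id"] = data.pop("paper_id")  # same renaming the script performed
            rows.append({col: data.get(col) for col in CONFIG_COLUMNS[config]})
    pd.DataFrame(rows).to_parquet(out_path)  # requires pyarrow or fastparquet

# e.g. jsonl_to_parquet("train.jsonl", "raw", "raw/semeval2010-train.parquet")
```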