Update README.md
language:
- en
size_categories:
- 100K<n<1M
---

# Episode-Specific Spoilers
This is the spoiler matching dataset presented in *Spoiler Detection as Semantic Text Matching*. It consists of comments discussing episodes from various TV shows. Unlike other spoiler datasets, this dataset assigns an episode number (and show name) to each comment, enabling matching to specific episodes and very fine-grained spoiler detection. It also includes an episode summary for each (show, episode) pair. For a given show, the task is to rank the summaries for each comment.
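
As a rough illustration of the ranking setup (this is *not* the model from the paper; see the repository linked under Usage for the actual training code), here is a sketch using a generic off-the-shelf sentence encoder. The example comment and summaries are made up.

```python
# Illustrative sketch only: ranks episode summaries for one comment
# using a generic sentence encoder, not the paper's trained model.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

comment = "I can't believe they killed him off at the wedding!"
episode_summaries = [  # hypothetical summaries, one per episode
    "The family attends a wedding that ends in tragedy.",
    "A new alliance forms in the capital.",
]

# Cosine similarity between the comment and every summary.
scores = util.cos_sim(
    model.encode(comment, convert_to_tensor=True),
    model.encode(episode_summaries, convert_to_tensor=True),
)[0]
print(scores.argsort(descending=True))  # most likely episode first
```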
# Usage
See the [Spoiler Matching repository](https://github.com/bobotran/spoiler-matching) for examples of how to train a spoiler matching model on this dataset.
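
For a quick look at the raw data without the training code, the JSON splits can be loaded directly. A minimal sketch, assuming the files have been downloaded into the current directory; it makes no assumptions about the record schema beyond the files being valid JSON:

```python
import json

# Paths assume a local copy of the dataset; adjust as needed.
with open("matching/with_autolabels/train.json") as f:
    train = json.load(f)
with open("matching/summaries.json") as f:
    summaries = json.load(f)

# Inspect the structure rather than assuming a schema.
print(type(train).__name__, len(train))
print(type(summaries).__name__)
```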
# Annotation
522,991 comments from 13 TV shows were scraped from episode discussion threads on Reddit. Some of these comments are actually *relevant* to their respective episode discussion, while others are *irrelevant*. A subset of these comments (11,032) was hand-labeled as *irrelevant* or *relevant*. This hand-labeled dataset was used to train an irrelevant/relevant classifier, which auto-labeled the remaining comments. All *relevant* comments were formatted into the `matching` dataset.
# Details
The `matching` folder contains the spoiler matching dataset, and the `filtering` folder contains intermediate data from the auto-labeling step.
## matching/
This folder contains the datasets for training spoiler matching models. All comments in these files were determined to be *relevant*, whether by a human annotator or the auto-labeler. `summaries.json` contains the summary for each episode. `summaries.json` and `test.json` are the same across `with_autolabels` and `handlabeled_only`.
### matching/with_autolabels/
The `with_autolabels` folder contains the main dataset. `test.json` and `val.json` consist of hand-labeled *relevant* comments, while `train.json` contains auto-labeled ones. To measure the performance of spoiler matching models on unseen shows, `test.json` was constructed from comments on 4 TV shows that appear in neither `val.json` nor `train.json`.
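
A split like this can be sanity-checked in a few lines. A sketch, assuming each split is a JSON list of records carrying a field that names the show; the field name `show` is an assumption, so check the actual schema first:

```python
import json

def shows_in(path):
    # Assumes a JSON list of records with a "show" field (hypothetical).
    with open(path) as f:
        return {record["show"] for record in json.load(f)}

base = "matching/with_autolabels/"
test_shows = shows_in(base + "test.json")
seen_shows = shows_in(base + "train.json") | shows_in(base + "val.json")

assert len(test_shows) == 4
assert test_shows.isdisjoint(seen_shows)  # test shows are unseen
```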
### matching/handlabeled_only/
The `handlabeled_only` folder shares the same `test.json` as `matching/with_autolabels/`, but its `train.json` and `val.json` are an 80-20 split of `matching/with_autolabels/val.json`.
## filtering/
This folder contains data from the auto-labeling step.
### filtering/handlabeled/
This folder contains the dataset used to train the auto-labeler. Comments with a `1` in the first column were hand-labeled as *irrelevant*; comments with a `0` in the first column were hand-labeled as *relevant*. The last two columns are the show name and episode number respectively, which are not used during this step.
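
A parsing sketch under stated assumptions: the rows are tab-separated with the comment text between the label and the trailing show/episode columns, and the filename is hypothetical; inspect the actual files to confirm both.

```python
import csv

# Hypothetical layout: label <TAB> comment <TAB> show <TAB> episode.
# Delimiter, column order, and filename are assumptions.
with open("filtering/handlabeled/data.tsv", newline="") as f:
    for row in csv.reader(f, delimiter="\t"):
        label, show, episode = row[0], row[-2], row[-1]
        comment = "\t".join(row[1:-2])  # middle columns, if any, rejoined
        is_irrelevant = label == "1"    # 1 = irrelevant, 0 = relevant
```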
### filtering/unlabeled/
The unlabeled comments were split into two chunks to make them more manageable to load into memory during inference. All comments have a `-1` in the first column to indicate that they are unlabeled.