gasmichel committed
Commit 7498d93 · verified · 1 Parent(s): 868965a

Upload folder using huggingface_hub

Files changed (2)
  1. .ipynb_checkpoints/README-checkpoint.md +125 -0
  2. README.md +74 -1
.ipynb_checkpoints/README-checkpoint.md ADDED
README.md CHANGED
@@ -49,4 +49,77 @@ configs:
  - split: test
    path: play/test.json
---

# Dataset Card for DramaCV

## Dataset Summary

The DramaCV dataset is an English-language dataset containing utterances of fictional characters in drama plays collected from Project Gutenberg. It was created automatically by parsing 499 drama plays from the 15th to the 20th century available on Project Gutenberg and attributing each character line to its speaker.

## Task

This dataset was developed for Authorship Verification of literary characters. Each data instance contains lines from a single character, which we want to distinguish from lines uttered by other characters.

## Subsets

This dataset supports two subsets:

- **Scene**: We split each play into scenes, a small segmentation unit of drama that is meant to contain actions occurring at a specific time and place with the same characters. If a play has no `<scene>` tag, we instead split it into acts, using the `<act>` tag; acts are larger segmentation units composed of multiple scenes. For this subset, we only consider plays that have at least one of these tags. A total of **169** plays were parsed for this subset.
- **Play**: We do not segment the play and use all of its character lines. Compared to the scene subset, the number of candidate characters is higher and the discussions cover a wider variety of topics. A total of **287** plays were parsed for this subset.

## Dataset Statistics

We randomly split each subset into train, validation, and test sets with an 80/10/10 ratio.

|           | Split | Segments | Utterances | Queries | Targets/Query (avg) |
|-----------|-------|----------|------------|---------|---------------------|
|           | Train | 1507     | 263270     | 5392    | 5.0                 |
| **Scene** | Val   | 240      | 50670      | 1557    | 8.8                 |
|           | Test  | 203      | 41830      | 1319    | 8.7                 |
|           | Train | 226      | 449407     | 4109    | 90.7                |
| **Play**  | Val   | 30       | 63934      | 917     | 55.1                |
|           | Test  | 31       | 74738      | 1214    | 108.5               |

# Usage

## Loading the dataset

```python
from datasets import load_dataset

# Load the scene subset
scene_data = load_dataset("gasmichel/DramaCV", "scene")
print(scene_data)

# DatasetDict({
#     train: Dataset({
#         features: ['query', 'true_target', 'play_index', 'act_index'],
#         num_rows: 1507
#     })
#     validation: Dataset({
#         features: ['query', 'true_target', 'play_index', 'act_index'],
#         num_rows: 1557
#     })
#     test: Dataset({
#         features: ['query', 'true_target', 'play_index', 'act_index'],
#         num_rows: 1319
#     })
# })

# Load the play subset
play_data = load_dataset("gasmichel/DramaCV", "play")
```

## Train vs Val/Test

The train splits contain only *queries*, which are collections of utterances spoken by the same character within a segmentation unit (a *scene* for the scene subset, or the *full play* for the play subset).

The validation and test splits contain both *queries* and *targets* (see the sketch below):

- *Queries* contain half of the utterances of a character, randomly sampled within the same segmentation unit.
- *Targets* contain the other half of these utterances.
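
For example, a minimal sketch of inspecting one validation example. The field names come from the `DatasetDict` output above; treating `query` and `true_target` as the two complementary halves of a character's utterances is an assumption based on the description in this section:

```python
from datasets import load_dataset

# Load the scene subset and take one validation example.
scene_data = load_dataset("gasmichel/DramaCV", "scene")
example = scene_data["validation"][0]

# `query` and `true_target` are assumed to hold the two halves of the same
# character's utterances, sampled from the same segmentation unit.
print(example["play_index"], example["act_index"])
print(example["query"])
print(example["true_target"])
```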

## Act and Play Index

Each collection of utterances is assigned an `act_index` and a `play_index`, specifying respectively the act/scene and the play it was taken from.
DramaCV can be used to train Authorship Verification models by restricting the training data to come from the same `act_index` and `play_index`. In other words, an Authorship Verification model can be trained to distinguish utterances of characters within the same `play` or `scene`.
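
As an illustration, a minimal sketch of such a restriction, grouping the training queries by the segment they come from so that same-segment pairs can be formed (the grouping strategy itself is an assumption, not part of the dataset):

```python
from collections import defaultdict

from datasets import load_dataset

# Load the train split of the scene subset.
train = load_dataset("gasmichel/DramaCV", "scene")["train"]

# Group example indices by (play_index, act_index): each group then contains
# queries uttered by different characters of the same scene (or act).
groups = defaultdict(list)
for idx, example in enumerate(train):
    groups[(example["play_index"], example["act_index"])].append(idx)

print(f"{len(groups)} segments in the scene train split")
```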