---
language:
- en
- es
- fr
- pt
- de
multilinguality:
- multilingual
viewer: false
---

> [!NOTE]
> Dataset origin: https://zenodo.org/records/5034605
 

# `MultiSubs`: A Large-scale Multimodal and Multilingual Dataset


## Introduction 

`MultiSubs` is a dataset of multilingual subtitles gathered from [the OPUS OpenSubtitles dataset](https://opus.nlpl.eu/OpenSubtitles.php), which in turn was sourced from [opensubtitles.org](http://www.opensubtitles.org/). 
We have supplemented some text fragments (visually salient nouns for now) of the subtitles with web images. 

Please refer to the paper below for a more detailed description of the dataset.

⚠ The images are available to download as a separate [zip file](https://huggingface.co/datasets/FrancophonIA/MultiSubs/blob/main/images.zip).

## Disclaimer

The MultiSubs dataset is provided as is. 
As the dataset relies on external sources such as movie subtitles and BabelNet, the corpus may contain offensive language and images, such as curse words and images depicting nudity or sexual organs, among others. 
While these are kept to a minimum, they inevitably exist.


## Content

### `monolingual/`

Contains the original English dataset as described in paragraph 1 of Section 3 in the paper.


##### `en.sents.tok` and `en.sents.det`
Tokenised and detokenised versions of English sentences, each containing at least one occurrence of a fragment from `en.fragments` below.

Format: `sentence-id \t sentence` where `sentence-id` is `imdb-id#sentence-number`. 

There are 10,241,617 English sentences (paragraph 1 of Section 3 in the paper).
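
As a minimal illustration (the `monolingual/` path is an assumption based on the layout described here), each line can be split on the tab into the sentence ID and the sentence, and the ID further split on `#`:

```python
# Minimal sketch: parse lines of en.sents.det (or en.sents.tok).
# The path below is an assumption based on the directory layout in this README.
def parse_sentence_line(line):
    sent_id, sentence = line.rstrip("\n").split("\t", 1)
    imdb_id, sent_number = sent_id.split("#")
    return imdb_id, int(sent_number), sentence

with open("monolingual/en.sents.det", encoding="utf-8") as f:
    for line in f:
        imdb_id, number, sentence = parse_sentence_line(line)
        print(imdb_id, number, sentence)
        break  # first line only, for illustration
```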


##### `en.fragments`

List of fragments extracted from English sentences that are (1) single word nouns; (2) have an imageability score of at least 500; (3) occur more than once. 
There are 13,268,536 fragment instances and 4,099 unique tokens. See paragraph 1 of Section 3 in the paper.

Format:
`sentence-id \t fragment-start \t fragment-end \t fragment-type \t raw-fragment \t tokenised-fragment \t POS-sequence`

e.g. `417#1 \t 2 \t 5 \t N \t Trip \t trip \t NN`

- `fragment-start` and `fragment-end` are the start/end character indices for the fragment in `en.sents.det`.
- `fragment-type` is always `N` (single word nouns) for this release.
- `POS-sequence` is the PoS tag sequence for the fragment (a single tag for single-word nouns).
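
The fragment fields can be read in the same way; a minimal sketch follows (the path is an assumption, and `fragment-end` appears to index the last character inclusively, judging from the example above):

```python
# Minimal sketch: read en.fragments (tab-separated, fields as described above).
with open("monolingual/en.fragments", encoding="utf-8") as f:
    for line in f:
        (sent_id, frag_start, frag_end, frag_type,
         raw_frag, tok_frag, pos_seq) = line.rstrip("\n").split("\t")
        # frag_start/frag_end are character offsets into the detokenised
        # sentence with the same sentence-id in en.sents.det.
        print(sent_id, raw_frag, int(frag_start), int(frag_end))
        break  # first fragment only, for illustration
```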



### `multilingual/`

##### `en-[es|pt_br|fr|de].sents.json`

Aligned bilingual corpus of Spanish [es], Brazilian Portuguese [pt_br], French [fr] and German [de] sentences, respectively.

The JSON object is a dictionary, where the key is the ID of the aligned sentence, and the value is a dictionary in the following format:

```json
{
    "src": { 
        "det": "A detokenised sentence in the source language (English).",
        "tok": "a tokenised sentence in the source language ( english ) ."
    },
    
    "trg": {
        "det": "The aligned sentence in the target language, detokenised.",
        "tok": "the aligned sentence in the target language , detokenised ."
    }
}
```
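
A minimal loading sketch (the `en-fr` pair and the `multilingual/` path are examples; all four pairs share this layout):

```python
import json

# Minimal sketch: iterate over aligned sentence pairs.
with open("multilingual/en-fr.sents.json", encoding="utf-8") as f:
    sents = json.load(f)

for sent_id, pair in sents.items():
    print(sent_id, pair["src"]["det"], "->", pair["trg"]["det"])
    break  # first pair only, for illustration
```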


##### `en-[es|pt_br|fr|de].fragments.json`

The fragments extracted from the sentences above.

The JSON object is a list, where each item contains a fragment in the following format:

```json
{  "sentId": "6617#en{7}:de{14}",
   "srcSentId": "6617#7",
   "srcCharStart": 4,
   "srcCharEnd": 6,
   "srcFragment": "man",
   "trgFragment": "mann",
   "trgTokenIndex": 2,
   "trgFragmentList": ["mann", "mensch", "männer"],
   "synsets": "bn:00001533n;bn:00044576n;bn:00053096n;bn:00053097n;bn:00053099n;bn:03478581n"
}
```

- `srcCharStart` and `srcCharEnd` are the positions of the first and last character of the fragment in the *detokenised* English sentence respectively (starts from 0).

- `trgTokenIndex` is the position of the token in the *tokenised* sentence in the target language (starts from 0).

- `trgFragmentList` is a list of plausible (sense-disambiguated) translations for this fragment in the target language.

- `synsets` is the inferred BabelNet synset ID(s) for the fragment. Multiple synsets are separated by semicolons. This semicolon-separated string can be used as the key to query all images for this fragment in `images.json` below.


##### `images.json`

A JSON dictionary. 

Key: Babelnet synset IDs separated by semicolons (see above). 
Value: List of image IDs associated with this set of synset IDs.
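
The semicolon-separated `synsets` string from a fragment entry serves directly as the key into `images.json`. A minimal lookup sketch (the paths and the `en-de` pair are assumptions based on the layout above):

```python
import json

# Minimal sketch: map a fragment to its associated image IDs.
with open("multilingual/en-de.fragments.json", encoding="utf-8") as f:
    fragments = json.load(f)
with open("multilingual/images.json", encoding="utf-8") as f:
    images = json.load(f)

fragment = fragments[0]
image_ids = images.get(fragment["synsets"], [])  # key is the synsets string
print(fragment["srcFragment"], "->", image_ids[:3])
```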


##### `en-[es|pt_br|fr|de].intersect.json`

A JSON dictionary. Gives the IDs of sentences in each *intersect_N* subset, as described in Section 3.1 of the paper.

Key: `"intersect1"`, `"intersect2"`, `"intersect3"`, `"intersect4"`

Value: List of sentence IDs for each subset. The number of sentences should correspond to those reported in Table 1 of the paper.


### `human_eval/`

Results of Human Evaluation of the "Gap Filling Game" (Section 5 of paper)

##### `results.json`

Format:

```json
{
  challengeId: {
    "subtitleId": id,
    "userId": userId,
    "consensus": "intersectN",
    "word": correctWord,
    "correctAttempt": howManyAttempts (0 if failed after 3 attempts)
    "guess1": {"word": word, "score": score} ,
    "guess2": {"word": word, "score": score},
    "guess3": {"word": word, "score": score}
  }
}
```

- `"guess2"` and `"guess3"` may be absent depending on `"correctAttempt"`.


##### `results_detailed.json`

This version has more details, including the sentences and image(s) shown to the user. There are 16 instances missing in this version compared to `results.json`; we have lost some of the information for these.

Additional fields:
- `"images": ["IMAGE1", "IMAGE2", "IMAGE3", ...]`
- `"leftContext": "The left portion of the sentence before the missing word"`
- `"rightContext": "The right portion of the sentence after the missing word"`

The first image in the `"images"` list is shown in Attempt 2 (one image).


### `tasks/fill_in_the_blank/`

Dataset for the fill-in-the-blank task (Section 6.1)

##### `sents.json`

A JSON list. Each item in the list is a dictionary representing a sentence for the fill-in-the-blank task.

There are 4,383,978 sentences in this file, although not all are used (only 4,377,772 are used).

Blanks are marked as `<_>` in the sentences.

Format for each item:

```json
{"sentId": "417#en{2}:pt_br{2}", 
 "word": "hall", 
 "wordLower": "hall",
 "sent": {
    "det": "The astronomers are assembled in a large <_> embellished with instruments.", 
    "tok": "the astronomers are assembled in a large <_> embellished with instruments ."
  }, 
  "synsets": "bn:00004493n;bn:00042664n", 
  "intersect": "intersect=1", 
  "imageId": "4E6B2547DF16BB40DB0036159E1CBF0BA12127752D3C447E7CE8BFB3", 
  "srcSentId": "417#2", 
  "srcCharStart": 41, 
  "srcCharEnd": 44
}
```

- `wordLower` is the lowercased version of the token.
- `intersect` gives the specific subset this sentence belongs to. You can retrieve a list of sentences belonging to a subset using `intersect.json` instead (below).
- `imageId` is the image randomly selected for the sentence, but keeping the training, test and validation images disjoint. This is described at the end of Section 6.1.2 in the paper. You may use this if you are training/testing a model that takes in a single image as input, to ensure your experiments are comparable. `imageId` may be `null` if not used in the split.
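
A minimal sketch of loading the file and reconstructing an (approximately) gold-filled sentence by substituting the blank marker (the path is an assumption; the spacing of the detokenised sentence may differ slightly from the original):

```python
import json

# Minimal sketch: fill the <_> blank with the gold word.
with open("tasks/fill_in_the_blank/sents.json", encoding="utf-8") as f:
    sents = json.load(f)

item = sents[0]
gold_sentence = item["sent"]["det"].replace("<_>", item["word"], 1)
print(gold_sentence)
print("image:", item["imageId"], "| subset:", item["intersect"])
```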



##### `intersect.json`

A JSON dictionary, containing the list of sentences each `intersect{=N}` subset contains.

Keys: `"intersect=1"`, `"intersect=2"`, `"intersect=3"`, `"intersect=4"`

Value: List of indices, pointing to the sentences in `sents.json`


##### `splits.json`

A JSON dictionary, containing the different train/test splits.

**Keys**:
Main splits:

- `"train"` - all training (4277772 instances)
- `"val"` - all validation (5000 instances)
- `"test"` - all test (5000 instances)

Training subsets (first paragraph of Section 6.1.4 in paper)
- `"trainIntersect=1"` - training subset where intersect=1 (2499265)
- `"trainIntersect=2"` - training subset where intersect=2 (1252886)
- `"trainIntersect=3"` - training subset where intersect=3 (462860)
- `"trainIntersect=4"` - training subset where intersect=4 (62761)

Validation and test subsets (second paragraph of Section 6.1.4 in the paper). The test results reported in the paper are based on `testSubset`.
- `"valSubset"` - subset of the validation set (3,143)
- `"testSubset"` - subset of the test set (3,262)


**Values**: List of indices, pointing to the sentences in `sents.json` for each split.
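
A minimal sketch of selecting a split (the paths are assumptions based on the layout above):

```python
import json

# Minimal sketch: gather the sentences of the evaluation test subset.
with open("tasks/fill_in_the_blank/sents.json", encoding="utf-8") as f:
    sents = json.load(f)
with open("tasks/fill_in_the_blank/splits.json", encoding="utf-8") as f:
    splits = json.load(f)

test_subset = [sents[i] for i in splits["testSubset"]]
print(len(test_subset), "sentences")  # expected: 3262 (see Section 6.1.4)
```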


### `tasks/lexical_translation/`

Dataset for the lexical translation task (Section 6.2).

##### `en-[es|pt_br|fr|de].sents.json`

Same as the fill-in-the-blank task above, with two additional keys: 
- `"target"` for the exact word in the target language.
- `"positiveTargets"` for a list of acceptable words in the target language.


##### `en-[es|pt_br|fr|de].intersect.json`

Same as the fill-in-the-blank task above.


##### `en-[es|pt_br|fr|de].splits.json`

Same as the fill-in-the-blank task above.

Number of instances:

`es`
- `train`: 2,356,787
- `val`: 5,000
- `test`: 5,000
- `valSubset`: 3,172
- `testSubset`: 3,117

`pt_br`
- `train`: 1,950,455
- `val`: 5,000
- `test`: 5,000
- `valSubset`: 3,084
- `testSubset`: 3,167

`fr`
- `train`: 1,143,608
- `val`: 5,000
- `test`: 5,000
- `valSubset`: 2,930
- `testSubset`: 2,944

`de`
- `train`: 405,759
- `val`: 5,000
- `test`: 5,000
- `valSubset`: 3,047
- `testSubset`: 3,007



## Citation

I hope that you do something useful and impactful with this dataset to really move the field forward, and not just publish papers for the sake of it.

Please cite the following paper if you use this dataset in your work:

Josiah Wang, Pranava Madhyastha, Josiel Figueiredo, Chiraag Lala, Lucia Specia (2021). MultiSubs: A Large-scale Multimodal and Multilingual Dataset. CoRR, abs/2103.01910. Available at: https://arxiv.org/abs/2103.01910

```bibtex
@article{DBLP:journals/corr/abs-2103-01910,
  author    = {Josiah Wang and
               Pranava Madhyastha and
               Josiel Figueiredo and
               Chiraag Lala and
               Lucia Specia},
  title     = {MultiSubs: {A} Large-scale Multimodal and Multilingual Dataset},
  journal   = {CoRR},
  volume    = {abs/2103.01910},
  year      = {2021},
  url       = {https://arxiv.org/abs/2103.01910},
  archivePrefix = {arXiv},
  eprint    = {2103.01910},
  timestamp = {Thu, 04 Mar 2021 17:00:40 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2103-01910.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```