---
license: mit
language:
- en
paperswithcode_id: embedding-data/QQP_triplets
pretty_name: QQP_triplets

---

# Dataset Card for "QQP_triplets"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
  
## Dataset Description

- **Homepage:** [https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
- **Repository:** [quora_duplicate_questions.tsv](http://qim.fs.quoracdn.net/quora_duplicate_questions.tsv)
- **Paper:** [First Quora Dataset Release: Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
- **Point of Contact:** [Kornél Csernai](https://www.quora.com/profile/Korn%C3%A9l-Csernai), [Nikhil Dandekar](https://www.quora.com/profile/Nikhil-Dandekar), [Shankar Iyer](https://www.quora.com/profile/Shankar-Iyer-5)

### Dataset Summary

This dataset gives anyone the opportunity to train and test models of semantic equivalence on actual Quora data.

The dataset consists of over 400,000 lines of potential question duplicate pairs. Each line contains IDs for each question in the pair, the full text for each question, and a binary value that indicates whether the line truly contains a duplicate pair.
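The line format described above can be sketched as follows. This is a minimal, self-contained illustration; the column names (`id`, `qid1`, `qid2`, `question1`, `question2`, `is_duplicate`) follow the commonly cited header of the original Quora TSV release and should be verified against the actual file.

```python
import csv
import io

# One header line plus one data line, mimicking the assumed TSV layout.
sample = (
    "id\tqid1\tqid2\tquestion1\tquestion2\tis_duplicate\n"
    "0\t1\t2\tWhat is the step by step guide to invest in share market in india?\t"
    "What is the step by step guide to invest in share market?\t0\n"
)

reader = csv.DictReader(io.StringIO(sample), delimiter="\t")
row = next(reader)

# The binary label arrives as a string; convert it before use.
is_duplicate = row["is_duplicate"] == "1"
print(row["question1"], is_duplicate)
```

In the real file, `is_duplicate` is `1` for semantically equivalent pairs and `0` otherwise.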

Disclaimer: The team releasing Quora data did not upload the dataset to the Hub and did not write a dataset card. 
These steps were done by the Hugging Face team.
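The repository name (`QQP_triplets`) suggests the pairs have been regrouped into anchor/positive/negative sets for contrastive training. The sketch below shows how one such record might be expanded into training triples; the field names (`set`, `query`, `pos`, `neg`) are assumptions about the schema, not confirmed by this card.

```python
# Hypothetical record in the assumed triplet layout: one query, a list of
# duplicate ("pos") questions, and a list of non-duplicate ("neg") questions.
record = {
    "set": {
        "query": "Why is the sky blue?",
        "pos": ["What makes the sky appear blue?"],
        "neg": ["Why is the ocean blue?", "Why is the sky dark at night?"],
    }
}

def to_training_triples(record):
    """Expand one record into (anchor, positive, negative) tuples."""
    s = record["set"]
    return [
        (s["query"], pos, neg)
        for pos in s["pos"]
        for neg in s["neg"]
    ]

triples = to_training_triples(record)
print(len(triples))  # 1 positive x 2 negatives -> 2 triples
```

Triples in this form can be fed directly to triplet-loss objectives such as those in the `sentence-transformers` library.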

### Supported Tasks and Leaderboards

[More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)

### Languages

[More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)

## Dataset Structure

### Data Instances

[More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)

### Data Fields

[More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)

### Data Splits

[More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)

## Dataset Creation

### Curation Rationale

[More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)

#### Who are the source language producers?

[More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)

### Annotations

#### Annotation process

[More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)

#### Who are the annotators?

[More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)

### Personal and Sensitive Information

[More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)

### Discussion of Biases

[More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)

### Other Known Limitations

Here are a few important things to keep in mind about this dataset:

- Our original sampling method returned an imbalanced dataset with many more true examples of duplicate pairs than non-duplicates. We therefore supplemented the dataset with negative examples.
- One source of negative examples was pairs of “related questions” which, although pertaining to similar topics, are not truly semantically equivalent.
- The distribution of questions in the dataset should not be taken to be representative of the distribution of questions asked on Quora. This is partly a consequence of the sampling procedures and partly due to sanitization measures applied to the final dataset (e.g., removal of questions with extremely long question details).
- The ground-truth labels contain some amount of noise: they are not guaranteed to be perfect.

## Additional Information

### Dataset Curators

[More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)

### Licensing Information

[More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)

### Citation Information

[More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)

### Contributions

Thanks to [Kornél Csernai](https://www.quora.com/profile/Korn%C3%A9l-Csernai), [Nikhil Dandekar](https://www.quora.com/profile/Nikhil-Dandekar), [Shankar Iyer](https://www.quora.com/profile/Shankar-Iyer-5) for adding this dataset.