---
license: mit
language:
- en
paperswithcode_id: embedding-data/QQP_triplets
pretty_name: QQP_triplets
---
# Dataset Card for "QQP_triplets"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Additional Information](#additional-information)
## Dataset Description
- **Homepage:** https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs
- **Repository:** [More Information Needed]
- **Paper:** [More Information Needed]
- **Point of Contact:** Kornél Csernai, Nikhil Dandekar, Shankar Iyer
### Dataset Summary
This dataset gives anyone the opportunity to train and test models of semantic equivalence, based on actual Quora data.
The dataset consists of over 400,000 lines of potential question duplicate pairs. Each line contains IDs for each question in the pair, the full text for each question, and a binary value that indicates whether the line truly contains a duplicate pair.
Disclaimer: The team releasing Quora data did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team.
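The pair format described above can be sketched in a few lines. A minimal sketch follows, assuming the field names of the original Quora release (`qid1`, `qid2`, `question1`, `question2`, `is_duplicate`); treat these names as illustrative if you are working from a re-packaged copy:

```python
# Illustrative question-pair records, mirroring the format described above.
# Field names are assumptions based on the original Quora release.
pairs = [
    {"qid1": 1, "qid2": 2,
     "question1": "How do I learn Python?",
     "question2": "What is the best way to learn Python?",
     "is_duplicate": 1},
    {"qid1": 3, "qid2": 4,
     "question1": "How do I learn Python?",
     "question2": "Why is the sky blue?",
     "is_duplicate": 0},
]

# The binary label marks whether the pair is truly a duplicate.
duplicates = [p for p in pairs if p["is_duplicate"] == 1]
print(len(duplicates))  # 1
```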
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
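Since the card does not spell out a schema, here is a plausible sketch of one triplet record, assuming each JSON line holds a `set` object with a `query` string, a list of positive paraphrases (`pos`), and a list of non-duplicates (`neg`). These field names are assumptions suggested by the "triplets" framing; verify them against the actual files:

```python
import json

# Hypothetical triplet record; the "set" wrapper and the "query"/"pos"/"neg"
# field names are assumptions, not confirmed by this card.
line = ('{"set": {"query": "How do I learn Python?", '
        '"pos": ["What is the best way to learn Python?"], '
        '"neg": ["Why is the sky blue?"]}}')
record = json.loads(line)

triplet = record["set"]
anchor = triplet["query"]        # the reference question
positives = triplet["pos"]       # semantically equivalent questions
negatives = triplet["neg"]       # related but non-equivalent questions
print(anchor, len(positives), len(negatives))
```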
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Here are a few important things to keep in mind about this dataset:
- Our original sampling method returned an imbalanced dataset with many more true examples of duplicate pairs than non-duplicates. Therefore, we supplemented the dataset with negative examples.
- One source of negative examples was pairs of “related questions” which, although pertaining to similar topics, are not truly semantically equivalent.
- The distribution of questions in the dataset should not be taken to be representative of the distribution of questions asked on Quora. This is, in part, because of the combination of sampling procedures and also due to some sanitization measures that have been applied to the final dataset (e.g., removal of questions with extremely long question details).
- The ground-truth labels contain some amount of noise: they are not guaranteed to be perfect.
## Additional Information
### Dataset Curators
### Licensing Information
### Citation Information
### Contributions
Thanks to Kornél Csernai, Nikhil Dandekar, Shankar Iyer for adding this dataset.