Tasks: Sentence Similarity (semantic-similarity-classification)
Modalities: Text
Formats: json
Languages: English
Size: 10K - 100K

COCO is a large-scale object detection, segmentation, and captioning dataset.

Disclaimer: The team releasing COCO did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team.

### Supported Tasks

- [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity (see the similarity-scoring sketch below).
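
The snippet below is a rough illustration of that use case: it scores two captions with a pretrained Sentence Transformers model. The `all-MiniLM-L6-v2` checkpoint and the example captions are assumptions for illustration, not part of this dataset.

```python
from sentence_transformers import SentenceTransformer, util

# "all-MiniLM-L6-v2" is an assumed example checkpoint, not prescribed by this card.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Two similar captions (made-up examples in the style of COCO captions).
captions = [
    "A man is riding a horse on the beach.",
    "Someone rides a horse along the shore.",
]
embeddings = model.encode(captions, convert_to_tensor=True)

# Cosine similarity between the two caption embeddings (higher = more similar).
print(util.cos_sim(embeddings[0], embeddings[1]).item())
```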

### Languages

- English.

## Dataset Structure

Each example in the dataset contains a quintet of similar sentences and is formatted as a dictionary with the key `"set"` whose value is the list of sentences:

```
{"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
{"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
...
{"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
```

This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar pairs of sentences.
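
As a minimal sketch of that workflow (an assumption about how the quintets could be consumed, not this card's official recipe), each set can be expanded into (anchor, positive) pairs and trained with `MultipleNegativesRankingLoss`; the base checkpoint and hyperparameters below are placeholders:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Load the caption quintets from the Hub.
dataset = load_dataset("embedding-data/coco_captions", split="train")

# Turn each quintet into (anchor, positive) pairs: captions in the same set
# describe the same image, so they are treated as semantically equivalent.
train_examples = []
for row in dataset:
    sentences = row["set"]
    anchor = sentences[0]
    for positive in sentences[1:]:
        train_examples.append(InputExample(texts=[anchor, positive]))

# Assumed base checkpoint; sentence-transformers adds a mean-pooling head on top.
model = SentenceTransformer("distilroberta-base")
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)

# In-batch negatives work well for datasets of equivalent sentence pairs.
train_loss = losses.MultipleNegativesRankingLoss(model)

# One epoch only to sketch the API; tune epochs, batch size, etc. for real use.
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)
```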

### Usage Example

Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:

```python
from datasets import load_dataset

dataset = load_dataset("embedding-data/coco_captions")
```

The dataset is loaded as a `DatasetDict` and has the format:

```python
DatasetDict({
    train: Dataset({
        features: ['set'],
        num_rows: 82783
    })
})
```

Review an example `i` with:

```python
dataset["train"][i]["set"]
```

### Data Instances