Update README.md

README.md CHANGED

@@ -1202,3 +1202,40 @@ configs:
  - split: train
    path: en-zh/train-*
---

# Dataset Card for Parallel Sentences - CCMatrix

This dataset contains parallel sentences (i.e., an English sentence paired with the same sentence in another language) for numerous languages. The texts originate from the [CCMatrix](https://ai.meta.com/blog/ccmatrix-a-billion-scale-bitext-data-set-for-training-translation-models/) dataset.

## Related Datasets

The following datasets are also a part of the Parallel Sentences collection:
* [parallel-sentences-europarl](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-europarl)
* [parallel-sentences-global-voices](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-global-voices)
* [parallel-sentences-muse](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-muse)
* [parallel-sentences-jw300](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-jw300)
* [parallel-sentences-news-commentary](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-news-commentary)
* [parallel-sentences-opensubtitles](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-opensubtitles)
* [parallel-sentences-talks](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks)
* [parallel-sentences-tatoeba](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-tatoeba)
* [parallel-sentences-wikimatrix](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-wikimatrix)
* [parallel-sentences-wikititles](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-wikititles)
* [parallel-sentences-ccmatrix](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-ccmatrix)

These datasets can be used to train multilingual sentence embedding models. For more information, see [sbert.net - Multilingual Models](https://www.sbert.net/examples/training/multilingual/README.html).

## Dataset Subsets

### `en-...` subsets

* Columns: "english", "non_english"
* Column types: `str`, `str`
* Examples:
  ```python
  {
      "english": "",
      "non_english": "",
  }
  ```
* Collection strategy: Processing the data from [yhavinga/ccmatrix](https://huggingface.co/datasets/yhavinga/ccmatrix) and reformatting it as Parquet with "english" and "non_english" columns.
* Deduplified: No
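Because every row pairs one English sentence with its counterpart, turning a subset into (source, target) tuples for pair-based multilingual training is a one-liner. A minimal sketch; the helper name and the sample sentences are invented for illustration and are not part of the dataset:

```python
def to_pairs(rows):
    """Map dataset rows to (english, non_english) tuples."""
    return [(row["english"], row["non_english"]) for row in rows]

# Illustrative rows in the dataset's column format.
sample = [
    {"english": "Hello, world.", "non_english": "你好，世界。"},
    {"english": "Good morning.", "non_english": "早上好。"},
]
pairs = to_pairs(sample)
print(pairs[0])  # ('Hello, world.', '你好，世界。')
```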