---
license: cc-by-sa-3.0
task_categories:
- text-retrieval
language:
- en
pretty_name: Wikipedia Paragraphs MPNet Embeddings
---
|
|
|
Embeddings of the [English Wikipedia](https://huggingface.co/datasets/wikipedia) [paragraphs](https://huggingface.co/datasets/olmer/wiki_paragraphs), computed with the [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) sentence-transformers encoder.
|
The dataset contains 43,911,155 paragraphs from 6,458,670 Wikipedia articles.
|
Paragraph lengths range from 20 to 2,000 characters.
|
Each paragraph has a corresponding embedding of dimension 768.
|
Embeddings are stored in NumPy files, 1,000,000 embeddings per file.
|
For each embedding file, there is an ids file that contains the list of ids of the corresponding paragraphs. |
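The layout above (per-shard embedding files plus aligned ids files) can be sketched as follows. This is a minimal, hedged example: the file names are hypothetical (check the repository file listing for the actual naming scheme), and small synthetic arrays stand in for the real shards, each of which holds up to 1,000,000 rows of dimension 768 and would be loaded with something like `np.load("embeddings_000.npy")`.

```python
import numpy as np

# Synthetic stand-ins for one shard; in the real dataset you would load
# an embeddings file and its matching ids file instead.
rng = np.random.default_rng(0)
embeddings = rng.random((1000, 768), dtype=np.float32)
ids = np.arange(1000)  # paragraph ids aligned row-for-row with embeddings

# Normalize rows so a dot product equals cosine similarity.
normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

# Treat one row as the "query" and find the 5 most similar paragraphs.
query = normed[0]
scores = normed @ query
top5 = np.argsort(-scores)[:5]
assert ids[top5[0]] == 0  # the query paragraph ranks first
```

For retrieval over the full dataset, the same dot-product search would be repeated per shard (or the shards indexed with a vector search library), keeping the ids files to map result rows back to paragraph ids.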
|
__Be careful: the total dataset size is 151 GB.__