---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- summarization-other-paper-abstract-generation
paperswithcode_id: multi-xscience
pretty_name: Multi-XScience
---
This is a copy of the Multi-XScience dataset, except the input source documents of its test split have been replaced by documents obtained with a dense retriever. The retrieval pipeline (sketched in code below) used:

- **query**: the `related_work` field of each example
- **corpus**: the union of all documents in the `train`, `validation` and `test` splits
- **retriever**: `facebook/contriever-msmarco` via PyTerrier with default settings
- **top-k strategy**: `"oracle"`, i.e. the number of documents retrieved, `k`, is set to the original number of input documents for each example
Retrieval results on the train set:
| Recall@100 | Rprec | Precision@k | Recall@k |
|---|---|---|---|
| 0.5270 | 0.2005 | 0.2005 | 0.2005 |
Retrieval results on the validation set:
| Recall@100 | Rprec | Precision@k | Recall@k |
|---|---|---|---|
| 0.5310 | 0.2026 | 0.2026 | 0.2026 |
Retrieval results on the test set:
| Recall@100 | Rprec | Precision@k | Recall@k |
|---|---|---|---|
| 0.5229 | 0.2081 | 0.2081 | 0.2081 |
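Because the "oracle" strategy sets `k` to each example's original number of input documents, `k` equals the number of relevant documents per example, which is why Precision@k, Recall@k and Rprec coincide in the tables above. A minimal sketch of these per-example metrics is shown below; the ranking and relevance sets are placeholder values, and averaging per-example scores over the split is an assumption about how the table values are aggregated.

```python
# Minimal sketch of the reported metrics for one example; with the "oracle"
# strategy k == |relevant|, so Precision@k == Recall@k == Rprec.
def recall_at(ranked_ids, relevant_ids, cutoff):
    hits = len(set(ranked_ids[:cutoff]) & set(relevant_ids))
    return hits / len(relevant_ids)

def precision_at(ranked_ids, relevant_ids, cutoff):
    hits = len(set(ranked_ids[:cutoff]) & set(relevant_ids))
    return hits / cutoff

# Placeholder example: ranked retrieval results and the gold source documents.
ranked = ["d7", "d2", "d9", "d1", "d4"]
relevant = ["d2", "d4"]   # the example's original input documents
k = len(relevant)         # "oracle" k

print("Recall@100 :", recall_at(ranked, relevant, 100))
print("Rprec      :", precision_at(ranked, relevant, len(relevant)))
print("Precision@k:", precision_at(ranked, relevant, k))
print("Recall@k   :", recall_at(ranked, relevant, k))
```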