---
dataset_info:
  features:
  - name: input_ids
    sequence: int32
  - name: token_type_ids
    sequence: int8
  - name: attention_mask
    sequence: int8
  - name: labels
    sequence: int64
  splits:
  - name: train
    num_bytes: 493127488
    num_examples: 119924
  - name: validation
    num_bytes: 27274896
    num_examples: 6633
  - name: test
    num_bytes: 27377696
    num_examples: 6658
  download_size: 153946164
  dataset_size: 547780080
---
# Dataset Card for "pubmed_long_tokenised"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
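
The feature schema in the metadata above (`input_ids`, `token_type_ids`, `attention_mask`, `labels`) indicates pre-tokenised transformer inputs split into train, validation, and test sets. As a minimal loading sketch with the 🤗 `datasets` library, assuming the dataset is hosted on the Hugging Face Hub under this name (the owning namespace is an assumption and may need to be prefixed):

```python
from datasets import load_dataset

# "pubmed_long_tokenised" is assumed to be the Hub repository id;
# prefix with the owning namespace if required,
# e.g. "your-username/pubmed_long_tokenised".
ds = load_dataset("pubmed_long_tokenised")

# The metadata declares three splits: train, validation, test.
print(ds)

# Each example carries the pre-tokenised fields listed in the schema.
example = ds["train"][0]
print(example.keys())  # input_ids, token_type_ids, attention_mask, labels
```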