---
dataset_info:
  features:
  - name: sequence
    dtype: large_string
  splits:
  - name: train
    num_bytes: 45299669517.08662
    num_examples: 207228723
  - name: valid
    num_bytes: 2185974.456691827
    num_examples: 10000
  - name: test
    num_bytes: 2916145.0439189114
    num_examples: 13340
  download_size: 44647931388
  dataset_size: 45304771636.587234
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: valid
    path: data/valid-*
  - split: test
    path: data/test-*
---
# OMGProt50 with evaluation splits
Thanks to Tatta Bio for putting together such an amazing dataset!

To create this version, we removed IDs to save space and added the evaluation sets.
See here for a pretokenized version.
We add validation and test sets for evaluation purposes, including ESM2 speed runs. OMGProt50 was clustered at 50% identity, so random splits are nonredundant with the training set by default. Random splits of 10,000 sequences form the base of the validation and test sets. To the test set, we also add all new UniProt entries since the creation of OMG that have transcript-level evidence, after deduplication.
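The deduplication step above can be sketched as a simple exact-sequence filter. This is a hypothetical illustration only: the function and variable names are invented, and exact matching is much weaker than the 50% identity clustering used to build OMGProt50 itself (which would require a tool such as MMseqs2).

```python
# Hypothetical sketch: drop candidate test sequences that already appear
# in the training split, and duplicates within the candidates themselves.
# Exact matching only -- it does NOT replicate 50% identity clustering.

def dedupe_against_train(candidates, train_sequences):
    """Return candidates not present in train_sequences, keeping order."""
    seen = set(train_sequences)
    kept = []
    for seq in candidates:
        if seq not in seen:
            kept.append(seq)
            seen.add(seq)  # also removes duplicates among candidates
    return kept

# Toy example with made-up short sequences:
train = ["MKV", "MAA"]
new_entries = ["MKV", "MQQ", "MQQ", "MAP"]
print(dedupe_against_train(new_entries, train))  # ['MQQ', 'MAP']
```

In practice this filter would run over the new UniProt entries with transcript-level evidence before appending them to the test split.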