shaffei committed on
Commit 163cef7 · verified · 1 Parent(s): 1144112

Create README.md
Files changed (1)
  1. README.md +111 -0
README.md ADDED

---
tags:
- protein-protein interaction
- binding affinity
- protein language models
- drug discovery
- bioinformatics
- multi-chain proteins
- ppb affinity
license: cc-by-nc-sa-4.0
pipeline_tag: regression
task_categories:
- text-classification
---

## Dataset Description

This repository provides several enhanced versions of the **PPB-Affinity dataset**, ready for both sequence-based and structure-based modeling of multi-chain protein-protein interactions. The original PPB-Affinity dataset was introduced in the paper "[PPB-Affinity: Protein-Protein Binding Affinity dataset for AI-based protein drug discovery](https://www.nature.com/articles/s41597-024-03997-4)".

This version of the dataset was prepared for the study "*Beyond Simple Concatenation: Fairly Assessing PLM Architectures for Multi-Chain Protein-Protein Interactions Prediction*".

The primary enhancements in this repository include:
* Various levels of data filtration and processing.
* The addition of pre-extracted "Ligand Sequences" and "Receptor Sequences" columns, making the dataset ready for use with sequence-based models without requiring PDB file parsing. For complexes with multiple ligand or receptor chains, the sequences are comma-separated (see **Data Fields** below for the exact format and a parsing example).

## Dataset Configurations

This dataset offers four distinct configurations:

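As a quick overview, the following sketch (using the same `load_dataset` call shown in the per-configuration examples below) loads each configuration and prints its split sizes:

```python
from datasets import load_dataset

# The four configurations documented in the sections that follow.
for name in ("raw", "raw_rec", "filtered", "filtered_random"):
    ds = load_dataset("proteinea/ppb_affinity", name=name, trust_remote_code=True)
    print(name, {split: ds[split].num_rows for split in ds})
```
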
### 1. `raw`
* **Description**: Minimally processed data from the original PPB-Affinity dataset. Only annotation inconsistencies have been resolved (see Section 2.1.1 of "*Beyond Simple Concatenation...*" for details).
* **Size**: 12,048 entries.
* **Splits**: Contains a single `train` split encompassing all entries.
* **How to load**:
```python
from datasets import load_dataset

# Load the single `train` split containing all 12,048 entries.
raw_ds = load_dataset(
    "proteinea/ppb_affinity",
    name="raw",
    trust_remote_code=True
)["train"]
```

### 2. `raw_rec`
* **Description**: Similar to the `raw` version, but with an additional step to recover missing residues in the protein sequences (see Section 2.1.2 of "*Beyond Simple Concatenation...*" for details).
* **Size**: 12,048 entries.
* **Splits**: Contains a single `train` split.
* **How to load**:
```python
from datasets import load_dataset

# Same layout as `raw`, but with missing residues recovered.
raw_rec_ds = load_dataset(
    "proteinea/ppb_affinity",
    name="raw_rec",
    trust_remote_code=True
)["train"]
```

### 3. `filtered`
* **Description**: This version applies additional cleaning and filtration steps to the `raw_rec` data, i.e., the raw data with missing residues recovered (see Section 2.1.2 of "*Beyond Simple Concatenation...*" for details on filtration). It comes with pre-defined train, validation, and test splits (see Section 2.1.3 of "*Beyond Simple Concatenation...*" for the splitting methodology).
* **Size**:
  * Train: 6,485 entries
  * Validation: 965 entries
  * Test: 757 entries
* **Splits**: `train`, `validation`, `test`.
* **How to load**:
```python
from datasets import load_dataset

# The `filtered` configuration ships with pre-defined splits.
dataset_dict = load_dataset(
    "proteinea/ppb_affinity",
    name="filtered",
    trust_remote_code=True
)
train_ds = dataset_dict["train"]
val_ds = dataset_dict["validation"]
test_ds = dataset_dict["test"]
```

### 4. `filtered_random`
* **Description**: This version uses the same cleaned and filtered entries as the `filtered` configuration but provides random 80%-10%-10% splits for train, validation, and test, respectively. The shuffling is performed with a fixed seed (42) for reproducibility; an illustrative re-splitting sketch is shown after the loading example below.
* **Size**: Same total entries as `filtered`, split as:
  * Train: 6,565 entries
  * Validation: 820 entries
  * Test: 822 entries
* **Splits**: `train`, `validation`, `test`.
* **How to load**:
```python
from datasets import load_dataset

# Randomly re-split (seed 42) version of the filtered entries.
dataset_dict = load_dataset(
    "proteinea/ppb_affinity",
    name="filtered_random",
    trust_remote_code=True
)
train_ds = dataset_dict["train"]
val_ds = dataset_dict["validation"]
test_ds = dataset_dict["test"]
```
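
For reference, the sketch below shows one way to produce a comparable seeded 80/10/10 random split from the `filtered` entries using the `datasets` library. It is purely illustrative and is not guaranteed to reproduce the released `filtered_random` splits exactly; for the canonical splits, load the configuration as shown above.

```python
from datasets import load_dataset, concatenate_datasets

# Pool the cleaned and filtered entries back into a single dataset.
filtered = load_dataset("proteinea/ppb_affinity", name="filtered", trust_remote_code=True)
pooled = concatenate_datasets([filtered[s] for s in ("train", "validation", "test")])

# Seeded 80/10/10 split: hold out 20%, then split that portion in half
# for validation and test.
first = pooled.train_test_split(test_size=0.2, seed=42)
second = first["test"].train_test_split(test_size=0.5, seed=42)
train_ds, val_ds, test_ds = first["train"], second["train"], second["test"]
```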

## Data Fields

All configurations share a common set of columns. These include columns from the original PPB-Affinity dataset (refer to the original paper for more details), plus two new sequence columns:

* **`Ligand Sequences`**: `string` - Comma-separated amino acid sequences of the ligand chain(s).
* **`Receptor Sequences`**: `string` - Comma-separated amino acid sequences of the receptor chain(s).

**Note on Sequences**: When multiple ligand or receptor chains are present in a complex, their respective amino acid sequences are concatenated with a comma (`,`) as a separator in the "Ligand Sequences" and "Receptor Sequences" fields.
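
The following is a minimal sketch of how these comma-separated fields can be parsed back into per-chain lists; the `split_chains` helper and the added column names are illustrative, not part of the dataset:

```python
from datasets import load_dataset

ds = load_dataset("proteinea/ppb_affinity", name="filtered", trust_remote_code=True)["train"]

def split_chains(example):
    # Each field holds one or more chain sequences separated by commas.
    example["ligand_chains"] = example["Ligand Sequences"].split(",")
    example["receptor_chains"] = example["Receptor Sequences"].split(",")
    return example

ds = ds.map(split_chains)
print(ds[0]["ligand_chains"], ds[0]["receptor_chains"])
```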

---