Hani Park committed
Commit 3de05e3 · 1 Parent(s): d3f6b75

Modified README and data

Files changed (5)
  1. .gitattributes +2 -0
  2. README.md +56 -9
  3. data/test.csv +2 -2
  4. data/train.csv +2 -2
  5. data/validation.csv +2 -2
.gitattributes CHANGED
@@ -62,3 +62,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.tsv filter=lfs diff=lfs merge=lfs -text
 *.pdb filter=lfs diff=lfs merge=lfs -text
 *.cif filter=lfs diff=lfs merge=lfs -text
+.csv filter=lfs diff=lfs merge=lfs -text
+.pdb filter=lfs diff=lfs merge=lfs -text
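One detail worth noting about the two added lines: gitattributes patterns are gitignore-style path globs, so `*.csv` applies to every CSV file, while a bare `.csv` matches only a file literally named `.csv`. A rough sketch of the distinction, using Python's `fnmatch` as a stand-in for Git's matcher (the file names are hypothetical):

```python
from fnmatch import fnmatch

# Hypothetical file names; fnmatch approximates Git's basename glob matching.
files = ["train.csv", "data/test.csv", ".csv"]

def matches(pattern: str, path: str) -> bool:
    # A gitattributes pattern without a slash is matched against the basename.
    basename = path.rsplit("/", 1)[-1]
    return fnmatch(basename, pattern)

print([f for f in files if matches("*.csv", f)])  # every CSV file
print([f for f in files if matches(".csv", f)])   # only the literal name ".csv"
```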
README.md CHANGED
@@ -9,9 +9,10 @@ tags:
 - pdb
 - rosettacommons
 pretty_name: Antibody dataset
-repo:
+repo: https://github.com/tommyhuangthu/SAAINT
 dataset_summary: >-
+  This dataset is a curated and processed version of the antibody dataset originally introduced in the SAAINT-DB paper, converted into a structured format compatible with the Hugging Face Datasets library. The dataset contains 21,400 antibody entries derived from 11,304 PDB structures.
 citation_bibtex: |-
 @article{Huang2025,
 title = {SAAINT-DB: a comprehensive structural antibody database for antibody modeling and design},
 volume = {46},
@@ -29,12 +30,16 @@ citation_bibtex: |-
 ---
 
 # SAAINTDB
-
+This dataset is a curated version of [SAAINT-DB](https://www.nature.com/articles/s41401-025-01608-5), converted into a format compatible with the Hugging Face Datasets library for machine learning applications.
+
+The dataset contains 21,400 antibody entries derived from 11,304 PDB structures, reflecting the structures available as of February 2026. Each entry corresponds to an antibody chain and is uniquely identified by the `PDB_ID_chain` field (PDB ID + chain ID).
+
 
 ## Dataset Splits
 The dataset was split at the PDB level into train, validation, and test sets (70/15/15).
 To maintain balanced distributions, the split was stratified based on the HL label (heavy/light chain availability).
 To prevent data leakage, all entries originating from the same PDB ID were assigned to the same split.
+Note: some PDB structures contain multiple antibodies, so the number of PDB files is smaller than the number of data entries.
 
 The resulting splits are provided as CSV files in the `data/` directory.
 The corresponding PDB structures for each split are also provided in the `PDB/` directory.
@@ -45,8 +50,10 @@ The corresponding PDB structures for each split are also provided in the `PDB/`
 
 
 ## Dataset Processing
+Processing scripts are provided in the `src/` directory.
+
 The following preprocessing steps were performed to construct the dataset:
-1. Added a `PDB_ID_chain` column to serve as a unique identifier for each antibody entry (PDB ID + chain)
+1. Added a `PDB_ID_chain` column to uniquely identify each antibody entry by concatenating the PDB ID with the corresponding antibody chain identifier. This ensures that multiple antibody chains originating from the same PDB structure can be distinguished and treated as separate entries.
 
 2. Added an `hl_label` column indicating chain availability:
    - `HL`: both heavy and light chains present
@@ -55,12 +62,11 @@ The following preprocessing steps were performed to construct the dataset:
 This label was later used for balanced dataset splitting.
 
 3. Some PDB entries referenced in the dataset were missing structure files.
-We identified the missing entries and downloaded 111 mmCIF files from the RCSB Protein Data Bank (PDB),
-updating the dataset to reflect the available structures as of February 2026.
+We identified the missing entries and downloaded 111 mmCIF files from the RCSB Protein Data Bank (PDB), updating the dataset to reflect the structures available as of February 2026. This discrepancy likely arose because the up-to-date SAAINT-DB dataset was generated in February 2026, while the PDB files were uploaded in January 2026.
 
 4. FASTA files corresponding to the downloaded CIF structures were missing and were subsequently generated/added.
 
-5. The dataset was split into **train, validation, and test sets (70/15/15)**.
+5. The dataset was split into train, validation, and test sets (70/15/15).
 
 6. A `split` column was added to the dataset to indicate the assigned subset (`train`, `validation`, or `test`).
 
@@ -84,10 +90,51 @@ then, from within python load the datasets library
 >>> import datasets
 
 
-### Load dataset
-Load the 'RosettaCommons/SAAINTDB' dataset.
+### Load Dataset
+Load the `RosettaCommons/SAAINTDB` dataset.
+
+>>> SAAINTDB = datasets.load_dataset('RosettaCommons/SAAINTDB')
+README.md: 4.01kB [00:00, 18.8MB/s]
+train.csv: 100%|██████████| 18.0M/18.0M [00:00<00:00, 64.8MB/s]
+validation.csv: 100%|██████████| 3.82M/3.82M [00:00<00:00, 34.6MB/s]
+test.csv: 100%|██████████| 3.82M/3.82M [00:00<00:00, 58.8MB/s]
+Generating train split: 100%|██████████| 15033/15033 [00:00<00:00, 42972.11 examples/s]
+Generating validation split: 100%|██████████| 3179/3179 [00:00<00:00, 43227.91 examples/s]
+Generating test split: 100%|██████████| 3188/3188 [00:00<00:00, 46207.52 examples/s]
+
+The dataset is loaded as a `datasets.DatasetDict` containing one `Dataset` per split:
+
+>>> SAAINTDB
+DatasetDict({
+    train: Dataset({
+        features: ['PDB_ID_chain', 'PDB_ID', 'Title', 'Mutation(s)', 'Classification', 'Deposit_date', 'Release_date', 'Method', 'Resolution', 'R_free', 'R_work', 'PMID', 'DOI', 'Model_index', 'Asym_ID_type', 'Ab_type', 'H_subgroup', 'L_subgroup', 'H_chain_ID', 'L_chain_ID', 'H_fas_seq', 'L_fas_seq', 'H_filled_pdb_seq', 'L_filled_pdb_seq', 'H_mean_radius', 'L_mean_radius', 'H_fas_seq_len', 'L_fas_seq_len', 'H_pdb_seq_len', 'L_pdb_seq_len', 'H_filled_seq_len', 'L_filled_seq_len', 'HL_inf_res_num', 'H_mol_name', 'L_mol_name', 'H_species', 'L_species', 'Ag_chain_ID(s)', 'Ag_type(s)', 'Ag_mol_name(s)', 'Ag_species', 'Ab_ag_inf_res_num', 'CDR_inf_res_num', 'CDR_inf_res_ratio', 'hl_label', 'split'],
+        num_rows: 15033
+    })
+    validation: Dataset({
+        features: ['PDB_ID_chain', 'PDB_ID', 'Title', 'Mutation(s)', 'Classification', 'Deposit_date', 'Release_date', 'Method', 'Resolution', 'R_free', 'R_work', 'PMID', 'DOI', 'Model_index', 'Asym_ID_type', 'Ab_type', 'H_subgroup', 'L_subgroup', 'H_chain_ID', 'L_chain_ID', 'H_fas_seq', 'L_fas_seq', 'H_filled_pdb_seq', 'L_filled_pdb_seq', 'H_mean_radius', 'L_mean_radius', 'H_fas_seq_len', 'L_fas_seq_len', 'H_pdb_seq_len', 'L_pdb_seq_len', 'H_filled_seq_len', 'L_filled_seq_len', 'HL_inf_res_num', 'H_mol_name', 'L_mol_name', 'H_species', 'L_species', 'Ag_chain_ID(s)', 'Ag_type(s)', 'Ag_mol_name(s)', 'Ag_species', 'Ab_ag_inf_res_num', 'CDR_inf_res_num', 'CDR_inf_res_ratio', 'hl_label', 'split'],
+        num_rows: 3179
+    })
+    test: Dataset({
+        features: ['PDB_ID_chain', 'PDB_ID', 'Title', 'Mutation(s)', 'Classification', 'Deposit_date', 'Release_date', 'Method', 'Resolution', 'R_free', 'R_work', 'PMID', 'DOI', 'Model_index', 'Asym_ID_type', 'Ab_type', 'H_subgroup', 'L_subgroup', 'H_chain_ID', 'L_chain_ID', 'H_fas_seq', 'L_fas_seq', 'H_filled_pdb_seq', 'L_filled_pdb_seq', 'H_mean_radius', 'L_mean_radius', 'H_fas_seq_len', 'L_fas_seq_len', 'H_pdb_seq_len', 'L_pdb_seq_len', 'H_filled_seq_len', 'L_filled_seq_len', 'HL_inf_res_num', 'H_mol_name', 'L_mol_name', 'H_species', 'L_species', 'Ag_chain_ID(s)', 'Ag_type(s)', 'Ag_mol_name(s)', 'Ag_species', 'Ab_ag_inf_res_num', 'CDR_inf_res_num', 'CDR_inf_res_ratio', 'hl_label', 'split'],
+        num_rows: 3188
+    })
+})
+
+Each split is stored in a column-oriented (Apache Arrow) format whose columns can be accessed directly, or converted to a `pandas.DataFrame` or a Parquet file, e.g.
+
+>>> SAAINTDB['train'].data.column('PDB_ID')
+>>> SAAINTDB['train'].to_pandas()
+>>> SAAINTDB['train'].to_parquet("train.parquet")
+
+
+## Uses
+This dataset is intended for training and evaluating machine learning models on antibody structural data, particularly for tasks involving antibody chain characterization and antibody–antigen interaction analysis.
+
+Because the dataset provides paired structural files (PDB), sequences (FASTA), and tabular metadata, it is suitable for workflows that integrate structural bioinformatics with machine learning.
+
+Note: when the downloaded CSV files are opened in Google Sheets or Microsoft Excel, the `PDB_ID` column may be automatically converted to scientific notation (e.g., the PDB ID 6e10 may appear as 6.00E10).
+
+
 ## Citation
 @article{Huang2025,
 title = {SAAINT-DB: a comprehensive structural antibody database for antibody modeling and design},
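The PDB-level split described in the README above (stratified by `hl_label`, 70/15/15, with every chain from a given PDB ID kept in one subset) can be sketched as follows. This is a minimal stdlib illustration on synthetic data, not the actual script from the repository's `src/` directory; it assumes all chains of a PDB share one label:

```python
import random
from collections import defaultdict

def pdb_level_split(entries, seed=0):
    """Assign entries to train/validation/test (70/15/15) at the PDB level,
    stratified by label, so no PDB ID is shared across subsets."""
    # One label per PDB ID (assumption: chains of a PDB share its hl_label).
    label_by_pdb = {pdb_id: label for pdb_id, label in entries}

    # Group unique PDB IDs by label, then slice each group 70/15/15.
    groups = defaultdict(list)
    for pdb_id, label in label_by_pdb.items():
        groups[label].append(pdb_id)

    rng = random.Random(seed)
    assignment = {}
    for label, pdb_ids in groups.items():
        rng.shuffle(pdb_ids)
        n = len(pdb_ids)
        n_train, n_val = int(n * 0.70), int(n * 0.15)
        for i, pdb_id in enumerate(pdb_ids):
            if i < n_train:
                assignment[pdb_id] = "train"
            elif i < n_train + n_val:
                assignment[pdb_id] = "validation"
            else:
                assignment[pdb_id] = "test"
    return [(pdb_id, label, assignment[pdb_id]) for pdb_id, label in entries]

# Synthetic entries: (PDB_ID, hl_label) pairs; repeated PDB IDs mimic
# structures that contain multiple antibody chains.
entries = [(f"{i:04d}", "HL" if i % 2 else "H") for i in range(100)]
entries += entries[:10]
split = pdb_level_split(entries)
```

Because the slicing happens per label group, the 70/15/15 ratio holds within each `hl_label` class, which is what stratification means here.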
data/test.csv CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:420ba6e49369c4a89081c3d366c76892051e97f7dd9ffc180d24109e43f6c58f
-size 3817000
+oid sha256:0c6e26167e252ed3c2749aed30f99523a5ab694cf410823ace9b8ec7c1586d6f
+size 3522791
data/train.csv CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:806d9f14aec09af6b34190da4438a4649448f5836895fa8b652e0e0d573640ef
-size 17958978
+oid sha256:ebb56f3d59ee2b19cde4d8df3f26e4da40abeda81ddb47d642462d0205ee227d
+size 16570352
data/validation.csv CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2fe5ba9ae100ca239a704c30860622e91fa2005d1cf9e3820d4ea9657e244263
-size 3819755
+oid sha256:5efb7f699df4f0f6c402e8204138a3ffc84ddc4245381f8859e906dd66bcc368
+size 3525824
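The scientific-notation caveat in the README's new "Uses" section applies beyond spreadsheets: any reader that infers column types can mangle PDB IDs such as 6e10. A small pandas illustration (assuming pandas is available; the `PDB_ID` and `hl_label` column names come from the dataset, the two rows are synthetic):

```python
import io
import pandas as pd

# Two PDB IDs that both happen to look like numbers in scientific notation.
csv_text = "PDB_ID,hl_label\n6e10,HL\n1e50,H\n"

inferred = pd.read_csv(io.StringIO(csv_text))
print(inferred["PDB_ID"].tolist())  # the IDs were silently parsed as floats

safe = pd.read_csv(io.StringIO(csv_text), dtype={"PDB_ID": str})
print(safe["PDB_ID"].tolist())      # the IDs survive as strings
```

Forcing `dtype=str` on identifier columns is the usual defense; spreadsheet users would instead need to import the column as text.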