QizhiPei committed (verified)
Commit ca2d66e · 1 Parent(s): 3233c10

Update README.md

Files changed (1):
  1. README.md +169 -40

README.md CHANGED
@@ -1,60 +1,189 @@
  ---
- library_name: transformers
- license: other
- base_model: QizhiPei/BioMatrix-4B-Base
  tags:
- - llama-factory
- - full
- - generated_from_trainer
- model-index:
- - name: sft_pk_checkpoint_cpt_4b_biom_cpt_ds_32gpus_z0_merge_all_v1_ml2048_eps5_seed42
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # sft_pk_checkpoint_cpt_4b_biom_cpt_ds_32gpus_z0_merge_all_v1_ml2048_eps5_seed42

- This model is a fine-tuned version of [QizhiPei/BioMatrix-4B-Base](https://huggingface.co/QizhiPei/BioMatrix-4B-Base) on the guacamol_smi, the guacamol_sfi, the moses_smi, the moses_sfi, the pdbbind_v2020_mol_first_train, the pdbbind_v2020_mol_first_train, the pdbbind_v2020_mol_first_train, the pdbbind_v2020_mol_first_train, the knowmol_smi_history, the knowmol_smi_nohistory, the knowmol_sfi_history, the knowmol_sfi_nohistory, the mol_smi_all_dy, the mol_sfi_all_dy, the pro_all_zm_v1, the megascience, the yeast_peer, the yeast_peer, the human_peer, the human_peer, the ppi_affinity_peer, the ppi_affinity_peer, the pdbbind_peer, the pdbbind_peer, the bindingdb_peer, the bindingdb_peer, the qm9_2014_uncond, the qm9_2014_uncond, the qm9_2014_uncond, the qm9_2014_uncond, the qm9_2014_condx10, the qm9_2014_condx10, the qm9_2014_condx10, the pfud_3d_1d, the pfud_3d_1d, the pfud_3d_1d, the pfud_3d_1d, the dplm_1d_to_3d, the dplm_1d_to_3d, the dplm_1d_to_3d, the dplm_1d_to_3d, the dplm_1d_to_3d, the dplm_1d_to_3d, the dplm_1d_to_3d, the dplm_1d_to_3d, the dplm_3d_to_1d, the dplm_3d_to_1d, the dplm_3d_to_1d, the dplm_3d_to_1d, the dplm_3d_to_1d, the dplm_3d_to_1d, the dplm_3d_to_1d, the dplm_3d_to_1d, the dplm_1d_3d, the dplm_1d_3d, the dplm_1d_3d, the dplm_1d_3d, the dplm_1d_3d, the dplm_1d_3d, the dplm_1d_3d, the dplm_1d_3d, the dplm_3d_1d, the dplm_3d_1d, the dplm_3d_1d, the dplm_3d_1d, the dplm_3d_1d, the dplm_3d_1d, the dplm_3d_1d and the dplm_3d_1d datasets.

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

- The following hyperparameters were used during training:
- - learning_rate: 5e-05
- - train_batch_size: 32
- - eval_batch_size: 8
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 8
- - total_train_batch_size: 256
- - total_eval_batch_size: 64
- - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: cosine_with_min_lr
- - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 5.0

- ### Training results

- ### Framework versions

- - Transformers 4.51.3
- - Pytorch 2.6.0+cu124
- - Datasets 3.5.0
- - Tokenizers 0.21.1

  ---
+ license: apache-2.0
+ language:
+ - en
  tags:
+ - biology
+ - chemistry
+ - molecule
+ - protein
+ - multimodal
+ - foundation-model
+ - drug-discovery
+ - protein-design
+ pipeline_tag: text-generation
+ base_model: QizhiPei/BioMatrix-4B-Base
+ library_name: transformers
  ---

+ # BioMatrix-4B-SFT
+
+ **BioMatrix** is a multimodal biological foundation model that natively integrates **1D sequences**, **3D structures**, and **natural language** for both **molecules** and **proteins** within a single decoder-only architecture.
+
+ This is the **4B-parameter SFT (Supervised Fine-Tuned)** variant, instruction-tuned across 80 downstream biological tasks spanning 6 categories.
+
+ - 📄 **Paper**: [BioMatrix: Towards a Comprehensive Biological Foundation Model Spanning the Modality Matrix of Sequences, Structures, and Language](https://arxiv.org/abs/xxxx.xxxxx)
+ - 💻 **Code**: [https://github.com/QizhiPei/biomatrix](https://github.com/QizhiPei/biomatrix)
+ - 🤗 **Model & Data Collection**: [https://huggingface.co/collections/QizhiPei/biomatrix](https://huggingface.co/collections/QizhiPei/biomatrix)
+
+ ## Model Overview
+
+ BioMatrix closes the gap between native multimodality and broad entity coverage in biological foundation models. Unlike adapter-based approaches that bolt external encoders onto a language model, or prior native-tokenization models confined to a single entity type, BioMatrix maps **all modalities into a shared discrete token space** via a unified tokenization scheme:
+
+ - **Molecular 1D sequences** (both SMILES and SELFIES notations)
+ - **Molecular 3D structures** (via MolStrucTok with branch-decoupled decoder)
+ - **Protein 1D sequences** (residue-level tokens)
+ - **Protein 3D structures** (via GCP-VQVAE backbone tokenizer)
+ - **Natural language** (inherited from Qwen3 tokenizer)
+
+ All modalities are consumed and produced uniformly under a **single next-token prediction objective**—without external encoders, projection adapters, or modality-specific output heads.
+
+ | Model | Molecule 1D | Molecule 3D | Protein 1D | Protein 3D | Natural Language |
+ |-------|:-----------:|:-----------:|:----------:|:----------:|:----------------:|
+ | ESM3 | ✗ | ✗ | ✓ | ✓ | ✓ |
+ | 3D-MoLM | ✓ | ✓ | ✗ | ✗ | ✓ |
+ | AlphaFold3 | ✓ | ✓ | ✓ | ✓ | ✗ |
+ | BioT5/BioT5+ | ✓ | ✗ | ✓ | ✗ | ✓ |
+ | BioMedGPT | ✓ | ✗ | ✓ | ✗ | ✓ |
+ | NatureLM | ✓ | ✗ | ✓ | ✗ | ✓ |
+ | SciReasoner | ✓ | ✗ | ✓ | ✗ | ✓ |
+ | **BioMatrix** | **✓** | **✓** | **✓** | **✓** | **✓** |
+
+ ## Model Details
+
+ - **Base Architecture**: Qwen3-4B-Base
+ - **Parameters**: 4B
+ - **Training Stages**:
+   - **Continual Pretraining** on 304.4B tokens (general/scientific text, molecular & protein 1D/3D data, cross-modal interleaved corpora)
+   - **Instruction Tuning** on a comprehensive suite of 80 downstream tasks across 6 categories
+ - **Context Length**: 8,192 tokens
+ - **Tokenizer**: Extended Qwen3 vocabulary with:
+   - 11,294 joint molecular 3D tokens (composed from SELFIES atom × MolStrucTok codes)
+   - 4,096 protein 3D tokens (GCP-VQVAE codebook)
+   - 26 protein 1D tokens (amino acids + non-standard/unknown)
+   - SELFIES atom tokens and modality-specific control tokens
+
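+ As a quick sanity check on the extended vocabulary, the snippet below looks up a few of the control tokens listed in the Modality Wrapping section further down. It is only an illustrative sketch (not part of the official codebase) and assumes these control tokens are registered as single tokens in the released tokenizer:
+
+ ```python
+ from transformers import AutoTokenizer
+
+ # Load the released tokenizer (trust_remote_code as in the Quick Start example)
+ tokenizer = AutoTokenizer.from_pretrained("QizhiPei/BioMatrix-4B-SFT", trust_remote_code=True)
+
+ # Control tokens taken from the Modality Wrapping table below
+ for tok in ["<|mol_smi_start|>", "<|mol_sfi_start|>", "<|mol_3d_start|>",
+             "<|prot_aa_start|>", "<|prot_3d_start|>"]:
+     tok_id = tokenizer.convert_tokens_to_ids(tok)
+     # A None/unk result would indicate the token is not registered in the vocabulary
+     print(f"{tok}: id={tok_id}")
+
+ print("vocab size:", len(tokenizer))
+ ```
+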
+ ## Pretraining Corpus (304.4B tokens)
+
+ | Category | Tokens | Sources |
+ |----------|--------|---------|
+ | **Text** | 105.3B | FineWeb-Edu, FineFineWeb (biology/chemistry/medical/health), PubMed Full Articles |
+ | **Molecule** | 73.7B | PubChem, PCQM4Mv2, PubChemQC, MolTextNet |
+ | **Protein** | 77.4B | UniRef50, RCSB PDB, Swiss-Prot, TrEMBL, AlphaFold DB |
+ | **Cross-entity** | 48.0B | Interleaved text (PubMed, bioRxiv, S2ORC, USPTO), Molecule–protein (BindingDB, STITCH, jglaser, CrossDocked), Protein–protein (AlphaSeq, PPIRef) |
+
+ ## Performance Highlights
+
+ BioMatrix achieves **state-of-the-art or competitive performance on 77 out of 80 tasks**. Selected highlights for the 4B-SFT variant:
+
+ ### Molecular Tasks
+ - **Unconditional 1D Generation** (GuacaMol): 0.998 validity, 1.000 uniqueness, 0.986 novelty
+ - **Name Conversion (I2S EM)**: 92.83% (vs. SciReasoner-8B: 84.40%)
+ - **Text-Based Molecule Generation (EM)**: 65.07% (vs. SciReasoner-8B: 48.00%)
+ - **MoleculeQA Total Accuracy**: 73.78% (vs. prior best MolCA-1.3B: 64.79%)
+ - **Property-Conditioned 3D Generation**: ~3-4× error reduction on QM9 electronic-structure targets
+
+ ### Protein Tasks
+ - **Fold Type Prediction (Family level)**: 87.25% accuracy
+ - **Annotation Prediction (UniProtSeq Keywords F1)**: 91.26%
+ - **Inverse Folding AAR**: 75.50% (vs. DPLM-2-3B: 61.67%)
+ - **Sequence–Structure Co-generation**: scTM = 0.965, scRMSD = 2.80
+ - **Unconditional Backbone Generation**: scTM = 0.963 (joint frontier with RFDiffusion)
+
+ ### Interaction Tasks
+ - **BindingDB Affinity (RMSE)**: 1.030 (new SOTA, surpasses prior literature SOTA of 1.340)
+ - **PDBBindv2020 3D Affinity**: RMSE = 1.260, Pearson = 0.737, MAE = 0.972
+
+ ## Quick Start
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "QizhiPei/BioMatrix-4B-SFT"
+ tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     torch_dtype="auto",
+     device_map="auto",
+     trust_remote_code=True
+ )
+
+ # Example: Molecule captioning with SELFIES input
+ instruction = "I need a brief explanation of the molecule denoted in this SELFIES notation. <|mol_sfi_start|>[Te]<|mol_sfi_end|>"
+
+ messages = [
+     {"role": "user", "content": instruction}
+ ]
+ prompt = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
+ response = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=False)
+ print(response)
+ ```
+
+ ## Modality Wrapping
+
+ When constructing prompts, biomolecular content must be wrapped with the corresponding control tokens:
+
+ | Modality | Wrapping Example |
+ |----------|------------------|
+ | Molecule SMILES | `<\|mol_smi_start\|>CC#CC#N<\|mol_smi_end\|>` |
+ | Molecule SELFIES | `<\|mol_sfi_start\|>[C][#C][C][#N]<\|mol_sfi_end\|>` |
+ | Molecule 3D | `<\|mol_3d_start\|>[H 3][C 0][#C 6]...<\|mol_3d_end\|>` |
+ | Protein 1D | `<\|prot_aa_start\|><A M><A R><A A>...<\|prot_aa_end\|>` |
+ | Protein 3D | `<\|prot_3d_start\|><S 4012><S 153><S 2091>...<\|prot_3d_end\|>` |
+
+ Natural language text is left unwrapped and serves as the default carrier modality.
+
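+ For example, the helpers below wrap raw inputs in these control tokens and pull a wrapped span back out of a generated response. They are an illustrative sketch based only on the table above (including the assumption that protein residues are written as per-residue `<A X>` tokens), not utilities shipped with the model:
+
+ ```python
+ import re
+
+ def wrap(content: str, modality: str = "mol_smi") -> str:
+     """Wrap biomolecular content with the control tokens of its modality."""
+     return f"<|{modality}_start|>{content}<|{modality}_end|>"
+
+ def wrap_protein_seq(sequence: str) -> str:
+     """Render an amino-acid sequence as per-residue tokens (format assumed from the table above)."""
+     return wrap("".join(f"<A {aa}>" for aa in sequence), "prot_aa")
+
+ def unwrap(text: str, modality: str = "mol_smi") -> list[str]:
+     """Extract every span enclosed by the given modality's control tokens."""
+     pattern = re.escape(f"<|{modality}_start|>") + r"(.*?)" + re.escape(f"<|{modality}_end|>")
+     return re.findall(pattern, text, flags=re.DOTALL)
+
+ prompt_fragment = "Describe this molecule. " + wrap("CC#CC#N", "mol_smi")
+ # After generation (decoded with skip_special_tokens=False), recover wrapped spans:
+ # smiles_spans = unwrap(response, "mol_smi")
+ ```
+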
+ ## Supported Tasks
+
+ BioMatrix-4B-SFT was instruction-tuned across the following task categories:
+
+ **Molecule (1D)**: unconditional generation, name conversion, property prediction, captioning, text-based generation, forward/retrosynthesis, editing, optimization, customized generation, question answering

+ **Molecule (3D)**: unconditional generation, property-conditioned generation

+ **Protein (1D)**: sequence understanding, annotation prediction, knowledge mining, text-based design, unconditional generation

+ **Protein (3D)**: structure understanding, folding, inverse folding, sequence-structure co-generation, unconditional backbone generation

+ **Interaction**: molecule-protein binding affinity (1D & 3D), protein-protein interaction

+ > **Note on task-group variants**: As detailed in the paper, the released SFT model is trained on the union of all sub-task corpora with mild oversampling for small-data tasks. For best performance on specific benchmarks, please refer to the paper's task-group-specific variants.

+ ## SMILES vs. SELFIES

+ BioMatrix supports both notations as parallel 1D molecular representations. Empirically:

+ - **SELFIES** excels on tasks requiring validity-by-construction (unconditional generation, property optimization)
+ - **SMILES** excels on tasks requiring surface-level structural anchoring (customized generation with atom/bond/functional-group constraints, forward synthesis, retrosynthesis)

+ See Section 9.2 of the paper for detailed analysis.
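+
+ The two notations describe the same molecular graph: SELFIES decodes to a valid molecule by construction, while SMILES exposes atoms and bonds more literally. The sketch below round-trips between them with the open-source `selfies` package (an external dependency, not bundled with this model, installable via `pip install selfies`):
+
+ ```python
+ import selfies as sf  # open-source SELFIES library
+
+ smiles = "CC#CC#N"                    # SMILES example from the Modality Wrapping table above
+ selfies_str = sf.encoder(smiles)      # SMILES -> SELFIES string of bracketed tokens
+ roundtrip = sf.decoder(selfies_str)   # SELFIES -> SMILES; valid by construction
+
+ print(selfies_str)
+ print(roundtrip)
+ ```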

+ ## Limitations

+ - Molecular and protein 3D structures are tokenized in **disjoint geometric reference frames**, so the model cannot natively represent biomolecular complexes (e.g., docking poses).
+ - Heavy domain specialization may erode some general-purpose language capabilities of the underlying Qwen3 backbone.
+ - Coverage is limited to **small molecules and proteins**; nucleic acids, carbohydrates, and lipids are not currently supported.
+ - Fine-grained 3D geometry (e.g., bond lengths) shows residual quantization error from finite codebooks; a lightweight post-hoc force-field refinement (e.g., MMFF) closes most of this gap.
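+
+ On the last point, a minimal RDKit sketch of such a post-hoc MMFF refinement is shown below. It is illustrative only: here the starting conformer comes from ETKDG embedding, whereas in practice it would be the geometry decoded from the model's 3D tokens:
+
+ ```python
+ from rdkit import Chem
+ from rdkit.Chem import AllChem
+
+ mol = Chem.AddHs(Chem.MolFromSmiles("CC#CC#N"))
+ # Stand-in for a model-generated conformer; in practice use the decoded 3D coordinates
+ AllChem.EmbedMolecule(mol, randomSeed=42)
+
+ # Post-hoc MMFF refinement to reduce residual quantization error in the geometry
+ AllChem.MMFFOptimizeMolecule(mol)
+
+ conf = mol.GetConformer()
+ for bond in mol.GetBonds():
+     i, j = bond.GetBeginAtomIdx(), bond.GetEndAtomIdx()
+     length = conf.GetAtomPosition(i).Distance(conf.GetAtomPosition(j))
+     print(i, j, round(length, 3))  # refined bond lengths in angstroms
+ ```
+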
+ ## Citation

+ If you find BioMatrix useful, please cite:

+ ```bibtex
+ @article{pei2026biomatrix,
+   title={BioMatrix: Towards a Comprehensive Biological Foundation Model Spanning the Modality Matrix of Sequences, Structures, and Language},
+   author={Pei, Qizhi and Zhou, Zhimeng and Duan, Yi and Zhao, Yiyang and He, Liang and Hsieh, Chang-Yu and He, Conghui and Yan, Rui and Wu, Lijun},
+   year={2026}
+ }
+ ```

+ ## License

+ This model is released under the Apache 2.0 license. The base model (Qwen3-4B-Base) is subject to its own license terms.