# BioMatrix-1.7B-SFT
BioMatrix is a multimodal biological foundation model that natively integrates 1D sequences, 3D structures, and natural language for both molecules and proteins within a single decoder-only architecture.
This is the 1.7B-parameter SFT (Supervised Fine-Tuned) variant, instruction-tuned across 80 downstream biological tasks spanning 6 categories. For a larger and more capable model, see BioMatrix-4B-SFT.
- 📄 Paper: BioMatrix: Towards a Comprehensive Biological Foundation Model Spanning the Modality Matrix of Sequences, Structures, and Language
- 💻 Code: https://github.com/QizhiPei/BioMatrix
- 🤗 Model & Data Collection: https://huggingface.co/collections/QizhiPei/biomatrix
## Model Overview
Existing biological foundation models offer either native multimodality or broad entity coverage, but rarely both. Unlike adapter-based approaches that bolt external encoders onto a language model, or prior native-tokenization models confined to a single entity type, BioMatrix maps all modalities into a shared discrete token space via a unified tokenization scheme:
- Molecular 1D sequences (both SMILES and SELFIES notations)
- Molecular 3D structures (via MolStrucTok with branch-decoupled decoder)
- Protein 1D sequences (residue-level tokens)
- Protein 3D structures (via GCP-VQVAE backbone tokenizer)
- Natural language (inherited from Qwen3 tokenizer)
All modalities are consumed and produced uniformly under a single next-token prediction objective—without external encoders, projection adapters, or modality-specific output heads.
| Model | Molecule 1D | Molecule 3D | Protein 1D | Protein 3D | Natural Language |
|---|---|---|---|---|---|
| ESM3 | ✗ | ✗ | ✓ | ✓ | ✓ |
| 3D-MoLM | ✓ | ✓ | ✗ | ✗ | ✓ |
| AlphaFold3 | ✓ | ✓ | ✓ | ✓ | ✗ |
| BioT5/BioT5+ | ✓ | ✗ | ✓ | ✗ | ✓ |
| BioMedGPT | ✓ | ✗ | ✓ | ✗ | ✓ |
| NatureLM | ✓ | ✗ | ✓ | ✗ | ✓ |
| SciReasoner | ✓ | ✗ | ✓ | ✗ | ✓ |
| BioMatrix | ✓ | ✓ | ✓ | ✓ | ✓ |
## Model Details
- Base Architecture: Qwen3-1.7B-Base
- Parameters: 1.7B
- Training Stages:
  - Continual Pretraining on 304.4B tokens (general/scientific text, molecular & protein 1D/3D data, cross-modal interleaved corpora)
  - Instruction Tuning on a comprehensive suite of 80 downstream tasks across 6 categories
- Context Length: 8,192 tokens
- Tokenizer: Extended Qwen3 vocabulary with:
  - 11,294 joint molecular 3D tokens (composed from SELFIES atom × MolStrucTok codes)
  - 4,096 protein 3D tokens (GCP-VQVAE codebook)
  - 26 protein 1D tokens (amino acids + non-standard/unknown)
  - SELFIES atom tokens and modality-specific control tokens
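A quick way to sanity-check that this extended vocabulary is wired up after loading the tokenizer: each control token (names follow the Modality Wrapping table below) should resolve to a single id. A minimal sketch:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "QizhiPei/BioMatrix-1.7B-SFT", trust_remote_code=True
)

# Each modality control token should map to one id in the extended
# vocabulary; getting the unknown-token id back would indicate the
# extension was not loaded.
for tok in ("<|mol_smi_start|>", "<|mol_sfi_start|>", "<|mol_3d_start|>",
            "<|prot_aa_start|>", "<|prot_3d_start|>"):
    print(tok, "->", tokenizer.convert_tokens_to_ids(tok))
```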
## Pretraining Corpus (304.4B tokens)
| Category | Tokens | Sources |
|---|---|---|
| Text | 105.3B | FineWeb-Edu, FineFineWeb (biology/chemistry/medical/health), PubMed Full Articles |
| Molecule | 73.7B | PubChem, PCQM4Mv2, PubChemQC, MolTextNet |
| Protein | 77.4B | UniRef50, RCSB PDB, Swiss-Prot, TrEMBL, AlphaFold DB |
| Cross-entity | 48.0B | Interleaved text (PubMed, bioRxiv, S2ORC, USPTO), Molecule–protein (BindingDB, STITCH, jglaser, CrossDocked), Protein–protein (AlphaSeq, PPIRef) |
## Performance Highlights
Despite its compact 1.7B size, BioMatrix delivers strong performance across diverse biological tasks—often surpassing models several times larger. Selected highlights:
### Molecular Tasks
- Unconditional 1D Generation (GuacaMol, SELFIES): 0.999 validity, 1.000 uniqueness
- Name Conversion (I2S EM): 87.22% (surpasses SciReasoner-8B at 84.40%)
- Text-Based Molecule Generation (EM): 56.35% (vs. SciReasoner-8B: 48.00%)
- MoleculeQA Total: 70.07% (vs. prior best MolCA-1.3B: 64.79%)
- Property-Conditioned 3D Generation: ~3-4× error reduction on QM9 electronic-structure targets
### Protein Tasks
- Fold Type Prediction (Family level): 85.84% accuracy
- EC Number Prediction (Price split, F1): 34.34% (surpasses SciReasoner-8B at 22.00%)
- Inverse Folding AAR: 75.20% (vs. DPLM-2-3B: 61.67%)
- Sequence–Structure Co-generation: scTM = 0.965, scRMSD = 2.81
### Interaction Tasks
- BindingDB Affinity (RMSE): 1.268
- PDBBindv2020 3D Affinity: best Spearman correlation (0.717) among all baselines
## Quick Start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "QizhiPei/BioMatrix-1.7B-SFT"

# Load the tokenizer (with the extended biomolecular vocabulary) and model.
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)

# Example: molecule captioning with a SELFIES input wrapped in control tokens.
instruction = "I need a brief explanation of the molecule denoted in this SELFIES notation. <|mol_sfi_start|>[Te]<|mol_sfi_end|>"
messages = [
    {"role": "user", "content": instruction}
]

# Build the chat prompt and generate deterministically.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=2048, do_sample=False)

# Keep special tokens so modality wrappers in the output remain visible.
response = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=False)
print(response)
```
## Modality Wrapping
When constructing prompts, biomolecular content must be wrapped with the corresponding control tokens:
| Modality | Wrapping Example |
|---|---|
| Molecule SMILES | `<\|mol_smi_start\|>CC#CC#N<\|mol_smi_end\|>` |
| Molecule SELFIES | `<\|mol_sfi_start\|>[C][#C][C][#N]<\|mol_sfi_end\|>` |
| Molecule 3D | `<\|mol_3d_start\|>[H 3][C 0][#C 6]...<\|mol_3d_end\|>` |
| Protein 1D | `<\|prot_aa_start\|><A M><A R><A A>...<\|prot_aa_end\|>` |
| Protein 3D | `<\|prot_3d_start\|><S 4012><S 153><S 2091>...<\|prot_3d_end\|>` |
Natural language text is left unwrapped and serves as the default carrier modality.
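A minimal sketch of prompt-assembly helpers built from the table above; the control tokens come from the model card, but the helper names (`wrap_modality`, `wrap_protein_sequence`, `extract_spans`) are ours, not part of the released code:

```python
import re

# Control tokens from the Modality Wrapping table above.
MODALITY_TAGS = {
    "smiles":  ("<|mol_smi_start|>", "<|mol_smi_end|>"),
    "selfies": ("<|mol_sfi_start|>", "<|mol_sfi_end|>"),
    "mol_3d":  ("<|mol_3d_start|>",  "<|mol_3d_end|>"),
    "prot_1d": ("<|prot_aa_start|>", "<|prot_aa_end|>"),
    "prot_3d": ("<|prot_3d_start|>", "<|prot_3d_end|>"),
}

def wrap_modality(content: str, modality: str) -> str:
    """Wrap biomolecular content in its modality's control tokens."""
    start, end = MODALITY_TAGS[modality]
    return f"{start}{content}{end}"

def wrap_protein_sequence(seq: str) -> str:
    """Per the table above, each residue is its own token, e.g. 'M' -> '<A M>'."""
    return wrap_modality("".join(f"<A {aa}>" for aa in seq), "prot_1d")

def extract_spans(text: str, modality: str) -> list[str]:
    """Pull wrapped spans back out of generated text."""
    start, end = MODALITY_TAGS[modality]
    return re.findall(re.escape(start) + r"(.*?)" + re.escape(end), text, re.DOTALL)

prompt = "Generate a molecule similar to " + wrap_modality("CC#CC#N", "smiles")
```

Decoding with `skip_special_tokens=False`, as in the Quick Start above, keeps the control tokens intact so generated outputs can be parsed the same way.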
## Supported Tasks
BioMatrix-1.7B-SFT was instruction-tuned across the following task categories:
- **Molecule (1D):** unconditional generation, name conversion, property prediction, captioning, text-based generation, forward/retrosynthesis, editing, optimization, customized generation, question answering
- **Molecule (3D):** unconditional generation, property-conditioned generation
- **Protein (1D):** sequence understanding, annotation prediction, knowledge mining, text-based design, unconditional generation
- **Protein (3D):** structure understanding, folding, inverse folding, sequence–structure co-generation, unconditional backbone generation
- **Interaction:** molecule–protein binding affinity (1D & 3D), protein–protein interaction
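As one illustration of the interaction category, a sketch of a molecule–protein binding-affinity prompt; the instruction wording and the toy inputs are our own guesses, not templates from the released SFT corpora:

```python
# Illustrative only: phrasing and inputs are guesses, not released templates.
ligand = "<|mol_smi_start|>CC(=O)Oc1ccccc1C(=O)O<|mol_smi_end|>"  # aspirin SMILES
protein = ("<|prot_aa_start|>"
           + "".join(f"<A {aa}>" for aa in "MKTAYIAKQR")  # toy 10-residue sequence
           + "<|prot_aa_end|>")

messages = [{
    "role": "user",
    "content": f"Predict the binding affinity between the molecule {ligand} "
               f"and the protein {protein}.",
}]
```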
Note on task-group variants: As detailed in the paper, the released SFT model is trained on the union of all sub-task corpora with mild oversampling for small-data tasks. For best performance on specific benchmarks, please refer to the paper's task-group-specific variants.
## SMILES vs. SELFIES
BioMatrix supports both notations as parallel 1D molecular representations. Empirically:
- SELFIES excels on tasks requiring validity-by-construction (unconditional generation, property optimization)
- SMILES excels on tasks requiring surface-level structural anchoring (customized generation with atom/bond/functional-group constraints, forward synthesis, retrosynthesis)
See Section 9.2 of the paper for detailed analysis.
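When a task favors one notation but your data is in the other, the community `selfies` package (not part of this repository) converts between the two; a minimal sketch:

```python
# pip install selfies
import selfies as sf

smiles = "CC#CC#N"
selfies_str = sf.encoder(smiles)     # SMILES -> bracketed SELFIES symbols
roundtrip = sf.decoder(selfies_str)  # SELFIES -> SMILES
print(selfies_str, roundtrip)
```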
## Limitations
- Molecular and protein 3D structures are tokenized in disjoint geometric reference frames, so the model cannot natively represent biomolecular complexes (e.g., docking poses).
- Heavy domain specialization may erode some general-purpose language capabilities of the underlying Qwen3 backbone.
- Coverage is limited to small molecules and proteins; nucleic acids, carbohydrates, and lipids are not currently supported.
- Fine-grained 3D geometry (e.g., bond lengths) shows residual quantization error from finite codebooks; a lightweight post-hoc force-field refinement (e.g., MMFF) closes most of this gap.
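A minimal sketch of that refinement step using RDKit's MMFF94 implementation; the embedding call below is only a stand-in for reconstructing a conformer from the model's decoded 3D tokens, which is model-specific:

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# In practice `mol` would carry a conformer decoded from the model's 3D
# tokens; EmbedMolecule here is only a stand-in for that step.
mol = Chem.AddHs(Chem.MolFromSmiles("CC#CC#N"))
AllChem.EmbedMolecule(mol, randomSeed=0)

# Post-hoc MMFF94 refinement, as suggested above; returns 0 on convergence.
AllChem.MMFFOptimizeMolecule(mol)
```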
## Citation
If you find BioMatrix useful, please cite:
```bibtex
@article{pei2026biomatrix,
  title={BioMatrix: Towards a Comprehensive Biological Foundation Model Spanning the Modality Matrix of Sequences, Structures, and Language},
  author={Pei, Qizhi and Zhou, Zhimeng and Duan, Yi and Zhao, Yiyang and He, Liang and Hsieh, Chang-Yu and He, Conghui and Yan, Rui and Wu, Lijun},
  year={2026}
}
```
## License
This model is released under the Apache 2.0 license. The base model (Qwen3-1.7B-Base) is subject to its own license terms.