# DDG ProTherm Refiner: VAE-Guided Stability Predictor
The DDG ProTherm Refiner is a state-of-the-art specialist model for predicting the change in protein stability ($\Delta\Delta G$) upon single-point mutations. A high-capacity Variational Autoencoder (VAE) learns a topological map of mutations, and its predictions are then refined by an MLP residual head to achieve superior accuracy.
## Model Description
- Architecture: Specialist VAE (ProTherm) + MLP Residual Refiner.
- Latent Space: 32-dimensional embedding space capturing functional mutation clusters.
- Dataset: Trained on the curated ProTherm dataset (high-quality calorimetry measurements).
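The refiner stage described above can be sketched as a small PyTorch module. This is a minimal illustration only: the class name `ResidualRefinerSketch`, the exact layer widths, and the residual formulation (base prediction plus a learned correction) are assumptions for exposition, not the shipped `DDGMLPRefiner` implementation.

```python
import torch
import torch.nn as nn


class ResidualRefinerSketch(nn.Module):
    """Hypothetical sketch of an MLP residual refiner: consumes the 32-d
    VAE latent (mu) together with the VAE's base ddG prediction and
    predicts a correction that is added back to the base prediction."""

    def __init__(self, latent_dim=32, hidden_dims=(64, 64, 32)):
        super().__init__()
        dims = [latent_dim + 1, *hidden_dims]  # +1 for the base prediction
        layers = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            layers += [nn.Linear(d_in, d_out), nn.ReLU()]
        layers.append(nn.Linear(dims[-1], 1))
        self.net = nn.Sequential(*layers)

    def forward(self, mu, base_pred):
        # Concatenate latent code with the base prediction, then predict a residual.
        x = torch.cat([mu, base_pred.unsqueeze(-1)], dim=-1)
        correction = self.net(x).squeeze(-1)
        return base_pred + correction
```

Feeding the base prediction back into the refiner lets the MLP learn only the residual error, which is typically easier to fit than the full $\Delta\Delta G$ mapping.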
## Scientific Discoveries
- Linear Stability Manifold: 94.7% of the variance in $\Delta\Delta G$ is explained by a single learned direction in the latent space.
- Cross-Protein Transfer: Discovers mutations that are functionally similar across different protein families, despite low sequence identity.
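The "linear stability manifold" claim above can be checked by fitting a single linear direction from latent codes to $\Delta\Delta G$ and measuring the variance explained. The snippet below demonstrates the procedure on synthetic stand-in data (the latent codes and $\Delta\Delta G$ values are random placeholders, not model outputs):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 500 mutations, 32-d latent codes, ddG generated
# along one hidden direction plus a little noise.
Z = rng.normal(size=(500, 32))
w_true = rng.normal(size=32)
ddg = Z @ w_true + 0.1 * rng.normal(size=500)

# Fit a single direction in latent space by least squares and compute
# the fraction of ddG variance it explains (R^2).
w, *_ = np.linalg.lstsq(Z, ddg, rcond=None)
pred = Z @ w
r2 = 1.0 - np.var(ddg - pred) / np.var(ddg)
```

On real model latents, an $R^2$ near the reported 94.7% would indicate that stability varies almost linearly along one learned axis.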
## Performance
| Metric | Value |
|---|---|
| Spearman $\rho$ | 0.89 |
| Pearson $r$ | 0.88 |
| MAE | 0.53 kcal/mol |
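The three metrics in the table can be reproduced with SciPy and NumPy. The values below are toy predicted/measured $\Delta\Delta G$ pairs for illustration only, not the evaluation data:

```python
import numpy as np
from scipy import stats

# Toy predicted vs. measured ddG values (kcal/mol) -- placeholders.
y_true = np.array([0.5, -1.2, 2.1, 0.0, -0.4, 1.8])
y_pred = np.array([0.6, -1.0, 1.9, 0.2, -0.5, 1.6])

spearman_rho, _ = stats.spearmanr(y_true, y_pred)   # rank correlation
pearson_r, _ = stats.pearsonr(y_true, y_pred)       # linear correlation
mae = np.mean(np.abs(y_true - y_pred))              # mean absolute error
```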
## Usage

```python
import torch

from ddg_vae import DDGVAE
from ddg_mlp_refiner import DDGMLPRefiner

# 1. Load VAE base
vae = DDGVAE.create_protherm_variant(use_hyperbolic=False)
vae_ckpt = torch.load("vae_protherm.pt", map_location="cpu")
vae.load_state_dict(vae_ckpt["model_state_dict"])

# 2. Load MLP refiner
refiner = DDGMLPRefiner(latent_dim=32, hidden_dims=[64, 64, 32])
ref_ckpt = torch.load("pytorch_model.bin", map_location="cpu")
refiner.load_state_dict(ref_ckpt["model_state_dict"])

# 3. Predict (mutation_features: a precomputed feature tensor for the mutation)
vae.eval(); refiner.eval()
with torch.no_grad():
    vae_out = vae(mutation_features)
    mu = vae_out["mu"]              # 32-d latent embedding
    vae_pred = vae_out["ddg_pred"]  # base VAE prediction
    refined = refiner(mu, vae_pred)
    ddg = refined["ddg_pred"]

print(f"Predicted DeltaDeltaG: {ddg.item():.4f} kcal/mol")
```
## Citation

```bibtex
@software{ddg_refiner_2026,
  author = {AI Whisperers},
  title  = {DDG ProTherm Refiner: VAE-Guided Stability Predictor},
  year   = {2026},
  url    = {https://huggingface.co/ai-whisperers/ddg-protherm-refiner-sota}
}
```