GRALE: Graph-Level Autoencoder

Links: arXiv | Model Weights | GitHub. Accepted at NeurIPS 2025.

[Figure: GRALE architecture]

Overview

GRALE is a novel graph autoencoder that encodes and decodes graphs of varying sizes into a shared embedding space. Built on an Evoformer-based attention architecture (AlphaFold's core component), GRALE uses an Optimal Transport-inspired loss and differentiable node matching to enable general pre-training for diverse downstream tasks—from classification and regression to graph interpolation, editing, matching, and prediction.
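To build intuition for why node matching matters, the sketch below shows a permutation-invariant reconstruction loss: the predicted adjacency matrix is scored under the best possible node relabeling. This is only an illustration on tiny graphs; GRALE's actual loss is Optimal Transport-inspired and uses a differentiable (soft) matching rather than this brute-force search, and `matched_reconstruction_loss` is a hypothetical helper, not part of the GRALE API.

```python
import itertools
import numpy as np

def matched_reconstruction_loss(A_true, A_pred):
    """Illustrative permutation-invariant reconstruction loss.

    Scores the predicted adjacency under the best node matching by
    brute force over all permutations (feasible only for tiny graphs).
    GRALE replaces this search with a differentiable soft matching.
    """
    n = A_true.shape[0]
    best = np.inf
    for perm in itertools.permutations(range(n)):
        P = np.eye(n)[list(perm)]  # permutation matrix for this relabeling
        # Relabel predicted nodes, then compare adjacency matrices
        loss = np.mean((A_true - P @ A_pred @ P.T) ** 2)
        best = min(best, loss)
    return best
```

Because the loss minimizes over relabelings, a prediction that is correct up to node ordering incurs zero loss, which is exactly the invariance a graph-level autoencoder needs.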

Quick Start

from GRALE.main import GRALE_model
from huggingface_hub import hf_hub_download

# Load pretrained model
checkpoint = hf_hub_download(repo_id="PaulKrzakala/GRALE-128-32", filename="last.ckpt")
model = GRALE_model.load_from_checkpoint(checkpoint)

# Encode and decode graphs
embeddings = model.encode(graph_data)
reconstructed = model.decode(embeddings)

See demo.ipynb for full examples.
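Because all graphs land in a shared fixed-size embedding space, downstream tasks like graph interpolation reduce to vector operations on embeddings. A minimal sketch, assuming the `encode`/`decode` API from the Quick Start above; `interpolate_embeddings` is a hypothetical helper, not part of the GRALE package.

```python
import numpy as np

def interpolate_embeddings(z_a, z_b, num_steps=5):
    """Linearly interpolate between two graph embeddings.

    Returns num_steps points along the segment from z_a to z_b,
    including both endpoints.
    """
    ts = np.linspace(0.0, 1.0, num_steps)
    return [(1.0 - t) * z_a + t * z_b for t in ts]

# Usage with a loaded GRALE model (assumed API, per the Quick Start):
# z_a, z_b = model.encode(graph_a), model.encode(graph_b)
# for z in interpolate_embeddings(z_a, z_b):
#     graph = model.decode(z)  # a graph "between" graph_a and graph_b
```

Whether linear paths in the embedding space decode to semantically smooth graph sequences depends on the learned latent geometry; see demo.ipynb for the authors' own interpolation examples.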

Resources

Installation

# Python Version: 3.11.14
# Clone the repo
git clone https://github.com/KrzakalaPaul/GRALE.git
cd GRALE

# Create your environment
python3 -m venv .venv
source .venv/bin/activate

# Install dependencies
pip install torch==2.8.0  # for CUDA 12.6 builds, append: --index-url https://download.pytorch.org/whl/cu126
pip install -r requirements.txt

Citation

@article{krzakala2025quest,
  title={The quest for the GRAph Level autoEncoder (GRALE)},
  author={Krzakala, Paul and Melo, Gabriel and Laclau, Charlotte and d'Alch{\'e}-Buc, Florence and Flamary, R{\'e}mi},
  journal={arXiv preprint arXiv:2505.22109},
  year={2025}
}