Euler-Prover-8B

This repository contains the LoRA adapter weights for Euler-Prover-8B, a Qwen3-based model fine-tuned for Lean 4 theorem proving as part of the DiffLean pipeline.

This is not a merged full model checkpoint. To use it, load the corresponding base model Qwen/Qwen3-8B and then apply this adapter.

Base Model

  • Base model: Qwen/Qwen3-8B
  • Adapter type: LoRA
  • Adapter rank: 64
  • Training framework: LLaMA-Factory
  • Exported checkpoint: checkpoint-4614

Intended Use

The adapter is intended for:

  • Lean 4 theorem proving
  • Formal mathematical reasoning
  • Generating proof plans and Lean proof completions

Files Included

  • adapter_model.safetensors
  • adapter_config.json
  • tokenizer.json
  • tokenizer_config.json
  • chat_template.jinja
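A quick way to sanity-check a downloaded adapter against this card is to read its adapter_config.json. The excerpt below is illustrative only: the rank (r = 64) and the base model id come from this card, while lora_alpha and lora_dropout are placeholder values, not the actual training configuration.

```python
import json

# Illustrative adapter_config.json excerpt. Only "r": 64 and the base model id
# are stated on this card; the alpha/dropout values below are placeholders.
example_config = """{
  "base_model_name_or_path": "Qwen/Qwen3-8B",
  "peft_type": "LORA",
  "r": 64,
  "lora_alpha": 128,
  "lora_dropout": 0.05
}"""

cfg = json.loads(example_config)
# Check that the adapter targets the expected base model and rank.
print(cfg["peft_type"], cfg["r"])
```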

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "Qwen/Qwen3-8B"
adapter_id = "DiffLean/Euler-Prover-8B"

# Load the tokenizer from the adapter repo so the bundled chat template is used.
tokenizer = AutoTokenizer.from_pretrained(adapter_id, trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)
# Attach the LoRA adapter to the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
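With the model assembled, generation follows the standard Transformers chat workflow. The sketch below is illustrative: the helper names, the instruction wording in `build_messages`, and the sampling settings are assumptions, not the card's published configuration.

```python
def build_messages(statement: str) -> list:
    """Wrap a Lean 4 statement in a chat message.

    The exact training instruction is not published on this card; the
    wording below is an assumption for illustration.
    """
    instruction = (
        "You are given a Lean 4 theorem statement. First write a short "
        "proof plan, then give the complete Lean 4 proof.\n\n" + statement
    )
    return [{"role": "user", "content": instruction}]


def generate_proof(model, tokenizer, statement: str, max_new_tokens: int = 2048) -> str:
    """Generate a proof attempt for a Lean 4 statement (sampling settings are assumptions)."""
    prompt = tokenizer.apply_chat_template(
        build_messages(statement), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(
        **inputs, max_new_tokens=max_new_tokens, do_sample=True,
        temperature=0.6, top_p=0.95,
    )
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```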

Prompt Format

This adapter was trained with a Lean-oriented instruction prompt that asks the model to:

  1. read a Lean 4 formal statement,
  2. produce a proof plan,
  3. produce the final Lean 4 proof.
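As an illustration of steps 1 and 3, the kind of Lean 4 statement/proof pair the model targets might look like the following. This is a toy example, not taken from the training data, and the actual prompt wording is not published here.

```lean
-- Illustrative goal: prove a trivial arithmetic identity over Nat.
theorem two_mul_eq_add (n : Nat) : 2 * n = n + n := by
  omega
```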

Notes

  • This repo contains adapter weights only.
  • The tokenizer files and chat template are included for convenience.
  • Performance depends on using the matching Qwen3 base model and on generation settings.
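If a standalone full checkpoint is preferred over loading the adapter at runtime, the LoRA weights can be folded into the base model with PEFT's `merge_and_unload`. A minimal sketch (the function name and output directory are arbitrary; imports are deferred so the snippet only needs transformers/peft when actually run):

```python
def merge_adapter(base_model_id: str, adapter_id: str, out_dir: str) -> None:
    """Merge the LoRA adapter into the base weights and save a full checkpoint."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(base_model_id, torch_dtype="auto")
    model = PeftModel.from_pretrained(base, adapter_id)
    merged = model.merge_and_unload()  # fold LoRA deltas into the base matrices
    merged.save_pretrained(out_dir)
    # Ship the adapter repo's tokenizer (and chat template) with the merged model.
    AutoTokenizer.from_pretrained(adapter_id).save_pretrained(out_dir)


# Example invocation (downloads ~16 GB of weights):
# merge_adapter("Qwen/Qwen3-8B", "DiffLean/Euler-Prover-8B", "euler-prover-8b-merged")
```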