How to use from Unsloth Studio
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for ronitraj/quantumscribe to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for ronitraj/quantumscribe to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for ronitraj/quantumscribe to start chatting
Load model with FastModel
pip install unsloth
from unsloth import FastModel
model, tokenizer = FastModel.from_pretrained(
    model_name="ronitraj/quantumscribe",
    max_seq_length=2048,
)

QuantumScribe (GRPO LoRA)

A LoRA adapter fine-tuned with GRPO for logical quantum error correction, built on top of the base model unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit.

Adapter

  • LoRA r=16, lora_alpha=32, lora_dropout=0.1
  • Target modules: q_proj, k_proj, v_proj, o_proj (PEFT 0.18.1); a matching LoraConfig sketch follows below
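
These values correspond roughly to the following PEFT configuration (a minimal sketch; only the numbers and module names listed above come from the card, the rest is assumed):

from peft import LoraConfig

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",  # assumed: causal-LM adapter on a chat model
)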

Training
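
The card leaves the training configuration unspecified. Purely as an illustrative sketch (not the project's actual setup), a GRPO run that reuses the lora_config above could be wired up with TRL along these lines; the dataset, reward function, and every GRPOConfig value here are placeholders:

from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder prompts; the project's real training data is not part of this card.
train_ds = Dataset.from_list([
    {"prompt": "Propose a correction for syndrome 0110 on a distance-3 repetition code."},
])

def correction_reward(completions, **kwargs):
    # Placeholder reward: 1.0 when the completion names a Pauli correction, else 0.0.
    return [1.0 if ("X" in c or "Z" in c) else 0.0 for c in completions]

trainer = GRPOTrainer(
    model="unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit",
    reward_funcs=correction_reward,
    train_dataset=train_ds,
    peft_config=lora_config,  # LoraConfig from the sketch above
    args=GRPOConfig(output_dir="quantumscribe-grpo", num_generations=4),
)
trainer.train()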

Eval (from project data/eval_grpo.json)

  • Logical correction rate: ~0.96 on the recorded run
  • pymatching_beat: reported as 0 on the evaluated split; make sure the narrative and the metric definition (continuous rate vs. 0/1 threshold) agree with what the eval harness and README actually record, as in the sketch below
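
To make the distinction concrete, the two reporting styles differ roughly as follows (per-sample results here are made up; substitute whatever your harness records):

# Hypothetical per-sample outcomes: did the model fix the logical error,
# and did it beat the PyMatching baseline on that sample?
model_success   = [True, True, False, True, True]
beat_pymatching = [False, True, False, False, True]

# Continuous view: fraction of samples where each event occurred.
logical_correction_rate = sum(model_success) / len(model_success)      # 0.8
pymatching_beat_rate    = sum(beat_pymatching) / len(beat_pymatching)  # 0.4

# Threshold view: collapse the comparison to a single 0/1 flag.
pymatching_beat_flag = int(pymatching_beat_rate > 0.5)                 # 0

print(logical_correction_rate, pymatching_beat_rate, pymatching_beat_flag)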

Load

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit"
adapter_id = "ronitraj/quantumscribe"
# Tokenizer is loaded from the adapter repo
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

# Load the 4-bit base model, then attach the LoRA adapter
model = AutoModelForCausalLM.from_pretrained(
    base_id, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(model, adapter_id)
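
Once the adapter is attached, the model behaves like a standard Qwen2.5 chat model; a quick smoke test might look like this (the prompt is only an illustration):

messages = [{"role": "user",
             "content": "Propose a correction for syndrome 0110 on a distance-3 repetition code."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))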