
---
title: Surrogate-1 ZeroGPU
emoji: 🚀
colorFrom: indigo
colorTo: purple
sdk: gradio
sdk_version: 4.44.0
app_file: app.py
pinned: true
license: apache-2.0
short_description: Surrogate-1 v1 LoRA on Qwen2.5-Coder-7B (ZeroGPU A10G)
suggested_hardware: zero-a10g
hf_oauth: false
models:
  - Qwen/Qwen2.5-Coder-7B-Instruct
  - axentx/surrogate-1-coder-7b-lora-v1
---

# Surrogate-1 ZeroGPU

A DevSecOps + SRE + coding agent: Qwen2.5-Coder-7B-Instruct with the Surrogate-1 v1 LoRA, served via Hugging Face ZeroGPU (A10G; expect 60-120 s per request).

## Endpoints

- Web UI: this Space (Gradio chat)
- HTTP API: `/api/predict` (auto-generated by Gradio)
- Programmatic: `gradio_client.Client("ashirato/surrogate-1-zero-gpu")`
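The `gradio_client` route above can be sketched roughly as follows. The `api_name="/chat"` value is an assumption about how the app names its endpoint; run `client.view_api()` against the live Space to confirm the real name.

```python
# Minimal sketch of calling this Space from Python. Assumes gradio_client
# is installed (pip install gradio_client) and the chat endpoint is
# exposed as api_name="/chat" -- check client.view_api() for the real name.
try:
    from gradio_client import Client
except ImportError:  # keep the sketch importable without the package
    Client = None


def ask(space: str, prompt: str, api_name: str = "/chat") -> str:
    """Send one blocking request; expect 60-120 s on a ZeroGPU A10G."""
    if Client is None:
        raise RuntimeError("gradio_client is not installed")
    return Client(space).predict(prompt, api_name=api_name)


# Usage (makes a real network call):
#   reply = ask("ashirato/surrogate-1-zero-gpu",
#               "Write a systemd unit for a nightly health-check timer.")
```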

## Why ZeroGPU

A PRO subscription includes 25K minutes/month of A10G time at no additional monthly cost. Each request gets a freshly allocated GPU, so cold starts add ~5-10 s, but there is zero idle cost. That makes it a good fit for low-traffic agentic loops (self-improve, constitutional, and validator-RLVR judge calls).
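Those figures pencil out to a comfortable request budget. A quick back-of-envelope check, using the quota and latency range stated above:

```python
# How many requests fit in the monthly ZeroGPU quota?
QUOTA_MINUTES = 25_000  # PRO A10G quota per month (figure from the text)


def requests_per_month(seconds_per_request: int,
                       quota_minutes: int = QUOTA_MINUTES) -> int:
    """Whole requests that fit in the quota at a given per-request latency."""
    return (quota_minutes * 60) // seconds_per_request


print(requests_per_month(60))   # best case:  25_000 requests/month
print(requests_per_month(120))  # worst case: 12_500 requests/month
```

Even at the slow end of the 60-120 s range, that is well beyond what a low-traffic agentic loop consumes.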

## Connected to axentx/surrogate-1

This Space serves inference only. The orchestration Space at axentx/surrogate-1 runs the cron loops, bulk-mirror harvest, and state DBs; those loops can call this endpoint for actual model output instead of falling back to the free-tier API ladder.
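A hedged sketch of what that wiring might look like: the orchestration loop prefers this endpoint and only walks down the free-tier ladder when it fails. Every name here is illustrative, not the actual axentx/surrogate-1 code.

```python
# Illustrative fallback chain: ZeroGPU endpoint first, free-tier ladder after.
from typing import Callable, Sequence

Backend = Callable[[str], str]  # prompt in, completion out


def generate(prompt: str, primary: Backend, ladder: Sequence[Backend]) -> str:
    """Try the primary (ZeroGPU) backend, then each free-tier backend in order."""
    errors: list[Exception] = []
    for backend in (primary, *ladder):
        try:
            return backend(prompt)
        except Exception as exc:  # a real loop would narrow this to timeouts etc.
            errors.append(exc)
    raise RuntimeError(f"all {len(errors)} backends failed")
```

The same shape works in reverse if you ever want the free tier first and the GPU quota held in reserve.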