---
title: Surrogate-1 ZeroGPU
emoji: 🚀
colorFrom: indigo
colorTo: purple
sdk: gradio
sdk_version: 4.44.0
app_file: app.py
pinned: true
license: apache-2.0
short_description: Surrogate-1 v1 LoRA on Qwen2.5-Coder-7B (ZeroGPU A10G)
suggested_hardware: zero-a10g
hf_oauth: false
models:
- Qwen/Qwen2.5-Coder-7B-Instruct
- axentx/surrogate-1-coder-7b-lora-v1
---
# Surrogate-1 ZeroGPU
DevSecOps + SRE + coding agent. Qwen2.5-Coder-7B-Instruct + Surrogate-1 v1
LoRA, served via HF ZeroGPU (A10G, 60-120s per request).
## Endpoints
- Web UI: this Space (Gradio chat)
- HTTP API: `/api/predict` (Gradio's auto-generated endpoint; JSON in/out, not the OpenAI schema)
- Use programmatically: `gradio_client.Client("ashirato/surrogate-1-zero-gpu")`
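For raw HTTP use, Gradio's auto-generated API takes a JSON body whose `data` list holds the positional inputs of the served function. A minimal sketch of building such a request follows; the Space URL format follows the standard `hf.space` convention, and the prompt text is purely illustrative:

```python
import json

# Assumed public URL for this Space (standard <owner>-<name>.hf.space form).
SPACE_URL = "https://ashirato-surrogate-1-zero-gpu.hf.space"

# Gradio's auto-generated API wraps positional inputs in a "data" list;
# here the chat function takes a single prompt string.
payload = {"data": ["Summarize this nginx error log entry."]}
body = json.dumps(payload)

# POSTing `body` to f"{SPACE_URL}/api/predict" returns a JSON object whose
# "data" list holds the model reply; budget 60-120s per request on ZeroGPU.
```

In practice `gradio_client.Client("ashirato/surrogate-1-zero-gpu")` handles this envelope for you, including queueing and session handling.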
## Why ZeroGPU
An HF PRO subscription includes 25K minutes/mo of A10G time via ZeroGPU at no
per-request charge. Each request gets a fresh GPU, so cold starts add ~5-10s,
but there is zero idle cost. A good fit for low-traffic agentic loops
(self-improve, constitutional, validator-RLVR judge calls).
## Connected to axentx/surrogate-1
This Space serves inference only. The orchestration Space at
`axentx/surrogate-1` runs the cron loops, bulk-mirror harvest, and state DBs;
those loops can call this endpoint for real model output instead of falling
back to the free-tier API ladder.