---
title: SynLayers
emoji: "🧩"
colorFrom: blue
colorTo: purple
sdk: gradio
python_version: "3.10"
app_file: app.py
suggested_hardware: a100-large
startup_duration_timeout: 2h
short_description: "GPU Space for SynLayers real-world layer decomposition"
models:
- SynLayers/Bbox-caption-8b
pinned: false
---
# SynLayers Demo
This repository contains a unified real-world inference demo:
1. `demo/infer` runs the fixed-prompt VLM caption + bbox detector.
2. `infer/infer.py` runs SynLayers decomposition with `infer/infer.yaml`.
3. `demo/real_world_pipeline.py` stitches the two stages together for one uploaded image.
4. `demo/app.py` provides a Gradio interface that can be used locally or adapted for a Hugging Face Space.
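The two-stage flow above can be sketched as follows. This is a minimal illustration, not the actual SynLayers API: the function names, prompts, and return shapes are assumptions standing in for the real detector and decomposition calls in `demo/real_world_pipeline.py`.

```python
# Hypothetical sketch of how demo/real_world_pipeline.py might chain the
# two stages. The stage functions below are stand-ins, not the real API.

def detect_layers(image_path: str) -> list[dict]:
    """Stage 1 stand-in: fixed-prompt VLM caption + bbox detection."""
    # A real implementation would invoke the Bbox-caption-8b model here.
    return [{"caption": "foreground object", "bbox": [10, 10, 100, 100]}]

def decompose(image_path: str, detections: list[dict]) -> list[dict]:
    """Stage 2 stand-in: SynLayers decomposition driven by infer/infer.yaml."""
    # A real implementation would run infer/infer.py on the detections.
    return [{"layer": i, **d} for i, d in enumerate(detections)]

def run_pipeline(image_path: str) -> list[dict]:
    """Stitch the two stages together for a single uploaded image."""
    detections = detect_layers(image_path)
    return decompose(image_path, detections)

layers = run_pipeline("/path/to/image.png")
print(len(layers))  # one decomposed layer per detection
```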
## Full GPU Space
For a production Hugging Face Space, use GPU hardware and set:
```text
SYNLAYERS_MODEL_REPO=SynLayers/Bbox-caption-8b
```
This lets the Space:
- load the bbox detector from the root of your uploaded model repo
- load the SynLayers decomposition pipeline
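Inside the Space, the variable can be read with a fallback at startup. A minimal sketch, assuming the default mirrors the repo suggested above (the helper name is hypothetical, not part of the demo code):

```python
import os

def resolve_model_repo() -> str:
    """Return the configured model repo, falling back to the public default."""
    return os.environ.get("SYNLAYERS_MODEL_REPO", "SynLayers/Bbox-caption-8b")

# At startup the Space could then fetch the detector weights, e.g. with
# huggingface_hub:
#   from huggingface_hub import snapshot_download
#   local_dir = snapshot_download(resolve_model_repo())
repo_id = resolve_model_repo()
```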
## Local Run
From the `SynLayers` root:
```bash
python demo/app.py
```
Or run the unified CLI directly:
```bash
python demo/real_world_pipeline.py \
--image "/path/to/your/image.png"
```
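The CLI surface used above could be built with `argparse`. This is a sketch of what `demo/real_world_pipeline.py` might declare; only `--image` appears in the README, so any other flags the real script takes are not shown here.

```python
import argparse

# Hypothetical sketch of the unified CLI's argument parsing.
def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        description="SynLayers real-world layer decomposition demo"
    )
    parser.add_argument("--image", required=True,
                        help="Path to the input image to decompose")
    return parser

args = build_parser().parse_args(["--image", "/path/to/your/image.png"])
print(args.image)  # /path/to/your/image.png
```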