---
title: SynLayers
emoji: "🧩"
colorFrom: blue
colorTo: purple
sdk: gradio
python_version: "3.10"
app_file: app.py
suggested_hardware: a100-large
startup_duration_timeout: 2h
short_description: "GPU Space for SynLayers real-world layer decomposition"
models:
- SynLayers/Bbox-caption-8b
pinned: false
---
# SynLayers Demo
This folder now contains a unified real-world inference demo:
1. `demo/infer` runs the fixed-prompt VLM caption + bbox detector.
2. `infer/infer.py` runs SynLayers decomposition with `infer/infer.yaml`.
3. `demo/real_world_pipeline.py` stitches the two stages together for one uploaded image.
4. `demo/app.py` provides a Gradio interface that can be used locally or adapted for a Hugging Face Space.
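The two-stage flow above can be sketched as a small wrapper, roughly what `demo/real_world_pipeline.py` does. This is a minimal illustration only: the function names, the `DetectedLayer` fields, and the returned file names are assumptions, and the real stages invoke the Bbox-caption-8b model and the SynLayers decomposition rather than these placeholders.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class DetectedLayer:
    caption: str
    bbox: Tuple[int, int, int, int]  # (x0, y0, x1, y1) in pixels


def detect_layers(image_path: str) -> List[DetectedLayer]:
    """Stage 1 (hypothetical stub): fixed-prompt VLM caption + bbox detection."""
    # Placeholder result; the real stage runs the Bbox-caption-8b detector.
    return [DetectedLayer(caption="foreground object", bbox=(10, 10, 200, 200))]


def decompose(image_path: str, layers: List[DetectedLayer]) -> List[str]:
    """Stage 2 (hypothetical stub): SynLayers decomposition per detected layer."""
    # Placeholder output paths; the real stage writes decomposed layer images.
    return [f"layer_{i}.png" for i, _ in enumerate(layers)]


def run_pipeline(image_path: str) -> List[str]:
    """Stitch detection and decomposition for one uploaded image."""
    layers = detect_layers(image_path)
    return decompose(image_path, layers)
```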
## Full GPU Space
For a production Hugging Face Space, use GPU hardware and set:
```text
SYNLAYERS_MODEL_REPO=SynLayers/Bbox-caption-8b
```
This lets the Space:
- load the bbox detector from the root of your uploaded model repo
- load the SynLayers decomposition pipeline
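At startup, the Space can read that environment variable roughly as follows. This is a sketch, not the actual `app.py` code; the fallback default shown here is an assumption.

```python
import os

# Model repo for the bbox detector. Falling back to the public repo when
# SYNLAYERS_MODEL_REPO is unset is an assumption of this sketch.
MODEL_REPO = os.environ.get("SYNLAYERS_MODEL_REPO", "SynLayers/Bbox-caption-8b")
```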
## Local Run
From the `SynLayers` root:
```bash
python demo/app.py
```
Or run the unified CLI directly:
```bash
python demo/real_world_pipeline.py \
    --image "/path/to/your/image.png"
```
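The CLI shown above can be wired up with a few lines of `argparse`; this is a minimal sketch that mirrors only the `--image` flag (the real script may accept additional options).

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Only --image is shown in the README example; it is required here.
    parser = argparse.ArgumentParser(description="SynLayers real-world pipeline")
    parser.add_argument("--image", required=True, help="Path to the input image")
    return parser


if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args.image)
```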