SynLayers committed
Commit a3fa0d8 · verified · 1 Parent(s): fecfa50

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +126 -13
README.md CHANGED
@@ -1,13 +1,126 @@
- ---
- title: Synlayers
- emoji: 🚀
- colorFrom: red
- colorTo: pink
- sdk: gradio
- sdk_version: 6.14.0
- python_version: '3.13'
- app_file: app.py
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # SynLayers Demo
+
+ This folder now contains a unified real-world inference demo:
+
+ 1. `demo/infer` runs the fixed-prompt VLM caption + bbox detector.
+ 2. `infer/infer.py` runs SynLayers decomposition with `infer/infer.yaml`.
+ 3. `demo/real_world_pipeline.py` stitches the two stages together for one uploaded image.
+ 4. `demo/app.py` provides a Gradio interface that can be used locally or adapted for a Hugging Face Space.
+ 5. `demo/upload_used_bundle_to_hf.py` uploads only the Python/config files actually used by the demo, plus the selected runtime assets.
+
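The two-stage flow above can be sketched roughly as follows. This is a minimal illustration only: the function names, return shapes, and the intermediate `detections.json` layout are assumptions for exposition, not the actual `demo/real_world_pipeline.py` API.

```python
import json
from pathlib import Path

# Illustrative stand-ins for the two stages; the real demo loads the VLM and
# the SynLayers decomposition checkpoints instead of returning placeholders.

def run_bbox_caption(image_path: str) -> dict:
    """Stage 1 (stand-in): a fixed-prompt VLM returns a caption and boxes."""
    return {"caption": "placeholder caption", "boxes": [[10, 10, 100, 100]]}

def run_decomposition(image_path: str, detections: dict, work_dir: Path) -> list:
    """Stage 2 (stand-in): decomposition emits one output image per layer."""
    return [str(work_dir / f"layer_{i}.png") for i in range(len(detections["boxes"]))]

def real_world_pipeline(image_path: str, work_dir: str = "demo_work") -> dict:
    """Stitch the two stages together for a single uploaded image."""
    work = Path(work_dir)
    work.mkdir(parents=True, exist_ok=True)
    detections = run_bbox_caption(image_path)                  # stage 1: caption + bboxes
    (work / "detections.json").write_text(json.dumps(detections))
    layers = run_decomposition(image_path, detections, work)   # stage 2: per-layer outputs
    return {"detections": detections, "layers": layers}
```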
+ ## Local Run
+
+ From the `SynLayers` root:
+
+ ```bash
+ python demo/app.py
+ ```
+
+ Or run the unified CLI directly:
+
+ ```bash
+ python demo/real_world_pipeline.py \
+     --image "/path/to/your/image.png"
+ ```
+
+ ## Default Models
+
+ The current local defaults are:
+
+ - bbox-caption model:
+   `/project/llmsvgen/share/data/kmw_layered_checkpoint/Bbox-caption-8b`
+ - SynLayers base checkpoints:
+   `/project/llmsvgen/share/data/kmw_layered_checkpoint/SynLayers_checkpoints`
+ - SynLayers decomposition checkpoint:
+   `/project/llmsvgen/share/data/kmw_layered_checkpoint/SynLayers_ckpt/step_120000`
+ - base config:
+   `infer/infer.yaml`
+
+ ## Hugging Face Space Notes
+
+ The Gradio app is ready for a Hugging Face Space. After you upload the
+ model/runtime bundle to `SynLayers/Bbox-caption-8b`, the Space can download
+ those uploaded assets automatically and use them directly.
+
+ The app supports overriding the local defaults with environment variables:
+
+ - `SYNLAYERS_MODEL_REPO`
+ - `SYNLAYERS_BBOX_MODEL`
+ - `SYNLAYERS_BASE_MODEL`
+ - `SYNLAYERS_ADAPTER_MODEL`
+ - `SYNLAYERS_TRANSP_VAE`
+ - `SYNLAYERS_PRETRAINED_LORA`
+ - `SYNLAYERS_ARTPLUS_LORA`
+ - `SYNLAYERS_DECOMP_CKPT_ROOT`
+ - `SYNLAYERS_REAL_CONFIG`
+ - `SYNLAYERS_DEMO_WORK_DIR`
+ - `SYNLAYERS_EXAMPLE_DIR`
+
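One plausible way the app could resolve such overrides is an env-var-wins lookup with a fallback to the local defaults listed above. The helper below is a sketch under that assumption, not the actual `demo/app.py` code:

```python
import os

# Assumed local defaults, mirroring the "Default Models" section above.
_DEFAULTS = {
    "SYNLAYERS_BBOX_MODEL": "/project/llmsvgen/share/data/kmw_layered_checkpoint/Bbox-caption-8b",
    "SYNLAYERS_DECOMP_CKPT_ROOT": "/project/llmsvgen/share/data/kmw_layered_checkpoint/SynLayers_ckpt/step_120000",
    "SYNLAYERS_REAL_CONFIG": "infer/infer.yaml",
}

def resolve(name: str) -> str:
    """Environment variable wins; otherwise fall back to the local default."""
    return os.environ.get(name, _DEFAULTS.get(name, ""))
```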
+ In practice, for a real Hugging Face Space deployment you will want to:
+
+ 1. upload the required model/runtime assets to `SynLayers/Bbox-caption-8b`
+ 2. create a Gradio Space repo, for example `SynLayers/synlayers-real-world-demo`
+ 3. upload the Space scaffold with `demo/publish_space.py`
+ 4. set `SYNLAYERS_MODEL_REPO=SynLayers/Bbox-caption-8b` in the Space settings
+ 5. launch `app.py` as the Space entrypoint
+
+ ### Public interface flow
+
+ 1. Upload the model/runtime bundle:
+
+    ```bash
+    python demo/upload_used_bundle_to_hf.py \
+        --repo-id SynLayers/Bbox-caption-8b
+    ```
+
+ 2. Create and upload the Space scaffold:
+
+    ```bash
+    python demo/publish_space.py \
+        --repo-id SynLayers/synlayers-real-world-demo
+    ```
+
+ 3. In the Hugging Face Space settings, add:
+
+    ```text
+    SYNLAYERS_MODEL_REPO=SynLayers/Bbox-caption-8b
+    ```
+
+ The public Space interface will then:
+
+ - accept a user image upload
+ - load the bbox-caption model from the uploaded model repo
+ - download the SynLayers decomposition assets from that same repo
+ - run the one-step decomposition pipeline
+ - return the bbox visualization, merged output, per-layer outputs, and a downloadable archive
+
+ ## Upload Bundle
+
+ To upload the minimal used demo bundle to a Hugging Face repo:
+
+ ```bash
+ python demo/upload_used_bundle_to_hf.py \
+     --repo-id SynLayers/Bbox-caption-8b
+ ```
+
+ This uploads:
+
+ - the used `demo`, `infer`, `models`, and `tools` Python files
+ - `demo/upload_used_bundle_to_hf.py`
+ - `demo/publish_space.py`
+ - `infer/infer.yaml`
+ - `environment.yml`
+ - `ckpt/trans_vae/0008000.pt`
+ - `ckpt/pre_trained_LoRA/pytorch_lora_weights.safetensors`
+ - `ckpt/prism_ft_LoRA/pytorch_lora_weights.safetensors`
+ - `SynLayers_ckpt/step_120000`
+ - `SynLayers_checkpoints/FLUX.1-dev`
+ - `SynLayers_checkpoints/FLUX.1-dev-Controlnet-Inpainting-Alpha`
+
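A minimal sketch of how that "only the used files" selection might work, using stdlib pattern matching. The include patterns below are assumptions derived from the upload list above; the real script presumably pushes the matched files to the Hub via `huggingface_hub` rather than just filtering paths:

```python
import fnmatch

# Assumed include patterns, derived from the upload list above.
BUNDLE_PATTERNS = [
    "demo/*.py",
    "infer/*.py",
    "infer/infer.yaml",
    "environment.yml",
    "ckpt/*/*",
    "SynLayers_ckpt/step_120000/*",
    "SynLayers_checkpoints/*/*",
]

def select_bundle(paths):
    """Keep only files matching one of the bundle include patterns."""
    return [p for p in paths
            if any(fnmatch.fnmatch(p, pat) for pat in BUNDLE_PATTERNS)]
```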
+ ## Fixed Prompt
+
+ The bbox detector always uses the fixed prompt defined in:
+
+ - `demo/infer/run_caption_bbox_infer.py`
+
+ No extra user text prompt is required.
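As an illustration of what "fixed prompt" means here: every image is paired with the same module-level prompt, so the UI exposes no text box. The constant and helper below are placeholders; the real prompt wording lives only in `demo/infer/run_caption_bbox_infer.py` and is not reproduced here.

```python
# Placeholder prompt text; the actual wording is defined in
# demo/infer/run_caption_bbox_infer.py and is NOT reproduced here.
FIXED_PROMPT = "<fixed caption + bbox instruction>"

def build_request(image_path: str) -> dict:
    """Pair an uploaded image with the one fixed prompt (hypothetical helper)."""
    # The same prompt is used for every image, so no user text input is needed.
    return {"image": image_path, "prompt": FIXED_PROMPT}
```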