SynLayers committed on
Commit 5737d45 · verified · 1 Parent(s): 5ed5a04

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +14 -101
README.md CHANGED
@@ -1,3 +1,17 @@
  # SynLayers Demo
 
  This folder now contains a unified real-world inference demo:
@@ -6,7 +20,6 @@ This folder now contains a unified real-world inference demo:
  2. `infer/infer.py` runs SynLayers decomposition with `infer/infer.yaml`.
  3. `demo/real_world_pipeline.py` stitches the two stages together for one uploaded image.
  4. `demo/app.py` provides a Gradio interface that can be used locally or adapted for a Hugging Face Space.
- 5. `demo/upload_used_bundle_to_hf.py` uploads only the Python/config files actually used by the demo, plus the selected runtime assets.
 
  ## Local Run
 
@@ -23,104 +36,4 @@ python demo/real_world_pipeline.py \
   --image "/path/to/your/image.png"
  ```
 
- ## Default Models
-
- The current local defaults are:
-
- - bbox-caption model:
- `/project/llmsvgen/share/data/kmw_layered_checkpoint/Bbox-caption-8b`
- - SynLayers base checkpoints:
- `/project/llmsvgen/share/data/kmw_layered_checkpoint/SynLayers_checkpoints`
- - SynLayers decomposition checkpoint:
- `/project/llmsvgen/share/data/kmw_layered_checkpoint/SynLayers_ckpt/step_120000`
- - base config:
- `infer/infer.yaml`
-
- ## Hugging Face Space Notes
-
- The Gradio app is ready for a Hugging Face Space.
- After you upload the model/runtime bundle to `SynLayers/Bbox-caption-8b`, the Space can download
- those uploaded assets automatically and use them directly.
-
- The app supports overriding the local defaults with environment variables:
-
- - `SYNLAYERS_MODEL_REPO`
- - `SYNLAYERS_BBOX_MODEL`
- - `SYNLAYERS_BASE_MODEL`
- - `SYNLAYERS_ADAPTER_MODEL`
- - `SYNLAYERS_TRANSP_VAE`
- - `SYNLAYERS_PRETRAINED_LORA`
- - `SYNLAYERS_ARTPLUS_LORA`
- - `SYNLAYERS_DECOMP_CKPT_ROOT`
- - `SYNLAYERS_REAL_CONFIG`
- - `SYNLAYERS_DEMO_WORK_DIR`
- - `SYNLAYERS_EXAMPLE_DIR`
-
- In practice, for a real Hugging Face Space deployment you will want to:
-
- 1. upload the required model/runtime assets to `SynLayers/Bbox-caption-8b`
- 2. create a Gradio Space repo, for example `SynLayers/synlayers-real-world-demo`
- 3. upload the Space scaffold with `demo/publish_space.py`
- 4. set `SYNLAYERS_MODEL_REPO=SynLayers/Bbox-caption-8b` in the Space settings
- 5. launch `app.py` as the Space entrypoint
-
- ### Public interface flow
-
- 1. Upload the model/runtime bundle:
-
- ```bash
- python demo/upload_used_bundle_to_hf.py \
- --repo-id SynLayers/Bbox-caption-8b
- ```
-
- 2. Create and upload the Space scaffold:
-
- ```bash
- python demo/publish_space.py \
- --repo-id SynLayers/synlayers-real-world-demo
- ```
-
- 3. In the Hugging Face Space settings, add:
-
- ```text
- SYNLAYERS_MODEL_REPO=SynLayers/Bbox-caption-8b
- ```
-
- Then the public Space interface will:
-
- - accept a user image upload
- - load the bbox-caption model from the uploaded model repo
- - download the SynLayers decomposition assets from that same repo
- - run the one-step decomposition pipeline
- - return the bbox visualization, merged output, per-layer outputs, and a downloadable archive
-
- ## Upload Bundle
-
- To upload the minimal used demo bundle to a Hugging Face repo:
-
- ```bash
- python demo/upload_used_bundle_to_hf.py \
- --repo-id SynLayers/Bbox-caption-8b
- ```
-
- This uploads:
-
- - the used `demo`, `infer`, `models`, and `tools` Python files
- - `demo/upload_used_bundle_to_hf.py`
- - `demo/publish_space.py`
- - `infer/infer.yaml`
- - `environment.yml`
- - `ckpt/trans_vae/0008000.pt`
- - `ckpt/pre_trained_LoRA/pytorch_lora_weights.safetensors`
- - `ckpt/prism_ft_LoRA/pytorch_lora_weights.safetensors`
- - `SynLayers_ckpt/step_120000`
- - `SynLayers_checkpoints/FLUX.1-dev`
- - `SynLayers_checkpoints/FLUX.1-dev-Controlnet-Inpainting-Alpha`
-
- ## Fixed Prompt
-
- The bbox detector always uses the fixed prompt defined in:
-
- - `demo/infer/run_caption_bbox_infer.py`
 
- No extra user text prompt is required.
 
+ ---
+ title: SynLayers
+ emoji: "🧩"
+ colorFrom: blue
+ colorTo: purple
+ sdk: gradio
+ python_version: "3.10"
+ app_file: app.py
+ suggested_hardware: a100-large
+ models:
+ - SynLayers/Bbox-caption-8b
+ pinned: false
+ ---
+
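The `SYNLAYERS_*` environment variables listed in the removed Space notes follow a common override pattern: each variable, when set (for example in the Space settings), replaces a local default path. A minimal sketch of that scheme, assuming a simple lookup with fallback — the `resolve` helper and the `DEFAULTS` table are hypothetical illustrations, not the actual code in `demo/app.py`:

```python
import os

# Hypothetical defaults table; the real default paths live in demo/app.py.
# Only the variable names come from the README's environment-variable list.
DEFAULTS = {
    "SYNLAYERS_MODEL_REPO": "SynLayers/Bbox-caption-8b",
    "SYNLAYERS_REAL_CONFIG": "infer/infer.yaml",
}

def resolve(name: str) -> str:
    """Return the environment override for `name` if set, else the default."""
    return os.environ.get(name, DEFAULTS[name])

# An override set in the Space settings wins over the local default.
os.environ["SYNLAYERS_REAL_CONFIG"] = "custom/infer.yaml"
print(resolve("SYNLAYERS_MODEL_REPO"))   # "SynLayers/Bbox-caption-8b"
print(resolve("SYNLAYERS_REAL_CONFIG"))  # "custom/infer.yaml"
```

This keeps the app runnable with no configuration locally while letting a deployed Space point every path at the downloaded `SynLayers/Bbox-caption-8b` assets.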