luh1124 committed
Commit 1c6f0e7 · 1 Parent(s): 69b0740

feat(space): HyShape-only ZeroGPU diagnostic app as Space entry


- Add app_hyshape.py (Hunyuan mesh only, early GPU callback logs)
- Point README app_file at app_hyshape.py; document switch back to app.py
- Add plan doc and AST architecture test

Made-with: Cursor

README.md CHANGED
@@ -6,7 +6,7 @@ colorTo: indigo
 sdk: gradio
 sdk_version: 6.9.0
 python_version: "3.10"
-app_file: app.py
+app_file: app_hyshape.py
 pinned: false
 license: apache-2.0
 short_description: "Relightable 3D from one image: SLAT, neural renderer, HDRI"
@@ -48,7 +48,8 @@ This repository combines:
 
 ## ZeroGPU Runtime Notes
 
-- The Space now keeps **page-load image defaults** and **HDRI preview** on lightweight CPU paths so the first page visit does not spend the first ZeroGPU allocation on model initialization.
+- The Space is temporarily pointed at **`app_hyshape.py`** (Hunyuan geometry only) to isolate ZeroGPU init issues. Restore **`app_file: app.py`** in the YAML header above when you want the full NeAR UI again.
+- The full `app.py` Space keeps **page-load image defaults** and **HDRI preview** on lightweight CPU paths so the first page visit does not spend the first ZeroGPU allocation on model initialization.
 - Runtime loading is split by responsibility: **Hunyuan3D geometry** is loaded only for mesh generation, **NeAR relighting** is loaded only for SLaT/render/export, and **gsplat warmup** is delayed until the first real render.
 - Binary wheels and mirrored auxiliary assets are stored separately:
   - **`luh0502/near-wheels`**: prebuilt wheels such as `nvdiffrast` and optional future `gsplat` wheels
@@ -100,7 +101,8 @@ Compared with a standard image-to-3D pipeline, NeAR focuses on:
 Key files and directories:
 
 - `example.py` — minimal end-to-end inference example.
-- `app.py` — Gradio / Hugging Face Space entrypoint.
+- `app_hyshape.py` — current Hugging Face Space entrypoint (HyShape-only ZeroGPU diagnostic).
+- `app.py` — full NeAR Gradio app; set `app_file: app.py` in this README to switch back.
 - `setup.sh` — environment setup helper.
 - `checkpoints/` — local pipeline configuration and model checkpoints.
 - `trellis/pipelines/near_image_to_relightable_3d.py` — main NeAR inference pipeline.
app_hyshape.py ADDED
@@ -0,0 +1,373 @@
import os
import sys
import shutil
import threading
import time
from pathlib import Path
from typing import Any, Optional

import gradio as gr
import numpy as np
import torch
from PIL import Image

# transformers/huggingface_hub authenticate gated repos via HF_TOKEN (or HUGGING_FACE_HUB_TOKEN).
if not os.environ.get("HF_TOKEN") and not os.environ.get("HUGGING_FACE_HUB_TOKEN"):
    _hub_tok = (os.environ.get("near") or os.environ.get("NEAR") or "").strip()
    if _hub_tok:
        os.environ["HF_TOKEN"] = _hub_tok
        print(
            "[HyShape] HF_TOKEN unset; using Space secret 'near' as HF_TOKEN.",
            flush=True,
        )

# ZeroGPU variables must be clamped before importing spaces.
try:
    _raw_zerogpu_cap = int(os.environ.get("NEAR_ZEROGPU_HF_CEILING_S", "90"))
except ValueError:
    _raw_zerogpu_cap = 90
_ZEROGPU_ENV_CAP_S = min(max(15, _raw_zerogpu_cap), 120)
for _env_key in ("NEAR_ZEROGPU_MAX_SECONDS", "NEAR_ZEROGPU_DURATION_CAP"):
    if _env_key in os.environ:
        try:
            if int(os.environ[_env_key]) > _ZEROGPU_ENV_CAP_S:
                os.environ[_env_key] = str(_ZEROGPU_ENV_CAP_S)
        except ValueError:
            pass
print(
    f"[HyShape] ZeroGPU cap set to {_ZEROGPU_ENV_CAP_S}s. "
    "Callbacks use plain spaces.GPU.",
    flush=True,
)

try:
    import spaces
except ImportError:
    spaces = None

sys.path.insert(0, "./hy3dshape")
os.environ.setdefault("ATTN_BACKEND", "xformers")
os.environ.setdefault("SPCONV_ALGO", "native")
os.environ.setdefault("TORCH_CUDA_ARCH_LIST", "7.5;8.0;8.6;8.9;9.0")

GPU = spaces.GPU if spaces is not None else (lambda f: f)

APP_DIR = Path(__file__).resolve().parent
CACHE_DIR = APP_DIR / "tmp_gradio_hyshape"
CACHE_DIR.mkdir(exist_ok=True)
DEFAULT_IMAGE = APP_DIR / "assets/example_image/T.png"
DEFAULT_PORT = 7860

_SESSION_LAST_TOUCH: dict[str, float] = {}
_SESSION_TOUCH_LOCK = threading.Lock()
_MODEL_LOCK = threading.Lock()
_LIGHT_PREPROCESS_LOCK = threading.Lock()
_LIGHT_PREPROCESSOR: Any | None = None
GEOMETRY_PIPELINE: Any | None = None


def _path_is_git_lfs_pointer(path: Path) -> bool:
    try:
        if not path.is_file():
            return False
        if path.stat().st_size > 512:
            return False
        head = path.read_bytes()[:120]
        return head.startswith(b"version https://git-lfs.github.com/spec/v1")
    except OSError:
        return False


def _session_touch(session_id: str) -> None:
    with _SESSION_TOUCH_LOCK:
        _SESSION_LAST_TOUCH[session_id] = time.time()


def _session_forget(session_id: str) -> None:
    with _SESSION_TOUCH_LOCK:
        _SESSION_LAST_TOUCH.pop(session_id, None)


def ensure_session_dir(req: Optional[gr.Request]) -> Path:
    session_id = getattr(req, "session_hash", None) or "shared"
    session_dir = CACHE_DIR / str(session_id)
    session_dir.mkdir(parents=True, exist_ok=True)
    _session_touch(str(session_id))
    return session_dir


def clear_session_dir(req: Optional[gr.Request]) -> str:
    session_dir = ensure_session_dir(req)
    shutil.rmtree(session_dir, ignore_errors=True)
    session_dir.mkdir(parents=True, exist_ok=True)
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    return "HyShape cache cleared."


def end_session(req: gr.Request) -> None:
    session_id = getattr(req, "session_hash", None) or "shared"
    shutil.rmtree(CACHE_DIR / str(session_id), ignore_errors=True)
    _session_forget(str(session_id))


def _runtime_device() -> str:
    return "cuda" if torch.cuda.is_available() else "cpu"


def _ensure_rgba(image: Image.Image) -> Image.Image:
    if image.mode == "RGBA":
        return image
    if image.mode == "RGB":
        r, g, b = image.split()
        a = Image.new("L", image.size, 255)
        return Image.merge("RGBA", (r, g, b, a))
    return image.convert("RGBA")


def _flatten_rgba_on_matte(image: Image.Image, matte_rgb: tuple[float, float, float]) -> Image.Image:
    rgba = _ensure_rgba(image)
    matte = tuple(int(round(channel * 255)) for channel in matte_rgb)
    background = Image.new("RGBA", rgba.size, matte + (255,))
    return Image.alpha_composite(background, rgba).convert("RGB")


def _get_light_image_preprocessor():
    global _LIGHT_PREPROCESSOR
    if _LIGHT_PREPROCESSOR is not None:
        return _LIGHT_PREPROCESSOR
    with _LIGHT_PREPROCESS_LOCK:
        if _LIGHT_PREPROCESSOR is None:
            from hy3dshape.rembg import BackgroundRemover  # pyright: ignore[reportMissingImports]

            _LIGHT_PREPROCESSOR = BackgroundRemover()
            print("[HyShape] Background remover ready.", flush=True)
    return _LIGHT_PREPROCESSOR


def _preprocess_image_rgba_light(input_image: Image.Image) -> Image.Image:
    image = _ensure_rgba(input_image)
    has_alpha = False
    if image.mode == "RGBA":
        alpha = np.array(image)[:, :, 3]
        has_alpha = not np.all(alpha == 255)

    if has_alpha:
        output = image
    else:
        rgb = image.convert("RGB")
        max_size = max(rgb.size)
        scale = min(1, 1024 / max_size)
        if scale < 1:
            rgb = rgb.resize(
                (int(rgb.width * scale), int(rgb.height * scale)),
                Image.Resampling.LANCZOS,
            )
        output = _get_light_image_preprocessor()(rgb)

    if output.mode != "RGBA":
        output = output.convert("RGBA")
    output_np = np.array(output)
    alpha = output_np[:, :, 3]
    bbox = np.argwhere(alpha > 0.8 * 255)
    if bbox.size == 0:
        return output.resize((518, 518), Image.Resampling.LANCZOS).convert("RGBA")

    crop_bbox = (
        int(np.min(bbox[:, 1])),
        int(np.min(bbox[:, 0])),
        int(np.max(bbox[:, 1])),
        int(np.max(bbox[:, 0])),
    )
    center = ((crop_bbox[0] + crop_bbox[2]) / 2, (crop_bbox[1] + crop_bbox[3]) / 2)
    size = max(crop_bbox[2] - crop_bbox[0], crop_bbox[3] - crop_bbox[1])
    size = int(size * 1.2)
    padded_bbox = (
        center[0] - size // 2,
        center[1] - size // 2,
        center[0] + size // 2,
        center[1] + size // 2,
    )
    return output.crop(padded_bbox).resize((518, 518), Image.Resampling.LANCZOS).convert("RGBA")


def preprocess_image_only(image_input: Optional[Image.Image]):
    if image_input is None:
        return None, None, "Upload an input image."
    started_at = time.time()
    rgba = _preprocess_image_rgba_light(image_input)
    elapsed = time.time() - started_at
    print(f"[HyShape] lightweight preprocess done in {elapsed:.1f}s", flush=True)
    return rgba, rgba, f"Image preprocessed in {elapsed:.1f}s."


def ensure_geometry_pipeline() -> Any:
    global GEOMETRY_PIPELINE
    if GEOMETRY_PIPELINE is not None:
        return GEOMETRY_PIPELINE

    with _MODEL_LOCK:
        if GEOMETRY_PIPELINE is not None:
            return GEOMETRY_PIPELINE

        from hy3dshape.pipelines import Hunyuan3DDiTFlowMatchingPipeline  # pyright: ignore[reportMissingImports]

        device = _runtime_device()
        hy_id = os.environ.get("NEAR_HUNYUAN_PRETRAINED", "tencent/Hunyuan3D-2.1")
        started_at = time.time()
        print(f"[HyShape] Loading geometry pipeline from {hy_id!r}...", flush=True)
        geometry_pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained(hy_id, device="cpu")
        print(f"[HyShape] from_pretrained done in {time.time() - started_at:.1f}s", flush=True)
        move_started_at = time.time()
        geometry_pipeline.to(device)
        print(f"[HyShape] moved geometry pipeline to {device} in {time.time() - move_started_at:.1f}s", flush=True)
        GEOMETRY_PIPELINE = geometry_pipeline
        print(f"[HyShape] geometry pipeline ready in {time.time() - started_at:.1f}s total", flush=True)
    return GEOMETRY_PIPELINE


@GPU
@torch.inference_mode()
def generate_mesh(
    image_input: Optional[Image.Image],
    req: gr.Request,
    progress=gr.Progress(track_tqdm=True),
):
    started_at = time.time()
    print(
        "[HyShape] generate_mesh callback entered "
        f"(cuda_available={torch.cuda.is_available()}, session={getattr(req, 'session_hash', 'shared')})",
        flush=True,
    )
    progress(0.05, desc="Entered GPU callback")

    if image_input is None:
        raise gr.Error("Please upload an input image.")

    session_dir = ensure_session_dir(req)
    rgba = _ensure_rgba(image_input)
    if rgba.size != (518, 518):
        rgba = _preprocess_image_rgba_light(rgba)

    rgba_path = session_dir / "input_preprocessed_rgba.png"
    rgba.save(rgba_path)
    mesh_rgb = _flatten_rgba_on_matte(rgba, (1.0, 1.0, 1.0))
    mesh_rgb.save(session_dir / "input_processed.png")

    progress(0.2, desc="Loading Hunyuan geometry")
    geometry_pipeline = ensure_geometry_pipeline()

    progress(0.6, desc="Generating geometry")
    mesh_started_at = time.time()
    mesh = geometry_pipeline(image=mesh_rgb)[0]
    print(f"[HyShape] geometry generation done in {time.time() - mesh_started_at:.1f}s", flush=True)

    mesh_path = session_dir / "hyshape_mesh.glb"
    mesh.export(mesh_path)
    total_elapsed = time.time() - started_at
    print(f"[HyShape] generate_mesh total: {total_elapsed:.1f}s", flush=True)
    return rgba, str(mesh_path), f"HyShape mesh ready in {total_elapsed:.1f}s."


def build_app() -> gr.Blocks:
    example_images = [
        [str(path)]
        for path in sorted((APP_DIR / "assets/example_image").glob("*.png"))
        if not _path_is_git_lfs_pointer(path)
    ]

    with gr.Blocks(title="HyShape ZeroGPU Probe", delete_cache=None) as demo:
        gr.Markdown(
            """
## HyShape ZeroGPU Probe
This diagnostic app isolates the Hunyuan geometry path.

- Upload an image or click an example.
- The upload path only performs lightweight preprocessing.
- `Generate Mesh` is the only GPU callback and does not touch NeAR or gsplat.
"""
        )

        with gr.Row(equal_height=False):
            with gr.Column(scale=1, min_width=360):
                image_input = gr.Image(
                    label="Input Image",
                    type="pil",
                    image_mode="RGBA",
                    value=str(DEFAULT_IMAGE) if DEFAULT_IMAGE.exists() else None,
                    height=400,
                )
                mesh_button = gr.Button("Generate Mesh", variant="primary")
                clear_button = gr.Button("Clear Cache", variant="secondary")

                if example_images:
                    gr.Examples(
                        examples=example_images,
                        inputs=[image_input],
                        label="Example Images",
                    )

            with gr.Column(scale=2, min_width=560):
                status_md = gr.Markdown("Ready.")
                processed_preview = gr.Image(
                    label="Preprocessed RGBA",
                    interactive=False,
                    height=320,
                )
                mesh_viewer = gr.Model3D(
                    label="Generated Mesh",
                    interactive=False,
                    height=520,
                )

        demo.unload(end_session)

        image_input.upload(
            preprocess_image_only,
            inputs=[image_input],
            outputs=[image_input, processed_preview, status_md],
        )

        mesh_button.click(
            generate_mesh,
            inputs=[image_input],
            outputs=[processed_preview, mesh_viewer, status_md],
        )

        clear_button.click(
            clear_session_dir,
            outputs=[status_md],
        ).then(
            lambda: (None, None),
            outputs=[processed_preview, mesh_viewer],
        )

    return demo


demo = build_app()
demo.queue(max_size=2)


if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--host",
        type=str,
        default=os.environ.get("GRADIO_SERVER_NAME", "0.0.0.0"),
    )
    parser.add_argument(
        "--port",
        type=int,
        default=int(os.environ.get("PORT", os.environ.get("GRADIO_SERVER_PORT", str(DEFAULT_PORT)))),
    )
    parser.add_argument("--share", action="store_true")
    args = parser.parse_args()

    demo.launch(
        server_name=args.host,
        server_port=args.port,
        share=args.share,
    )
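The environment clamp at the top of the file can be exercised in isolation. A sketch of the same logic as a pure function over a plain dict (`clamp_zerogpu_env` is a hypothetical helper name; the 15–120 s bounds mirror the clamp above):

```python
def clamp_zerogpu_env(
    env: dict[str, str],
    ceiling_key: str = "NEAR_ZEROGPU_HF_CEILING_S",
    default: int = 90,
    lo: int = 15,
    hi: int = 120,
) -> int:
    """Clamp the requested ceiling into [lo, hi] and cap dependent duration vars."""
    try:
        raw = int(env.get(ceiling_key, str(default)))
    except ValueError:
        raw = default
    cap = min(max(lo, raw), hi)
    for key in ("NEAR_ZEROGPU_MAX_SECONDS", "NEAR_ZEROGPU_DURATION_CAP"):
        if key in env:
            try:
                if int(env[key]) > cap:
                    env[key] = str(cap)
            except ValueError:
                pass  # leave malformed values untouched, as the app does
    return cap


env = {"NEAR_ZEROGPU_HF_CEILING_S": "300", "NEAR_ZEROGPU_MAX_SECONDS": "240"}
cap = clamp_zerogpu_env(env)
print(cap, env["NEAR_ZEROGPU_MAX_SECONDS"])  # 120 120
```

Taking the env mapping as a parameter instead of touching `os.environ` directly is what makes this variant unit-testable.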
docs/plans/2026-03-25-hyshape-zerogpu-diagnostic.md ADDED
@@ -0,0 +1,85 @@
# HyShape ZeroGPU Diagnostic App Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Add an alternate `app_hyshape.py` entrypoint that isolates the Hunyuan geometry path so ZeroGPU failures can be distinguished from NeAR and gsplat issues.

**Architecture:** Build a standalone minimal Gradio app instead of importing the full `app.py` stack. Reuse the same lightweight image preprocessing behavior and lazy geometry loading pattern, but exclude SLaT generation, NeAR rendering, HDRI processing, and gsplat warmup. Add very early callback logging so Space logs show whether execution reached Python code before any failure.

**Tech Stack:** Python 3.10, Gradio, `spaces.GPU`, Pillow, NumPy, PyTorch, `hy3dshape`

---

### Task 1: Create the standalone diagnostic entrypoint

**Files:**
- Create: `app_hyshape.py`
- Check: `app.py`

**Step 1: Define the minimal runtime surface**

Describe a standalone app that only supports:
- image upload and lightweight preprocessing
- lazy Hunyuan geometry pipeline loading
- mesh generation and export to a session cache
- status logging focused on ZeroGPU callback entry and model load timing

**Step 2: Keep imports and startup intentionally small**

Ensure the new entrypoint:
- does not import `NeARImageToRelightable3DPipeline`
- does not reference HDRI, gsplat, or SLaT code paths
- defers heavy geometry imports until the loader runs

**Step 3: Add the minimal UI**

Include:
- one image input
- one preprocess preview
- one mesh viewer
- one generate button
- one clear-cache button
- optional example image loading if example PNGs exist

**Step 4: Add callback-entry diagnostics**

Log, at minimum:
- when the GPU callback is entered
- whether CUDA is available at callback entry
- geometry loader start/end timing
- mesh generation total timing

### Task 2: Add a regression architecture test

**Files:**
- Create: `tests/test_app_hyshape_architecture.py`

**Step 1: Add static checks**

Verify the diagnostic entrypoint:
- only uses the geometry loader in `generate_mesh`
- does not reference `ensure_near_pipeline`
- does not reference `ensure_gsplat_ready`
- contains the early callback-entry log marker

**Step 2: Keep the test lightweight**

Use AST or source-level assertions only so the test does not import heavy ML dependencies.

### Task 3: Verify the diagnostic entrypoint locally

**Files:**
- Check: `app_hyshape.py`
- Check: `tests/test_app_hyshape_architecture.py`

**Step 1: Run the focused test**

Run the architecture test file directly with Python's standard library test runner.

**Step 2: Run lint diagnostics**

Check the newly created files for obvious static-analysis regressions.

**Step 3: Prepare the Space switch instruction**

Record that the Hugging Face Space can be pointed at the diagnostic app by changing `README.md` front matter from `app_file: app.py` to `app_file: app_hyshape.py`.
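The Space switch in Task 3 Step 3 is a one-line front-matter edit. A hypothetical stdlib helper for flipping it (`set_app_file` is not part of the repo; the front matter below is a trimmed example):

```python
import re


def set_app_file(readme_text: str, entrypoint: str) -> str:
    """Rewrite the first `app_file:` line in a README's YAML front matter."""
    return re.sub(r"(?m)^app_file: .*$", f"app_file: {entrypoint}", readme_text, count=1)


readme = "---\nsdk: gradio\napp_file: app.py\npinned: false\n---\n# NeAR\n"
switched = set_app_file(readme, "app_hyshape.py")
print("app_file: app_hyshape.py" in switched)  # True
```

In practice editing the line by hand is just as easy; the helper only documents that nothing else in the front matter needs to change.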
tests/test_app_hyshape_architecture.py ADDED
@@ -0,0 +1,56 @@
from __future__ import annotations

import ast
import unittest
from pathlib import Path


APP_PATH = Path(__file__).resolve().parents[1] / "app_hyshape.py"


def _load_tree() -> ast.Module:
    return ast.parse(APP_PATH.read_text(encoding="utf-8"))


def _get_function(tree: ast.Module, name: str) -> ast.FunctionDef:
    for node in tree.body:
        if isinstance(node, ast.FunctionDef) and node.name == name:
            return node
    raise AssertionError(f"Function {name!r} not found in app_hyshape.py")


def _called_names(function_node: ast.FunctionDef) -> set[str]:
    names: set[str] = set()
    for node in ast.walk(function_node):
        if isinstance(node, ast.Call):
            if isinstance(node.func, ast.Name):
                names.add(node.func.id)
            elif isinstance(node.func, ast.Attribute):
                names.add(node.func.attr)
    return names


class AppHyShapeArchitectureTests(unittest.TestCase):
    def test_hyshape_app_does_not_reference_near_or_gsplat_paths(self) -> None:
        source = APP_PATH.read_text(encoding="utf-8")

        self.assertNotIn("NeARImageToRelightable3DPipeline", source)
        self.assertNotIn("ensure_near_pipeline", source)
        self.assertNotIn("ensure_gsplat_ready", source)

    def test_generate_mesh_only_uses_geometry_loader(self) -> None:
        generate_mesh = _get_function(_load_tree(), "generate_mesh")
        called = _called_names(generate_mesh)

        self.assertIn("ensure_geometry_pipeline", called)
        self.assertNotIn("ensure_near_pipeline", called)
        self.assertNotIn("ensure_gsplat_ready", called)

    def test_generate_mesh_contains_early_callback_log_marker(self) -> None:
        source = APP_PATH.read_text(encoding="utf-8")

        self.assertIn("[HyShape] generate_mesh callback entered", source)


if __name__ == "__main__":
    unittest.main()
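To see what the `_called_names` helper in the test above actually captures, here is a standalone copy run on a toy function (stdlib `ast` only; the snippet is a simplified stand-in for `generate_mesh`):

```python
import ast


def called_names(function_node: ast.FunctionDef) -> set[str]:
    """Collect plain-name and attribute call names inside a function body."""
    names: set[str] = set()
    for node in ast.walk(function_node):
        if isinstance(node, ast.Call):
            if isinstance(node.func, ast.Name):
                names.add(node.func.id)
            elif isinstance(node.func, ast.Attribute):
                names.add(node.func.attr)
    return names


snippet = """
def generate_mesh(img):
    pipe = ensure_geometry_pipeline()
    mesh = pipe.run(img)
    mesh.export("out.glb")
"""
tree = ast.parse(snippet)
fn = next(n for n in tree.body if isinstance(n, ast.FunctionDef))
print(sorted(called_names(fn)))  # ['ensure_geometry_pipeline', 'export', 'run']
```

Attribute calls are recorded by their final attribute name only, which is why the test checks for loader names like `ensure_near_pipeline` rather than dotted paths.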