fix(droplet): handle 5-D input from eo_chip_cache in _build_chip_tensor
The HF Space's eo_chip_cache hands the droplet a chip already shaped
(B, C, T, H, W) — see app/context/eo_chip_cache.py:_to_terramind_tensors.
The droplet's _build_chip_tensor assumed 3-D (C, H, W) input and called
.unsqueeze(1).repeat(1, n_timesteps, 1, 1), which raised:
RuntimeError: Number of dimensions of repeat dims can not be smaller
than number of dimensions of tensor
That's why every terramind_lulc / terramind_buildings request returned
non-ok at /v1/terramind, why the HF trace showed terratorch as the
apparent cause (after fall-through), and why /healthz never had any
terramind models loaded — the route never made it past chip prep.
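For context, torch.Tensor.repeat requires at least as many repeat factors as the tensor has dimensions, so the 5-D cache chip (6-D after .unsqueeze(1)) could never satisfy the 4-factor call. A torch-free sketch of that arity rule (check_repeat_dims is a hypothetical illustration, not droplet or TerraMind code):

```python
def check_repeat_dims(tensor_ndim: int, repeat_factors: tuple) -> None:
    """Mimic torch.Tensor.repeat's arity rule: at least one factor per dim."""
    if len(repeat_factors) < tensor_ndim:
        raise RuntimeError(
            "Number of dimensions of repeat dims can not be smaller "
            "than number of dimensions of tensor"
        )

# legacy path: (C, H, W) -> unsqueeze(1) -> 4-D, matching the 4 factors
check_repeat_dims(3 + 1, (1, 4, 1, 1))   # fine

# cache path: (B, C, T, H, W) -> unsqueeze(1) -> 6-D, only 4 factors given
try:
    check_repeat_dims(5 + 1, (1, 4, 1, 1))
except RuntimeError as e:
    print(e)  # the same message as the trace above
```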
Make _build_chip_tensor idempotent: pass 5-D through, add batch to 4-D,
expand to T=4 + batch for legacy 3-D. Hot-patched on the droplet
(docker cp + docker restart riprap-models); committing source so the
next bring-up via scripts/deploy_droplet.sh inherits the fix.
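The shape handling in the fix can be sketched in NumPy terms (build_chip_array is a hypothetical standalone mirror of the committed torch code, using np.repeat in place of Tensor.repeat):

```python
import numpy as np

def build_chip_array(arr, n_timesteps: int = 4):
    """Normalize a chip to (B, C, T, H, W), mirroring the droplet fix."""
    a = np.asarray(arr, dtype=np.float32)
    if a.ndim == 5:                  # already (B, C, T, H, W): pass through
        return a
    if a.ndim == 4:                  # (C, T, H, W): just add the batch dim
        return a[np.newaxis]
    if a.ndim == 3:                  # legacy (C, H, W): repeat to T timesteps
        a = np.repeat(a[:, np.newaxis], n_timesteps, axis=1)
        return a[np.newaxis]
    raise ValueError(f"unexpected chip shape {a.shape}")

chip = np.zeros((12, 64, 64), dtype=np.float32)   # legacy 3-D chip
out = build_chip_array(chip)
print(out.shape)                        # (1, 12, 4, 64, 64)
# idempotent: feeding the 5-D output back in is a no-op
print(build_chip_array(out).shape)      # (1, 12, 4, 64, 64)
```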
Verified via live HF Space probe: terramind_lulc returns
dominant_class='Built' (65%), terramind_buildings returns
pct_buildings=0.6%. The fine-tuned NYC adapters are running.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@@ -300,11 +300,23 @@ class TerramindIn(BaseModel):
 
 
 def _build_chip_tensor(np_arr, n_timesteps: int = 4):
+    """Normalize any incoming chip shape into TerraMind's expected
+    (B, C, T, H, W). The HF Space's eo_chip_cache hands us a chip that
+    is already (B, C, T, H, W) 5-D — pass through. Older callers that
+    send a single-timestep (C, H, W) get expanded to T=4 by repetition;
+    a (C, T, H, W) gets just the batch dim added."""
     import torch
-    t = torch.from_numpy(np_arr).float()
-    if t.
-
-
+    t = torch.from_numpy(np_arr).float()
+    if t.ndim == 5:
+        return t  # (B, C, T, H, W)
+    if t.ndim == 4:
+        return t.unsqueeze(0)  # (C, T, H, W) -> (1, C, T, H, W)
+    if t.ndim == 3:
+        t = t.unsqueeze(1)  # (C, H, W) -> (C, 1, H, W)
+        if t.shape[1] == 1:
+            t = t.repeat(1, n_timesteps, 1, 1)  # repeat single timestep
+        return t.unsqueeze(0)  # add batch dim
+    raise ValueError(f"unexpected chip shape {tuple(t.shape)}")
 
 
 def _terramind_inference(payload: TerramindIn) -> dict[str, Any]: