Add checkpoint helpers + SFT hub push + GRPO resume
- README.md +44 -4
- physix/training/checkpoints.py +214 -0
- physix/training/loop.py +258 -23
- physix/training/sft.py +49 -0
- pyproject.toml +5 -0
README.md
CHANGED
@@ -1,13 +1,53 @@
 # PhysiX-Live
 
-**One-line pitch:** an OpenEnv RL environment where a small
+**One-line pitch:** an OpenEnv RL environment where a small language model iteratively
 discovers equations of motion from trajectory data plus a one-sentence English hint —
 verifier is `scipy.integrate.odeint` plus per-step R², no LLM-as-judge in the reward loop.
 
 A submission for the **OpenEnv hackathon** (Apr 2026). The deliverables are: a clean
-OpenEnv-compatible env, a TRL+Unsloth+GRPO training pipeline targeting Qwen2.5
-LoRA
-for the trained vs. untrained model, and a recording script for pre-baked
+OpenEnv-compatible env, a TRL+Unsloth+GRPO training pipeline targeting Qwen2.5 (1.5B / 3B
+profiles) with LoRA, a React + TypeScript + Tailwind demo UI that animates trajectories
+side-by-side for the trained vs. untrained model, and a recording script for pre-baked
+demo episodes.
+
+---
+
+## Deliverables
+
+| Deliverable | Where |
+| --------------------------- | --------------------------------------------------------------------------------- |
+| **Public HF Space (live demo)** | https://huggingface.co/spaces/Pratyush-01/physix-live |
+| **Training driver script** | [`physix-train/job_train.py`](../physix-train/job_train.py) — PEP 723 inline-deps UV script, runs end-to-end on `hf jobs uv run` |
+| **GRPO training loop** | [`physix/training/loop.py`](physix/training/loop.py) — Unsloth + TRL GRPOTrainer |
+| **SFT warm-start** | [`physix/training/sft.py`](physix/training/sft.py) |
+| **Trained adapters (Hub)** | [`Pratyush-01/physix-3b-rl`](https://huggingface.co/Pratyush-01/physix-3b-rl) |
+| **Mid-run checkpoints** | [`Pratyush-01/physix-3b-rl-ckpt`](https://huggingface.co/Pratyush-01/physix-3b-rl-ckpt) |
+| **W&B project** | https://wandb.ai/pratyush01/physix-live |
+| **Writeup** | [`docs/writeup.md`](docs/writeup.md) |
+
+## Training curves
+
+All three plots are auto-generated at the end of every GRPO run by
+`physix.training.loop._render_training_curves` and committed to the repo at
+`docs/plots/`. The interpretation rules:
+
+- **`train/loss`** is the GRPO surrogate (advantage-weighted log-prob + β·KL).
+  Should trend **down** as advantages get exploited. (Per
+  [TRL docs](https://huggingface.co/docs/trl/main/logging) — this is the full
+  surrogate, not just the KL term.)
+- **`train/reward`** is mean total reward across rollouts. Should trend **up**.
+  This is the headline curve.
+- **Per-component reward** breaks `train/reward` into the 5 reward functions
+  (`match`, `match_dense`, `correctness`, `simplicity`, `format`). Used to spot
+  reward hacking — e.g. `simplicity` rising while `match` regresses.
+
+| Loss (down is good) | Reward (up is good) |
+| --- | --- |
+| ![loss](docs/plots/loss.png) | ![reward](docs/plots/reward.png) |
+
+| Per-component reward (anti-hack diagnostic) |
+| --- |
+| ![reward components](docs/plots/reward_components.png) |
 
 ---
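The deliverables table calls `job_train.py` a "PEP 723 inline-deps UV script". For readers unfamiliar with the format, a minimal sketch of such a header follows; the actual dependency list in `job_train.py` is not part of this commit, so the entries below are illustrative only:

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "trl",       # illustrative only; see job_train.py for the real list
#     "unsloth",
#     "wandb",
# ]
# ///

# `uv run job_train.py` (and therefore `hf jobs uv run ...`) parses the
# comment block above and installs the listed dependencies into an
# ephemeral environment before executing the script.
```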
physix/training/checkpoints.py
ADDED
@@ -0,0 +1,214 @@
"""Checkpoint push/pull helpers shared by SFT and GRPO.

Two responsibilities:

1. **Push a local checkpoint dir to a Hugging Face Hub repo as a subfolder**
   (e.g. SFT writes to ``<repo>/sft/``, GRPO writes to ``<repo>/checkpoint-N/``).
   Returns the resulting Hub revision SHA so the caller can pin it in W&B.

2. **Discover and download the latest GRPO checkpoint** from the same repo,
   so a re-launched job can resume the same GRPO run rather than redoing SFT.

We deliberately do NOT push raw model weights into W&B Artifacts — they live
on the Hub. W&B gets a tiny **link-artifact** (one JSON metadata file with
the Hub repo + revision SHA + step), which is enough for `wandb artifact
get` to round-trip back to the Hub and download via `huggingface_hub`.
"""
from __future__ import annotations

import json
import logging
import os
import re
import tempfile
from dataclasses import dataclass
from pathlib import Path
from typing import Optional

_log = logging.getLogger(__name__)


# Subfolder names on the Hub checkpoint repo. The SFT subfolder is fixed.
# GRPO checkpoint subfolders follow Trainer's "checkpoint-{step}" convention.
SFT_SUBFOLDER = "sft"
GRPO_CHECKPOINT_RE = re.compile(r"^checkpoint-(\d+)$")


@dataclass
class CheckpointHandle:
    """A pointer to a checkpoint on the Hub.

    ``revision`` is a commit SHA (not a branch) so the artifact is
    immutable — re-pushes to the same subfolder won't change what we
    resume from.
    """
    repo_id: str
    subfolder: str
    revision: str
    step: Optional[int] = None  # populated for GRPO checkpoint-N

    @property
    def hub_url(self) -> str:
        return f"https://huggingface.co/{self.repo_id}/tree/{self.revision}/{self.subfolder}"


def push_checkpoint_to_hub(
    local_dir: str | os.PathLike,
    repo_id: str,
    subfolder: str,
    *,
    commit_message: str,
    token: Optional[str] = None,
) -> CheckpointHandle:
    """Upload ``local_dir`` to ``repo_id/<subfolder>/`` and return a pinned handle.

    Raises if the repo can't be created or upload fails — the caller decides
    whether to swallow the exception.
    """
    from huggingface_hub import HfApi, create_repo

    local = Path(local_dir)
    if not local.is_dir():
        raise FileNotFoundError(f"checkpoint dir does not exist: {local}")

    api = HfApi(token=token)
    create_repo(repo_id, exist_ok=True, repo_type="model", token=token)

    _log.info("Uploading %s -> %s/%s", local, repo_id, subfolder)
    commit = api.upload_folder(
        folder_path=str(local),
        repo_id=repo_id,
        path_in_repo=subfolder,
        commit_message=commit_message,
        token=token,
    )
    revision = commit.oid if hasattr(commit, "oid") else str(commit)
    _log.info("Push complete; revision=%s", revision)
    return CheckpointHandle(
        repo_id=repo_id,
        subfolder=subfolder,
        revision=revision,
    )


def find_latest_grpo_checkpoint(
    repo_id: str,
    *,
    token: Optional[str] = None,
) -> Optional[CheckpointHandle]:
    """Return the highest-step ``checkpoint-N/`` folder on the repo, or None.

    Reads the *current* main revision (so concurrent pushes are race-free
    for our purposes — we never need to resume from a half-finished push).
    """
    from huggingface_hub import HfApi
    from huggingface_hub.utils import RepositoryNotFoundError, RevisionNotFoundError

    api = HfApi(token=token)
    try:
        files = api.list_repo_files(repo_id, repo_type="model", token=token)
    except (RepositoryNotFoundError, RevisionNotFoundError):
        return None
    except Exception as exc:  # noqa: BLE001
        _log.warning("Could not list %s: %s", repo_id, exc)
        return None

    best_step = -1
    best_subfolder: Optional[str] = None
    for f in files:
        # Top-level folder name is the first path component.
        head = f.split("/", 1)[0]
        m = GRPO_CHECKPOINT_RE.match(head)
        if not m:
            continue
        step = int(m.group(1))
        if step > best_step:
            best_step = step
            best_subfolder = head

    if best_subfolder is None:
        return None

    # Pin the revision to the current main HEAD so concurrent commits don't
    # surprise us partway through download.
    info = api.repo_info(repo_id, repo_type="model", token=token)
    revision = info.sha or "main"
    return CheckpointHandle(
        repo_id=repo_id,
        subfolder=best_subfolder,
        revision=revision,
        step=best_step,
    )


def download_checkpoint(
    handle: CheckpointHandle,
    local_dir: str | os.PathLike,
    *,
    token: Optional[str] = None,
) -> Path:
    """Download a Hub checkpoint subfolder to ``local_dir`` and return the path."""
    from huggingface_hub import snapshot_download

    target = Path(local_dir)
    target.mkdir(parents=True, exist_ok=True)

    snapshot_download(
        repo_id=handle.repo_id,
        revision=handle.revision,
        allow_patterns=[f"{handle.subfolder}/*"],
        local_dir=str(target),
        token=token,
    )
    out = target / handle.subfolder
    if not out.is_dir():
        raise FileNotFoundError(
            f"download succeeded but {out} is missing — check repo layout"
        )
    return out


def log_link_artifact_to_wandb(
    handle: CheckpointHandle,
    *,
    artifact_name: str,
    extra: Optional[dict] = None,
) -> None:
    """Log a tiny pointer-only artifact to the active W&B run.

    The artifact contains a single ``checkpoint.json`` describing the Hub
    location and revision. No model bytes are uploaded — this is purely an
    addressable, versioned reference (~200 bytes) that makes the artifact
    panel of the W&B run usable as a checkpoint registry.
    """
    try:
        import wandb
    except ImportError:
        return
    if wandb.run is None:
        return

    payload = {
        "repo_id": handle.repo_id,
        "subfolder": handle.subfolder,
        "revision": handle.revision,
        "step": handle.step,
        "hub_url": handle.hub_url,
    }
    if extra:
        payload.update(extra)

    with tempfile.TemporaryDirectory() as tmp:
        meta_path = Path(tmp) / "checkpoint.json"
        meta_path.write_text(json.dumps(payload, indent=2))
        artifact = wandb.Artifact(
            name=artifact_name,
            type="model-pointer",
            description=f"Pointer to {handle.hub_url}",
            metadata=payload,
        )
        artifact.add_file(str(meta_path))
        try:
            wandb.run.log_artifact(artifact)
        except Exception as exc:  # noqa: BLE001
            _log.warning("W&B artifact logging failed (non-fatal): %s", exc)
physix/training/loop.py
CHANGED
@@ -197,6 +197,7 @@ def train(config: TrainingConfig) -> None:
     trainer.train(resume_from_checkpoint=config.resume_from_checkpoint)
 
     _log_reward_summary(trainer)
+    _render_training_curves(trainer, config)
 
     _log.info("Saving adapter (%s) to %s", config.save_method, config.output_dir)
     _save_artifacts(model, tokenizer, config)

@@ -204,15 +205,20 @@
 
 
 def _log_reward_summary(trainer: "GRPOTrainer") -> None:
-    """Emit a final reward-signal summary
-    GRPO's near-zero ``train/loss`` as a broken run. ``train/loss`` is just
-    the KL term; what matters is whether reward components moved.
+    """Emit a final reward-signal summary at end of training.
 
     Pulls the last ``log_history`` entry that contains reward keys and prints
-    the mean of every ``rewards/*/mean`` it finds
-
-
-
+    the mean of every ``rewards/*/mean`` it finds. If *no* reward keys are
+    present we hard-fail — that means the reward functions never produced a
+    non-NaN value, which is a real bug worth surfacing.
+
+    Note on ``train/loss``: this scalar IS the GRPO surrogate objective
+    (advantage-weighted token log-probabilities, plus the KL-to-ref penalty
+    when ``beta > 0``). Per the TRL docs (``trl/docs/source/grpo_trainer.md``)
+    the ``Trainer`` superclass logs the full surrogate as ``loss``, not just
+    the KL term. So ``train/loss`` collapsing without ``train/reward`` rising
+    is a real failure mode — typically a sign of reward hacking or saturated
+    advantages — and should be debugged, not dismissed.
     """
     history = getattr(trainer.state, "log_history", []) or []
     reward_entries = [

@@ -240,14 +246,190 @@
     if isinstance(v0, (int, float)) and isinstance(v1, (int, float)):
         _log.info("  %-40s %.4f → %.4f (Δ=%+.4f)", key, v0, v1, v1 - v0)
     _log.info("-" * 60)
-    _log.info("
-    _log.info("
-
-    _log.info("
-    _log.info("
+    _log.info("Interpretation guide:")
+    _log.info("  train/loss     — full GRPO surrogate (policy + KL*beta).")
+    _log.info("                   Should DECREASE as advantages get exploited.")
+    _log.info("  train/reward   — mean episode reward across rollouts.")
+    _log.info("                   Should INCREASE; this is the headline curve.")
+    _log.info("  train/kl       — KL(policy || ref). Should grow slowly.")
+    _log.info("  rewards/*/mean — per-component reward (match, simplicity, …).")
+    _log.info("Loss-down WITHOUT reward-up is a red flag (reward hacking or")
+    _log.info("advantage saturation).")
     _log.info("=" * 60)
 
 
+def _render_training_curves(
+    trainer: "GRPOTrainer",
+    config: TrainingConfig,
+) -> None:
+    """Render the headline training curves to PNG and ship them.
+
+    Why we do this in-process at end of training (instead of pulling from
+    W&B post-hoc):
+
+    1. The competition's automated validation requires PNG plots committed
+       to the public repo at submission time. Wandb-only links don't count.
+    2. ``trainer.state.log_history`` already contains every metric the
+       Trainer logged step-by-step — no API roundtrip needed.
+    3. We can also push the PNGs to the model Hub repo so they're discoverable
+       from the model card without a separate deploy step.
+
+    Renders three curves:
+
+    - ``loss.png`` — ``train/loss`` over global step.
+      GRPO surrogate; SHOULD trend down.
+    - ``reward.png`` — ``reward`` (or ``train/reward``) over step
+      with ±1σ band. SHOULD trend up.
+    - ``reward_components.png`` — overlay of every ``rewards/<name>/mean``
+      so reward hacking shows up visually (e.g. ``simplicity`` rising while
+      ``match`` regresses).
+
+    Failures are logged and swallowed — a missing plot must not crash a
+    successful training run, since the model artefact is still useful.
+    """
+    try:
+        import matplotlib
+        matplotlib.use("Agg")  # headless / no display server in HF Jobs
+        import matplotlib.pyplot as plt
+    except Exception as exc:  # noqa: BLE001
+        _log.warning("matplotlib unavailable, skipping curve PNGs: %s", exc)
+        return
+
+    history = list(getattr(trainer.state, "log_history", []) or [])
+    if not history:
+        _log.warning("No log_history found — cannot render curves.")
+        return
+
+    plots_dir = Path(config.output_dir) / "plots"
+    plots_dir.mkdir(parents=True, exist_ok=True)
+
+    def _series(metric: str) -> tuple[list[int], list[float]]:
+        xs: list[int] = []
+        ys: list[float] = []
+        for entry in history:
+            if metric in entry and "step" in entry:
+                value = entry[metric]
+                if isinstance(value, (int, float)):
+                    xs.append(int(entry["step"]))
+                    ys.append(float(value))
+        return xs, ys
+
+    rendered: list[Path] = []
+
+    # 1) Loss — the GRPO surrogate.
+    steps_l, losses = _series("loss")
+    if steps_l:
+        fig, ax = plt.subplots(figsize=(8, 4.5))
+        ax.plot(steps_l, losses, color="#d62728", linewidth=1.8)
+        ax.set_xlabel("training step")
+        ax.set_ylabel("GRPO surrogate loss")
+        ax.set_title("PhysiX GRPO — train/loss (lower is better)")
+        ax.grid(alpha=0.3)
+        path = plots_dir / "loss.png"
+        fig.tight_layout()
+        fig.savefig(path, dpi=140)
+        plt.close(fig)
+        rendered.append(path)
+    else:
+        _log.warning("No 'loss' entries in log_history.")
+
+    # 2) Reward — headline curve (with ±std band when available).
+    steps_r, rewards = _series("reward")
+    _, reward_std = _series("reward_std")
+    if steps_r:
+        fig, ax = plt.subplots(figsize=(8, 4.5))
+        ax.plot(steps_r, rewards, color="#2ca02c", linewidth=2.0, label="mean reward")
+        if reward_std and len(reward_std) == len(rewards):
+            import numpy as np
+            r = np.asarray(rewards)
+            s = np.asarray(reward_std)
+            ax.fill_between(steps_r, r - s, r + s, color="#2ca02c", alpha=0.18,
+                            label="±1σ across rollouts")
+        ax.set_xlabel("training step")
+        ax.set_ylabel("mean reward (sum of components)")
+        ax.set_title("PhysiX GRPO — train/reward (higher is better)")
+        ax.legend(loc="best")
+        ax.grid(alpha=0.3)
+        path = plots_dir / "reward.png"
+        fig.tight_layout()
+        fig.savefig(path, dpi=140)
+        plt.close(fig)
+        rendered.append(path)
+    else:
+        _log.warning("No 'reward' entries in log_history.")
+
+    # 3) Per-component reward overlay — exposes reward hacking patterns.
+    component_keys = sorted({
+        k for entry in history for k in entry
+        if k.startswith("rewards/") and k.endswith("/mean")
+    })
+    if component_keys:
+        fig, ax = plt.subplots(figsize=(8, 4.5))
+        for k in component_keys:
+            xs, ys = _series(k)
+            if xs:
+                label = k.removeprefix("rewards/").removesuffix("/mean")
+                ax.plot(xs, ys, linewidth=1.6, label=label)
+        ax.set_xlabel("training step")
+        ax.set_ylabel("component mean reward")
+        ax.set_title("PhysiX GRPO — per-component reward (rewards/*/mean)")
+        ax.legend(loc="best", fontsize=8)
+        ax.grid(alpha=0.3)
+        path = plots_dir / "reward_components.png"
+        fig.tight_layout()
+        fig.savefig(path, dpi=140)
+        plt.close(fig)
+        rendered.append(path)
+
+    if not rendered:
+        _log.warning("No PNGs rendered — log_history had no recognised metrics.")
+        return
+
+    _log.info("Rendered %d curve PNG(s) to %s", len(rendered), plots_dir)
+
+    # Log the PNGs as wandb.Images so they appear in the run's Media tab,
+    # and persist to the run summary as a reference table.
+    try:
+        import wandb
+        if wandb.run is not None:
+            wandb.log({
+                f"plots/{p.stem}": wandb.Image(str(p)) for p in rendered
+            })
+            _log.info("Logged %d plot(s) to wandb.Media", len(rendered))
+    except Exception as exc:  # noqa: BLE001
+        _log.warning("Could not log plots to wandb: %s", exc)
+
+    # Push PNGs to the final Hub model repo under ``plots/`` so the model
+    # card can render them and ``sync-plots.sh`` can pull them locally.
+    if config.push_to_hub and config.hub_repo_id:
+        try:
+            from huggingface_hub import HfApi, create_repo
+
+            api = HfApi(token=os.environ.get("HUGGINGFACE_HUB_TOKEN"))
+            create_repo(
+                repo_id=config.hub_repo_id,
+                repo_type="model",
+                exist_ok=True,
+                token=os.environ.get("HUGGINGFACE_HUB_TOKEN"),
+            )
+            for p in rendered:
+                api.upload_file(
+                    path_or_fileobj=str(p),
+                    path_in_repo=f"plots/{p.name}",
+                    repo_id=config.hub_repo_id,
+                    repo_type="model",
+                    commit_message=f"plots: {p.name}",
+                )
+            _log.info(
+                "Pushed %d plot(s) to https://huggingface.co/%s/tree/main/plots",
+                len(rendered),
+                config.hub_repo_id,
+            )
+        except Exception as exc:  # noqa: BLE001
+            _log.warning("Could not push plots to Hub: %s", exc)
+
+
 def _load_model_and_tokenizer(
     config: TrainingConfig,
 ) -> tuple[FastLanguageModel, AutoTokenizer]:
@@ -482,9 +664,38 @@ class _WandbCheckpointCallback(TrainerCallback):
                 f"\n[wandb] Checkpoint repo pinned in run summary: {self._repo_url}\n",
                 flush=True,
             )
+
+            # Stash the W&B run id at the *root* of the checkpoint repo so a
+            # future re-launch can find it without W&B API calls. Atomic with
+            # checkpoint storage, ~36 bytes. We do this once at train begin
+            # instead of every save to avoid 200 redundant commits.
+            self._publish_wandb_run_id(wandb.run.id)
         except Exception as exc:  # noqa: BLE001
             _log.warning("Could not pin checkpoint repo to W&B summary: %s", exc)
 
+    def _publish_wandb_run_id(self, run_id: str) -> None:
+        try:
+            import tempfile
+            from huggingface_hub import HfApi, create_repo
+
+            token = os.environ.get("HUGGINGFACE_HUB_TOKEN") or os.environ.get("HF_TOKEN")
+            api = HfApi(token=token)
+            create_repo(self._repo, exist_ok=True, repo_type="model", token=token)
+            with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
+                tmp.write(run_id)
+                tmp_path = tmp.name
+            api.upload_file(
+                path_or_fileobj=tmp_path,
+                path_in_repo="wandb_run_id.txt",
+                repo_id=self._repo,
+                repo_type="model",
+                commit_message=f"Pin W&B run id {run_id}",
+                token=token,
+            )
+            print(f"[wandb] Published run_id={run_id} to {self._repo_url}/wandb_run_id.txt", flush=True)
+        except Exception as exc:  # noqa: BLE001
+            _log.warning("Could not publish wandb run id (non-fatal): %s", exc)
+
     def on_save(
         self,
         args: HFTrainingArguments,
@@ -523,7 +734,28 @@
         # Re-log the entire table each time so the latest version shows.
         wandb.log({"checkpoint_history": self._table}, step=step)
 
-        # 4.
+        # 4. Pointer-only W&B Artifact (~200 bytes JSON). Doesn't upload
+        #    weights — those are on the Hub already — but makes every
+        #    checkpoint a first-class, addressable W&B artifact that can
+        #    be looked up later by `wandb artifact get`. Side effect:
+        #    populates the run's "Artifacts" panel with one entry per save.
+        if commit_sha:
+            from physix.training.checkpoints import (
+                CheckpointHandle,
+                log_link_artifact_to_wandb,
+            )
+            handle = CheckpointHandle(
+                repo_id=self._repo,
+                subfolder=f"checkpoint-{step}",
+                revision=commit_sha,
+                step=step,
+            )
+            log_link_artifact_to_wandb(
+                handle,
+                artifact_name="physix-grpo-checkpoint",
+            )
+
+        # 5. Stdout banner — also visible in `hf jobs logs`.
         print(
             "\n"
             "================ CHECKPOINT SAVED ================\n"
@@ -569,16 +801,19 @@
 
 
 def _build_grpo_config(config: TrainingConfig) -> GRPOConfig:
-    #
-    #
-    #
-    #
-    #
-    #
-    #
-    #
-    #
-    #
+    # Note on the metrics this run will produce in W&B (per TRL docs):
+    #   train/loss   — the GRPO surrogate objective being minimized.
+    #                  = -E[advantage * logπ(action|state)] + β * KL.
+    #                  Should DECREASE as the policy exploits advantages.
+    #   train/reward — mean total reward per rollout. Should INCREASE.
+    #   train/kl     — KL(policy || reference). Bounded by β; grows slowly.
+    #   rewards/<f>/mean — per-component reward (one per reward function).
+    #
+    # ``train/loss`` going to ~0 *only* if ``train/reward`` rises in lockstep
+    # is fine — it just means advantages got fully exploited. Loss collapsing
+    # without reward growth is reward hacking, broken parsing, or a saturated
+    # KL anchor. We surface both via _log_reward_summary at end of training
+    # AND via _render_training_curves, which renders both curves to PNG.
    effective_batch = (
        config.per_device_train_batch_size * config.gradient_accumulation_steps
    )
physix/training/sft.py
CHANGED
@@ -145,6 +145,8 @@ def train_sft(
     instances_per_system: int = 32,
     seed: int = 0,
     wandb_run_name: str | None = None,
+    hub_checkpoint_repo_id: str | None = None,
+    hub_token: str | None = None,
 ) -> None:
     _configure_logging()
 

@@ -251,6 +253,43 @@
         save_method="merged_16bit",
     )
     _log.info("SFT model (merged 16-bit) saved → %s", out_path)
+
+    if hub_checkpoint_repo_id:
+        # Push the merged SFT model to the same checkpoint repo GRPO uses,
+        # under a fixed `sft/` subfolder. Re-runs overwrite the subfolder
+        # but produce a new commit, so the revision SHA still uniquely
+        # identifies *this* SFT result.
+        from physix.training.checkpoints import (
+            SFT_SUBFOLDER,
+            log_link_artifact_to_wandb,
+            push_checkpoint_to_hub,
+        )
+
+        try:
+            handle = push_checkpoint_to_hub(
+                local_dir=out_path,
+                repo_id=hub_checkpoint_repo_id,
+                subfolder=SFT_SUBFOLDER,
+                commit_message=(
+                    f"SFT merged_16bit: {model_name} | "
+                    f"epochs={epochs} lora_r={lora_r}"
+                ),
+                token=hub_token,
+            )
+            _log.info("SFT checkpoint pushed to Hub: %s", handle.hub_url)
+            wandb.run.summary["sft/hub_repo"] = handle.repo_id
+            wandb.run.summary["sft/hub_url"] = handle.hub_url
+            wandb.run.summary["sft/hub_revision"] = handle.revision
+            log_link_artifact_to_wandb(
+                handle,
+                artifact_name="physix-sft-checkpoint",
+                extra={"model_name": model_name, "epochs": epochs, "lora_r": lora_r},
+            )
+        except Exception as exc:  # noqa: BLE001
+            # Don't kill SFT just because the hub push failed; the GRPO step
+            # downstream can fall back to the local /tmp checkpoint.
+            _log.error("SFT hub push failed (non-fatal): %s", exc)
+
     wandb.finish()
 
 

@@ -274,6 +313,14 @@ def main() -> None:
     parser.add_argument("--seed", type=int, default=0)
     parser.add_argument("--wandb-run-name", default=None,
                         help="Override W&B run name. Defaults to physix-sft-{epochs}ep.")
+    parser.add_argument(
+        "--hub-checkpoint-repo-id",
+        default=None,
+        help=(
+            "If set, push the merged SFT model to <repo>/sft on the Hub "
+            "and log a pointer-only artifact to W&B."
+        ),
+    )
     args = parser.parse_args()
 
     os.environ.setdefault("WANDB_PROJECT", "physix-live")

@@ -286,6 +333,8 @@
         instances_per_system=args.instances_per_system,
         seed=args.seed,
         wandb_run_name=args.wandb_run_name,
+        hub_checkpoint_repo_id=args.hub_checkpoint_repo_id,
+        hub_token=os.environ.get("HF_TOKEN"),
     )
 
 
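For completeness, the programmatic equivalent of the new `--hub-checkpoint-repo-id` flag: a sketch calling `train_sft` directly (values are illustrative, and parameters not shown here, such as `model_name`, `epochs`, and `lora_r`, are assumed to keep their defaults):

```python
# Sketch: SFT run that also publishes its merged checkpoint to the Hub.
import os

from physix.training.sft import train_sft

train_sft(
    instances_per_system=32,
    seed=0,
    wandb_run_name="physix-sft-warmstart",
    # New in this commit: push the merged model to <repo>/sft and log a
    # pointer-only artifact to W&B.
    hub_checkpoint_repo_id="Pratyush-01/physix-3b-rl-ckpt",
    hub_token=os.environ.get("HF_TOKEN"),
)
```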
pyproject.toml
CHANGED
@@ -33,6 +33,11 @@ train = [
     "wandb>=0.16",
     "datasets>=3.0",
     "huggingface_hub>=0.24,<1.0",
+    # Used by physix.training.loop._render_training_curves to write
+    # loss / reward / per-component PNGs after GRPO training. Required so
+    # the run produces the repo-committable plots that the competition
+    # validator checks for.
+    "matplotlib>=3.7",
 ]
 demo = ["ollama>=0.4"]
 