Pratyush-01 committed
Commit 70b3d0c · verified · 1 Parent(s): d5f6dbd

Add _render_training_curves + matplotlib dep + plot deliverables in README
physix-live/README.md CHANGED
@@ -1,13 +1,53 @@
  # PhysiX-Live

- **One-line pitch:** an OpenEnv RL environment where a small (1.5B) language model iteratively
+ **One-line pitch:** an OpenEnv RL environment where a small language model iteratively
  discovers equations of motion from trajectory data plus a one-sentence English hint —
  verifier is `scipy.integrate.odeint` plus per-step R², no LLM-as-judge in the reward loop.

  A submission for the **OpenEnv hackathon** (Apr 2026). The deliverables are: a clean
- OpenEnv-compatible env, a TRL+Unsloth+GRPO training pipeline targeting Qwen2.5-1.5B with
- LoRA-32, a React + TypeScript + Tailwind demo UI that animates trajectories side-by-side
- for the trained vs. untrained model, and a recording script for pre-baked demo episodes.
+ OpenEnv-compatible env, a TRL+Unsloth+GRPO training pipeline targeting Qwen2.5 (1.5B / 3B
+ profiles) with LoRA, a React + TypeScript + Tailwind demo UI that animates trajectories
+ side-by-side for the trained vs. untrained model, and a recording script for pre-baked
+ demo episodes.
+
+ ---
+
+ ## Deliverables
+
+ | Deliverable | Where |
+ | --------------------------- | ------------------------------------------------------------ |
+ | **Public HF Space (live demo)** | https://huggingface.co/spaces/Pratyush-01/physix-live |
+ | **Training driver script** | [`physix-train/job_train.py`](../physix-train/job_train.py) — PEP 723 inline-deps UV script, runs end-to-end on `hf jobs uv run` |
+ | **GRPO training loop** | [`physix/training/loop.py`](physix/training/loop.py) — Unsloth + TRL GRPOTrainer |
+ | **SFT warm-start** | [`physix/training/sft.py`](physix/training/sft.py) |
+ | **Trained adapters (Hub)** | [`Pratyush-01/physix-3b-rl`](https://huggingface.co/Pratyush-01/physix-3b-rl) |
+ | **Mid-run checkpoints** | [`Pratyush-01/physix-3b-rl-ckpt`](https://huggingface.co/Pratyush-01/physix-3b-rl-ckpt) |
+ | **W&B project** | https://wandb.ai/pratyush01/physix-live |
+ | **Writeup** | [`docs/writeup.md`](docs/writeup.md) |
+
+ ## Training curves
+
+ These plots are auto-generated at the end of every GRPO run by
+ `physix.training.loop._render_training_curves` and committed to the repo at
+ `docs/plots/`. The interpretation rules:
+
+ - **`train/loss`** is the GRPO surrogate (advantage-weighted log-prob + β·KL).
+   It should trend **down** as advantages get exploited (per the
+   [TRL docs](https://huggingface.co/docs/trl/main/logging), this is the full
+   surrogate, not just the KL term).
+ - **`train/reward`** is the mean total reward across rollouts. It should trend
+   **up**; this is the headline curve.
+ - **Per-component reward** breaks `train/reward` into the 5 reward functions
+   (`match`, `match_dense`, `correctness`, `simplicity`, `format`). Used to spot
+   reward hacking — e.g. `simplicity` rising while `match` regresses.
+
+ | Loss (down is good) | Reward (up is good) |
+ | --- | --- |
+ | ![GRPO loss curve](docs/plots/loss.png) | ![GRPO reward curve](docs/plots/reward.png) |
+
+ | Per-component reward (anti-hack diagnostic) |
+ | --- |
+ | ![Reward components overlay](docs/plots/reward_components.png) |

  ---
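The interpretation rules above treat `train/reward` as the aggregate of the five per-component rewards. As a reading aid, here is a minimal sketch of that bookkeeping on a single hypothetical `log_history` entry; it assumes the total is the plain sum of the `rewards/*/mean` values (the component names come from the README, the numbers are invented, and the real weighting lives in the env's reward functions, which this commit does not show):

```python
# Illustrative only: relating the per-component plot to train/reward,
# ASSUMING total reward is the plain sum of the five component means.
entry = {  # shape of one trainer.state.log_history entry (values invented)
    "step": 120,
    "reward": 2.31,
    "rewards/match/mean": 0.92,
    "rewards/match_dense/mean": 0.55,
    "rewards/correctness/mean": 0.48,
    "rewards/simplicity/mean": 0.21,
    "rewards/format/mean": 0.15,
}

components = {
    k.removeprefix("rewards/").removesuffix("/mean"): v
    for k, v in entry.items()
    if k.startswith("rewards/") and k.endswith("/mean")
}
print(components)                          # {'match': 0.92, 'match_dense': 0.55, ...}
print(round(sum(components.values()), 2))  # 2.31, tracks the train/reward curve
```

If the component means stopped adding up to `train/reward` in real logs, that would point to weighting or clipping inside the reward functions; either way, the per-component overlay is the plot that shows which term is doing the work.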
 
physix-live/docs/plots/README.md ADDED
@@ -0,0 +1,16 @@
+ # Training Curves
+
+ PNGs in this directory are auto-generated by
+ `physix.training.loop._render_training_curves` at the end of every GRPO run, then
+ mirrored from the HF model repo via `physix-train/sync-plots.sh`.
+
+ Files:
+
+ - `loss.png` — GRPO surrogate loss over training steps.
+ - `reward.png` — mean reward (with ±1σ band) over training steps.
+ - `reward_components.png` — per-component reward (`match`, `match_dense`,
+   `correctness`, `simplicity`, `format`).
+
+ To regenerate locally after a job:
+
+     ./physix-train/sync-plots.sh Pratyush-01/physix-3b-rl
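`sync-plots.sh` itself is not part of this commit. Assuming it simply mirrors the `plots/` folder that `_render_training_curves` uploads to the model repo (see `loop.py` below), a rough Python equivalent using only standard `huggingface_hub` calls would be:

```python
# Hypothetical stand-in for physix-train/sync-plots.sh (the real script is not
# shown in this diff). Assumes the PNGs live under plots/ in the model repo,
# which is where _render_training_curves uploads them.
import shutil
from pathlib import Path

from huggingface_hub import snapshot_download

repo_id = "Pratyush-01/physix-3b-rl"

# Download only the plot PNGs from the model repo into the local HF cache.
local = snapshot_download(repo_id=repo_id, repo_type="model",
                          allow_patterns=["plots/*.png"])

# Mirror them into docs/plots/ so they can be committed alongside the README.
dest = Path("physix-live/docs/plots")
dest.mkdir(parents=True, exist_ok=True)
for png in sorted(Path(local, "plots").glob("*.png")):
    shutil.copy2(png, dest / png.name)
    print("synced", png.name)
```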
physix-live/physix/training/loop.py CHANGED
@@ -197,6 +197,7 @@ def train(config: TrainingConfig) -> None:
      trainer.train(resume_from_checkpoint=config.resume_from_checkpoint)

      _log_reward_summary(trainer)
+     _render_training_curves(trainer, config)

      _log.info("Saving adapter (%s) to %s", config.save_method, config.output_dir)
      _save_artifacts(model, tokenizer, config)
@@ -204,15 +205,20 @@


  def _log_reward_summary(trainer: "GRPOTrainer") -> None:
-     """Emit a final reward-signal summary so log readers don't misinterpret
-     GRPO's near-zero ``train/loss`` as a broken run. ``train/loss`` is just
-     the KL term; what matters is whether reward components moved.
+     """Emit a final reward-signal summary at the end of training.

      Pulls the last ``log_history`` entry that contains reward keys and prints
-     the mean of every ``rewards/*/mean`` it finds, plus an explicit
-     interpretation hint. If *no* reward keys are present we hard-fail — that
-     means the reward functions never produced a non-NaN value, which is a
-     real bug worth surfacing.
+     the mean of every ``rewards/*/mean`` it finds. If *no* reward keys are
+     present we hard-fail — that means the reward functions never produced a
+     non-NaN value, which is a real bug worth surfacing.
+
+     Note on ``train/loss``: this scalar IS the GRPO surrogate objective
+     (advantage-weighted token log-probabilities, plus the KL-to-ref penalty
+     when ``beta > 0``). Per the TRL docs (``trl/docs/source/grpo_trainer.md``),
+     the ``Trainer`` superclass logs the full surrogate as ``loss``, not just
+     the KL term. So ``train/loss`` collapsing without ``train/reward`` rising
+     is a real failure mode — typically a sign of reward hacking or saturated
+     advantages — and should be debugged, not dismissed.
      """
      history = getattr(trainer.state, "log_history", []) or []
      reward_entries = [
@@ -240,14 +246,190 @@ def _log_reward_summary(trainer: "GRPOTrainer") -> None:
          if isinstance(v0, (int, float)) and isinstance(v1, (int, float)):
              _log.info(" %-40s %.4f → %.4f (Δ=%+.4f)", key, v0, v1, v1 - v0)
      _log.info("-" * 60)
-     _log.info("NOTE: train/loss near zero is EXPECTED for GRPO — it is only")
-     _log.info("the KL-term contribution (beta=%.3f). The model learns via the",
-               trainer.args.beta)
-     _log.info("advantage-weighted policy gradient, which doesn't appear in")
-     _log.info("the displayed loss scalar. Trust `train/reward` and `rewards/*`.")
+     _log.info("Interpretation guide:")
+     _log.info("  train/loss      — full GRPO surrogate (policy + KL*beta).")
+     _log.info("                    Should DECREASE as advantages get exploited.")
+     _log.info("  train/reward    — mean episode reward across rollouts.")
+     _log.info("                    Should INCREASE; this is the headline curve.")
+     _log.info("  train/kl        — KL(policy || ref). Should grow slowly.")
+     _log.info("  rewards/*/mean  — per-component reward (match, simplicity, …).")
+     _log.info("Loss-down WITHOUT reward-up is a red flag (reward hacking or")
+     _log.info("advantage saturation).")
      _log.info("=" * 60)


+ def _render_training_curves(
+     trainer: "GRPOTrainer",
+     config: TrainingConfig,
+ ) -> None:
+     """Render the headline training curves to PNG and ship them.
+
+     Why we do this in-process at the end of training (instead of pulling from
+     W&B post-hoc):
+
+     1. The competition's automated validation requires PNG plots committed
+        to the public repo at submission time. Wandb-only links don't count.
+     2. ``trainer.state.log_history`` already contains every metric the
+        Trainer logged step-by-step — no API roundtrip needed.
+     3. We can also push the PNGs to the model Hub repo so they're discoverable
+        from the model card without a separate deploy step.
+
+     Renders three curves:
+
+     - ``loss.png`` — ``train/loss`` over global step. GRPO surrogate;
+       SHOULD trend down.
+     - ``reward.png`` — ``reward`` (or ``train/reward``) over step with
+       ±1σ band. SHOULD trend up.
+     - ``reward_components.png`` — overlay of every ``rewards/<name>/mean`` so
+       reward hacking shows up visually (e.g. ``simplicity`` rising while
+       ``match`` regresses).
+
+     Failures are logged and swallowed — a missing plot must not crash a
+     successful training run, since the model artefact is still useful.
+     """
+     try:
+         import matplotlib
+         matplotlib.use("Agg")  # headless / no display server in HF Jobs
+         import matplotlib.pyplot as plt
+     except Exception as exc:  # noqa: BLE001
+         _log.warning("matplotlib unavailable, skipping curve PNGs: %s", exc)
+         return
+
+     history = list(getattr(trainer.state, "log_history", []) or [])
+     if not history:
+         _log.warning("No log_history found — cannot render curves.")
+         return
+
+     plots_dir = Path(config.output_dir) / "plots"
+     plots_dir.mkdir(parents=True, exist_ok=True)
+
+     def _series(metric: str) -> tuple[list[int], list[float]]:
+         xs: list[int] = []
+         ys: list[float] = []
+         for entry in history:
+             if metric in entry and "step" in entry:
+                 value = entry[metric]
+                 if isinstance(value, (int, float)):
+                     xs.append(int(entry["step"]))
+                     ys.append(float(value))
+         return xs, ys
+
+     rendered: list[Path] = []
+
+     # 1) Loss — the GRPO surrogate.
+     steps_l, losses = _series("loss")
+     if steps_l:
+         fig, ax = plt.subplots(figsize=(8, 4.5))
+         ax.plot(steps_l, losses, color="#d62728", linewidth=1.8)
+         ax.set_xlabel("training step")
+         ax.set_ylabel("GRPO surrogate loss")
+         ax.set_title("PhysiX GRPO — train/loss (lower is better)")
+         ax.grid(alpha=0.3)
+         path = plots_dir / "loss.png"
+         fig.tight_layout()
+         fig.savefig(path, dpi=140)
+         plt.close(fig)
+         rendered.append(path)
+     else:
+         _log.warning("No 'loss' entries in log_history.")
+
+     # 2) Reward — headline curve (with ±std band when available).
+     steps_r, rewards = _series("reward")
+     _, reward_std = _series("reward_std")
+     if steps_r:
+         fig, ax = plt.subplots(figsize=(8, 4.5))
+         ax.plot(steps_r, rewards, color="#2ca02c", linewidth=2.0, label="mean reward")
+         if reward_std and len(reward_std) == len(rewards):
+             import numpy as np
+             r = np.asarray(rewards)
+             s = np.asarray(reward_std)
+             ax.fill_between(steps_r, r - s, r + s, color="#2ca02c", alpha=0.18,
+                             label="±1σ across rollouts")
+         ax.set_xlabel("training step")
+         ax.set_ylabel("mean reward (sum of components)")
+         ax.set_title("PhysiX GRPO — train/reward (higher is better)")
+         ax.legend(loc="best")
+         ax.grid(alpha=0.3)
+         path = plots_dir / "reward.png"
+         fig.tight_layout()
+         fig.savefig(path, dpi=140)
+         plt.close(fig)
+         rendered.append(path)
+     else:
+         _log.warning("No 'reward' entries in log_history.")
+
+     # 3) Per-component reward overlay — exposes reward hacking patterns.
+     component_keys = sorted({
+         k for entry in history for k in entry
+         if k.startswith("rewards/") and k.endswith("/mean")
+     })
+     if component_keys:
+         fig, ax = plt.subplots(figsize=(8, 4.5))
+         for k in component_keys:
+             xs, ys = _series(k)
+             if xs:
+                 label = k.removeprefix("rewards/").removesuffix("/mean")
+                 ax.plot(xs, ys, linewidth=1.6, label=label)
+         ax.set_xlabel("training step")
+         ax.set_ylabel("component mean reward")
+         ax.set_title("PhysiX GRPO — per-component reward (rewards/*/mean)")
+         ax.legend(loc="best", fontsize=8)
+         ax.grid(alpha=0.3)
+         path = plots_dir / "reward_components.png"
+         fig.tight_layout()
+         fig.savefig(path, dpi=140)
+         plt.close(fig)
+         rendered.append(path)
+
+     if not rendered:
+         _log.warning("No PNGs rendered — log_history had no recognised metrics.")
+         return
+
+     _log.info("Rendered %d curve PNG(s) to %s", len(rendered), plots_dir)
+
+     # Log the PNGs as wandb.Images so they appear in the run's Media tab.
+     try:
+         import wandb
+         if wandb.run is not None:
+             wandb.log({
+                 f"plots/{p.stem}": wandb.Image(str(p)) for p in rendered
+             })
+             _log.info("Logged %d plot(s) to wandb.Media", len(rendered))
+     except Exception as exc:  # noqa: BLE001
+         _log.warning("Could not log plots to wandb: %s", exc)
+
+     # Push PNGs to the final Hub model repo under ``plots/`` so the model
+     # card can render them and ``sync-plots.sh`` can pull them locally.
+     if config.push_to_hub and config.hub_repo_id:
+         try:
+             from huggingface_hub import HfApi, create_repo
+
+             api = HfApi(token=os.environ.get("HUGGINGFACE_HUB_TOKEN"))
+             create_repo(
+                 repo_id=config.hub_repo_id,
+                 repo_type="model",
+                 exist_ok=True,
+                 token=os.environ.get("HUGGINGFACE_HUB_TOKEN"),
+             )
+             for p in rendered:
+                 api.upload_file(
+                     path_or_fileobj=str(p),
+                     path_in_repo=f"plots/{p.name}",
+                     repo_id=config.hub_repo_id,
+                     repo_type="model",
+                     commit_message=f"plots: {p.name}",
+                 )
+             _log.info(
+                 "Pushed %d plot(s) to https://huggingface.co/%s/tree/main/plots",
+                 len(rendered),
+                 config.hub_repo_id,
+             )
+         except Exception as exc:  # noqa: BLE001
+             _log.warning("Could not push plots to Hub: %s", exc)
+
+
  def _load_model_and_tokenizer(
      config: TrainingConfig,
  ) -> tuple[FastLanguageModel, AutoTokenizer]:
@@ -569,16 +751,19 @@ class _WandbCheckpointCallback(TrainerCallback):


  def _build_grpo_config(config: TrainingConfig) -> GRPOConfig:
-     # NOTE on "train/loss 0" — this is expected GRPO behaviour, not a bug.
-     # The scalar TRL logs as `train/loss` is *only* the KL-divergence term
-     # weighted by beta; the advantage-weighted policy-gradient term that
-     # actually drives learning contributes gradients but is not in the
-     # displayed loss. At step 0, policy == reference → KL = 0 → loss = 0.
-     # As the policy drifts, loss rises slightly (with beta=0.04 typically
-     # to ~0.001–0.05). The signal you care about is `train/rewards/*` and
-     # `train/reward`, not `train/loss`. See:
-     #   https://github.com/huggingface/trl/issues/2703
-     #   https://github.com/huggingface/open-r1/issues/239
+     # Note on the metrics this run will produce in W&B (per TRL docs):
+     #   train/loss       — the GRPO surrogate objective being minimized,
+     #                      = -E[advantage * logπ(action|state)] + β * KL.
+     #                      Should DECREASE as the policy exploits advantages.
+     #   train/reward     — mean total reward per rollout. Should INCREASE.
+     #   train/kl         — KL(policy || reference); the β penalty keeps it small.
+     #   rewards/<f>/mean — per-component reward (one per reward function).
+     #
+     # ``train/loss`` going to ~0 is fine *only* if ``train/reward`` rises in
+     # lockstep — it just means advantages got fully exploited. Loss collapsing
+     # without reward growth is reward hacking, broken parsing, or a saturated
+     # KL anchor. We surface both via _log_reward_summary at the end of training
+     # AND via _render_training_curves, which renders the curves to PNG.
      effective_batch = (
          config.per_device_train_batch_size * config.gradient_accumulation_steps
      )
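The comment above compresses the objective into one line. Written out, the quantity the commit describes as `train/loss` has the schematic form below, with Â the group-normalized advantage over the G rollouts of a prompt and π_ref the frozen reference policy. This is a reading aid for the curves, not a claim about TRL's exact token-level weighting (clipping and length normalization are omitted):

```latex
\mathcal{L}_{\mathrm{GRPO}}(\theta)
  = -\,\mathbb{E}\!\left[\hat{A}\,\log \pi_\theta(a \mid s)\right]
    + \beta\, D_{\mathrm{KL}}\!\left(\pi_\theta \,\|\, \pi_{\mathrm{ref}}\right),
\qquad
\hat{A}_i = \frac{r_i - \operatorname{mean}(r_1,\dots,r_G)}{\operatorname{std}(r_1,\dots,r_G)}
```

Reading the curves with this in mind: `train/reward` moving up means the raw rewards r_i are improving, `train/loss` moving down means the policy is putting more probability mass on the above-average completions, and `train/kl` creeping up is the price paid for drifting away from the reference.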
physix-live/pyproject.toml CHANGED
@@ -33,6 +33,11 @@ train = [
      "wandb>=0.16",
      "datasets>=3.0",
      "huggingface_hub>=0.24,<1.0",
+     # Used by physix.training.loop._render_training_curves to write
+     # loss / reward / per-component PNGs after GRPO training. Required so
+     # the run produces the repo-committable plots that the competition
+     # validator checks for.
+     "matplotlib>=3.7",
  ]
  demo = ["ollama>=0.4"]
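The README's deliverables table describes `physix-train/job_train.py` as a PEP 723 inline-deps UV script run via `hf jobs uv run`. That script is not part of this diff, but the inline-metadata format it refers to is a commented TOML block at the top of the file, and the `matplotlib>=3.7` pin added above would presumably need to be mirrored there for the job to render plots. A sketch of the block format (the dependency list is illustrative, not taken from the actual script):

```python
# Hypothetical PEP 723 header for a UV-run job script; the real dependency
# list in physix-train/job_train.py is not shown in this commit.
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "trl",
#     "unsloth",
#     "wandb>=0.16",
#     "matplotlib>=3.7",  # needed so _render_training_curves can emit PNGs
# ]
# ///
```

`uv run` (and, per the table, `hf jobs uv run`) resolves this block into an isolated environment before executing the script, so the job does not depend on the repo's `[train]` extra being installed on the worker.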