# PhysiX-Live
**One-line pitch:** an OpenEnv RL environment where a small language model iteratively
discovers equations of motion from trajectory data plus a one-sentence English hint.
The verifier is `scipy.integrate.odeint` plus per-step R²; no LLM-as-judge sits in the reward loop.
A submission for the **OpenEnv hackathon** (Apr 2026). The deliverables are: a clean
OpenEnv-compatible env, a TRL+Unsloth+GRPO training pipeline targeting Qwen2.5 (1.5B / 3B
profiles) with LoRA, a React + TypeScript + Tailwind demo UI that animates trajectories
side-by-side for the trained vs. untrained model, and a recording script for pre-baked
demo episodes.
---
## Deliverables
| Deliverable | Where |
| ---------------------------- | ---------------------------------------------------------------------------------- |
| **Public HF Space (live demo)** | https://huggingface.co/spaces/Pratyush-01/physix-live |
| **Training driver script** | [`physix-train/job_train.py`](../physix-train/job_train.py) — PEP 723 inline-deps UV script, runs end-to-end on `hf jobs uv run` |
| **GRPO training loop** | [`physix/training/loop.py`](physix/training/loop.py) — Unsloth + TRL GRPOTrainer |
| **SFT warm-start** | [`physix/training/sft.py`](physix/training/sft.py) |
| **Trained adapters (Hub)** | [`Pratyush-01/physix-3b-rl`](https://huggingface.co/Pratyush-01/physix-3b-rl) |
| **Mid-run checkpoints** | [`Pratyush-01/physix-3b-rl-ckpt`](https://huggingface.co/Pratyush-01/physix-3b-rl-ckpt) |
| **W&B project** | https://wandb.ai/pratyush01/physix-live |
| **Writeup** | [`docs/writeup.md`](docs/writeup.md) |
## Training curves
All plots are auto-generated at the end of every GRPO run by
`physix.training.loop._render_training_curves` and committed to the repo under
`docs/plots/`. Interpretation rules:
- **`train/loss`** is the GRPO surrogate (advantage-weighted log-prob + β·KL).
Should trend **down** as advantages get exploited. (Per
[TRL docs](https://huggingface.co/docs/trl/main/logging) — this is the full
surrogate, not just the KL term.)
- **`train/reward`** is mean total reward across rollouts. Should trend **up**.
This is the headline curve.
- **Per-component reward** breaks `train/reward` into the 5 reward functions
(`match`, `match_dense`, `correctness`, `simplicity`, `format`). Used to spot
reward hacking — e.g. `simplicity` rising while `match` regresses.
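
In symbols, the surrogate described above is roughly the following (a sketch; TRL's actual GRPO objective also includes PPO-style ratio clipping):

$$
\mathcal{L}(\theta) \;\approx\; -\,\frac{1}{G}\sum_{i=1}^{G} \hat{A}_i \,\log \pi_\theta(o_i \mid q) \;+\; \beta\,\mathrm{KL}\!\left(\pi_\theta \,\Vert\, \pi_{\mathrm{ref}}\right),
\qquad
\hat{A}_i = \frac{r_i - \operatorname{mean}(r_{1..G})}{\operatorname{std}(r_{1..G})}
$$

where $G$ is the rollout group size and $\hat{A}_i$ the group-normalised advantage.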
| Loss (down is good) | Reward (up is good) |
| --- | --- |
|  |  |
| Per-component reward (anti-hack diagnostic) |
| --- |
|  |
---
## Repository layout
```
physix-live/
├── physix/                  # Python package
│   ├── __init__.py          # narrow public API
│   ├── models.py            # Pydantic Action / Observation / State
│   ├── client.py            # OpenEnv WebSocket client subclass
│   ├── systems/             # 8 physical systems in 3 tiers
│   │   ├── base.py          # PhysicalSystem ABC + TrajectoryData
│   │   ├── tier1.py         # FreeFall, FreeFallWithDrag, SimplePendulum
│   │   ├── tier2.py         # DampedPendulum, SpringMass, DampedSpring
│   │   ├── tier3.py         # ProjectileWithDrag, ChargedInBField (held out)
│   │   └── registry.py      # system_id -> factory mapping
│   ├── verifier/            # scoring pipeline
│   │   ├── parser.py        # SymPy whitelisted parser
│   │   ├── simulator.py     # scipy.odeint forward sim
│   │   ├── metrics.py       # per-step R²
│   │   ├── mismatch.py      # English residual summary
│   │   └── reward.py        # 4-component reward composition
│   ├── server/              # FastAPI + OpenEnv
│   │   ├── environment.py   # PhysiXEnvironment subclass
│   │   ├── interactive.py   # session-based REST router (`/interactive/*`)
│   │   └── app.py           # FastAPI factory + CLI entry point
│   └── training/            # GRPO training pipeline
│       ├── prompt.py        # observation -> prompt, completion -> action
│       ├── scorer.py        # single-completion scorer (training + eval)
│       ├── reward_fns.py    # TRL-compatible reward callables
│       ├── dataset.py       # build training / eval datasets
│       └── loop.py          # Unsloth + TRL GRPO loop (cloud A100)
├── frontend/                # React + TS + Tailwind demo UI
│   └── src/
│       ├── App.tsx          # tabs: "Run with LLM" + "Manual"
│       ├── components/      # RunWithLlmPane, InteractivePane, …
│       ├── hooks/           # useLlmEpisodeRunner, useInteractiveSession
│       ├── lib/             # interactiveClient, trajectory, format
│       └── types/physix.ts
└── tests/                   # full pipeline coverage incl. /interactive/*
```
---
## What the env does (one episode end-to-end)
```mermaid
sequenceDiagram
    participant Agent
    participant Env as PhysiXEnvironment
    participant Sim as scipy.odeint
    participant Verifier
    Env->>Agent: reset(): observed trajectory + hint
    loop up to 8 turns
        Agent->>Env: step(SymPy eqn + params + rationale)
        Env->>Sim: simulate from hypothesis
        Sim-->>Verifier: predicted trajectory
        Verifier-->>Env: r_match + r_progress + r_simplicity + r_format
        Env->>Agent: obs (mismatch summary, history) + reward
        alt r_match > 0.93 or budget exhausted
            Env-->>Agent: done=True
        end
    end
```
**Action space:** the agent emits structured text in a constrained SymPy grammar
(`d2y/dt2 = -9.81 + 0.05 * vy**2`). Allowed operators: `+ - * / **`. Allowed
functions: `sin cos tan exp log sqrt abs`. Parse failures score `r_format = 0`.
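
As a minimal sketch, such a whitelisted parser can be built on SymPy. Everything below is illustrative: `parse_rhs` and the exact rejection rules are assumptions, not the actual `physix/verifier/parser.py` API.

```python
# Sketch of a whitelisted SymPy parser: reject unknown functions and symbols.
# NB: sympify evaluates its input; a production parser would prefer
# sympy.parsing.sympy_parser.parse_expr with a restricted local_dict.
import sympy as sp

ALLOWED_FUNCS = {sp.sin, sp.cos, sp.tan, sp.exp, sp.log, sp.sqrt, sp.Abs}

def parse_rhs(text: str, variables: set[str]):
    """Parse the RHS of an equation like `d2y/dt2 = <text>`; None on violation."""
    try:
        expr = sp.sympify(text, evaluate=False)
    except (sp.SympifyError, SyntaxError, TypeError):
        return None
    # Only whitelisted functions (sin, cos, ..., Abs) may appear.
    if any(f.func not in ALLOWED_FUNCS for f in expr.atoms(sp.Function)):
        return None
    # Every free symbol must be a known state variable of the system.
    if not {s.name for s in expr.free_symbols} <= variables:
        return None
    return expr
```

A `None` return maps to `r_format = 0`; a non-`None` expression flows on to the simulator.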
**Reward:** four independent components (each in `[0, 1]`), weighted into a total.
| Component | Weight | What it measures |
|---|---:|---|
| `r_match` | 0.5 | Per-step R² between observed and predicted trajectory |
| `r_progress` | 0.2 | Improvement over prior turn (dense per-turn shaping) |
| `r_simplicity` | 0.2 | 1 − normalised operator count (Occam's razor) |
| `r_format` | 0.1 | Binary: SymPy parses + dimensional consistency |
The reward is fully verifiable — the env never calls an LLM-as-judge.
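
As a sketch, the composition is a clipped weighted sum with the weights from the table. `per_step_r2`, `r_simplicity`, and `total_reward` here are illustrative stand-ins, not the actual `physix/verifier/metrics.py` / `reward.py` functions.

```python
# Sketch of the reward composition: each component clipped to [0, 1], then
# combined with the weights from the table above. Names are illustrative.
import numpy as np
import sympy as sp

WEIGHTS = {"r_match": 0.5, "r_progress": 0.2, "r_simplicity": 0.2, "r_format": 0.1}

def per_step_r2(observed: np.ndarray, predicted: np.ndarray) -> float:
    """Coefficient of determination between trajectories, clipped to [0, 1]."""
    ss_res = float(np.sum((observed - predicted) ** 2))
    ss_tot = float(np.sum((observed - observed.mean(axis=0)) ** 2))
    return float(np.clip(1.0 - ss_res / ss_tot, 0.0, 1.0))

def r_simplicity(expr: sp.Expr, max_ops: int = 20) -> float:
    """1 - normalised operator count (the Occam's-razor term)."""
    return max(0.0, 1.0 - sp.count_ops(expr) / max_ops)

def total_reward(components: dict[str, float]) -> float:
    """Weighted sum of the four components, each clipped to [0, 1]."""
    return sum(w * float(np.clip(components[k], 0.0, 1.0)) for k, w in WEIGHTS.items())
```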
---
## Quick start
### 1. Install (Python)
Requires Python 3.10+. Inside a fresh conda env or venv:
```bash
pip install -e . # base deps (env server, verifier, client)
pip install -e ".[dev]" # + pytest, ruff
pip install -e ".[demo]" # + ollama (live LLM episodes via /interactive/llm-step)
pip install -e ".[train]" # + torch, transformers, trl, unsloth, wandb
```
Notes:
- `[train]` requires CUDA. Install it on the cloud A100 box, not on your laptop.
- `[demo]` adds the `ollama` Python client used by the server when the UI's
"Run with LLM" pane drives an episode. Start `ollama serve` and pull the
base model once with `ollama pull qwen2.5:1.5b-instruct`.
- The repo ships a `.vscode/settings.json` that pins the workspace's Python
interpreter to `~/miniconda3/envs/openenv_run/bin/python`. If your venv
lives somewhere else and your IDE shows "import could not be resolved",
update that path or run **Python: Select Interpreter** from the command
palette.
### 2. Run the test suite
```bash
pytest tests/ # 30 tests, ~3 seconds
```
### 3. Boot the env server locally
```bash
python -m physix.server.app --host 127.0.0.1 --port 8000
# or
uvicorn physix.server.app:app --host 127.0.0.1 --port 8000
```
The server exposes:
- OpenEnv endpoints: `/reset`, `/step`, `/state`, `/schema`, `/health`. These
are stateless — each request gets a fresh env. Fine for headless agents.
- A stateful WebSocket at `/ws` (used by the Python `PhysiXEnv` client).
- A bespoke session-based REST router at `/interactive/*` (see
`physix/server/interactive.py`) used by the demo UI. It maintains
in-process sessions so a browser can drive a multi-turn episode by
POSTing equations.
CORS is enabled out of the box for `http://localhost:5173` (the Vite dev
server). Override with `PHYSIX_CORS_ORIGINS=https://your-host.example` (or
`*` for any origin, dev only).
For sustained Python-side interaction use the WebSocket client:
```python
import asyncio

from physix import PhysiXEnv, PhysiXAction

async def main():
    async with PhysiXEnv(base_url="http://127.0.0.1:8000") as env:
        result = await env.reset(system_id="free_fall_drag", seed=42)
        result = await env.step(
            PhysiXAction(equation="d2y/dt2 = -9.81 + 0.05 * vy**2")
        )
        print(result.observation.reward_breakdown)

asyncio.run(main())
```
### 4. Run the demo UI
```bash
cd frontend
pnpm install
pnpm dev # http://localhost:5173
```
The UI has two tabs, both backed by the same live env server:
- **Run with LLM** — pick a system + an Ollama model tag, click ▶ Run, and
watch the model propose ODEs turn-by-turn. Each call hits
`POST /interactive/sessions/:id/llm-step`, which builds the env's prompt,
calls the local Ollama daemon, parses the reply, scores it via the
verifier, and streams the resulting turn back to the page. Pause anytime.
- **Manual** — submit equations yourself. No LLM in the loop. Same scoring
pipeline, useful for building intuition for the verifier.
The UI expects the env server to be reachable on the URL in
`VITE_PHYSIX_API_URL` (default `http://localhost:8000`). For the LLM tab,
you also need a local Ollama daemon (`ollama serve`) with the model tag
pulled in advance:
```bash
ollama pull qwen2.5:1.5b-instruct
# or, after exporting your merged adapter to GGUF and building a Modelfile:
ollama create physix-trained:latest -f Modelfile
```
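
For reference, the Modelfile in that last command can be as small as the following sketch. The GGUF filename is illustrative (produced by, e.g., llama.cpp's `convert_hf_to_gguf.py` run on the merged checkpoint):

```
FROM ./physix-1.5b-rl-merged.gguf
PARAMETER temperature 0.7
```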
There are no pre-recorded episodes to regenerate. Every turn shown in the
UI is a real LLM call against the live env.
### 5. Train (cloud A100)
```bash
WANDB_PROJECT=physix-live python -m physix.training.loop \
--model Qwen/Qwen2.5-1.5B-Instruct \
--output-dir runs/physix-1.5b-rl \
--num-steps 300
# Run an ablation:
python -m physix.training.loop --num-steps 300 --ablation no_progress
```
After training, push the merged adapter to the Hub. By default the loop
saves a `merged_16bit` artifact (LoRA merged into the base, written as a
standard HF checkpoint) so it can be loaded without Unsloth and exported to
GGUF for Ollama:
```bash
python -m physix.training.loop \
--num-steps 300 \
--save-method merged_16bit \
--push-to-hub --hub-repo-id you/physix-1.5b-rl
```
Pass `--save-method lora` if you want the small adapter-only artifact
instead. The training loop calls `unsloth.PatchFastRL("GRPO", FastLanguageModel)`
before importing `GRPOTrainer` — required for Unsloth's GRPO kernels to be
swapped in.
---
## Adding a new physical system
The framework generalises beyond the 8 shipped systems. Adding a new one is
about 50 lines:
```python
# physix/systems/tier2.py (or your own module)
import numpy as np

from physix.systems.base import PhysicalSystem, SystemTier

class CoupledOscillators(PhysicalSystem):
    system_id: str = "coupled_oscillators"
    tier: SystemTier = SystemTier.TIER_2
    state_variables: tuple[str, ...] = ("x1", "vx1", "x2", "vx2")
    hint_template: str = "Two masses coupled by a spring; observe both positions."

    def sample_parameters(self, rng):
        return {"k": float(rng.uniform(2, 10)), "k_c": float(rng.uniform(0.5, 2))}

    def sample_initial_conditions(self, rng):
        return {"x1": float(rng.uniform(0.5, 1)), "vx1": 0.0, "x2": 0.0, "vx2": 0.0}

    def rhs(self, t, state, params):
        x1, vx1, x2, vx2 = state
        return np.array([
            vx1, -params["k"] * x1 + params["k_c"] * (x2 - x1),
            vx2, -params["k"] * x2 + params["k_c"] * (x1 - x2),
        ])

    def ground_truth_equation(self) -> str:
        return "d2x1/dt2 = -k*x1 + k_c*(x2-x1); d2x2/dt2 = -k*x2 + k_c*(x1-x2)"
```
`PhysicalSystem` is a Pydantic model with an `ABCMeta` mixin — subclasses
declare overridden fields as plain class-level annotations and pydantic
treats them as field overrides. No `@dataclass` decorator needed.
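
A minimal sketch of that base-class pattern (illustrative; the field set is trimmed relative to the real `physix/systems/base.py`):

```python
# Pydantic BaseModel + ABC compose because Pydantic's ModelMetaclass derives
# from ABCMeta; subclasses override fields via plain class-level annotations.
import abc

from pydantic import BaseModel

class PhysicalSystem(BaseModel, abc.ABC):
    system_id: str
    state_variables: tuple[str, ...]

    @abc.abstractmethod
    def rhs(self, t, state, params):
        """Return d(state)/dt for the ODE integrator."""

class FreeFall(PhysicalSystem):
    system_id: str = "free_fall"
    state_variables: tuple[str, ...] = ("y", "vy")

    def rhs(self, t, state, params):
        y, vy = state
        return [vy, -9.81]
```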
Then register it in `physix/systems/registry.py`:
```python
SYSTEM_REGISTRY["coupled_oscillators"] = CoupledOscillators
```
That's it — the env, parser, simulator, scorer, and training loop all pick it
up automatically.
---
## Themes (OpenEnv hackathon rubric)
- **Primary: World-Modeling** — the agent literally builds an internal model
of physical dynamics from data + context, refines it, and is scored against
ground truth.
- **Primary: Long-Horizon** — episodes are 5-8 turns of stateful refinement;
earlier hypotheses condition later ones via the prompt history.
- **Secondary: Self-Improvement** — curriculum from 1-D undamped (Tier 1)
through 1-D damped (Tier 2) to 2-D coupled (Tier 3, held out).
---
## Honest framing
We do **not** claim:
- The env discovers genuinely new physics.
- A 1.5B model beats GPT-4o on equation discovery.
- The model learns physics from scratch.
We **do** claim:
- The same 1.5B converges in fewer turns *after* RL training than *before*.
- The trained model generalises to held-out 2-D systems (Tier 3).
- The trained model uses NL hints meaningfully (ablate the hint, performance drops).
This calibrated framing is part of the storytelling axis (30%) — judges trust
self-comparison numbers more than claims to beat frontier models.
---
## License
MIT.