PhysiX-Live
One-line pitch: an OpenEnv RL environment where a small language model iteratively
discovers equations of motion from trajectory data plus a one-sentence English hint —
verifier is scipy.integrate.odeint plus per-step R², no LLM-as-judge in the reward loop.
A submission for the OpenEnv hackathon (Apr 2026). The deliverables are: a clean OpenEnv-compatible env, a TRL + Unsloth + GRPO training pipeline targeting Qwen2.5 (1.5B / 3B profiles) with LoRA, and a React + TypeScript + Tailwind demo UI that animates trajectories side-by-side for the trained vs. untrained model.
Deliverables
| Deliverable | Where |
|---|---|
| Public HF Space (live demo) | https://huggingface.co/spaces/Pratyush-01/physix-live |
| Training driver script | physix-train/job_train.py — PEP 723 inline-deps UV script, runs end-to-end on hf jobs uv run |
| GRPO training loop | physix/training/loop.py — Unsloth + TRL GRPOTrainer |
| SFT warm-start | physix/training/sft.py |
| Trained adapters (Hub) | Pratyush-01/physix-3b-rl |
| Mid-run checkpoints | Pratyush-01/physix-3b-rl-ckpt |
| W&B project | https://wandb.ai/pratyush01/physix-live |
| Writeup | docs/writeup.md |
Training curves
Both curves are auto-generated at the end of every GRPO run by
`physix.training.loop._render_training_curves` and committed to the repo at
`docs/plots/`. The interpretation rules:
- `train/loss` is the GRPO surrogate (advantage-weighted log-prob + β·KL). It should trend down as advantages get exploited. (Per TRL docs, this is the full surrogate, not just the KL term.)
- `train/reward` is the mean total reward across rollouts. It should trend up. This is the headline curve.
- The per-component breakdown splits `train/reward` into the 5 reward functions (`match`, `match_dense`, `correctness`, `simplicity`, `format`). Use it to spot reward hacking — e.g. `simplicity` rising while `match` regresses.
Repository layout
physix-live/
├── physix/ # Python package
│ ├── __init__.py # narrow public API
│ ├── models.py # Pydantic Action / Observation / State
│ ├── client.py # OpenEnv WebSocket client subclass
│ ├── systems/ # 8 physical systems in 3 tiers
│ │ ├── base.py # PhysicalSystem ABC + TrajectoryData
│ │ ├── tier1.py # FreeFall, FreeFallWithDrag, SimplePendulum
│ │ ├── tier2.py # DampedPendulum, SpringMass, DampedSpring
│ │ ├── tier3.py # ProjectileWithDrag, ChargedInBField (held out)
│ │ └── registry.py # system_id -> factory mapping
│ ├── verifier/ # scoring pipeline
│ │ ├── parser.py # SymPy whitelisted parser
│ │ ├── simulator.py # scipy.odeint forward sim
│ │ ├── metrics.py # per-step R²
│ │ ├── mismatch.py # English residual summary
│ │ └── reward.py # 4-component reward composition
│ ├── server/ # FastAPI + OpenEnv
│ │ ├── environment.py # PhysiXEnvironment subclass
│ │ ├── interactive.py # session-based REST router (`/interactive/*`)
│ │ └── app.py # FastAPI factory + CLI entry point
│ └── training/ # GRPO training pipeline
│ ├── prompt.py # observation -> prompt, completion -> action
│ ├── scorer.py # single-completion scorer (training + eval)
│ ├── reward_fns.py # TRL-compatible reward callables
│ ├── dataset.py # build training / eval datasets
│ └── loop.py # Unsloth + TRL GRPO loop (cloud A100)
├── frontend/ # React + TS + Tailwind demo UI
│ └── src/
│ ├── App.tsx # tabs: "Run with LLM" + "Manual"
│ ├── components/ # RunWithLlmPane, InteractivePane, …
│ ├── hooks/ # useLlmEpisodeRunner, useInteractiveSession
│ ├── lib/ # interactiveClient, trajectory, format
│ └── types/physix.ts
└── tests/ # full pipeline coverage incl. /interactive/*
What the env does (one episode end-to-end)
sequenceDiagram
participant Agent
participant Env as PhysiXEnvironment
participant Sim as scipy.odeint
participant Verifier
Env->>Agent: reset(): observed trajectory + hint
loop up to 8 turns
Agent->>Env: step(SymPy eqn + params + rationale)
Env->>Sim: simulate from hypothesis
Sim-->>Verifier: predicted trajectory
Verifier-->>Env: r_match + r_progress + r_simplicity + r_format
Env->>Agent: obs (mismatch summary, history) + reward
alt r_match > 0.93 or budget exhausted
Env-->>Agent: done=True
end
end
Action space: the agent emits structured text in a constrained SymPy grammar
(`d2y/dt2 = -9.81 + 0.05 * vy**2`). Allowed operators: `+ - * / **`. Allowed
functions: `sin cos tan exp log sqrt abs`. Parse failures score `r_format = 0`.
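For intuition, a minimal sketch of a whitelisted parse in the spirit of `physix/verifier/parser.py` (the function name, return convention, and error policy here are assumptions, not the repo's actual API):

```python
import sympy
from sympy.parsing.sympy_parser import parse_expr

# Function whitelist from the grammar above. `sqrt` lowers to Pow (i.e. `**`)
# and `abs` maps to sympy.Abs, so neither needs special casing beyond this set.
ALLOWED_FUNCS = {sympy.sin, sympy.cos, sympy.tan, sympy.exp, sympy.log, sympy.Abs}

def parse_rhs(text: str):
    """Parse the RHS of a hypothesis like 'd2y/dt2 = -9.81 + 0.05 * vy**2'.

    Returns a SymPy expression, or None on any violation (scored r_format = 0).
    """
    try:
        expr = parse_expr(text, evaluate=False)
    except Exception:
        return None
    # Reject any function application outside the whitelist.
    for f in expr.atoms(sympy.Function):
        if f.func not in ALLOWED_FUNCS:
            return None
    return expr
```

A production parser also has to guard `parse_expr` itself (it evaluates Python under the hood), which is exactly why the real one is a dedicated module.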
Reward: four independent components (each in [0, 1]), weighted into a total.
| Component | Weight | What it measures |
|---|---|---|
| `r_match` | 0.5 | Per-step R² between observed and predicted trajectory |
| `r_progress` | 0.2 | Improvement over the prior turn (dense per-turn shaping) |
| `r_simplicity` | 0.2 | 1 − normalised operator count (Occam's razor) |
| `r_format` | 0.1 | Binary: SymPy parses + dimensional consistency |
The reward is fully verifiable — the env never calls an LLM-as-judge.
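As a sketch, the composition is just the weighted sum of the table above (the real code lives in `physix/verifier/reward.py`; this only mirrors the published weights):

```python
def total_reward(r_match: float, r_progress: float,
                 r_simplicity: float, r_format: float) -> float:
    """Weighted sum of the four [0, 1] components; the weights sum to 1.0,
    so the total also lands in [0, 1]."""
    return 0.5 * r_match + 0.2 * r_progress + 0.2 * r_simplicity + 0.1 * r_format
```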
Quick start
1. Install (Python)
Requires Python 3.10+. Inside a fresh conda env or venv:
pip install -e . # base deps (env server, verifier, client)
pip install -e ".[dev]" # + pytest, ruff
pip install -e ".[demo]" # + ollama (live LLM episodes via /interactive/llm-step)
pip install -e ".[train]" # + torch, transformers, trl, unsloth, wandb
Notes:
- `[train]` requires CUDA. Install it on the cloud A100 box, not on your laptop.
- `[demo]` adds the `ollama` Python client used by the server when the UI's "Run with LLM" pane drives an episode. Start `ollama serve` and pull the base model once with `ollama pull qwen2.5:1.5b-instruct`.
- The repo ships a `.vscode/settings.json` that pins the workspace's Python interpreter to `~/miniconda3/envs/openenv_run/bin/python`. If your venv lives somewhere else and your IDE shows "import could not be resolved", update that path or run Python: Select Interpreter from the command palette.
2. Run the test suite
pytest tests/ # 30 tests, ~3 seconds
3. Boot the env server locally
python -m physix.server.app --host 127.0.0.1 --port 8000
# or
uvicorn physix.server.app:app --host 127.0.0.1 --port 8000
The server exposes:
- OpenEnv endpoints: `/reset`, `/step`, `/state`, `/schema`, `/health`. These are stateless — each request gets a fresh env. Fine for headless agents.
- A stateful WebSocket at `/ws` (used by the Python `PhysiXEnv` client).
- A bespoke session-based REST router at `/interactive/*` (see `physix/server/interactive.py`) used by the demo UI. It maintains in-process sessions so a browser can drive a multi-turn episode by POSTing equations.
CORS is enabled out of the box for `http://localhost:5173` (the Vite dev
server). Override with `PHYSIX_CORS_ORIGINS=https://your-host.example` (or
`*` for any origin, dev only).
For sustained Python-side interaction use the WebSocket client:
import asyncio
from physix import PhysiXEnv, PhysiXAction

async def main():
    # PhysiXEnv talks to the stateful /ws channel, so turns share one episode.
    async with PhysiXEnv(base_url="http://127.0.0.1:8000") as env:
        result = await env.reset(system_id="free_fall_drag", seed=42)
        result = await env.step(
            PhysiXAction(equation="d2y/dt2 = -9.81 + 0.05 * vy**2")
        )
        print(result.observation.reward_breakdown)

asyncio.run(main())
4. Run the demo UI
cd frontend
pnpm install
pnpm dev # http://localhost:5173
The UI has two tabs, both backed by the same live env server:
- Run with LLM — pick a system + an Ollama model tag, click ▶ Run, and watch the model propose ODEs turn-by-turn. Each call hits `POST /interactive/sessions/:id/llm-step`, which builds the env's prompt, calls the local Ollama daemon, parses the reply, scores it via the verifier, and streams the resulting turn back to the page (sketched below). Pause anytime.
- Manual — submit equations yourself. No LLM in the loop. Same scoring pipeline, useful for building intuition for the verifier.
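A rough sketch of the per-turn flow behind `llm-step`, assuming the `ollama` Python client; the session helpers (`build_prompt`, `parse_action`, `score`) are hypothetical stand-ins for the real handlers in `physix/server/interactive.py`:

```python
import ollama

def llm_step(session, model_tag: str):
    # Hypothetical helpers; the real ones live in physix/server/interactive.py.
    prompt = session.build_prompt()  # observation + turn history -> prompt text
    reply = ollama.chat(
        model=model_tag,
        messages=[{"role": "user", "content": prompt}],
    )
    action = session.parse_action(reply["message"]["content"])  # text -> action
    return session.score(action)  # verifier scoring, same as the Manual tab
```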
The UI expects the env server to be reachable on the URL in
`VITE_PHYSIX_API_URL` (default `http://localhost:8000`). For the LLM tab,
you also need a local Ollama daemon (`ollama serve`) with the model tag
pulled in advance:
ollama pull qwen2.5:1.5b-instruct
# or, after exporting your merged adapter to GGUF and building a Modelfile:
ollama create physix-trained:latest -f Modelfile
There are no pre-recorded episodes: every turn shown in the UI is a real LLM call against the live env.
5. Train (cloud A100)
WANDB_PROJECT=physix-live python -m physix.training.loop \
--model Qwen/Qwen2.5-1.5B-Instruct \
--output-dir runs/physix-1.5b-rl \
--num-steps 300
# Run an ablation:
python -m physix.training.loop --num-steps 300 --ablation no_progress
After training, push the merged adapter to the Hub. By default the loop
saves a merged_16bit artifact (LoRA merged into the base, written as a
standard HF checkpoint) so it can be loaded without Unsloth and exported to
GGUF for Ollama:
python -m physix.training.loop \
--num-steps 300 \
--save-method merged_16bit \
--push-to-hub --hub-repo-id you/physix-1.5b-rl
Pass `--save-method lora` if you want the small adapter-only artifact
instead. The training loop calls `unsloth.PatchFastRL("GRPO", FastLanguageModel)`
before importing `GRPOTrainer` — required for Unsloth's GRPO kernels to be
swapped in.
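Because the patch has to land before TRL is imported, the top of the loop module is order-sensitive. A minimal sketch of that ordering (assuming Unsloth's documented `PatchFastRL` entry point):

```python
# The patch must run before anything from `trl` is imported, otherwise the
# stock GRPOTrainer is used instead of Unsloth's fused kernels.
from unsloth import FastLanguageModel, PatchFastRL

PatchFastRL("GRPO", FastLanguageModel)

from trl import GRPOConfig, GRPOTrainer  # noqa: E402 (import-after-patch is intentional)
```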
Adding a new physical system
The framework generalises beyond the 8 shipped systems. Adding a new one is about 50 lines:
# physix/systems/tier2.py (or your own module)
import numpy as np
from physix.systems.base import PhysicalSystem, SystemTier
class CoupledOscillators(PhysicalSystem):
system_id: str = "coupled_oscillators"
tier: SystemTier = SystemTier.TIER_2
state_variables: tuple[str, ...] = ("x1", "vx1", "x2", "vx2")
hint_template: str = "Two masses coupled by a spring; observe both positions."
def sample_parameters(self, rng):
return {"k": float(rng.uniform(2, 10)), "k_c": float(rng.uniform(0.5, 2))}
def sample_initial_conditions(self, rng):
return {"x1": float(rng.uniform(0.5, 1)), "vx1": 0.0, "x2": 0.0, "vx2": 0.0}
    def rhs(self, t, state, params):
        # d/dt of (x1, vx1, x2, vx2), in the order declared in state_variables.
        x1, vx1, x2, vx2 = state
        return np.array([
            vx1, -params["k"] * x1 + params["k_c"] * (x2 - x1),
            vx2, -params["k"] * x2 + params["k_c"] * (x1 - x2),
        ])
def ground_truth_equation(self) -> str:
return "d2x1/dt2 = -k*x1 + k_c*(x2-x1); d2x2/dt2 = -k*x2 + k_c*(x1-x2)"
`PhysicalSystem` is a Pydantic model with an `ABCMeta` mixin — subclasses
declare overridden fields as plain class-level annotations and Pydantic
treats them as field overrides. No `@dataclass` decorator needed.
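For intuition, a minimal sketch of that base-class pattern (not the repo's actual definition; it assumes Pydantic v2, whose model metaclass already derives from `ABCMeta`, so `@abstractmethod` is enforced at instantiation):

```python
from abc import abstractmethod
from pydantic import BaseModel

class SystemSketch(BaseModel):  # hypothetical stand-in for PhysicalSystem
    system_id: str = "base"

    @abstractmethod
    def rhs(self, t, state, params): ...

class FreeFallSketch(SystemSketch):
    system_id: str = "free_fall"  # plain annotation = Pydantic field override

    def rhs(self, t, state, params):
        y, vy = state
        return [vy, -9.81]  # (dy/dt, dvy/dt)

FreeFallSketch()   # fine
# SystemSketch()   # TypeError: can't instantiate abstract class
```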
Then register it in physix/systems/registry.py:
SYSTEM_REGISTRY["coupled_oscillators"] = CoupledOscillators
That's it — the env, parser, simulator, scorer, and training loop all pick it up automatically.
Themes (OpenEnv hackathon rubric)
- Primary: World-Modeling — the agent literally builds an internal model of physical dynamics from data + context, refines it, and is scored against ground truth.
- Primary: Long-Horizon — episodes are 5-8 turns of stateful refinement; earlier hypotheses condition later ones via the prompt history.
- Secondary: Self-Improvement — curriculum from 1-D undamped (Tier 1) through 1-D damped (Tier 2) to 2-D coupled (Tier 3, held out).
Honest framing
We do not claim:
- The env discovers genuinely new physics.
- A 1.5B model beats GPT-4o on equation discovery.
- The model learns physics from scratch.
We do claim:
- The same 1.5B converges in fewer turns after RL training than before.
- The trained model generalises to held-out 2-D systems (Tier 3).
- The trained model uses NL hints meaningfully (ablate the hint, performance drops).
This calibrated framing is part of the storytelling axis (30%) — judges trust self-comparison numbers more than claims to beat frontier models.
License
MIT.


