How to use from Unsloth Studio
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for AlexWortega/lfm2-physics to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for AlexWortega/lfm2-physics to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for AlexWortega/lfm2-physics to start chatting
Load model with FastModel
pip install unsloth
from unsloth import FastModel
model, tokenizer = FastModel.from_pretrained(
    model_name="AlexWortega/lfm2-physics",
    max_seq_length=2048,
)
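Once the model is loaded, generation works as with any causal LM. A minimal sketch follows; the prompt string is a placeholder, since the frame serialization format used in training is not documented in this card:

import torch

# Placeholder prompt: the real frame-encoding scheme is an assumption here.
prompt = "frame 0: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))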
Quick Links

lfm2-physics

LoRA fine-tune of LiquidAI/LFM2-350M for 2D rigid body physics next-frame prediction. Part of an ICML-2026 study comparing fine-tuned LMs vs. from-scratch GPTs on physics trajectory modelling.

Adapter details

  • Base: LiquidAI/LFM2-350M
  • Adapter type: LoRA, r=32, alpha=64, dropout=0.0 (see the sketch after this list)
  • Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
  • Trainer: SFTTrainer (TRL) via Unsloth
  • Curriculum: 5 stages of increasing scene complexity
  • Task: autoregressive next-frame prediction over 200-frame rigid-body scenes
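
For reference, a minimal sketch of how an equivalent adapter could be attached with Unsloth, using the hyperparameters listed above. The actual training script is not part of this card, and it is an assumption that FastModel.get_peft_model accepts these standard LoRA kwargs:

from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="LiquidAI/LFM2-350M",
    max_seq_length=2048,
)
# LoRA hyperparameters taken from the adapter details above.
model = FastModel.get_peft_model(
    model,
    r=32,
    lora_alpha=64,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

The wrapped model would then be passed to TRL's SFTTrainer, as noted above.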

Stages

  • stage0/ ... stage4/ – checkpoints from each curriculum stage
  • final/ – final adapter after all stages

Each stage directory contains an Unsloth-saved adapter (adapter_config.json, adapter_model.safetensors, tokenizer files).
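
Intermediate checkpoints load the same way as the final adapter shown under Usage below; only the subfolder changes. A sketch for stage0 (any of stage0 through stage4 works identically):

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("LiquidAI/LFM2-350M")
# Swap "stage0" for any other stage directory to inspect that checkpoint.
model = PeftModel.from_pretrained(base, "AlexWortega/lfm2-physics", subfolder="stage0")
tokenizer = AutoTokenizer.from_pretrained("AlexWortega/lfm2-physics", subfolder="stage0")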

Usage

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("LiquidAI/LFM2-350M")
model = PeftModel.from_pretrained(base, "AlexWortega/lfm2-physics", subfolder="final")
tokenizer = AutoTokenizer.from_pretrained("AlexWortega/lfm2-physics", subfolder="final")
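
To deploy without a runtime PEFT dependency, the LoRA weights can be folded into the base model with PEFT's standard merge_and_unload. Continuing from the snippet above (the output directory name is arbitrary):

# Merge the adapter into the base weights and save a standalone model.
merged = model.merge_and_unload()
merged.save_pretrained("lfm2-physics-merged")
tokenizer.save_pretrained("lfm2-physics-merged")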

Training data

Trained on ~900K scenes across 24 "seen" scenario types. See physics-scenarios-packed.

Citation

ICML-2026 submission (in progress).
