---
license: apache-2.0
base_model: LiquidAI/LFM2-350M
library_name: peft
pipeline_tag: text-generation
tags:
- physics
- next-frame-prediction
- lora
- sft
- trl
- unsloth
- icml-2026
---

# lfm2-physics

LoRA fine-tune of `LiquidAI/LFM2-350M` for 2D rigid-body physics next-frame prediction. Part of an ICML-2026 study comparing fine-tuned LMs vs. from-scratch GPTs on physics trajectory modelling.

## Adapter details

- **Base**: `LiquidAI/LFM2-350M`
- **Adapter type**: LoRA, r=32, alpha=64, dropout=0.0
- **Target modules**: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- **Trainer**: `SFTTrainer` (TRL) via Unsloth
- **Curriculum**: 5 stages of increasing scene complexity
- **Task**: autoregressive next-frame prediction over 200-frame rigid-body scenes

## Stages

- `stage0/` ... `stage4/` — checkpoints from each curriculum stage
- `final/` — final adapter after all stages

Each stage directory contains an Unsloth-saved adapter (`adapter_config.json`, `adapter_model.safetensors`, tokenizer files).

## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the final LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained("LiquidAI/LFM2-350M")
model = PeftModel.from_pretrained(base, "AlexWortega/lfm2-physics", subfolder="final")
tokenizer = AutoTokenizer.from_pretrained("AlexWortega/lfm2-physics", subfolder="final")
```

To load an intermediate curriculum checkpoint instead, point `subfolder` at a stage directory (e.g. `subfolder="stage2"`).

## Training data

Trained on ~900K scenes across 24 "seen" scenario types. See [physics-scenarios-packed](https://huggingface.co/datasets/AlexWortega/physics-scenarios-packed).

## Citation

ICML-2026 submission (in progress).
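## Reproducing the adapter configuration

For training your own variant with the same setup, a PEFT `LoraConfig` matching the hyperparameters listed under "Adapter details" would look like the sketch below; the authoritative values live in each stage's `adapter_config.json`.

```python
from peft import LoraConfig

# Sketch of the adapter configuration described in this card; check
# adapter_config.json in the repo for the exact saved values.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.0,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)
```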
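## Example: next-frame generation

This card does not document the frame serialization format, so the prompt below is a hypothetical placeholder; the real format is defined by the training data in [physics-scenarios-packed](https://huggingface.co/datasets/AlexWortega/physics-scenarios-packed). A minimal greedy-decoding sketch, assuming frames are serialized as plain text:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("LiquidAI/LFM2-350M")
model = PeftModel.from_pretrained(base, "AlexWortega/lfm2-physics", subfolder="final")
tokenizer = AutoTokenizer.from_pretrained("AlexWortega/lfm2-physics", subfolder="final")
model.eval()

# Hypothetical frame encoding; replace with the serialization used in
# the physics-scenarios-packed dataset.
prompt = "frame 0: ball x=0.50 y=0.80 vx=0.10 vy=-0.20\nframe 1:"

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Decode only the newly generated tokens (the predicted next frame).
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

For repeated inference, `model = model.merge_and_unload()` folds the LoRA weights into the base model and removes the PEFT wrapper overhead.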