---
license: apache-2.0
base_model: LiquidAI/LFM2-350M
library_name: peft
pipeline_tag: text-generation
tags:
- physics
- next-frame-prediction
- lora
- sft
- trl
- unsloth
- icml-2026
---

# lfm2-physics

LoRA fine-tune of `LiquidAI/LFM2-350M` for next-frame prediction on 2D rigid-body physics scenes. Part of an ICML-2026 study comparing fine-tuned language models with from-scratch GPTs on physics trajectory modelling.

## Adapter details

- **Base**: `LiquidAI/LFM2-350M`
- **Adapter type**: LoRA, r=32, alpha=64, dropout=0.0 (see the config sketch after this list)
- **Target modules**: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- **Trainer**: `SFTTrainer` (TRL) via Unsloth
- **Curriculum**: 5 stages of increasing scene complexity
- **Task**: autoregressive next-frame prediction over 200-frame rigid-body scenes
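
For reference, the settings above correspond roughly to the following `peft` configuration (a sketch; the authoritative values are recorded in each stage's `adapter_config.json`):

```python
from peft import LoraConfig

# Reconstruction of the adapter configuration described above; treat the
# adapter_config.json files in this repo as the source of truth.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.0,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)
```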

## Stages

- `stage0/` ... `stage4/` — checkpoints from each curriculum stage
- `final/` — final adapter after all stages

Each stage directory contains an Unsloth-saved adapter (`adapter_config.json`, `adapter_model.safetensors`, and tokenizer files).
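
The layout can be inspected without downloading anything, e.g. via `huggingface_hub` (a short sketch):

```python
from huggingface_hub import list_repo_files

# Top-level directories should include stage0 ... stage4 and final.
files = list_repo_files("AlexWortega/lfm2-physics")
print(sorted({path.split("/")[0] for path in files if "/" in path}))
```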

## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter from the final/ subfolder.
base = AutoModelForCausalLM.from_pretrained("LiquidAI/LFM2-350M")
model = PeftModel.from_pretrained(base, "AlexWortega/lfm2-physics", subfolder="final")
tokenizer = AutoTokenizer.from_pretrained("AlexWortega/lfm2-physics", subfolder="final")
```
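
To load an intermediate curriculum checkpoint instead, point `subfolder` at one of the stage directories, e.g. `subfolder="stage2"`.

A minimal generation sketch follows; the frame serialization in the prompt is purely illustrative, since the real token format is defined by the training data rather than by this card:

```python
import torch

# Hypothetical frame encoding: substitute the serialization used in
# physics-scenarios-packed for real next-frame prediction.
prompt = "frame 000: ball x=1.20 y=3.40 vx=0.10 vy=-0.50\nframe 001:"

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```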

## Training data

Trained on ~900K scenes across 24 "seen" scenario types. See [physics-scenarios-packed](https://huggingface.co/datasets/AlexWortega/physics-scenarios-packed).
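
Assuming the packed dataset loads through the `datasets` library (the split name and schema below are assumptions, not documented here):

```python
from datasets import load_dataset

# Stream a few examples rather than downloading all ~900K scenes up front.
ds = load_dataset("AlexWortega/physics-scenarios-packed", split="train", streaming=True)
print(next(iter(ds)))
```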

## Citation

ICML-2026 submission (in progress).