---
license: apache-2.0
tags:
- lora
- fine-tuning
- adaptive
- research
- nested-lora
- synaptic-plasticity
- rank-adaptation
library_name: transformers
datasets:
- nyu-mll/glue
pipeline_tag: text-classification
---

# Unified-LoRA

**LoRA fine-tuning with synaptic plasticity: a neurobiologically-inspired controller that switches between qualitatively different operational modes based on training stress.**

⚠️ **This is NOT a pretrained model.** Unified-LoRA is a training method/controller.

👉 **Code**: [github.com/Sva76/Unified-LoRa](https://github.com/Sva76/Unified-LoRa)
👉 **Demo**: [unified_lora_demo.ipynb](https://github.com/Sva76/Unified-LoRa/blob/main/notebooks/unified_lora_demo.ipynb)

## What It Does

A composite synaptic stress signal **φ(t) = f(Convergence, Entropy, Stress)** drives a 3-state FSM:

| Mode | φ range | Rank | Behavior |
|------|---------|------|----------|
| SINGLE | φ < 0.3 | r=4 | Efficient cruise |
| MULTI | 0.3 ≤ φ < 0.7 | r=8 | Active learning |
| MIRROR | φ ≥ 0.7 | r=16 | Max capacity + weight snapshot for rollback |

Rank transitions use **nested matrix slicing** (r4 ⊂ r8 ⊂ r16): zero cold-start, zero re-allocation.

Mirror mode saves a weight snapshot on entry. On exit, if the weights drifted less than 5% (transient noise), the snapshot is restored; if the drift was significant (a real signal), the new weights are kept.

## Results

**GLUE (DistilBERT):** equal or better on 3 of 4 tasks with 33–56% rank reduction.

**Noise resilience:** +31 F1 at 50% label noise, 9× lower variance. No benefit on clean data. Confirmed at scales from 67M to 3B parameters.

**Stress-recovery cycle (Tinker/Llama-3.2-1B):** φ returns to its pre-shock baseline (0.33 → 0.83 → 0.33), demonstrating fully reversible stress handling.
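To make the composite signal concrete, here is a minimal sketch of how a bounded φ(t) could be assembled from convergence, entropy, and stress proxies. The `StressSignal` class, the component definitions, and the equal weighting are all illustrative assumptions, not the repository's exact formula.

```python
from collections import deque
import math

class StressSignal:
    """Illustrative phi(t) in [0, 1] from convergence, entropy, and stress proxies."""

    def __init__(self, window: int = 16):
        self.losses = deque(maxlen=window)

    def update(self, loss: float, probs=None) -> float:
        self.losses.append(loss)
        # Convergence proxy: slope of the recent loss curve, squashed to (0, 1)
        # (a rising loss contributes more stress).
        if len(self.losses) >= 2:
            slope = self.losses[-1] - self.losses[0]
            convergence = 1.0 / (1.0 + math.exp(-slope))
        else:
            convergence = 0.5
        # Entropy proxy: normalized predictive entropy, when probabilities are given.
        if probs:
            h = -sum(p * math.log(p + 1e-12) for p in probs)
            entropy = h / math.log(len(probs))
        else:
            entropy = 0.5
        # Stress proxy: loss variance over the window, squashed to (0, 1).
        mean = sum(self.losses) / len(self.losses)
        var = sum((x - mean) ** 2 for x in self.losses) / len(self.losses)
        stress = var / (1.0 + var)
        # Equal weighting of the three components (an assumption).
        return (convergence + entropy + stress) / 3.0
```

Because each component is squashed into (0, 1), φ stays bounded regardless of loss scale, which is what lets fixed thresholds like 0.3 and 0.7 define the mode boundaries.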
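The mode table and the nested-slicing trick can be sketched in a few lines. Everything below (`select_mode`, `NestedLoRA`, the L1-norm drift measure) uses hypothetical names and a NumPy stand-in for the adapter weights; the actual implementation lives in the linked repository.

```python
import numpy as np

MAX_RANK = 16
MODE_RANKS = {"SINGLE": 4, "MULTI": 8, "MIRROR": 16}

def select_mode(phi: float) -> str:
    """Map the composite stress signal phi(t) to an operating mode."""
    if phi < 0.3:
        return "SINGLE"   # r=4, efficient cruise
    if phi < 0.7:
        return "MULTI"    # r=8, active learning
    return "MIRROR"       # r=16, max capacity + snapshot

class NestedLoRA:
    """LoRA pair allocated once at max rank; lower ranks are slices (r4 in r8 in r16)."""

    def __init__(self, d_in: int, d_out: int):
        self.A = np.random.randn(MAX_RANK, d_in) * 0.01
        self.B = np.zeros((d_out, MAX_RANK))
        self.rank = 4
        self._snapshot = None

    def effective_delta(self) -> np.ndarray:
        # Only the first `rank` rows/cols participate, so a rank change is a
        # view change: zero cold-start, zero re-allocation.
        return self.B[:, :self.rank] @ self.A[:self.rank, :]

    def set_mode(self, mode: str) -> None:
        if mode == "MIRROR" and self._snapshot is None:
            # Entering MIRROR: save a snapshot for possible rollback.
            self._snapshot = (self.A.copy(), self.B.copy())
        elif mode != "MIRROR" and self._snapshot is not None:
            # Leaving MIRROR: relative L1 drift below 5% is treated as
            # transient noise and rolled back; larger drift is kept.
            A0, B0 = self._snapshot
            drift = (np.abs(self.A - A0).sum() + np.abs(self.B - B0).sum()) / (
                np.abs(A0).sum() + np.abs(B0).sum() + 1e-12)
            if drift < 0.05:
                self.A, self.B = A0, B0
            self._snapshot = None
        self.rank = MODE_RANKS[mode]
```

Allocating once at r=16 and slicing down is the key design choice: the low-rank subspaces are strictly nested, so switching modes never discards or re-initializes trained directions.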
## Quick Start

```python
from controller import setup_unified_lora

adapters, ctrl = setup_unified_lora(model, target_modules=["q_proj", "v_proj"])

for batch in dataloader:
    loss = model(**batch).loss
    loss.backward()
    ctrl.step(loss=loss.item())  # φ(t) needs the loss for the convergence signal
    optimizer.step()
    optimizer.zero_grad()
```

## Citation

```bibtex
@software{unified_lora_2025,
  author = {Simona Vargiu},
  title  = {Unified-LoRA: Synaptic Plasticity Controller for Adaptive LoRA Fine-Tuning},
  year   = {2025},
  url    = {https://github.com/Sva76/Unified-LoRa}
}
```

## Contact

Simona Vargiu (Independent Researcher) — simona.vargiu.malta@gmail.com