Unified-LoRA

LoRA fine-tuning with synaptic plasticity: a neurobiologically-inspired controller that switches between qualitatively different operational modes based on training stress.

⚠️ This is NOT a pretrained model. Unified-LoRA is a training method/controller.

👉 Code: github.com/Sva76/Unified-LoRa · 👉 Demo: unified_lora_demo.ipynb

What It Does

A composite synaptic stress signal φ(t) = f(Convergence, Entropy, Stress) drives a three-state FSM:

| Mode | φ range | Rank | Behavior |
|---|---|---|---|
| SINGLE | φ < 0.3 | r=4 | Efficient cruise |
| MULTI | 0.3 ≤ φ < 0.7 | r=8 | Active learning |
| MIRROR | φ ≥ 0.7 | r=16 | Max capacity + weight snapshot for rollback |
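The thresholds above can be sketched as a minimal mode-selection function. This is an illustration only; `select_mode` is a hypothetical name, not the controller's actual API:

```python
def select_mode(phi: float) -> tuple[str, int]:
    """Map the stress signal phi to an operating mode and LoRA rank,
    using the thresholds from the table above."""
    if phi < 0.3:
        return "SINGLE", 4   # efficient cruise
    elif phi < 0.7:
        return "MULTI", 8    # active learning
    else:
        return "MIRROR", 16  # max capacity + snapshot for rollback
```

Boundary values fall into the higher-stress mode, matching the ≤/≥ conventions in the table (φ = 0.3 is MULTI, φ = 0.7 is MIRROR).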

Rank transitions use nested matrix slicing (r4 ⊂ r8 ⊂ r16): zero cold start, zero re-allocation.
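The nested-slicing idea can be illustrated with a toy adapter: both LoRA factors are allocated once at the maximum rank, and a mode switch just changes which leading slice participates in the forward pass. A sketch under assumed shapes; `NestedLoRA` is not the repository's class:

```python
import torch

class NestedLoRA(torch.nn.Module):
    """Toy LoRA adapter with nested ranks (r4 ⊂ r8 ⊂ r16).

    A and B are allocated at max_rank once; switching rank only changes
    how many leading rows/columns are used, so there is no cold start
    and no re-allocation.
    """
    def __init__(self, in_features: int, out_features: int, max_rank: int = 16):
        super().__init__()
        self.A = torch.nn.Parameter(torch.randn(max_rank, in_features) * 0.01)
        self.B = torch.nn.Parameter(torch.zeros(out_features, max_rank))
        self.rank = 4  # active rank; a mode switch just reassigns this int

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        r = self.rank
        # Only the leading r components of each factor participate.
        return x @ self.A[:r].T @ self.B[:, :r].T
```

Because B is zero-initialized (standard LoRA practice), the adapter's contribution starts at zero regardless of the active rank.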

Mirror mode saves a weight snapshot on entry. On exit, if the weights drifted less than 5% from the snapshot (transient noise), the snapshot is restored; if the drift was larger (a real signal), the new weights are kept.
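The exit-time drift check might look like the following pure-Python sketch, using relative L2 drift. The 5% threshold comes from the text; everything else, including the helper name `mirror_exit`, is assumed:

```python
def mirror_exit(weights: list, snapshot: list, drift_threshold: float = 0.05):
    """On leaving MIRROR mode, compare current weights to the snapshot
    taken on entry. Drift below the threshold (relative L2 distance) is
    treated as transient noise and rolled back; larger drift is kept."""
    num = sum((w - s) ** 2 for w, s in zip(weights, snapshot)) ** 0.5
    den = sum(s ** 2 for s in snapshot) ** 0.5 or 1e-12
    drift = num / den
    if drift < drift_threshold:
        weights[:] = snapshot  # transient noise: restore the snapshot
        return "rolled_back", drift
    return "kept", drift       # real signal: keep the new weights
```

The same logic applies per-tensor in a real adapter; a flat list of floats keeps the sketch self-contained.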

Results

GLUE (DistilBERT): equal or better on 3 of 4 tasks with a 33–56% rank reduction.

Noise resilience: +31 F1 at 50% label noise, 9× lower variance. No benefit on clean data. Confirmed at 67M–3B.

Stress-recovery cycle (Tinker/Llama-3.2-1B): φ returns to its pre-shock baseline (0.33 → 0.83 → 0.33), demonstrating fully reversible stress handling.

Quick Start

```python
from controller import setup_unified_lora

adapters, ctrl = setup_unified_lora(model, target_modules=["q_proj", "v_proj"])

for batch in dataloader:
    loss = model(**batch).loss
    loss.backward()
    ctrl.step(loss=loss.item())  # φ(t) needs the loss for the convergence signal
    optimizer.step()
    optimizer.zero_grad()
```

Citation

```bibtex
@software{unified_lora_2025,
  author = {Simona Vargiu},
  title = {Unified-LoRA: Synaptic Plasticity Controller for Adaptive LoRA Fine-Tuning},
  year = {2025},
  url = {https://github.com/Sva76/Unified-LoRa}
}
```

Contact

Simona Vargiu (Independent Researcher), simona.vargiu.malta@gmail.com
