Artificial Entanglement in the Fine-Tuning of Large Language Models Paper • 2601.06788 • Published Jan 11, 2026
DiffoRA: Enabling Parameter-Efficient LLM Fine-Tuning via Differential Low-Rank Matrix Adaptation Paper • 2502.08905 • Published Feb 13, 2025
LoRA-One: One-Step Full Gradient Could Suffice for Fine-Tuning Large Language Models, Provably and Efficiently Paper • 2502.01235 • Published Feb 3, 2025
Train Small, Infer Large: Memory-Efficient LoRA Training for Large Language Models Paper • 2502.13533 • Published Feb 19, 2025
Learning Rate Matters: Vanilla LoRA May Suffice for LLM Fine-tuning Paper • 2602.04998 • Published Feb 4, 2026
LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2-Chat 70B Paper • 2310.20624 • Published Oct 31, 2023