- LoRA vs Full Fine-tuning: An Illusion of Equivalence (Paper 2410.21228)
- Artificial Entanglement in the Fine-Tuning of Large Language Models (Paper 2601.06788)
- DiffoRA: Enabling Parameter-Efficient LLM Fine-Tuning via Differential Low-Rank Matrix Adaptation (Paper 2502.08905)
- LoRA-One: One-Step Full Gradient Could Suffice for Fine-Tuning Large Language Models, Provably and Efficiently (Paper 2502.01235)
Ruby Carbuncle (Tcarbuncle)