Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey Paper • 2403.14608 • Published Mar 21, 2024
Towards Better Parameter-Efficient Fine-Tuning for Large Language Models: A Position Paper Paper • 2311.13126 • Published Nov 22, 2023 • 1
Comparing Retrieval-Augmentation and Parameter-Efficient Fine-Tuning for Privacy-Preserving Personalization of Large Language Models Paper • 2409.09510 • Published Sep 14, 2024
Increasing Model Capacity for Free: A Simple Strategy for Parameter Efficient Fine-tuning Paper • 2407.01320 • Published Jul 1, 2024
Introducing Routing Functions to Vision-Language Parameter-Efficient Fine-Tuning with Low-Rank Bottlenecks Paper • 2403.09377 • Published Mar 14, 2024 • 1
AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning Paper • 2301.12132 • Published Jan 28, 2023 • 2
Make Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning Paper • 2306.00477 • Published Jun 1, 2023 • 2
Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning Paper • 2303.15647 • Published Mar 28, 2023 • 4
Hydra: Multi-head Low-rank Adaptation for Parameter Efficient Fine-tuning Paper • 2309.06922 • Published Sep 13, 2023 • 1
One-for-All: Generalized LoRA for Parameter-Efficient Fine-tuning Paper • 2306.07967 • Published Jun 13, 2023 • 26
LoRAShear: Efficient Large Language Model Structured Pruning and Knowledge Recovery Paper • 2310.18356 • Published Oct 24, 2023 • 24
LoftQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models Paper • 2310.08659 • Published Oct 12, 2023 • 29
LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models Paper • 2309.12307 • Published Sep 21, 2023 • 89
LoRAPrune: Pruning Meets Low-Rank Parameter-Efficient Fine-Tuning Paper • 2305.18403 • Published May 28, 2023 • 3
Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning Paper • 2205.05638 • Published May 11, 2022 • 6
DEFT: Data Efficient Fine-Tuning for Large Language Models via Unsupervised Core-Set Selection Paper • 2310.16776 • Published Oct 25, 2023
A Rank Stabilization Scaling Factor for Fine-Tuning with LoRA Paper • 2312.03732 • Published Nov 28, 2023 • 12
MoELoRA: Contrastive Learning Guided Mixture of Experts on Parameter-Efficient Fine-Tuning for Large Language Models Paper • 2402.12851 • Published Feb 20, 2024 • 2
NOLA: Networks as Linear Combination of Low Rank Random Basis Paper • 2310.02556 • Published Oct 4, 2023 • 2
LoRA-FA: Memory-efficient Low-rank Adaptation for Large Language Models Fine-tuning Paper • 2308.03303 • Published Aug 7, 2023 • 3
DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning Paper • 2309.05173 • Published Sep 11, 2023 • 1
In-context Autoencoder for Context Compression in a Large Language Model Paper • 2307.06945 • Published Jul 13, 2023 • 29
A Unified Generative Retriever for Knowledge-Intensive Language Tasks via Prompt Learning Paper • 2304.14856 • Published Apr 28, 2023 • 1
Multi-Head Adapter Routing for Cross-Task Generalization Paper • 2211.03831 • Published Nov 7, 2022 • 2
TART: A plug-and-play Transformer module for task-agnostic reasoning Paper • 2306.07536 • Published Jun 13, 2023 • 12
Sparse Finetuning for Inference Acceleration of Large Language Models Paper • 2310.06927 • Published Oct 10, 2023 • 15
MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning Paper • 2405.12130 • Published May 20, 2024 • 50
SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining Paper • 2406.02214 • Published Jun 4, 2024
LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters Paper • 2405.17604 • Published May 27, 2024 • 3
VB-LoRA: Extreme Parameter Efficient Fine-Tuning with Vector Banks Paper • 2405.15179 • Published May 24, 2024 • 1
Bridging The Gap between Low-rank and Orthogonal Adaptation via Householder Reflection Adaptation Paper • 2405.17484 • Published May 24, 2024 • 2
ETHER: Efficient Finetuning of Large-Scale Models with Hyperplane Reflections Paper • 2405.20271 • Published May 30, 2024
SLoPe: Double-Pruned Sparse Plus Lazy Low-Rank Adapter Pretraining of LLMs Paper • 2405.16325 • Published May 25, 2024 • 1
SinkLoRA: Enhanced Efficiency and Chat Capabilities for Long-Context Large Language Models Paper • 2406.05678 • Published Jun 9, 2024 • 1
ShareLoRA: Parameter Efficient and Robust Large Language Model Fine-tuning via Shared Low-Rank Adaptation Paper • 2406.10785 • Published Jun 16, 2024 • 1
SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models Paper • 2405.16057 • Published May 25, 2024
GraLoRA: Granular Low-Rank Adaptation for Parameter-Efficient Fine-Tuning Paper • 2505.20355 • Published May 26, 2025 • 36
ElaLoRA: Elastic & Learnable Low-Rank Adaptation for Efficient Model Fine-Tuning Paper • 2504.00254 • Published Mar 31, 2025 • 1
OwLore: Outlier-weighed Layerwise Sampled Low-Rank Projection for Memory-Efficient LLM Fine-tuning Paper • 2405.18380 • Published May 28, 2024 • 1
RoseLoRA: Row and Column-wise Sparse Low-rank Adaptation of Pre-trained Language Model for Knowledge Editing and Fine-tuning Paper • 2406.10777 • Published Jun 16, 2024 • 2
OLoRA: Orthonormal Low-Rank Adaptation of Large Language Models Paper • 2406.01775 • Published Jun 3, 2024 • 3
Light-PEFT: Lightening Parameter-Efficient Fine-Tuning via Early Pruning Paper • 2406.03792 • Published Jun 6, 2024
LoRA-SP: Streamlined Partial Parameter Adaptation for Resource-Efficient Fine-Tuning of Large Language Models Paper • 2403.08822 • Published Feb 28, 2024
PeftCD: Leveraging Vision Foundation Models with Parameter-Efficient Fine-Tuning for Remote Sensing Change Detection Paper • 2509.09572 • Published Sep 11, 2025
High-Rank Structured Modulation for Parameter-Efficient Fine-Tuning Paper • 2601.07507 • Published Jan 12, 2026
QWHA: Quantization-Aware Walsh-Hadamard Adaptation for Parameter-Efficient Fine-Tuning on Large Language Models Paper • 2509.17428 • Published Sep 22, 2025 • 9
LoCA: Location-Aware Cosine Adaptation for Parameter-Efficient Fine-Tuning Paper • 2502.06820 • Published Feb 5, 2025
Towards Scalable Exact Machine Unlearning Using Parameter-Efficient Fine-Tuning Paper • 2406.16257 • Published Jun 24, 2024 • 1
BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models Paper • 2106.10199 • Published Jun 18, 2021
Position-Aware Parameter Efficient Fine-Tuning Approach for Reducing Positional Bias in LLMs Paper • 2404.01430 • Published Apr 1, 2024
Bayesian-LoRA: LoRA based Parameter Efficient Fine-Tuning using Optimal Quantization levels and Rank Values through Differentiable Bayesian Gates Paper • 2406.13046 • Published Jun 18, 2024
Step-by-Step Unmasking for Parameter-Efficient Fine-tuning of Large Language Models Paper • 2408.14470 • Published Aug 26, 2024
DropLoRA: Sparse Low-Rank Adaptation for Parameter-Efficient Fine-Tuning Paper • 2508.17337 • Published Aug 24, 2025
AFLoRA: Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models Paper • 2403.13269 • Published Mar 20, 2024 • 1
RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation Paper • 2401.04679 • Published Jan 9, 2024 • 2
RandLoRA: Full-rank parameter-efficient fine-tuning of large models Paper • 2502.00987 • Published Feb 3, 2025 • 9
SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models Paper • 2308.06522 • Published Aug 12, 2023
PERFT: Parameter-Efficient Routed Fine-Tuning for Mixture-of-Expert Model Paper • 2411.08212 • Published Nov 12, 2024
Adaptive Parameter-Efficient Federated Fine-Tuning on Heterogeneous Devices Paper • 2412.20004 • Published Dec 28, 2024
ASLoRA: Adaptive Sharing Low-Rank Adaptation Across Layers Paper • 2412.10135 • Published Dec 13, 2024
DoRA: Enhancing Parameter-Efficient Fine-Tuning with Dynamic Rank Distribution Paper • 2405.17357 • Published May 27, 2024
SVFT: Parameter-Efficient Fine-Tuning with Singular Vectors Paper • 2405.19597 • Published May 30, 2024
DiffoRA: Enabling Parameter-Efficient LLM Fine-Tuning via Differential Low-Rank Matrix Adaptation Paper • 2502.08905 • Published Feb 13, 2025
IncreLoRA: Incremental Parameter Allocation Method for Parameter-Efficient Fine-tuning Paper • 2308.12043 • Published Aug 23, 2023 • 1
Gradient-based Parameter Selection for Efficient Fine-Tuning Paper • 2312.10136 • Published Dec 15, 2023 • 1
NeuroAda: Activating Each Neuron's Potential for Parameter-Efficient Fine-Tuning Paper • 2510.18940 • Published Oct 21, 2025 • 9
LoFT: Parameter-Efficient Fine-Tuning for Long-tailed Semi-Supervised Learning in Open-World Scenarios Paper • 2509.09926 • Published Sep 12, 2025 • 14
Mini-Ensemble Low-Rank Adapters for Parameter-Efficient Fine-Tuning Paper • 2402.17263 • Published Feb 27, 2024
Q-PEFT: Query-dependent Parameter Efficient Fine-tuning for Text Reranking with Large Language Models Paper • 2404.04522 • Published Apr 6, 2024
Structured Unrestricted-Rank Matrices for Parameter Efficient Fine-tuning Paper • 2406.17740 • Published Jun 25, 2024
SVFit: Parameter-Efficient Fine-Tuning of Large Pre-Trained Models Using Singular Values Paper • 2409.05926 • Published Sep 9, 2024
TriAdaptLoRA: Brain-Inspired Triangular Adaptive Low-Rank Adaptation for Parameter-Efficient Fine-Tuning Paper • 2501.08008 • Published Jan 14, 2025
Parameter-Efficient Fine-Tuning of Large Language Models via Deconvolution in Subspace Paper • 2503.01419 • Published Mar 3, 2025
ReCIT: Reconstructing Full Private Data from Gradient in Parameter-Efficient Fine-Tuning of Large Language Models Paper • 2504.20570 • Published Apr 29, 2025
Exploring Sparsity for Parameter Efficient Fine Tuning Using Wavelets Paper • 2505.12532 • Published May 18, 2025
C-LoRA: Continual Low-Rank Adaptation for Pre-trained Models Paper • 2502.17920 • Published Feb 25, 2025
Adapt Once, Thrive with Updates: Transferable Parameter-Efficient Fine-Tuning on Evolving Base Models Paper • 2506.06844 • Published Jun 7, 2025
Towards Higher Effective Rank in Parameter-efficient Fine-tuning using Khatri-Rao Product Paper • 2508.00230 • Published Aug 1, 2025
Parameter-Efficient Fine-Tuning with Layer Pruning on Free-Text Sequence-to-Sequence Modeling Paper • 2305.08285 • Published May 15, 2023 • 1
Non-Intrusive Adaptation: Input-Centric Parameter-efficient Fine-Tuning for Versatile Multimodal Modeling Paper • 2310.12100 • Published Oct 18, 2023 • 1