Make Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning
Paper • 2306.00477
Natural Language Processing/Generation (NLP/G)
NeuroAda: Activating Each Neuron's Potential for Parameter-Efficient Fine-Tuning
ChatR1: Reinforcement Learning for Conversational Reasoning and Retrieval Augmented Question Answering