FAAST: Forward-Only Associative Learning via Closed-Form Fast Weights for Test-Time Supervised Adaptation
Abstract
FAAST enables efficient task adaptation by compiling labeled examples into fast weights through forward-only computation, achieving large speedups over backpropagation-based adaptation and substantial memory savings over memory- and context-based methods.
Adapting pretrained models typically involves a trade-off between the high training cost of backpropagation and the heavy inference overhead of memory-based or in-context learning. We propose FAAST, a forward-only associative adaptation method that analytically compiles labeled examples into fast weights in a single pass. By eliminating memory and context dependence, FAAST achieves constant-time inference and decouples task adaptation from the pretrained representation. Across image classification and language modeling benchmarks, FAAST matches or exceeds backprop-based adaptation while reducing adaptation time by over 90%, and it is competitive with memory/context-based adaptation while reducing memory usage by up to 95%. These results establish FAAST as a highly efficient, scalable solution for supervised task adaptation, particularly for resource-constrained models. We release the code and models at https://github.com/baoguangsheng/faast.
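The abstract does not spell out the closed-form construction. Below is a minimal sketch, assuming the fast weights are obtained by ridge regression from frozen backbone features to labels, a standard closed-form associative-memory recipe; the names here (compile_fast_weights, lam) are hypothetical and not taken from the paper.

```python
import numpy as np

def compile_fast_weights(feats, labels, lam=1e-2):
    """Compile labeled examples into fast weights in closed form.

    Assumed construction (ridge regression, not confirmed by the paper):
        W = (X^T X + lam * I)^{-1} X^T Y, transposed to shape (c, d),
    so that a prediction is the constant-time product W @ x.

    feats:  (n, d) frozen features from the pretrained backbone
    labels: (n, c) one-hot (or soft) targets
    """
    X, Y = feats, labels
    d = X.shape[1]
    # One forward pass over the support set; no gradients required.
    A = X.T @ X + lam * np.eye(d)        # (d, d) regularized Gram matrix
    W = np.linalg.solve(A, X.T @ Y).T    # (c, d) fast weights
    return W

# Usage: adapt once, then infer with a single matrix-vector product.
rng = np.random.default_rng(0)
feats = rng.normal(size=(32, 64))                 # 32 labeled examples, 64-dim features
labels = np.eye(4)[rng.integers(0, 4, size=32)]   # 4-way one-hot labels
W = compile_fast_weights(feats, labels)
query = rng.normal(size=64)
pred = int(np.argmax(W @ query))                  # adapted prediction
```

Under this reading, adaptation costs one pass over the labeled examples plus a single d-by-d solve, and inference stays constant-time no matter how many examples were compiled, which is consistent with the claimed independence from memory and context size.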
Community
FAAST compiles labeled examples into fast weights in a single forward pass, eliminating memory or context dependencies and achieving constant-time inference.
The following similar papers were recommended by the Semantic Scholar API (via Librarian Bot):
- Test-Time Instance-Specific Parameter Composition: A New Paradigm for Adaptive Generative Modeling (2026)
- Learning When to Attend: Conditional Memory Access for Long-Context LLMs (2026)
- OASIS: Online Activation Subspace Learning for Memory-Efficient Training (2026)
- Mixture of Chapters: Scaling Learnt Memory in Transformers (2026)
- Training-Free Test-Time Contrastive Learning for Large Language Models (2026)
- Cross-Modal Bayesian Low-Rank Adaptation for Uncertainty-Aware Multimodal Learning (2026)
- Improving Sparse Memory Finetuning (2026)