---
license: mit
pipeline_tag: text-generation
---
# Solve the Loop: Attractor Models for Language and Reasoning
Attractor Models are a family of models in which a backbone module proposes output embeddings and an attractor module refines them by solving for a fixed point, trained via implicit differentiation. This architecture enables iterative refinement with constant training memory and adaptive inference-time computation.
[**Project Page**](https://attractor-models.github.io/) | [**Paper (arXiv:2605.12466)**](https://arxiv.org/abs/2605.12466) | [**GitHub**](https://github.com/jacobfa/Attractor)
## Introduction
Attractor Models offer a promising alternative to feed-forward computation by iteratively refining latent representations. In language modeling, Attractor Models deliver a Pareto improvement over standard Transformers, improving perplexity and downstream accuracy while reducing training cost. This repository contains the **Attractor-370M** model.
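At a high level, the attractor module's refinement can be sketched as a fixed-point iteration: starting from a proposal, repeatedly apply an update map until the representation stops changing. The update map `f`, the random matrix `W`, and the zero initialization below are illustrative stand-ins under a contraction assumption, not the actual Attractor implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
# Small weights keep the update map contractive, so a unique fixed point exists.
W = rng.normal(scale=0.1, size=(8, 8))

def f(z, x):
    # One refinement step; a contraction toward the fixed point z* = f(z*, x).
    # (Stand-in for the attractor module; x plays the role of the backbone proposal.)
    return np.tanh(W @ z + x)

def solve_fixed_point(x, tol=1e-6, max_iter=100):
    z = np.zeros_like(x)  # in the real model, the backbone would initialize z
    for _ in range(max_iter):
        z_next = f(z, x)
        if np.linalg.norm(z_next - z) < tol:
            break
        z = z_next
    return z_next

x = rng.normal(size=8)
z_star = solve_fixed_point(x)
# At convergence, z_star satisfies z_star ≈ f(z_star, x)
residual = np.linalg.norm(z_star - f(z_star, x))
```

Because the iterate converges to a fixed point, gradients can be obtained by implicit differentiation through the fixed-point condition rather than by backpropagating through every iteration, which is what keeps training memory constant in the number of refinement steps.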
## Sample Usage
To use this model, you need to install the `attractor` package from the [official repository](https://github.com/jacobfa/Attractor):
```bash
git clone https://github.com/jacobfa/Attractor
cd Attractor
pip install -e .
```
Then, you can construct the model in Python:
```python
from attractor.models.attractor import Attractor, AttractorConfig
# Load the configuration for the 370M model
config = AttractorConfig.from_name("attractor-medium-370m")
model = config.construct_model()
```
## Citation
```bibtex
@article{feinashley2026attractor,
  title={Solve the Loop: Attractor Models for Language and Reasoning},
  author={Fein-Ashley, Jacob and Rashidinejad, Paria},
  journal={arXiv preprint arXiv:2605.12466},
  year={2026},
  url={https://arxiv.org/abs/2605.12466}
}
```