Papers
arxiv:2604.08816

Loom: A Scalable Analytical Neural Computer Architecture

Published on Apr 9
Abstract

Loom is a computer architecture that executes C programs as transformer forward passes: program state is stored in a fixed-size tensor, and each instruction corresponds to one forward pass through a fixed-weight transformer model.

AI-generated summary

We present Loom, a computer architecture that executes programs compiled from C inside a looped transformer whose weights are derived analytically. The architecture implements a 22-opcode instruction set in 8 transformer layers. Each forward pass executes one instruction; the model is applied iteratively until the program counter reaches zero. The full machine state resides in a single tensor X ∈ ℝ^{d×n} of fixed size, and every step has fixed cost for fixed d and n, independent of program length or execution history. The default configuration uses d = 155 and n = 1024, yielding 4.7 million parameters and 928 instruction slots. A compact configuration at d = 146 and n = 512 suffices for a 9×9 Sudoku solver (284 instructions). The weights are program-independent: programs live in the state tensor, and the same fixed-weight model executes any compiled program. We make Loom source code publicly available at https://github.com/mkturkcan/Loom.
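The execution loop described above (a fixed-weight model applied iteratively to a state tensor until the program counter reaches zero) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `step` function here is a trivial stand-in for Loom's 8-layer transformer, and the placement of the program counter in row 0 is an assumption made only for this sketch.

```python
import numpy as np

D, N = 8, 16   # toy sizes for the sketch; the paper's default is d = 155, n = 1024
PC_ROW = 0     # assumed for this sketch: row 0, column 0 of X holds the program counter

def step(X):
    """Stand-in for one forward pass of the fixed-weight transformer.

    In Loom, one forward pass executes one instruction of the compiled
    program stored inside X. Here we only decrement the program counter
    so the loop terminates.
    """
    X = X.copy()
    X[PC_ROW, 0] -= 1.0
    return X

def run(X, max_steps=10_000):
    """Apply the model iteratively until the program counter reaches zero."""
    steps = 0
    while X[PC_ROW, 0] != 0 and steps < max_steps:
        X = step(X)
        steps += 1
    return X, steps

# The full machine state lives in one fixed-size tensor; program length
# does not change the per-step cost.
X0 = np.zeros((D, N))
X0[PC_ROW, 0] = 5.0          # start with PC = 5
Xf, n_steps = run(X0)
print(n_steps)               # 5
```

The key property the sketch mirrors is that the same fixed `step` function runs any program: only the contents of the state tensor differ, and each iteration costs the same for fixed d and n.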


Get this paper in your agent:

hf papers read 2604.08816
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
