# Geometry of Decision Making in Language Models
**Abhinav Joshi · Divyanshu Bhatt · Ashutosh Modi**
*NeurIPS 2025*
> This repository contains the official implementation/release for the NeurIPS 2025 paper *Geometry of Decision Making in Language Models*.
> We study the internal decision-making processes of large language models through the lens of intrinsic dimension (ID), analyzing how hidden representations evolve across layers in a multiple-choice question answering (MCQA) setting.
> This repository provides additional experimental results, intrinsic-dimension estimates, and layer-wise performance analyses, which we believe will be a valuable resource for researchers exploring this area further.
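
As a rough illustration of the layer-wise ID estimation described above, below is a minimal sketch of the TwoNN estimator (Facco et al., 2017) applied to one layer's activations. This is not the repository's exact implementation; `twonn_id` is a hypothetical helper, and the repo uses multiple estimators with their own preprocessing.

```python
# Minimal TwoNN sketch (hypothetical helper, not the repo's exact code).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def twonn_id(points: np.ndarray) -> float:
    """Estimate the intrinsic dimension of a point cloud (n_samples, hidden_dim)."""
    # Distances to the two nearest neighbors; column 0 is the point itself.
    dists, _ = NearestNeighbors(n_neighbors=3).fit(points).kneighbors(points)
    mu = dists[:, 2] / dists[:, 1]          # ratio of 2nd- to 1st-neighbor distance
    mu = mu[np.isfinite(mu) & (mu > 1.0)]   # drop duplicate points (zero distances)
    # Maximum-likelihood estimate from F(mu) = 1 - mu^(-d):  d = n / sum(log mu)
    return len(mu) / np.log(mu).sum()

# e.g., one call per layer on the last-token activations of the MCQA prompts:
# ids = [twonn_id(acts) for acts in per_layer_last_token_activations]
```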

**Figure:** *In transformer-based architectures, a latent feature vector of fixed hidden dimension $`d`$ is transformed by successive transformer blocks $`f_l`$. Although the extrinsic dimension stays the same, we find that the feature space lies on low-dimensional manifolds whose intrinsic dimension varies across layers; intrinsically, each $`f_l`$ corresponds to a mapping $`\phi_l`$. We study how these compressed manifolds align with the decision-making process in the middle layers, projecting internal representations back to the vocabulary space to inspect their decisiveness. A sudden shift in performance immediately follows a sharp peak in the residual-post ID estimates.*
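
The projection back to vocabulary space is commonly implemented as a logit lens: each layer's last-token hidden state is passed through the model's final layer norm and unembedding matrix. The sketch below assumes a GPT-2-style Hugging Face model (attribute names such as `transformer.ln_f` differ across architectures) and a placeholder two-option prompt; the paper's exact probing setup may differ.

```python
# Hypothetical logit-lens sketch; assumes a GPT-2-style model, where the
# final layer norm and unembedding are transformer.ln_f and lm_head.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

prompt = "Question: ... Options: A ... B ... Answer:"  # placeholder MCQA prompt
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# Token ids of the answer options (single tokens with a leading space for GPT-2).
option_ids = [tok(" A").input_ids[-1], tok(" B").input_ids[-1]]

# Project each layer's last-token hidden state through the final layer norm
# and the unembedding matrix, then score only the option tokens.
for layer, h in enumerate(out.hidden_states):
    logits = model.lm_head(model.transformer.ln_f(h[:, -1]))  # (1, vocab_size)
    pick = option_ids[logits[0, option_ids].argmax().item()]
    print(f"layer {layer:2d} -> {tok.decode(pick)!r}")
```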
## Abstract
> Large Language Models (LLMs) show strong generalization across diverse tasks,
> yet the internal decision-making processes behind their predictions remain opaque.
> In this work, we study the geometry of hidden representations in LLMs through
> the lens of intrinsic dimension (ID), focusing specifically on decision-making
> dynamics in a multiple-choice question answering (MCQA) setting. We perform a
> large-scale study with 28 open-weight transformer models, estimating ID across
> layers using multiple estimators while also quantifying per-layer performance on
> MCQA tasks. Our findings reveal a consistent ID pattern across models: early
> layers operate on low-dimensional manifolds, middle layers expand this space,
> and later layers compress it again, converging to decision-relevant representations.
> Together, these results suggest LLMs implicitly learn to project linguistic inputs
> onto structured, low-dimensional manifolds aligned with task-specific decisions,
> providing new geometric insights into how generalization and reasoning emerge in
> language models.
## Structure
```
results/
├── dataset_name/
│   ├── fewshot_xx/
│   │   ├── model_id/
│   │   │   ├── last_token_relative_depth_xx/
│   │   │   │   ├── intrinsic/
│   │   │   │   │   ├── results.csv                 (global intrinsic estimates)
│   │   │   │   │   ├── mle_local_dims/
│   │   │   │   │   │   ├── layer_name.csv          (local intrinsic estimates)
│   │   │   │   ├── llm_hook_args.json              (arguments used for activation collection)
│   │   │   │   ├── llm_intrinsic_args.json         (arguments used for ID estimation)
│   │   │   │   ├── accuracy_layer_name.json        (accuracy per layer using logit lens)
│   │   │   │   ├── accuracy_output.json            (overall accuracy on the dataset)
│   │   │   │   ├── predictions.csv                 (predictions and corresponding ground truths)
```
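
To browse a finished run, something like the following sketch works, assuming the layout above; the concrete `dataset_name`, `fewshot_xx`, and `model_id` folder names are placeholders, and `rglob` is used so the exact nesting does not matter.

```python
# Hypothetical browsing sketch; folder names below are placeholders from the tree above.
import json
from pathlib import Path

import pandas as pd

run = Path("results") / "dataset_name" / "fewshot_xx" / "model_id"

# Global per-layer intrinsic-dimension estimates.
global_id = pd.read_csv(next(run.rglob("results.csv")))
print(global_id.head())

# Per-layer logit-lens accuracies and the overall dataset accuracy.
for f in sorted(run.rglob("accuracy_*.json")):
    print(f.name, json.loads(f.read_text()))
```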
## Citation
```bibtex
@inproceedings{joshi2025geometry,
  title     = {Geometry of Decision Making in Language Models},
  author    = {Abhinav Joshi and Divyanshu Bhatt and Ashutosh Modi},
  booktitle = {The Thirty-ninth Annual Conference on Neural Information Processing Systems},
  year      = {2025},
  url       = {https://openreview.net/forum?id=Jj4NdJtXwp}
}
```