<div align="center">

# Geometry of Decision Making in Language Models

**Abhinav Joshi · Divyanshu Bhatt · Ashutosh Modi**

*NeurIPS 2025*

</div>
<div align="center">
<br/>
<p align="center">
<a href="https://arxiv.org/pdf/2511.20315">
<img src="https://img.shields.io/badge/arXiv-2511.20315-6A1B9A?style=for-the-badge&logo=arxiv&logoColor=white"/>
</a>
<a href="https://huggingface.co/datasets/Exploration-Lab/dim-discovery-archive">
<img src="https://img.shields.io/badge/Experimental%20Findings-Hugging%20Face-FFB000?style=for-the-badge&logo=huggingface&logoColor=black"/>
</a>
<a href="https://openreview.net/forum?id=Jj4NdJtXwp">
<img src="https://img.shields.io/badge/OpenReview-NeurIPS%202025-512DA8?style=for-the-badge"/>
</a>
<a href="https://github.com/Exploration-Lab/dim-discovery-archive">
<img src="https://img.shields.io/badge/Code-GitHub-311B92?style=for-the-badge&logo=github&logoColor=white"/>
</a>
<a href="https://aboxspaces.github.io">
<img src="https://img.shields.io/badge/AboxSpaces-Blog-3949AB?style=for-the-badge&logo=github&logoColor=white"/>
</a>
<a href="LICENSE">
<img src="https://img.shields.io/badge/License-MIT-1E88E5?style=for-the-badge&logo=opensourceinitiative&logoColor=white"/>
</a>
</p>
<br/><br/>
</div>
<div align="justify">

> This repository contains the official implementation and release for the NeurIPS 2025 paper *Geometry of Decision Making in Language Models*.
> We study the internal decision-making processes of large language models through the lens of intrinsic dimension (ID), analyzing how hidden representations evolve across layers in a multiple-choice question answering (MCQA) setting.
> This repository provides additional experimental results, intrinsic-dimension estimates, and layer-wise performance analysis that we believe will be a valuable resource for researchers exploring this area further.

</div>
<div align="justify">

**Picture:** *In transformer-based architectures, a latent feature vector of fixed hidden dimension $`d`$ is transformed by successive transformer blocks $`f_l`$. Although the extrinsic dimension stays constant, we find that the feature space lies on low-dimensional manifolds of varying intrinsic dimension; intrinsically, each $`f_l`$ corresponds to a mapping $`\phi_l`$. We study how these compressed manifolds align with the decision-making process in the middle layers, projecting internal representations back to the vocabulary space to inspect their decisiveness. A sudden shift in performance follows a sharp peak in the residual-post ID estimates.*

</div>
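The vocabulary-space projection described above is the logit-lens idea: unembed an intermediate residual-stream vector and read off which answer option it favours. Below is a minimal, self-contained sketch with random toy weights; the normalization step, `W_U`, and the `option_ids` token ids are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, vocab = 64, 1000                               # toy hidden size / vocab size
W_U = rng.normal(size=(d, vocab)) / np.sqrt(d)    # stand-in unembedding matrix
option_ids = [10, 11, 12, 13]                     # hypothetical token ids for "A".."D"

def logit_lens_choice(h_l):
    """Project a layer-l residual vector to vocabulary space and pick an option.

    A real model would apply its own final norm before unembedding; here we
    approximate it with a simple RMS normalization.
    """
    h = h_l / np.sqrt((h_l ** 2).mean() + 1e-6)   # RMS-normalize the residual
    logits = h @ W_U                              # (vocab,) logits via unembedding
    # restrict the argmax to the MCQA option tokens
    return option_ids[int(np.argmax(logits[option_ids]))]

choice = logit_lens_choice(rng.normal(size=d))
```

Applying this at every layer yields the per-layer accuracy curves stored in the archive.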
## Abstract

> Large Language Models (LLMs) show strong generalization across diverse tasks,
yet the internal decision-making processes behind their predictions remain opaque.
In this work, we study the geometry of hidden representations in LLMs through
the lens of intrinsic dimension (ID), focusing specifically on decision-making
dynamics in a multiple-choice question answering (MCQA) setting. We perform a
large-scale study across 28 open-weight transformer models, estimating ID across
layers with multiple estimators while also quantifying per-layer performance on
MCQA tasks. Our findings reveal a consistent ID pattern across models: early
layers operate on low-dimensional manifolds, middle layers expand this space,
and later layers compress it again, converging to decision-relevant representations.
Together, these results suggest LLMs implicitly learn to project linguistic inputs
onto structured, low-dimensional manifolds aligned with task-specific decisions,
providing new geometric insights into how generalization and reasoning emerge in
language models.
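As a concrete illustration of layer-wise ID estimation, here is a minimal sketch of the Levina–Bickel maximum-likelihood (MLE) estimator, one common choice among the "multiple estimators" mentioned above (the `mle_local_dims` directory in the archive suggests an MLE variant is used, but this code is an illustrative re-derivation, not the repository's implementation).

```python
import numpy as np

def mle_intrinsic_dim(X, k=10):
    """Levina-Bickel MLE estimate of intrinsic dimension.

    X: (n, d) array of hidden representations for one layer.
    k: number of nearest neighbours used per point.
    Returns the mean of the per-point estimates
    m(x) = (k-1) / sum_{j<k} log(T_k(x) / T_j(x)),
    where T_j(x) is the distance from x to its j-th nearest neighbour.
    """
    # brute-force pairwise Euclidean distances (fine for small n)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                  # exclude self-distances
    knn = np.sqrt(np.sort(d2, axis=1)[:, :k])     # (n, k) sorted NN distances
    logs = np.log(knn[:, -1:] / knn[:, :-1])      # log(T_k / T_j) for j < k
    return float(((k - 1) / logs.sum(axis=1)).mean())

# sanity check: a 2-D Gaussian cloud embedded linearly in 10 dimensions
rng = np.random.default_rng(0)
est = mle_intrinsic_dim(rng.normal(size=(400, 2)) @ rng.normal(size=(2, 10)))
```

Even though the ambient (extrinsic) dimension is 10, the estimate recovers a value near the true manifold dimension of 2, which is exactly the gap between `d` and ID that the paper tracks across layers.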
## Structure

```
results/
├── dataset_name/
│   ├── fewshot_xx/
│   │   ├── model_id/
│   │   │   ├── last_token_relative_depth_xx/
│   │   │   │   ├── intrinsic/
│   │   │   │   │   ├── results.csv            (global intrinsic estimates)
│   │   │   │   │   ├── mle_local_dims/
│   │   │   │   │   │   ├── layer_name.csv     (local intrinsic estimates)
│   │   │   │   ├── llm_hook_args.json         (arguments used for activation collection)
│   │   │   │   ├── llm_intrinsic_args.json    (arguments used for ID estimation)
│   │   │   │   ├── accuracy_layer_name.json   (accuracy per layer using logit lens)
│   │   │   │   ├── accuracy_output.json       (overall accuracy on the dataset)
│   │   │   │   ├── predictions.csv            (predictions and corresponding ground truths)
```
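A results tree like the one above can be consumed with nothing but the standard library. The sketch below builds a miniature stand-in leaf in a temporary directory and reads it back; the dataset name, column names, and JSON keys are guesses for illustration, since the actual schemas live in the released files.

```python
import csv
import json
import tempfile
from pathlib import Path

# Build a tiny stand-in for one leaf of the results tree
# (dataset name "mmlu" and all field names are hypothetical).
root = Path(tempfile.mkdtemp()) / "results" / "mmlu" / "fewshot_05"
root.mkdir(parents=True)
(root / "accuracy_output.json").write_text(json.dumps({"accuracy": 0.61}))
with open(root / "results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["layer", "twonn", "mle"])      # assumed columns
    writer.writerow(["layers.0", "12.4", "11.9"])

# Load the overall accuracy and the per-layer global ID estimates
overall = json.loads((root / "accuracy_output.json").read_text())["accuracy"]
with open(root / "results.csv", newline="") as f:
    rows = list(csv.DictReader(f))
```

Iterating the same pattern over every `dataset_name/fewshot_xx/model_id` leaf is enough to reproduce layer-wise ID-vs-accuracy plots from the archive.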
## Citation

```
@inproceedings{joshi2025geometry,
  title={Geometry of Decision Making in Language Models},
  author={Abhinav Joshi and Divyanshu Bhatt and Ashutosh Modi},
  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
  year={2025},
  url={https://openreview.net/forum?id=Jj4NdJtXwp}
}
```