# X-Reasoner-7B-f32-GGUF

X-Reasoner-7B is a 7B-parameter vision-language model from Microsoft Research, post-trained solely on general-domain text to achieve generalizable reasoning across modalities and domains without any vision-specific fine-tuning. Its two-stage recipe combines instruction tuning with reinforcement learning, producing structured reasoning traces that transfer zero-shot to unseen visual and textual tasks such as mathematical reasoning and visual question answering. Despite the text-only post-training, the model outperforms vision-augmented baselines on held-out domains, making it particularly valuable for research into modality-agnostic reasoning. It is released under the Apache 2.0 license and supports deployment with the Transformers library.

## LightOnOCR-2-1B [GGUF]

| File Name | Quant Type | File Size | File Link |
| --- | --- | --- | --- |
| LightOnOCR-2-1B-BF16.gguf | BF16 | 1.2 GB | Download |
| LightOnOCR-2-1B-F16.gguf | F16 | 1.2 GB | Download |
| LightOnOCR-2-1B-F32.gguf | F32 | 2.39 GB | Download |
| LightOnOCR-2-1B-Q8_0.gguf | Q8_0 | 639 MB | Download |
| LightOnOCR-2-1B.mmproj-BF16.gguf | mmproj-BF16 | 829 MB | Download |
| LightOnOCR-2-1B.mmproj-F16.gguf | mmproj-F16 | 819 MB | Download |
| LightOnOCR-2-1B.mmproj-F32.gguf | mmproj-F32 | 1.64 GB | Download |
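To run a vision GGUF locally, the language-model file must be paired with its matching `mmproj` projector file. A minimal sketch using llama.cpp's multimodal CLI is shown below; the file names come from the table above, while the image path and prompt are placeholders you would substitute:

```shell
# Sketch only: assumes a local llama.cpp build with llama-mtmd-cli on PATH,
# and that the two GGUF files from the table above have been downloaded.
MODEL=LightOnOCR-2-1B-Q8_0.gguf
MMPROJ=LightOnOCR-2-1B.mmproj-F16.gguf

# Pair the quantized weights with the vision projector; page.png is a placeholder.
llama-mtmd-cli -m "$MODEL" --mmproj "$MMPROJ" \
  --image page.png -p "Transcribe the text in this image."
```

Note that the `mmproj` file is kept at higher precision (F16/BF16/F32) even when the language model itself is quantized, since the projector is small and sensitive to precision loss.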

## Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similarly sized non-IQ quants.)
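To make the size trade-off concrete, here is a small illustrative helper (not part of this release or of llama.cpp) that picks the largest quant from the file table above that fits a given memory budget, reserving some headroom for the KV cache and runtime overhead:

```python
# Illustrative only: file sizes in MB, taken from the table above.
QUANTS = {
    "Q8_0": 639,
    "BF16": 1200,
    "F16": 1200,
    "F32": 2390,
}

def pick_quant(budget_mb: float, headroom: float = 0.2) -> str:
    """Return the largest quant whose file fits in (1 - headroom) * budget_mb."""
    usable = budget_mb * (1.0 - headroom)
    fitting = {name: size for name, size in QUANTS.items() if size <= usable}
    if not fitting:
        raise ValueError(f"no quant fits in {budget_mb} MB")
    # Largest file that still fits generally preserves the most quality.
    return max(fitting, key=fitting.get)

print(pick_quant(1024))  # with ~1 GB free, only Q8_0 fits
print(pick_quant(4096))  # with ~4 GB free, full F32 fits
```

The 20% headroom default is a rough rule of thumb, not a measured value; actual memory use depends on context length and backend.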

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):


**Model details:** GGUF format · 8B params · `qwen2vl` architecture

**Model repository:** prithivMLmods/X-Reasoner-7B-f32-GGUF (3 quantized variants)