# X-Reasoner-7B-f32-GGUF
X-Reasoner-7B is a 7B-parameter vision-language model from Microsoft Research, post-trained solely on general-domain text using a two-stage approach (supervised fine-tuning followed by reinforcement learning) to achieve generalizable reasoning across modalities and domains without vision-specific fine-tuning. It is designed for robust performance on multimodal benchmarks spanning mathematical reasoning, visual question answering, and cross-domain tasks, producing structured reasoning traces that transfer zero-shot to unseen visual and textual challenges. Despite being post-trained on text only, the model outperforms vision-augmented baselines on held-out domains, which makes it particularly valuable for research into modality-agnostic reasoning. It is released under the Apache 2.0 license with Transformers deployment support.
## LightOnOCR-2-1B [GGUF]
| File Name | Quant Type | File Size | File Link |
|---|---|---|---|
| LightOnOCR-2-1B-BF16.gguf | BF16 | 1.2 GB | Download |
| LightOnOCR-2-1B-F16.gguf | F16 | 1.2 GB | Download |
| LightOnOCR-2-1B-F32.gguf | F32 | 2.39 GB | Download |
| LightOnOCR-2-1B-Q8_0.gguf | Q8_0 | 639 MB | Download |
| LightOnOCR-2-1B.mmproj-BF16.gguf | mmproj-BF16 | 829 MB | Download |
| LightOnOCR-2-1B.mmproj-F16.gguf | mmproj-F16 | 819 MB | Download |
| LightOnOCR-2-1B.mmproj-F32.gguf | mmproj-F32 | 1.64 GB | Download |
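As a rough sketch, the quantized model plus its matching `mmproj` vision projector can be run locally with llama.cpp's multimodal CLI. The repo id placeholder, the image filename, and the prompt below are illustrative assumptions; binary and flag names may differ across llama.cpp versions.

```shell
# Download the 8-bit quant and a matching vision projector
# (<repo-id> is a placeholder for this repository's Hugging Face id)
huggingface-cli download <repo-id> \
  LightOnOCR-2-1B-Q8_0.gguf \
  LightOnOCR-2-1B.mmproj-F16.gguf \
  --local-dir .

# Run OCR on an image with llama.cpp's multimodal CLI
llama-mtmd-cli \
  -m LightOnOCR-2-1B-Q8_0.gguf \
  --mmproj LightOnOCR-2-1B.mmproj-F16.gguf \
  --image page.png \
  -p "Transcribe the text in this image."
```

Pairing a lower-precision weight quant (e.g. Q8_0) with a higher-precision projector (F16 or F32) is a common way to cut memory use while keeping vision-encoder quality.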
## Quants Usage

The files above are sorted by size, not necessarily by quality; IQ-quants are often preferable to similarly sized non-IQ quants. A graph by ikawrakow comparing some lower-quality quant types (lower is better) was originally included here.
